If you have ever studied data science and algorithms, you are sure to have heard of Big O before. But what exactly is it?

In computer science, the Big O notation is a way of expressing the upper bound of the time complexity of an algorithm. In other words, it’s a way of measuring how well an algorithm scales as the size of the input increases.

Here’s how it works:

Let’s say you have an algorithm that takes an input of size `n` and processes it. The time complexity of the algorithm is the amount of time it takes to run as a function of `n`. For example, if the algorithm takes 1 second to run when `n` is 10, and 10 seconds to run when `n` is 100, the running time grows in direct proportion to the input size, so the time complexity of the algorithm is O(n).

Strictly speaking, Big O gives an upper bound: an O(n) algorithm takes *at most* time proportional to `n`, not exactly that much.

Here are some examples of the different Big O notations and what they mean:

- O(1) refers to an algorithm that has a constant time complexity. This means that the running time of the algorithm does not depend on the size of the input. In other words, the running time is the same no matter how big the input is.
- O(n) refers to an algorithm that has a linear time complexity. This means that the running time of the algorithm is directly proportional to the size of the input. In other words, if the input is twice as big, the running time will be twice as long.
- O(n^2) refers to an algorithm that has a quadratic time complexity. This means that the running time of the algorithm is proportional to the square of the size of the input. In other words, if the input is twice as big, the running time will be four times as long.
- O(2^n) refers to an algorithm that has an exponential time complexity. This means that the running time of the algorithm grows exponentially as the size of the input increases. In other words, adding just one element to the input doubles the running time.
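To make these classes concrete, here is a sketch with one toy function per class (the function names are my own), each returning how many basic steps it performs for an input of size `n`:

```python
def constant_steps(items):
    """O(1): look at the first element only, regardless of input size."""
    _ = items[0] if items else None
    return 1

def linear_steps(items):
    """O(n): one step per element."""
    return len(items)

def quadratic_steps(items):
    """O(n^2): compare every element with every other element."""
    steps = 0
    for a in items:
        for b in items:
            steps += 1
    return steps

def exponential_steps(n):
    """O(2^n): enumerate every subset of an n-element set."""
    steps = 0
    for mask in range(2 ** n):
        steps += 1
    return steps

data = list(range(10))
print(constant_steps(data))    # 1
print(linear_steps(data))      # 10
print(quadratic_steps(data))   # 100
print(exponential_steps(10))   # 1024
```

Note how quickly the step counts diverge even at `n = 10`; at `n = 50` the exponential function would need over a quadrillion steps while the linear one needs fifty.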

In general, the Big O notation is used to describe the performance of an algorithm in the worst-case scenario. This makes it a reliable upper bound on running time, and a convenient yardstick for comparing different algorithms.
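As an illustration of worst-case analysis, consider linear search: in the best case the target is the first element and takes one comparison, but in the worst case the target is missing and all `n` elements must be checked, which is why linear search is O(n). A minimal sketch:

```python
def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 4))    # best case: (0, 1)
print(linear_search(data, 99))   # worst case: (-1, 6)
```

Big O reports the second, pessimistic figure: no matter how lucky you get on some inputs, the running time never exceeds a constant multiple of `n`.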