### What is it

If you have ever seen something like `O(n)` or `O(2n)` when looking at some algorithm, a sorting algorithm for example, that was Big O notation.

It’s a way of describing or representing an algorithm’s efficiency.

Basically, how long it takes to run or how much memory it occupies (time or space efficiency).

Big O is a very useful concept for any programmer, but it is often neglected or not very well understood, by degree-holding and self-taught programmers alike.

Understanding it will give you not only a better idea of how the most popular algorithms behave and perform but, more importantly, *you will be able to
better judge how good or bad your own algorithms can be*.

And remember, an algorithm can mean any piece of code you write to solve a particular problem, not just the sorting or searching routines you read about online.

### What it means

Take `O(1)` for instance.

The `O` is the convention; it means Big O.

The stuff inside the parentheses represents how the algorithm *scales* with its input.

Here, it tells you that this algorithm’s scale is `1`, or constant: no matter how big the input gets, it always takes the same time.

Here’s a silly example:

```
let array = [1, 2, 3, 4];

const get_array_at = (array = [], index = 0) => {
  if (index <= array.length - 1) { // 1 operation: check the bounds
    return array[index]; // 1 operation: read the array at the index
  }
  return null; // 1 operation: return
};
```

We don’t know about the machine we are running this on, or the underlying implementation of the array, so we don’t take those into account. We are describing *our* algorithm.

How long does it take to run our `get_array_at` if the array had `10` elements? What about `1000000000` elements?

Isn’t it the same? We always do the same number of operations regardless of the size of the array. No matter how big our array is, our algorithm always does the same number of steps.

Therefore the scale of our `get_array_at` is constant: `O(1)`.

Let’s look at another example, `O(n)`:

```
let array = [1, 2, 3, 4];

const print_array = (array = []) => {
  let n = array.length;
  for (let i = 0; i < n; i++) {
    // n operations: one for each element of the array
    console.log(`array idx:${i} has value ${array[i]}`);
  }
};
```

What about `print_array`? How long does it take to run with `10` items? What about `1000000000` items?

The more elements we have, the more operations it takes.

Our algorithm takes longer the bigger its input `n` is.

If we were to graph the number of operations our `O(n)` algorithm does, depending on the size of the input, it would look something like this:

As you can see, this is a linear graph.

`O(n)` means linear time (or space, if that is what you are measuring).

Here are a few other common ones:

- `O(1)` is constant time. No matter how big the input, it always takes the same time.
- `O(n^2)` is quadratic time.
- `O(2^n)` is exponential time.
- `O(log n)` is logarithmic time.
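To make quadratic time concrete, here is a sketch in the style of the earlier examples (the function name is our own, not from the text above): a nested loop visits every pair of elements, so doubling the input roughly quadruples the work.

```javascript
const print_pairs = (array = []) => {
  let n = array.length;
  let count = 0;
  // For each of the n elements we loop over all n elements again:
  // n * n = n^2 operations in total, so this is O(n^2).
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(`pair: ${array[i]}, ${array[j]}`);
      count++;
    }
  }
  return count;
};
```

With `3` elements this does `9` operations; with `1000` it does a million.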

Here’s how they look, if you were to graph them:

As you can clearly see, some are definitely better than others!
No one, if they could help it, would want an algorithm with `O(n^n)` runtime efficiency.
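Exponential time usually comes from naive recursion that branches on every call. As a sketch (the naive Fibonacci function below is our own illustration, not one of the article’s examples):

```javascript
const fib = (n) => {
  // Each call for n > 1 spawns two more calls, so the call tree
  // roughly doubles at every level: on the order of 2^n calls, O(2^n).
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
};
```

`fib(30)` already takes over a million calls; `fib(60)` would take longer than you are willing to wait.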

There is no fixed list of possible `O(...)` forms.
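Logarithmic time usually shows up when an algorithm halves its input at every step. The classic instance is binary search on a sorted array; this sketch is our own illustration:

```javascript
const binary_search = (sorted = [], target = 0) => {
  let low = 0;
  let high = sorted.length - 1;
  // Each iteration halves the remaining range, so for n elements
  // we do at most about log2(n) + 1 steps: O(log n).
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // not found
};
```

Searching `1000000000` sorted elements takes around 30 steps, not a billion.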

### How Big O is measured

*Hope for the best. Prepare for the worst.*

Like you’ve seen, Big O is a measurement. However, it is not meant to be scientifically precise. It would be very hard to find the exact `O(...)` of more complex algorithms.

In fact, an algorithm can have different run times in particular scenarios or edge cases.

Instead, it is an estimate, *generally accounting for the expected, or most common, scenario*.

For example, consider sorting.

Generally, if we try to sort an array that is already sorted, that is considered the best-case scenario; if the array is completely reversed, that is the worst-case scenario, the one with the longest runtime.
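Best and worst cases are easy to see with an example even simpler than sorting: a plain linear search (this function is our own illustration):

```javascript
const linear_search = (array = [], target) => {
  // Best case: the target is the first element, so we do 1 step.
  // Worst case: the target is last or absent, so we do n steps.
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) return i;
  }
  return -1; // not found
};
```

Same algorithm, same input size, yet the runtime depends on where (or whether) the target sits.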

Some other, more complex algorithms may take orders of magnitude longer than their expected case. In most cases, though, the worst possible scenario is the same as the most common, or expected, one.

*In order to simplify, Big O describes the expected scenario.*

That means we know that an algorithm, while it may take less time, will *never* take longer than its Big O.

*The Big in Big O is about Big numbers*

As previously said, Big O reflects how an algorithm’s efficiency *scales*, and by scale, we mean a big one.

This means that if you put really big numbers into a Big O representation with more than one term, you can often simplify it.

Let’s look at two examples:

- Removing constants

Something like `O(2n)` can be simplified to `O(n)` by dropping the constant.

If you think about it, with really large `n`, the `2` has little or almost no impact on the overall performance. If you remove the 2, it’s pretty much the same.
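As a sketch of why the constant drops out, here is a hypothetical function (our own example) that makes two full passes over the array, `2n` operations in total, yet still scales linearly:

```javascript
const sum_then_max = (array = []) => {
  let sum = 0;
  let max = -Infinity;
  // First pass over the array: n operations.
  for (let i = 0; i < array.length; i++) sum += array[i];
  // Second pass: another n operations, 2n in total.
  for (let i = 0; i < array.length; i++) {
    if (array[i] > max) max = array[i];
  }
  // Doubling the array still doubles the work: this is O(n), not "O(2n)".
  return { sum, max };
};
```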

- Dropping smaller terms

What if you have something like `O(2*2^n + 10*n)` or `O(n + n^2)`?

The same rule applies. For really big values of `n`, the biggest term becomes far more impactful than all the others. We can drop the other ones and simplify.

Remember, Big O describes how an algorithm’s efficiency scales with its input (how it grows/shrinks).

In `O(2*2^n + 10*n)`, the biggest factor is the exponential term, so we keep that; in terms of scale it is essentially `O(2^n)`.

`O(n + n^2)` is the same as `O(n^2)`.
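To see why the smaller term barely matters, consider a hypothetical function (our own sketch) that does one linear pass followed by a pass over every pair:

```javascript
const linear_then_pairs = (array = []) => {
  let steps = 0;
  const n = array.length;
  // Linear pass: n operations.
  for (let i = 0; i < n; i++) steps++;
  // Nested pass: n^2 operations.
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) steps++;
  }
  // Total is n + n^2. For n = 1000 that is 1,001,000 steps,
  // and the n^2 term contributes over 99.9% of them: O(n^2).
  return steps;
};
```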

That’s why, generally, most algorithms can be expressed using the most common notations in the graph above: a lot of cases simplify down to them.

There is more that can be said about Big O.

In fact, being able to correctly use it for your own algorithms, or other people’s, takes some practice.

However, just having a general idea of what it is, and what the most common Big O’s are, goes a long way toward demystifying it.

Next time you run into one of these, or someone mentions an “O of something” in a conversation, you will know.