Big O Notation - explained as easily as possible

 3 years ago
source link: https://thatcomputerscientist.com/big-o-notation-explained-as-easily-as-possible
Featured on Hashnode

Data Structures and Algorithms is about solving problems efficiently. A bad programmer solves their problems inefficiently, and a really bad programmer doesn't even know why their solution is inefficient. So, the question is: how do you rank an algorithm's efficiency?

The simple answer to that question is Big O Notation. How does that work? Let me explain!

Say you wrote a function that goes through every number in a list and adds it to a total_sum variable.

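The original article showed this function as a screenshot. A minimal Python reconstruction (the function name is assumed; only total_sum comes from the text) could look like:

```python
def sum_numbers(numbers):
    """Add every number in the list to a running total."""
    total_sum = 0
    for number in numbers:
        total_sum += number  # one "addition" operation per element
    return total_sum
```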

If you consider "addition" to be 1 operation, then running this function on a list of 10 numbers costs 10 operations, running it on a list of 20 numbers costs 20 operations, and, in general, running it on a list of n numbers costs n operations.


Now let's assume you wrote another function that would return the first number in a list.

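This function was also shown as a screenshot; a minimal sketch of it (name assumed) would be:

```python
def get_first(numbers):
    """Return the first number in the list: a single operation,
    regardless of how long the list is."""
    return numbers[0]
```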

Now, no matter how large this list is, this function will never cost more than one operation. Clearly, these two algorithms have different time complexities, that is, different relationships between the growth of the input size and the growth of the number of operations executed. We communicate these time complexities using Big O Notation.

Big O Notation is a mathematical notation used to classify algorithms according to how their run time or space requirements grow as the input size grows.

With n denoting the input size, common complexities (ranked from best to worst) are:

  • Constant - O(1)
  • Logarithmic - O(log n)
  • Linear - O(n)
  • n log n - O(n log n)
  • Quadratic - O(n²)
  • Exponential - O(2ⁿ)
  • Factorial - O(n!)

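As a concrete example of a logarithmic algorithm (not from the original article, but a classic one): binary search halves the remaining search range on every step, so it needs at most about log₂(n) comparisons.

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    Each iteration halves the search range, so this runs in O(log n)."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1   # target must be in the upper half
        else:
            high = mid - 1  # target must be in the lower half
    return -1
```

For a list of 1,000,000 items, this takes at most about 20 comparisons instead of up to 1,000,000.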

Our first algorithm runs in O(n), meaning its operations grow in a linear relationship with the input size - in this case, the number of items in the list. Our second algorithm does not depend on the input size at all - so it runs in constant time. Let's take a look at how many operations a program has to execute for input sizes of n = 5 and n = 50.

| Complexity | n = 5 | n = 50            |
|------------|-------|-------------------|
| O(1)       | 1     | 1                 |
| O(log n)   | 4     | 6                 |
| O(n)       | 5     | 50                |
| O(n log n) | 20    | 300               |
| O(n²)      | 25    | 2500              |
| O(2ⁿ)      | 32    | 1125899906842624  |
| O(n!)      | 120   | 3.0414093e+64     |

It might not matter when the input is small, but this gap gets very dramatic as the input size increases.
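You can reproduce rough numbers like the ones in the table with a few lines of Python (here using base-2 logarithms and rounding up, so small values may differ slightly from the table):

```python
import math

def operation_counts(n):
    """Approximate operation counts for common complexities at input size n."""
    return {
        "O(1)": 1,
        "O(log n)": math.ceil(math.log2(n)),
        "O(n)": n,
        "O(n log n)": math.ceil(n * math.log2(n)),
        "O(n^2)": n ** 2,
        "O(2^n)": 2 ** n,
        "O(n!)": math.factorial(n),
    }

print(operation_counts(50)["O(2^n)"])  # 2^50 = 1125899906842624
```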


If n were 10000, a function that runs in O(log n) would only take about 14 operations, while a function that runs in O(n!) would set your computer on fire!

For Big O Notation, we drop constant factors, so O(10n) and O(n/10) are both equivalent to O(n), because the graph is still linear.
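To see why constants are dropped, here is a hypothetical function that scans the list ten times. It performs roughly 10·n additions, but its growth is still linear, so it is still O(n):

```python
def sum_ten_times(numbers):
    """Scans the list 10 times: about 10*n additions in total,
    but still O(n), because the constant factor 10 is dropped."""
    total = 0
    for _ in range(10):          # constant number of passes
        for number in numbers:   # n additions per pass
            total += number
    return total
```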


Big O Notation is also used for space complexity, which works the same way: it describes the relationship between the growth of the input size and the growth of the extra space an algorithm needs.
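As a quick illustration (my own example, not from the original article): two ways of summing doubled values, one using O(n) extra space and one using O(1).

```python
def doubled_sum_list(numbers):
    """Builds a new list the same size as the input: O(n) extra space."""
    doubled = [2 * x for x in numbers]
    return sum(doubled)

def doubled_sum_accumulator(numbers):
    """Uses a single accumulator regardless of input size: O(1) extra space."""
    total = 0
    for x in numbers:
        total += 2 * x
    return total
```

Both return the same answer, but only the first one's memory use grows with the input.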

So, yeah! This has been the simplest possible explanation of the Big O Notation from my side and I hope you enjoyed reading this. If you found this information helpful, share it with your friends on different social media platforms and consider clicking that follow button up there, it really motivates me to write more.


And if you REALLY liked the article, consider buying me a coffee 😊. If you have any suggestions or feedback, feel free to let me know in the comments.

Happy Programming!

