Understanding the O-Notation
2024-11-07
During the lectures, the O-notation appears everywhere, but I never got the hang of it.
It sounds like you didn't review your Data Structures and Algorithms lecture.
Yeah, but I wouldn't be asking you if I wanted to, would I?
...Fair enough.
Imagine you wrote a piece of code:
```typescript
function hellO() {
  console.log("Hello World!");
}
```
and you want to see how fast it is. You could let it run multiple times and measure the average time it took in milliseconds, but would your piece of code take the same amount of time on another computer?
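Such a naive measurement could look something like this (just a sketch using the global `performance.now()` timer available in Node and browsers; the exact numbers will differ from machine to machine, which is exactly the problem):

```typescript
function hellO() {
  console.log("Hello World!");
}

// Run the function many times and average the wall-clock time in milliseconds.
const runs = 100;
const start = performance.now();
for (let i = 0; i < runs; i++) {
  hellO();
}
const avgMs = (performance.now() - start) / runs;
console.log(`average runtime: ${avgMs.toFixed(4)} ms`);
```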
Well, that depends. If it has a much better processor, then it'd probably take much less time to run it.
Exactly. We therefore need a unit independent from seconds in order to measure the runtime.
Another issue occurs when the function has an input:
```typescript
function hellO(n: number) {
  for (let i = 0; i < n; i++) {
    console.log("Hello World!");
  }
}
```
Unlike the first example, the runtime of this function depends on the input number. If we assume the `console.log("Hello World!")`
part takes a constant time of $c$, then the runtime of the function would be around $c \cdot n$. In this case, we say the function above has a linear runtime.
Okay, but I still don't get why we need the O-notation.
Well, the O-notation helps us formally group programs with similar runtime behavior into the same box; these boxes are known as complexity classes. Suppose we want to group all programs with a linear runtime in the same group. How would you describe the runtime of the function below?
```typescript
function hello3(n: number) {
  console.log("Hello World!");
  console.log("Hello World!");
  for (let i = 0; i < n; i++) {
    console.log("Hello World!");
    console.log("Hello World!");
  }
}
```
Using the same assumptions as above, it'd be something like $2cn + 2c$, right?
Exactly. Both $cn$ and $2cn + 2c$ are linear functions, identical apart from the principal coefficient and a constant term. The O-notation helps us to ignore these factors and focus on the most significant part of the runtime.
And how does that work?
Let's look at the definition of the O-notation together.
Let $f$ and $g$ be two functions defined over the set of natural numbers. We say $f \in O(g)$, if there are natural numbers $k$ and $n_0$ so that $f(n) \le k \cdot g(n)$ for all $n \ge n_0$.
Don't worry if definitions scare you. Let's break it down.
The central idea of the O-notation is to express that from a certain point on, the function $f$ does not grow faster than another function $g$, apart from a constant factor.
We can say here that $cn \in O(n)$ by setting e.g. both $k$ and $n_0$ to $c$, resulting in $cn \le c \cdot n$ for all $n \ge c$.
Similarly, the expression $2cn + 2c \in O(n)$ also holds. Do you see why?
Uhh, Calculus was a long time ago... We can use the inequality $2cn + 2c \le 4cn$ for $n \ge 1$, right? So we can set $k = 4c$ and $n_0 = 1$?
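We could even spot-check such inequalities with a small helper. This is a hypothetical function written for this post, and it only checks a finite range of inputs, so it's a sanity check rather than a proof:

```typescript
// Checks f(n) <= k * g(n) for every n with n0 <= n <= limit.
// A finite spot check of the O-definition, not a mathematical proof.
function boundedBy(
  f: (n: number) => number,
  g: (n: number) => number,
  k: number,
  n0: number,
  limit: number = 10_000
): boolean {
  for (let n = n0; n <= limit; n++) {
    if (f(n) > k * g(n)) {
      return false;
    }
  }
  return true;
}

// 2n + 2 <= 4n holds from n0 = 1 on (taking the constant time c as 1 here):
console.log(boundedBy((n) => 2 * n + 2, (n) => n, 4, 1)); // true
```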
Exactly! And we can do this for all functions with a linear runtime.
In lectures, I've only seen expressions like $O(n)$ or $O(n^2)$, but I never saw something like $O(2n + 2)$ or $O(4n)$. I didn't even know we could do that.
We could surely notate the class of all linear functions as $O(2n + 2)$ or even $O(4n)$, but remember that the O-notation makes the constant factor irrelevant? That's why we usually use the simplest form such as $O(n)$.
Ohh, so the O-notation is just a way to classify functions while ignoring the constant factors, I see. But what do we need the $n_0$ for?
If we get rid of $n_0$ from the definition, we would need to show that the inequality holds for all natural numbers. Here's a trivial example of why that can be a problem.
Since it's clear that $n^2$ grows much faster than $n + 5$, we should be able to say $n + 5 \in O(n^2)$. However, there are still points where $n^2$ is smaller than $n + 5$, e.g. $n = 0$, where $n^2 = 0$ but $n + 5 = 5$! Multiplying by a constant factor doesn't help much here, since $k \cdot 0$ stays $0$, so we need to provide a starting line from which the inequality holds, so to speak. That's what $n_0$ is for.
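Here's a tiny numeric illustration of that starting-line idea (the example functions $n + 5$ and $n^2$ are just ones picked for this sketch):

```typescript
// f(n) = n + 5 should be in O(n^2), but at n = 0 the bound k * g(0) = 0
// can never cover f(0) = 5, no matter how large we pick k.
const f = (n: number): number => n + 5;
const g = (n: number): number => n * n;

for (const k of [1, 10, 1_000_000]) {
  console.log(`k=${k}: f(0) <= k * g(0) is ${f(0) <= k * g(0)}`); // false
}

// From n0 = 1 on, k = 6 already suffices: n + 5 <= 6 * n^2 for all n >= 1.
console.log(f(1) <= 6 * g(1)); // true
```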
Oh, I've also seen expressions like $f \in \Theta(g)$. What does that mean?
If we can use the O-notation bidirectionally, we use the Greek letter $\Theta$ instead of $O$.
So saying $f \in \Theta(g)$ means that both $f \in O(g)$ and $g \in O(f)$?
That's correct.
And this doesn't apply in the normal O-notation?
No, it doesn't. The O-notation only tells us: "This function definitely doesn't grow faster than that function", but it doesn't tell us anything about the other way around. You can say that $n \in O(n^2)$, but $n^2 \in O(n)$ is wrong.
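You can convince yourself numerically that no constant rescues $n^2 \in O(n)$: for any fixed $k$, the inequality $n^2 \le k \cdot n$ breaks as soon as $n$ exceeds $k$. A small illustration (again a spot check, not a proof):

```typescript
// For each candidate constant k, n = k + 1 is a counterexample to n^2 <= k * n,
// because (k + 1)^2 = (k + 1) * (k + 1) > k * (k + 1).
for (const k of [10, 100, 1000]) {
  const n = k + 1;
  console.log(`k=${k}, n=${n}: ${n * n <= k * n}`); // false every time
}
```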
So just to be clear, $n \in O(n^2)$ and $n^2 \in O(n^3)$? And this goes for all following powers, $n^m \in O(n^{m+1})$ with a natural number $m$?
Exactly.
The notation is so confusing! Like, if you write $f = O(n)$, is $O(n)$ also some kind of function?
It's, uh, complicated. Using the example you brought up, $O(n)$ is a set of functions that grow at most as fast as $n$. Some argue for using the set membership symbol ($f \in O(n)$) instead of the equality symbol to avoid this confusion. In some cases, $O(n)$ is used in the meaning of "an arbitrary function that grows at most as fast as $n$", used to simplify smaller function terms like in $n^2 + O(n)$.
There's one more thing. You told me earlier that we need a measure independent from time, like, in a physical sense. You never introduced the unit though and went straight to the O-notation. So if I say "My function has a runtime of $O(n)$", I now get that it means something like "With an input length of $n$, my function takes no more than $k \cdot n$ to execute with some constant $k$", but in what, apples? Bananas?
The unit of our choice is steps of a Turing Machine.
What's a Turing Machine?
That's an excellent question! Let's discuss that in the next section.
Something's telling me I shouldn't have asked that question.