
Thursday, June 6, 2019

Time Complexity, Part 2: Notation Station

Continuation from Time Complexity, Part 1: Intro to Big-O

Last time we spoke about Time Complexity, we broached the topic of Big-O analysis.  We said that Big-O analysis helps us judge how our algorithms' speed is affected as data-sets become large.  We also implied that Big-O analysis is done assuming the least optimal conditions possible for our algorithms.  In upcoming articles, we will conduct proper Big-O analysis, but first we need the 'vocabulary' necessary to talk about our code.

In the same way that a Ferrari on a long stretch of open road will perform better than a beat-up ice cream truck, some algorithms' speed will scale fabulously and others poorly as our data-sets get larger.  This article is about how we can express those differences.

'Constant Time' also known as $O(1)$.

 

Writing an algorithm that scales in "constant time" is like inventing a car that can travel around the earth in the same time as it takes to travel down the block.  It implies that your algorithm doesn't slow down at all, regardless of the data size.  In most programming languages, arithmetic operators, inequalities, and most hash-map operations (when keys are already known) occur in constant time.  If you have solved a technical interview question with an algorithm that scales in constant time, chances are that you're in great shape.  Consider that let x = 0; x += 5; executes at the same speed as let y = 0; y += 1000000; even though the integer 1000000 requires more storage space than the number 5. If we were to graph this behavior, it would look like this:

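If you'd like to see this idea in code, here's a minimal sketch (the variable names and the prices object are just illustrations of mine, not anything standard):

const smallNumber = 5;
const bigNumber = 1000000;

let x = 0;
x += smallNumber; // one constant-time operation

let y = 0;
y += bigNumber; // still one constant-time operation, despite the bigger value

// A hash-map lookup with a known key is also constant time:
const prices = { apple: 1, banana: 2 };
console.log(prices['apple']); // prints: 1 -- one lookup, no matter how many keys exist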

'Linear Time' also known as $O(n)$. 

 

Brief, but necessary, tangent.

The first time I saw the term $O(n)$, the knee-jerk question that came to mind was, "What does $n$ mean?"  The variable $n$ is a convention.  It assumes your algorithm is processing one data-set and refers to the size of that data-set.  The notation $O(\ldots )$ refers to how the runtime of your algorithm will scale.  In my car analogies, $n$ refers to the distance the car has to travel while $O(\ldots )$ describes the time required to reach the destination.

In the real world, it's a rookie mistake to only think in terms of $n$, since there may be multiple data-sets that your algorithm is processing.  To illustrate that point, here is an excerpt from Cracking the Coding Interview, 6th Edition:
Suppose we had an algorithm that took in an array of strings, sorted each string, and then sorted the full array. What would the runtime be?

Many candidates will reason the following: sorting each string is $O(n\cdot \log n)$ and we have to do this for each string, so that's $O(n\cdot n\cdot \log n)$. We also have to sort this array, so that's an additional $O(n\cdot \log n)$ work.  Therefore, the total runtime is $O(n^2\cdot \log n + n\cdot \log n)$, which is just $O(n^2\cdot \log n)$.

This is completely incorrect. Did you catch the error?

The problem is that we used $n$ in two different ways. In one case, it's the length of the string (which string?). And in another case, it's the length of the array.

Don't worry if you didn't understand all the notation that was included in the excerpt; just focus on the take-home message.  In Big-O notation, different data-sets get their own variable.  You'll see examples of this in upcoming articles.

From this point onward, I'll be using code examples to illustrate my points.  While not strictly required, it may be useful for you to type out the code for yourself and execute it.  The concepts that I'm discussing can be generalized to any programming language; I will be using JavaScript with Node.  You can use this online Node IDE to follow along, if you'd like.

Back to 'Linear Time.'

Writing an algorithm that operates in linear time is like inventing a Toyota Corolla.  The more road between you and your destination, the longer your commute (in proportion to the distance).  Similarly, in a linear time algorithm, if the data-set gets bigger, the runtime grows in proportion to $n$.  Your algorithms will scale in linear time if they contain a loop that runs $n$ times, typically by iterating over a data-type containing $n$ items.  This is because we have to "touch" each item in the data-set at least once.  The last few sentences were a mouthful!  To illustrate them, consider this:

const data = [1, 2, 3];
const size = data.length;

let count = 0;

for (let i = 0; i < size; i++) {
  count = count + 1;
}

console.log(size, count);

  • In the above code, size represents the size of the data-set, which is the same as $n$.
  • The count variable represents the number of operations that were conducted.  Since each operation occurs in constant time and we are performing the same operation each time, we can use the count variable to approximate the behavior of $O(n)$, which describes the runtime of the code.
  • In comparing the size variable to the count variable, we are actually comparing how our data size is affecting our runtime.  Let's execute our code to see what it prints:

$ node main.js
3 3

It doesn't matter how many items are in the data variable: size will always equal count so long as the rest of the code remains unaltered.  In other words, our runtime is proportional to the size of our data.  If we were to graph this behavior, it would look something like this:

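To convince yourself that size and count stay locked together, try swapping in a much larger data-set.  Here's a minimal sketch (the one-million-element array is arbitrary):

const data = new Array(1000000).fill(0);
const size = data.length;

let count = 0;

for (let i = 0; i < size; i++) {
  count = count + 1;
}

console.log(size, count); // prints: 1000000 1000000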

Addition in Linear Time

 

If you've spent any time coding, you've probably had to loop through your data multiple times.  Let's consider the following code:

const data = [1, 2, 3];
const size = data.length;

let count = 0;

for (let i = 0; i < size; i++) {
  count = count + 1;
}

for (let i = 0; i < size; i++) {
  count = count + 1;
}

console.log(size, count);

How would we express the time complexity of that code in Big-O notation? Well, we looped through our entire data-set twice, so it would be $O(n) + O(n) = O(2n)$.  If we were to execute the above code, it prints:

$ node main.js
3 6

Since we're looping through the data twice, we have to do twice as many operations.  That is why count is now double the value of size.  In the real world, optimizing an algorithm from $O(2n)$ down to $O(n)$ can be a valuable improvement.  That said, in Big-O analysis, we typically don't care about the difference between $O(2n)$ and $O(n)$ because the type of scaling is still linear.  If we were to graph the runtime of the above code as it scales with larger data-sets, it would look like this:


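As a toy illustration of that kind of optimization, here's one way the two passes above could be collapsed into a single loop; this assumes the work done in each pass is independent, which it is in our contrived example:

const data = [1, 2, 3];
const size = data.length;

let count = 0;

for (let i = 0; i < size; i++) {
  count = count + 2; // the work of both passes, done in a single loop
}

console.log(size, count); // still prints: 3 6

The output is the same, but we only touch the data once per item instead of twice.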

In the next article, I'll discuss why Big-O analysis emphasizes the type of scaling instead of the actual scaling.  For now, we need to talk about one more operation.

Multiplication in Linear Time.

 

We have seen addition, but what about multiplication?  Ask yourself, "What is the relationship between addition and multiplication?"  When we say $3\times 3$, what do we mean?  Well, that's $3 + 3 + 3$.  We are essentially looping over the "plus 3" operation 3 times.  When we multiply $O(n)\times O(n)$, we're looping over our data and, for each item, looping over the data again to perform some operation.  Consider this code:

const data = [1, 2, 3];
const size = data.length;

let count = 0;

for (let i = 0; i < size; i++) {
  for (let j = 0; j < size; j++) {
    count = count + 1;
  }
}

console.log(size, count);

It's very important to contrast the multiplication example with the addition example.  In both instances, there are two loops.  However, the multiplication example contains a loop within a loop, also called a "nested loop."  This is a very common point of confusion, and it's important to distinguish the two because multiplication results in another type of scaling.  When we execute the code above, this is printed:

$ node main.js
3 9

This result may not look too bad, but that's because our example data-set is small.  Try adding a few more elements to data and see what happens.  You will see that the scaling looks like this:


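If you'd rather not edit the array by hand, here's a minimal sketch that runs the same nested loop over a few arbitrary data-set sizes so you can watch count take off:

const sizes = [10, 100, 1000];

for (const size of sizes) {
  let count = 0;
  for (let i = 0; i < size; i++) {
    for (let j = 0; j < size; j++) {
      count = count + 1;
    }
  }
  console.log(size, count); // prints: 10 100, then 100 10000, then 1000 1000000
}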

'Quadratic Time' also known as $O(n^2)$

 

If you've written an algorithm in quadratic time, you've just invented a car that slows down as the road becomes longer.  This means that as your data-set gets larger, the number of operations your algorithm performs grows with the square of the data-size.  Quadratic time complexity results when your code contains a nested loop.  All of this should be ringing a bell.  We discussed this in the "Multiplication in Linear Time" section above.  After all, $O(n)\times O(n) = O(n^2)$.

Wrapping Up

 

Let's summarize what we've talked about:
  • $n$ means the size of the data that you're processing (assuming only one data-set).
  • $O(\ldots )$ describes the worst-case runtime of your algorithm.
  • 'Constant time' or $O(1)$ means that your algorithm's speed is independent of your data-size.
  • 'Linear time' or $O(n)$ means that your algorithm's speed is proportional to your data-size.
    • Linear addition occurs when you loop through your data-set sequentially.
    • Linear multiplication occurs when you loop through your data within a loop through your data - or a nested loop. 
  • 'Quadratic time' or $O(n^2)$ means that the number of operations your algorithm performs grows with the square of your data-size.  It is the result of linear multiplication.
Now that we have some vocabulary under our belt, in the next article we're going to discuss the "rules" of Big-O analysis and why they exist.  We'll also analyze our first bit of code using Big-O analysis.

P.S. - You may be wondering about other types of scaling, such as $O(\log n)$, $O(n\log n)$, $O(2^n)$, $O(n!)$, or $O(\infty )$.  Those will all be covered in a future article.  Also, I'm well aware that you don't necessarily have to "loop" through your data to achieve an $O(n)$ complexity, but it's a good place to begin learning the concept.  In future articles, I'll be going through tons of examples which will hopefully add a more nuanced understanding to the concepts I've described here.

Monday, June 3, 2019

Time Complexity, Part 1: Intro to Big-O

When I think of the term "Time Complexity," I think of the Star Trek episode where Jean-Luc Picard had to travel through time to destroy a cosmic anomaly.  Thankfully, the term actually refers to something a whole lot simpler.

"All that really belongs to us is time; even he who has nothing else has that."  - Baltasar Gracian


Consider this question: If you were living in New York City, what would be the best way of getting to Los Angeles?  I hope you didn't say "walking."  Right now, the best way to go from NYC to LA is to fly there by airplane.  If it were possible, most of us would just teleport.  People value their time, so they'd like to travel fast to preserve it.  The same holds true for algorithms.  We want them to be as fast as possible.  Time complexity analysis helps us judge how fast our algorithms can work.

Notice, I used the word "judge" and not "measure."  You can measure how fast your car is going by looking at the speedometer.  However, that information doesn't tell you much about the car itself.  How does your car handle on city roads? highways? or rough terrain?  How long can your car drive before it breaks down?  Measuring an algorithm's speed is useful, but it doesn't give you a good idea of the whole picture.

To judge our algorithm's speed, we need to do an analysis of our code.  Since we are going to talk about code analysis in upcoming articles, let's first try to understand, on a gut level, why we analyze the way we do.

"It is a capital mistake to theorize before one has data." - Sherlock Holmes


Think back to our car analogy.  The speed of a car can be measured in miles per hour.  "Miles" refers to a distance and "hour" refers to a unit of time.  When we're judging our algorithm, we're not really talking about distance; our code is not going to sprout legs and run away from us.  In most cases, our algorithms are processing data.  It takes a car longer to travel across the country than it would to travel across a town; likewise, the more data an algorithm has to process, the more time the algorithm will need to process it.

It's time for a fun thought experiment.  Let's pretend that we've written a function.  Let's now time that function as we run it with larger and larger data-sets.  We then plot those times on a graph, with the x-axis representing the amount of data that the function has to churn through and the y-axis representing the amount of time it takes for the function to complete its defined tasks.  That graph might look something like this (the red line represents our function):


"We demand rigidly defined areas of doubt and uncertainty!" - Douglas Adams


If you wanted to buy a car, and you wanted to judge how fast the car could go - would you feel more confident about your judgement if you took it for a test drive around the block or around the city?  Personally, the more road I get to drive on, the better I would feel about my judgement.  Similarly, when judging the speed of an algorithm we really only care about how it handles a large amount of data.  That helps us be more sure about what we're thinking.  In the graph above, we really only care about how our function performs after point $k$.

Back to the car analogy: No matter how much we test drive our car, no matter how confident we feel about our judgement, we are really only making a guess and hoping that we're in the ballpark of what the actual speed might be.  When it comes to judging algorithms, it's the same concept.  We are not actually measuring the speed of the algorithm - we're judging it based on where we think the ballpark is going to be.  The two dotted blue lines in the above graph represent the ballpark.  The top blue line is the slowest the function could be and the bottom blue line is the fastest the function could be.

In this fun thought experiment, we pretended to measure the time it takes for our function to perform given larger and larger data-sets.  In the real world, all we really have to make our initial judgement are the blue lines.  In the upcoming articles, we'll talk about how to determine where those lines might be using time complexity analysis.

"The optimist proclaims that we live in the best of all possible worlds; and the pessimist fears this is true." - James Branch Cabell


The most common type of time complexity analysis is called Big-O analysis.  When we talk about Big-O analysis, we're really just trying to reason about one of the blue lines in that graph above.  Specifically, the top blue line: the one that represents the slowest we think our algorithm can perform.

You might be asking yourself, "Why do folks only care about the slowest my algorithm will perform?" To answer that, let's reason by analogy: Person A and Person B both have the exact same commute time of 30 minutes and are both required to arrive at work by 9:00 AM.  But there is one big difference:

  • Person A leaves their home at 8:30 AM so they can arrive at work by 9:00 AM exactly.
  • Person B leaves their home at 7:30 AM so they can arrive at work an hour early, by 8:00 AM.

Both people will usually arrive on time, but over the course of a few years, Person A will be late more often than Person B.  Person B will then be seen as more reliable.  Why?  Because Person B took into account all the bad things that could slow him down in the morning.  In the same way, when we do a Big-O analysis, we need to take into account all the bad things that can happen to our algorithms that could potentially slow them down.

"From a certain point onward, there is no turning back. That is the point that must be reached." - Franz Kafka


Let's summarize what we've talked about:

  • Time Complexity analysis helps us judge how fast an algorithm will perform.  We are not actually measuring the speed of the algorithm directly.
  • When analyzing our algorithms in this way, we only care about how the speed of our algorithm changes as the data-sets become very large.
  • When performing Big-O analysis, we need to take into account all the bad things that can happen to slow our algorithm down.

In the next article, we're going to discuss how the speed of our algorithm scales as we feed it more and more data.  We'll also discuss how we can describe the types of scaling that occur using Big-O notation!