Running Time Analysis of Algorithms – Introduction

Introduction

When we enter the world of programming and algorithm creation, one of the biggest concerns we have is the time it takes for an algorithm to execute.

The first thing that usually comes to mind is how long it takes to execute in milliseconds, seconds, or minutes. Although this is valid, that measurement only tells us how long the algorithm takes to execute in one specific context.

For example, if an algorithm processes 1000 employees, it may take 5 seconds on your laptop, but if you run it on a tablet, it may take up to 100 seconds, and if you run it on another PC, it may take 15 seconds due to its characteristics. The time can vary, even due to external factors such as the network or the database.

Therefore, in order to determine whether an algorithm is efficient, we need an abstract measure of its execution time. In this case, we use Big O notation to analyze the time complexity of an algorithm.

What is the basis of this abstract measure (Big O)?

The time complexity of an algorithm, as mentioned before, is based on an abstract measure of time and is not necessarily related to specific units like seconds or milliseconds. Instead, it focuses on the relationship between the size of the input data and how the algorithm executes.

The time complexity is generally expressed in terms of Big O notation, which provides an asymptotic description of the growth of the execution time as the size of the input data approaches infinity. For example, a time complexity of O(n^2) indicates that the algorithm’s execution time increases quadratically with the size of the input data.
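As a rough illustration (a hypothetical Python function, not taken from the article), an O(n^2) algorithm typically looks like two nested loops over the input, so doubling the input roughly quadruples the work:

def has_duplicate(items):
    # For n items, the nested loops perform roughly n * n comparisons,
    # so the running time grows quadratically with the input size: O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False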

Therefore, this measure is used as a relative guide to compare algorithms and evaluate their efficiency, rather than as an accurate measure of time in specific units.

Explaining the Running Time Analysis

So, first of all, there are two crucial parameters of an algorithm: memory complexity and running time complexity.

Memory complexity refers to the amount of memory required by a given algorithm to run. Since memory is relatively inexpensive, the focus is usually more on the running time complexity rather than the memory complexity.

Running time complexity, on the other hand, determines how much time an algorithm needs to execute. And here’s a spoiler alert! We will reach the conclusion that there is a tradeoff between memory and running time. In other words, if we can allocate more memory, we can significantly reduce the running time.
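As a minimal sketch of that tradeoff (a hypothetical Python example, not from the article), caching previously computed results spends extra memory to avoid repeating work:

from functools import lru_cache

@lru_cache(maxsize=None)   # extra memory: every computed result is kept in a cache
def fib(n):
    # Without the cache, this naive recursion recomputes the same values
    # over and over; with it, each value is computed once, so we trade
    # memory for a much shorter running time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)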

As mentioned earlier, when analyzing the time complexity, we don’t measure the actual time in seconds or milliseconds. Instead, we focus on the number of steps the algorithm takes relative to the size of the input data. As the input data increases in size, the algorithm will require more steps to complete its execution.
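As a minimal sketch (hypothetical names, assuming Python), we can count steps instead of measuring seconds:

def sum_items(items):
    steps = 0
    total = 0
    for item in items:
        total += item
        steps += 1          # one step per item
    # steps == len(items): the step count depends only on the input size,
    # not on the machine the code runs on.
    return total, steps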

Why Is Running Time Analysis a Better Approach?

Running time analysis is a better approach because this measure is more generic, flexible and, above all, machine independent. Measuring the exact time is not a good approach because the time an algorithm takes to execute depends on the computer where it runs, besides external factors like the network, the database, etc.

To better understand running time analysis, consider the following scenario:

We have an input of 10 items that runs in 100 ms, and 100 items that run in 1000 ms.

10 items -> 100 ms
100 items -> 1000 ms
GOOD

In this scenario, the algorithm processes 10 items in 100 ms, and 100 items in 1000 ms. The input grew by a factor of 10, and so did the running time, which indicates that the algorithm scales linearly with the input size, N. This is considered favorable: the running time grows in proportion to the input size.
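A hypothetical linear-time function (again, just a sketch, not from the article) behaves like this first scenario: ten times more input means roughly ten times more work.

def linear_search(items, target):
    # One pass over the input: n items -> at most n comparisons, i.e. O(n).
    # 10 items take at most ~10 steps, 100 items take at most ~100 steps.
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1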

Now, let’s consider the next scenario:

We have an input of 10 items that runs in 100 ms, and 100 items that run in 100,000 ms.

10 items -> 100 ms
100 items -> 100000 ms
BAD

In this scenario, the algorithm processes 10 items in 100 ms, but 100 items in 100,000 ms. The input grew by a factor of 10, while the running time grew by a factor of 1,000, so, compared with the previous scenario, the algorithm scales much worse than linearly with the input size, N. This is considered unfavorable: when the running time grows quadratically (or worse) with the input size, the algorithm can freeze our application.
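As a hedged sketch of this kind of behavior (hypothetical Python, not from the article), two nested loops make the step count grow with the square of the input size, far faster than the input itself:

def quadratic_steps(items):
    steps = 0
    for a in items:
        for b in items:
            steps += 1      # one step per pair of items
    return steps

# quadratic_steps(range(10))  returns 100 steps
# quadratic_steps(range(100)) returns 10,000 steps: 10x more input, 100x more steps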

Remember, we prefer predictable algorithms whose running time grows approximately linearly with the input size.

In the case of YouTube, which finds a video and returns results to the user in milliseconds, the implementation of that search algorithm works extremely fast.

Remember, if an algorithm doesn't scale well, it means that it is slow.

Conclusion

In conclusion, measuring the exact running time of an algorithm is not a good approach. Instead, we should measure the number of steps the algorithm requires with respect to the input size; this is more generic and machine independent.
