Complexity Theory

This theory aims to analyze and classify problems according to their degree of difficulty and find theoretical limits on the efficiency of algorithms.

In Complexity Theory, the “Big O notation” is used to describe the complexity of algorithms in terms of time and space. For example, an algorithm with a complexity of O(n) indicates that its execution time increases proportionally to the size of the input “n”, while an algorithm with a complexity of O(n²) indicates that its execution time grows quadratically with respect to the size of the input.
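To make the contrast concrete, here is a minimal Python sketch (the functions are illustrative and not from the original article): the first touches each element once, so its work grows linearly with the input; the second examines every pair of elements, so its work grows quadratically.

```python
def sum_items(items):
    # O(n): a single pass over the input, so the work grows
    # linearly with len(items).
    total = 0
    for x in items:
        total += x
    return total


def count_equal_pairs(items):
    # O(n^2): the nested loops examine every pair of elements,
    # so doubling the input roughly quadruples the comparisons.
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                count += 1
    return count
```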

Let’s consider sorting algorithm #1, with the following execution results:

10 items → 1 ms
20 items → 2 ms
100 items → 10 ms

This algorithm has a linear running time, O(n).

In this algorithm, we observe that the running time increases in a linear manner: as the size of the input grows, the running time grows at the same rate. In other words, the running time scales linearly with the input size.

For example, the algorithm may take 4 ms on a smartphone but only 1 ms on a supercomputer. The exact values don’t matter, because the absolute running time depends on the underlying machine. What matters is how the running time scales with the input size: 10 items are processed in 1 ms, 20 items in 2 ms, and 100 items in 10 ms. Ten times the input takes ten times as long, which is exactly the linear O(n) pattern.

And this is why we don’t care about the exact times, but rather about the linear growth of the algorithm’s running time.
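The article doesn’t say which algorithm #1 is, and comparison-based sorts cannot be O(n) in general (they have an Ω(n log n) lower bound). Counting sort, however, runs in O(n + k) for integer keys in a fixed range 0..k, which scales linearly in n when k is fixed. A minimal sketch under that assumption:

```python
def counting_sort(items, max_value):
    # Counting sort: O(n + k) with k = max_value. For a fixed key
    # range this behaves linearly in n, matching the scaling above.
    counts = [0] * (max_value + 1)
    for x in items:  # one pass to tally each value: O(n)
        counts[x] += 1
    result = []
    for value, c in enumerate(counts):  # one pass over the key range: O(k)
        result.extend([value] * c)
    return result


print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_value=9))
# [1, 1, 2, 3, 4, 5, 6, 9]
```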

Remember, we aim for linear running time algorithms!!!

Now, let’s consider the next sorting algorithm, #2, with the following execution results:

10 items → 1 ms
20 items → 4 ms
100 items → 100 ms

This algorithm has a quadratic running time, O(n²).

In algorithm #2, 10 items take 1 ms, but sorting 20 items takes 4 ms instead of the ideal 2 ms: double the input size costs four times the running time. Sorting 100 items takes 100 ms, so ten times the input costs a hundred times the time.

Based on this, we can conclude that the running time of algorithm #2 scales quadratically with the input size.
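The article doesn’t name algorithm #2 either; bubble sort is a classic example of an O(n²) sort and reproduces the pattern in the results above, so here is a minimal sketch using it:

```python
def bubble_sort(items):
    # Bubble sort: O(n^2). The nested loops perform on the order of
    # n * (n - 1) / 2 comparisons, so doubling the input roughly
    # quadruples the running time.
    items = list(items)  # sort a copy, leave the input untouched
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items


print(bubble_sort([3, 1, 4, 1, 5, 9, 2, 6]))
# [1, 1, 2, 3, 4, 5, 6, 9]
```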

Therefore, we prefer algorithm #1, which has a linear running time, over algorithm #2, which has a quadratic running time.

When we analyze and choose the most efficient solution among different algorithms, we need to perform an Asymptotic Analysis. This analysis helps us determine how algorithms will scale with larger inputs and lets us make predictions about their efficiency. It aids in understanding the inherent complexity of problems and in designing efficient algorithms to solve them.
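One way to perform a rough asymptotic check in practice is to time an algorithm at doubling input sizes and look at the growth ratios rather than the absolute numbers. A minimal, illustrative sketch (slow_sort is a deliberately quadratic toy, and the exact timings will differ per machine, which is precisely why we look at the ratios):

```python
import time


def slow_sort(items):
    # A deliberately quadratic stand-in: repeatedly extract the minimum.
    items = list(items)
    result = []
    while items:
        m = min(items)   # O(n) scan...
        items.remove(m)  # ...inside an O(n) loop -> O(n^2) overall
        result.append(m)
    return result


def measure_ms(fn, n):
    data = list(range(n, 0, -1))  # reverse-ordered input of size n
    start = time.perf_counter()
    fn(data)
    return (time.perf_counter() - start) * 1000


prev = None
for n in (500, 1000, 2000):
    t = measure_ms(slow_sort, n)
    ratio = f" ({t / prev:.1f}x the previous time)" if prev else ""
    print(f"n={n:4d}: {t:8.2f} ms{ratio}")
    prev = t
# A quadratic algorithm shows a ratio near 4x each time n doubles;
# a linear one would show a ratio near 2x.
```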

In conclusion, the analysis of algorithm complexity, represented by the Big O notation, allows us to understand how algorithms perform in terms of time and space. By examining the scalability of algorithms with different input sizes, we can make informed decisions about their efficiency and choose the most suitable solution for a given problem.

Asymptotic analysis plays a crucial role in this process, enabling us to determine how algorithms will behave as input sizes grow and make predictions about their performance.

By understanding the inherent complexity of problems and designing algorithms that efficiently tackle them, we can build software that stays efficient as its inputs grow.

Thank you for reading, and Happy Learning!!!
