What is Time Complexity of Algorithms?

Introduction

Before we look at the time complexity of algorithms, we should be familiar with complexity in general, or more specifically, computational complexity. If you already know what complexity is, you can skip ahead to the Time Complexity section.

Computational Complexity

Computational complexity is a branch of computer science that examines algorithms based on the resources needed to execute them. Because the amount of resources required varies with the size of the input, complexity is usually stated as a function of n, where n is the input size.

Due to the classic Halting problem, it is hard to determine the runtime complexity of a program analytically (without running it). In fact, it is typically impossible even to decide whether an arbitrary program halts at all, let alone how efficiently it does so.

However, the time complexity of a program can be estimated empirically. In fact, I would suggest that doing so is a highly beneficial learning exercise that will help develop your intuition.

Types of Computational Complexity

There are two main types of complexity: time complexity and space complexity. When examining an algorithm, it is essential to consider both.

Time Complexity

The time complexity of an algorithm is calculated by counting the number of basic operations it executes, assuming that each elementary operation takes a constant amount of time. Time complexity depends on the input size.
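As a rough illustration of this counting (a sketch of mine, not from the original article), consider summing a list in Python:

def sum_list(values):
    total = 0           # 1 assignment
    for v in values:    # the body runs n times
        total += v      # 1 addition + 1 assignment per iteration
    return total        # 1 return

# Roughly 2n + 2 elementary operations in total, so the running time is O(n).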

Space Complexity

Space complexity is the amount of memory needed to solve a problem relative to the size of the input. Although space complexity also matters when examining an algorithm, this article focuses mainly on time complexity.
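As a minimal sketch (the function names here are illustrative, not from the original), compare O(n) extra space with O(1) extra space:

def duplicate(values):
    # Builds a new list of n elements: O(n) extra space.
    return [v for v in values]

def max_value(values):
    # Keeps only one running variable: O(1) extra space.
    best = values[0]
    for v in values:
        if v > best:
            best = v
    return best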

Time Complexity

If you have read this far, you already have a working idea of time complexity, since we touched on it at the beginning of the article. Still, let's approach it from a different angle, starting with a short description from Wikipedia.

In computer science, the time complexity is the computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform.

(Source: Wikipedia)

Time complexity is directly proportional to the total number of key (fundamental) operations, since it is these fundamental operations that consume processor cycles.

Let's look at the time complexity of a for loop under different circumstances.

# Here n is a constant
for i in range(n):
    pass  # some O(1) expressions

The total work is O(1) + O(1) + ... repeated n times. Since n is a constant here, the loop takes O(1), i.e., constant time.
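By contrast, if n is the size of the input rather than a fixed constant, the same loop becomes linear. A minimal sketch (the function name is illustrative):

def print_items(items):
    # The O(1) body runs once per element, so for an input of
    # size n the total work is n * O(1) = O(n).
    for item in items:
        print(item)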

Types of Time Complexity

Suppose we have the unordered list [1, 4, 2, 8, 2, 3, 5, 7, 9] and use linear search to find the index of a value (a short linear-search sketch follows the list below). Now, let's walk through the three cases:

  • Best case: the difficulty of solving the problem for the optimal input. In this example, the optimal search is for the number 1: since it is the first item in the list, it is found on the first iteration of the loop.
  • Average case: the typical difficulty of solving the problem, determined relative to the distribution of input values. This may not be the finest example, but with our data we can say a representative average scenario is searching for a value in the "middle" of the list, such as 2.
  • Worst case: the difficulty of solving the problem for the worst possible input of size n. In our example, the worst case is searching for the number 9, the final item in the list.
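Here is a minimal linear-search sketch in Python (the function name is mine, not from the original) that makes the three cases concrete:

def linear_search(values, target):
    # Return the index of the first match, or -1 if target is absent.
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

data = [1, 4, 2, 8, 2, 3, 5, 7, 9]
print(linear_search(data, 1))  # best case: index 0, found after 1 comparison
print(linear_search(data, 2))  # average-ish case: returns index 2 (first match)
print(linear_search(data, 9))  # worst case: index 8, found after n comparisons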

Asymptotic Notations of Time Complexity

Asymptotic notation is a mathematical notation that describes the limiting behavior of a function as its argument tends towards a value or infinity.

[Figure: asymptotic notations of time complexity]
  • Big O (oh) notation: it represents the worst-case behavior of the algorithm and gives an upper bound on the running time. It is commonly used to compare the execution times of two or more algorithms. Symbolic representation: f(n) = O(g(n)), which means f(n) <= c * g(n) for some constant c > 0 and all sufficiently large n; here, g(n) grows at least as fast as f(n).

For instance, 2^n = O(3^n) is true because 3^n grows faster than 2^n (a quick numeric check of this claim follows the list below).

  • Big Omega notation: it covers the best-case behavior of the algorithm and captures a lower bound on the running time. Symbolic representation: f(n) = Ω(g(n)), which means f(n) >= c * g(n) for some constant c > 0 and all sufficiently large n; here, g(n) grows no faster than f(n).
  • Big Theta notation: it gives a tight bound, i.e., both an upper bound and a lower bound on the running time. Symbolic representation: f(n) = Θ(g(n)), which holds when f(n) is bounded above and below by constant multiples of g(n); here, f(n) and g(n) grow asymptotically at the same rate.
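As a quick sanity check (a numeric sketch of mine, not a proof), we can test the claim above that 2^n = O(3^n) against the definition f(n) <= c * g(n) with c = 1:

# Check 2**n <= 1 * 3**n for a range of n; a proof would argue for all n >= 1.
for n in range(1, 21):
    assert 2**n <= 3**n
print("2**n <= 3**n holds for n = 1..20")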

Common time complexities:

  • Constant time: O(1)
  • Logarithmic time: O(log n)
  • Linear time: O(n)
  • Linearithmic time: O(n log n)
  • Quadratic time: O(n^2)
  • Cubic time: O(n^3)
  • Polynomial time: O(n^k)
  • Exponential time: O(c^n)
  • Factorial time: O(n!)
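To make a few of these classes concrete, here is a short sketch (the functions are illustrative examples of mine, not from the original) showing code shapes that typically produce them:

def constant(values):
    # O(1): looks at a single element regardless of input size.
    return values[0]

def logarithmic(n):
    # O(log n): halves n on each step, so roughly log2(n) iterations.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def linear(values):
    # O(n): touches each element exactly once.
    return sum(values)

def quadratic(values):
    # O(n^2): considers every ordered pair of elements.
    return [(a, b) for a in values for b in values]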

Summary

To summarize, if you want to measure the performance of an algorithm or piece of code, you can measure it in two main ways: how much time the algorithm takes (time complexity) and how much memory it uses (space complexity).

Thank you for reading this article.



Author: Ayush Purawr