This chapter will introduce you to the concept of time complexity and the efficiency of computer algorithms.

In this chapter, we will discuss which complexity measures matter for an algorithm and the techniques used to compute them.

Note: If you have already completed our course Data Structures in Python: An Interview Refresher, you may skip this chapter, since that course also includes a chapter on algorithmic complexity measures.

Introduction

‘Complexity’ is an approximate measure of the efficiency of an algorithm. There are two kinds of complexity: time and space. Time complexity and space complexity are approximations of how much time and how much space, respectively, an algorithm will take to process a given input. Several variables can affect an algorithm’s performance: whether the machine has a single processor or multiple processors, its memory read and write speed, its architecture (32-bit or 64-bit), its disk usage, and the input itself. Apart from the input, all of these vary from setup to setup and cannot be controlled. So instead, we measure the rate of growth of time (or space) with respect to the growth of the input. This measure ignores the setup-specific parameters and says something robust about an algorithm’s efficiency that holds across setups. In an interview, you may not be asked to calculate the complexity of a given algorithm directly, but you might be asked to state the complexity of an algorithm you wrote or to improve upon the complexity of an algorithm given to you.

Time Complexity

Time complexity is an algorithm’s rate of growth of time taken with respect to the input. Simply put, the time complexity is an approximation of how much time it takes for a particular algorithm to execute as the input data grows in size.

How do you calculate time complexity?

Let’s assume our algorithm is running on a machine that

  • Has a single processor
  • Has 32-bit architecture
  • Executes code sequentially, and therefore does no parallel processing
  • Takes one ‘unit’ or ‘constant’ amount of time for basic operations.

In other words, imagine that we have a completely standardized machine. Basic operations include assignment, comparison, and arithmetic operations. Here is a list of examples of basic operations.

1. x += 1
2. variable = 3
3. if variable == 10:
4. variable = variable + 7
5. print("I'm basic too!")
6. x = variable
7. return x

The running time complexity is calculated by considering two things: the basic operations the algorithm performs and the size of the input data. Here is how to compute the time complexity of an algorithm:

  1. List down all the basic operations in the code
  2. Count the number of times each gets executed
  3. Sum all the counts to get an equation

A Simple Example

Let’s calculate the time complexity of the following Python algorithm.

def add_one(x):
    x += 1
    return x

add_one(5)
Operation | Number of executions
--------- | --------------------
x += 1    | 1
return x  | 1

Time complexity = 1 + 1 = 2

So, an algorithm like the above would take 2 units of time regardless of the size of the input. Algorithms such as these that don’t depend on the input size are called constant time algorithms.
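For contrast, the same three-step counting method can be applied to a function whose operation count does depend on the input size. The function below is a hypothetical example, not taken from the lesson; counting its basic operations for a list of n items gives roughly 1 + n + 1 = n + 2 units, so its running time grows with the input rather than staying constant:

```python
def add_all(values):
    total = 0            # 1 assignment
    for v in values:     # the loop body runs once per item: n times
        total += v       # 1 addition per iteration -> n units in total
    return total         # 1 return

add_all([1, 2, 3])       # about 3 + 2 = 5 units of time for 3 items
```

Doubling the length of the list roughly doubles the number of operations, which is exactly the kind of growth the later lessons will classify.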


In the next few lessons, we will take a look at a series of examples of measuring the time complexity of an algorithm.
