Whenever we write a piece of code or an algorithm to solve a problem, we need to understand how much time the algorithm will take to solve it. This is important information, because based on the amount of input data we have to decide whether the given solution is sufficient or needs further improvement. Time complexity is most often expressed in big O notation, read as "order of".

In simple words, the complexity of a piece of code can be found by checking how many times it gets executed and how that count depends on the input data.

1. The lowest (and hence best) time complexity that can be achieved is constant time, or order of 1, written O(1). This means the time taken by the algorithm is always constant, irrespective of the input data.

e.g. Find the sum of the first N consecutive numbers.

No matter what the value of N is, we just need to compute N*(N+1)/2.

OR print a string str 100 times.

for (i = 1 to 100) { print str; }

In both cases the code takes a constant amount of time, independent of the input.
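The formula case can be sketched in Python (a minimal illustration; the function name is my own):

```python
def sum_first_n(n):
    # Closed-form formula: one multiplication and one division,
    # no matter how large n is -> O(1).
    return n * (n + 1) // 2

print(sum_first_n(100))  # 5050
```

Whether n is 100 or 100 million, the function performs the same handful of operations.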

2. The second best complexity a piece of code can achieve is order of log n, or O(log n). If you are familiar with logarithms, you already know what I am talking about. If you are not comfortable with logs, read on. When we say log n, we assume base 2 by default (if not specified otherwise).

So if I say

log (base 2) n = x, this can also be thought of as: n = 2^x.

For further simplicity, all we are trying to say here is that with every iteration of the code, the dataset from the previous step gets reduced to half (½).
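To see why repeated halving gives log n steps, here is a small sketch (the function name is my own) that counts how many times n can be halved before reaching 1:

```python
def halving_steps(n):
    # Count how many times n can be halved before it reaches 1.
    # The answer is floor(log2(n)), which is why an algorithm that
    # halves its dataset each iteration runs in O(log n) time.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(16))    # 4, since 2^4 = 16
print(halving_steps(1024))  # 10, since 2^10 = 1024
```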

A classic example of this is binary search. We have a sorted array and we are trying to find a number in it.

Array arr = {1, 3, 6, 8, 11, 13, 17, 22, 28}

Search whether x = 3 is present or not.

Algo-

1. Take two variables, start and end, pointing at the first (1) and last (28) elements of the array.

2. Find the mid of the current range (initially position 5, number 11).

3. Check if it is the desired number x; if yes, stop.

4. If the number at the mid position is > x, set end to mid - 1; else set start to mid + 1.

5. Repeat steps 2 to 4 until the number is found or start crosses end (meaning the number is not present).

Notice that every time we execute step 4, we reduce the dataset from the previous step to half.
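The steps above can be sketched in Python as follows (returning the index of the element, or -1 if it is absent):

```python
def binary_search(arr, x):
    # arr must be sorted. Each loop iteration halves the search
    # range, so the loop runs at most O(log n) times.
    start, end = 0, len(arr) - 1
    while start <= end:
        mid = (start + end) // 2
        if arr[mid] == x:
            return mid          # found: return the position
        elif arr[mid] > x:
            end = mid - 1       # discard the right half
        else:
            start = mid + 1     # discard the left half
    return -1                   # start crossed end: not present

arr = [1, 3, 6, 8, 11, 13, 17, 22, 28]
print(binary_search(arr, 3))   # 1 (index of the value 3)
print(binary_search(arr, 5))   # -1 (not in the array)
```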

3. Complexity of order N, O(N), is directly proportional to the number of items in the input dataset. Note that when we say O(N), the actual step count can be 1*N, 2*N, 100*N, or k*N, where k is any constant that does not change with N.

A very simple example here is searching for a number in an unsorted array.

Array arr = {4, 3, 88, 15, 67, 13, 56}

Find x = 15.

We just need to loop through the whole array once (in the worst case), from position 0 to N-1, checking whether the number is at each position.
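In Python this linear scan looks like (the function name is my own):

```python
def linear_search(arr, x):
    # Check every position in turn; in the worst case we look at
    # all n elements, so this is O(n).
    for i, value in enumerate(arr):
        if value == x:
            return i
    return -1

arr = [4, 3, 88, 15, 67, 13, 56]
print(linear_search(arr, 15))  # 3
```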

4. O(n log n). If you look carefully, we have just combined the two complexities above, log n and n. So we are talking about a step that costs log n and gets executed n times.

A very simple example is tree sort, where we build a binary search tree from a given array. We know insertion into a (reasonably balanced) binary search tree takes log n time. Now instead of adding just one element, we add all n elements of the array, so we are talking about n * log n complexity.
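A minimal sketch of tree sort, assuming the insertions stay roughly balanced (note that on already-sorted input an unbalanced BST degenerates and the worst case becomes O(n^2)):

```python
def tree_sort(arr):
    # Build a binary search tree by repeated insertion, then read it
    # back with an in-order traversal. Each insertion costs O(log n)
    # on a reasonably balanced tree, and we perform n of them,
    # giving O(n log n) overall.
    def insert(node, value):
        if node is None:
            return [value, None, None]   # node = [value, left, right]
        if value < node[0]:
            node[1] = insert(node[1], value)
        else:
            node[2] = insert(node[2], value)
        return node

    def inorder(node, out):
        if node is not None:
            inorder(node[1], out)
            out.append(node[0])
            inorder(node[2], out)

    root = None
    for v in arr:
        root = insert(root, v)
    result = []
    inorder(root, result)
    return result

print(tree_sort([4, 3, 88, 15, 67, 13, 56]))  # [3, 4, 13, 15, 56, 67, 88]
```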

5. O(n^2). If you have n elements as input and you have to loop over all of them twice (one loop nested inside the other), you are using n*n steps. Examples are bubble sort and selection sort.

Array arr = {4, 3, 88, 15, 67, 13, 56}

Task: sort the array.

for (i = 0 to N-1) {
    for (j = i+1 to N-1) {
        if (arr[i] is greater than arr[j])
            swap(arr[i], arr[j]);
    }
}
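The nested loops above can be written out in Python (this compare-and-swap variant is sometimes called exchange sort, a simple cousin of selection sort):

```python
def exchange_sort(arr):
    # For each i, compare arr[i] with every later element and swap
    # whenever they are out of order. The nested loops perform about
    # n*(n-1)/2 comparisons, hence O(n^2).
    n = len(arr)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if arr[i] > arr[j]:
                arr[i], arr[j] = arr[j], arr[i]
    return arr

print(exchange_sort([4, 3, 88, 15, 67, 13, 56]))  # [3, 4, 13, 15, 56, 67, 88]
```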