Algorithm efficiency

Learning objectives:
  • to understand the time-efficiency of algorithms
  • to understand the space-efficiency of algorithms

Algorithm efficiency

Algorithm complexity is a formal way to measure how fast a program or algorithm runs.

Let's consider two algorithms, which both calculate the sum of the first n integers.

Which algorithm is more efficient? Why?

Algorithm 1:

Sum = 0
For i from 0 to n
    Sum = Sum + i
Output Sum

Algorithm 1 performs n operations (one addition per loop iteration), so its time complexity is proportional to n.

Algorithm 2:

Sum = n * (n + 1) / 2
Output Sum

Algorithm 2 performs 1 operation regardless of n, so its time complexity is a constant.
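The two algorithms above can be sketched in Python (function names are illustrative) to see that both produce the same answer while doing very different amounts of work:

```python
def sum_loop(n):
    """Algorithm 1: O(n) - one addition per integer from 0 to n."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_formula(n):
    """Algorithm 2: O(1) - a single arithmetic expression."""
    return n * (n + 1) // 2

print(sum_loop(100))     # 5050
print(sum_formula(100))  # 5050
```

For n = 1,000,000 the loop version performs a million additions, while the formula version still performs just one calculation.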



Time efficiency of an algorithm

Algorithms may be compared on how much time they need to solve a particular problem. 
This is referred to as the time complexity of the algorithm. 

The goal is to design algorithms which will run quickly.

In order to compare the efficiency of different algorithms in terms of execution time, we need to quantify the number of basic operations or steps that the algorithm will need, in terms of the number of items to be processed.

Time efficiency could be measured as the number of seconds an algorithm takes to complete, but that depends on the hardware and language used. Instead, the time taken is measured as the number of steps in the algorithm.


Big O

Big-O notation is used to measure how well a computer algorithm scales as the amount of data involved increases.

Big-O establishes a worst-case run time

Another mathematical concept used in Big-O analysis is the permutation. A permutation is an arrangement of all the members of a set into an order, where the order of the arrangement matters. The number of arrangements is described by the factorial function: n members can be put in order in n! different ways.

O(1) (Constant time)

O(1) describes an algorithm that takes constant time (the same amount of time) to execute regardless of the size of the input data set.

Suppose array a has n items. The statement
length  <-- len(a)
will take the same amount of time to execute however many items are held in the array.
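In Python this can be illustrated with the built-in len function, which is O(1) on a list because the length is stored alongside the list rather than counted element by element (a minimal sketch; the variable names are illustrative):

```python
# len() on a Python list is O(1): it reads a stored size field,
# so no elements are inspected regardless of how many there are.
a_small = [1, 2, 3]
a_large = list(range(1_000_000))

print(len(a_small))  # 3
print(len(a_large))  # 1000000
```

Both calls take essentially the same time even though the second list is far larger.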

O(n) (Linear time)

O(n) describes an algorithm whose performance will grow in linear time, in direct proportion to the size of the data set. For example, a linear search of an array of 1000 unsorted items will take 1000 times longer than searching an array of 1 item.

input key
for i=1 to n
    if a[i]=key then break

How many comparisons are needed in the worst case? (n - when the key is in the last position, or not in the array at all.)
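The pseudocode above can be sketched in Python with a comparison counter to make the worst case visible (names are illustrative):

```python
def linear_search(a, key):
    """Return (index, comparisons), or (-1, comparisons) if key is absent.

    Worst case: n comparisons, when the key is last or not present.
    """
    comparisons = 0
    for i, item in enumerate(a):
        comparisons += 1
        if item == key:
            return i, comparisons
    return -1, comparisons

# Worst case: the key is not in the array, so all n items are checked.
index, comps = linear_search([4, 8, 15, 16, 23, 42], 99)
print(index, comps)  # -1 6
```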

O(n²) (Polynomial time)

O(n²) describes an algorithm whose performance is directly proportional to the square of the size of the data set. A program with two nested loops, each performed n times, will typically have time complexity of order O(n²). The running time of the algorithm grows in polynomial time.
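A typical nested-loop example is comparing every pair of items in a list, sketched here in Python (the function name is illustrative):

```python
def count_duplicate_pairs(a):
    """Compare every pair of items: two nested loops over n items -> O(n^2).

    Returns (number of equal pairs, number of comparisons made).
    """
    n = len(a)
    comparisons = 0
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if a[i] == a[j]:
                pairs += 1
    return pairs, comparisons

# 4 items -> n*(n-1)/2 = 6 comparisons; one duplicate pair (the two 1s).
print(count_duplicate_pairs([1, 2, 1, 3]))  # (1, 6)
```

Doubling the list length roughly quadruples the number of comparisons, which is the signature of O(n²) growth.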

O(log n) (Logarithmic time)

The time taken to execute an algorithm of order O(log n) (logarithmic time) will grow very slowly as the size of the data set increases.  Doubling the size of the data set has very little effect on the time the algorithm takes to complete. (Note that in Big-O notation the base of the logarithm, 2 in this case, is not specified because it is irrelevant to the time complexity, being a constant factor.)
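The classic O(log n) example is binary search on a sorted list, which halves the remaining search range at every step (a minimal sketch; names are illustrative):

```python
def binary_search(a, key):
    """Binary search on a sorted list: halves the range each step -> O(log n).

    Returns (index, steps), or (-1, steps) if key is absent.
    """
    low, high = 0, len(a) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if a[mid] == key:
            return mid, steps
        elif a[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

data = list(range(1024))  # 1024 sorted items
print(binary_search(data, 1023))  # found in at most log2(1024) + 1 = 11 steps
```

Doubling the size of data adds only one more step to the worst case, which is why the curve flattens out so quickly.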

O(n!) (Factorial time)

The time taken to execute an algorithm of order O(n!) will grow very quickly - faster than O(2ⁿ). Suppose that the problem is to find all the permutations of n letters. If n = 2, there are 2 permutations to find. If n = 6, there are 720 permutations - far more than 2⁶, which is only 64.
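The comparison in the text can be checked directly in Python using the standard library:

```python
import math
from itertools import permutations

# n! overtakes 2**n almost immediately:
for n in (2, 6, 10):
    print(n, math.factorial(n), 2 ** n)
# 2 2 4
# 6 720 64
# 10 3628800 1024

# Generating all permutations of 6 letters really does yield 720 arrangements:
perms = list(permutations("abcdef"))
print(len(perms))  # 720
```

This is why brute-force algorithms that enumerate permutations become impractical even for modest values of n.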


Space efficiency of an algorithm

Space efficiency of an algorithm is the use of the minimum amount of resources, such as memory.

Space used - the amount of data that the algorithm needs to keep track of at once.

First example: "Swapping data"

Algorithm 1 - uses three variables, so memory reserves space for holding three values:

A = 10
B = 4
C = A
A = B
B = C

Algorithm 2 - uses two variables, so memory reserves space for holding two values:

A = 10
B = 4
A = A + B
B = A - B
A = A - B
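Both swap techniques translate directly into Python:

```python
# Swap with a temporary variable: three values held in memory.
a, b = 10, 4
c = a
a = b
b = c
print(a, b)  # 4 10

# Arithmetic swap: only two variables, at the cost of extra additions.
a, b = 10, 4
a = a + b
b = a - b
a = a - b
print(a, b)  # 4 10
```

The arithmetic version saves one variable's worth of memory but performs three arithmetic operations instead of three plain assignments - a small example of trading time for space.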

Second example: "Look through the items in the list"

Version 1:

FOR i FROM 1 TO LEN(list)
    ... do something

Version 2:

list_length = LEN(list)
FOR i FROM 1 TO list_length
    ... do something

In the first version fewer variables are used, which means less memory is used. In the second version the LEN function is called only once instead of on every pass through the loop, so it runs faster. It is often the case that using extra memory improves time efficiency.
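In Python the difference is clearest with a while loop, where the loop condition really is re-evaluated on every pass (a minimal sketch; names are illustrative):

```python
data = list(range(5))

# Version 1: len(data) is re-evaluated on every loop test.
i = 0
while i < len(data):
    i += 1

# Version 2: one extra variable, but len() is called exactly once.
list_length = len(data)
i = 0
while i < list_length:
    i += 1

print(i, list_length)  # 5 5
```

(Note that a Python `for i in range(len(data))` loop evaluates `len` only once anyway, so this trade-off matters most in languages or loop forms where the condition is recomputed each iteration.)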

Choosing a data type

When we declare a variable, we define a data type for it.

Which data type is appropriate for storing the code of an ASCII symbol?

Why would other data types be inefficient to use?
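An ASCII code is an integer in the range 0-127, so a single byte is enough to store it. A quick Python sketch using the standard struct module shows the difference in storage:

```python
import struct

code = ord('A')    # the ASCII code of 'A'
print(code)        # 65
print(chr(code))   # A

# Packed as an unsigned 1-byte integer: 1 byte is sufficient.
print(len(struct.pack('B', code)))          # 1

# Packed as a double-precision float: 8 bytes for the same information.
print(len(struct.pack('d', float(code))))   # 8
```

Using an 8-byte type where 1 byte suffices wastes memory, which is exactly the kind of inefficiency the question is asking about.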



  1. Give a definition of the term "algorithm complexity".
  2. Explain the main idea of the time efficiency of an algorithm.
  3. Explain what the space efficiency of an algorithm is.
  4. What does Big-O notation describe?


Exam questions:

Question. A hacker trying to discover a password starts by checking a dictionary containing 170,000 words. What is the maximum number of words he will need to try out? (Marks: )
  • Answer. The first method cannot use a binary search as the hacker does not know what word he is looking for. His algorithm has to do a linear search of 170,000 items to check the lower case letters.
Question. This procedure fails to find the password. He now needs to try random combinations of the letters in the password. He starts with 6-letter combinations of a-z, A-Z. Explain why the second procedure will take so much longer than the first. (Marks: )
  • Answer. Using the second method, each letter could be one of 26 + 26 = 52 different characters. The hacker has to check 52⁶ combinations, which is vastly more. Mixing uppercase and lowercase letters and including numbers and special symbols makes a password much more difficult to hack.
Category: Programming languages | Added by: bzfar77 (16.02.2021)