What is Recurrence?
A recurrence is an equation or formula that defines a sequence of values recursively, with each term expressed in terms of one or more previous values in the sequence. In computer science, recurrences are often used to describe the time complexity of algorithms.
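For instance, the Fibonacci sequence is defined by the recurrence F(n) = F(n-1) + F(n-2) with base cases F(0) = 0 and F(1) = 1. A direct translation into code (a naive sketch for illustration, not an efficient implementation) looks like this:

```python
def fib(n):
    # Base cases anchor the recurrence.
    if n < 2:
        return n
    # Recursive case: F(n) = F(n-1) + F(n-2),
    # each value defined by the two previous values.
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(8)])  # -> [0, 1, 1, 2, 3, 5, 8, 13]
```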
Master Method for Solving Recurrence
The master method is a technique for solving recurrences of the form T(n) = aT(n/b) + f(n), where T(n) represents the running time of an algorithm, a is the number of subproblems that the algorithm breaks the original problem into, n/b is the size of each subproblem, and f(n) is the time required to divide the problem and combine the solutions. The master method provides a way to determine the time complexity of an algorithm in terms of its recurrence relation.
The master method has three cases based on the values of a, b, and f(n):
- If f(n) = O(n^c) for some constant c < log_b(a) (equivalently, a > b^c), then T(n) = Θ(n^(log_b(a))).
- If f(n) = Θ(n^c) for c = log_b(a) (equivalently, a = b^c), then T(n) = Θ(n^c log n).
- If f(n) = Ω(n^c) for some constant c > log_b(a) (equivalently, a < b^c), and if a*f(n/b) <= k*f(n) for some constant k < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
In the first case, the time required for dividing the problem and combining the solutions is small compared to the time required to solve the subproblems. This means that the total running time of the algorithm is dominated by the leaves of the recursion tree, that is, by solving the subproblems, which costs Θ(n^(log_b(a))). Therefore, the time complexity of the algorithm is Θ(n^(log_b(a))).
In the second case, the time required for dividing the problem and combining the solutions is roughly equal, level by level, to the time required to solve the subproblems: each of the Θ(log n) levels of the recursion tree contributes about Θ(n^c) work. Summed over all levels, the total running time is proportional to n^c log n. Therefore, the time complexity of the algorithm is Θ(n^c log n).
In the third case, the time required for dividing the problem and combining the solutions is large compared to the time required to solve the subproblems. This means that the total running time of the algorithm is dominated by the time required to divide the problem and combine the solutions, which is proportional to f(n). Therefore, the time complexity of the algorithm is Θ(f(n)).
In conclusion, the master method provides a simple and efficient way to determine the time complexity of an algorithm in terms of its recurrence relation. By analyzing the values of a, b, and f(n), we can determine the appropriate case for the recurrence and obtain an asymptotic bound for the time complexity of the algorithm.
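The three-way case analysis above can be sketched as a small classifier for recurrences of the form T(n) = a*T(n/b) + Θ(n^c). This is a hypothetical helper (the name `master_theorem` and its string output are my own); it simply compares c against the critical exponent log_b(a):

```python
import math

def master_theorem(a, b, c):
    """Classify T(n) = a*T(n/b) + Theta(n^c) by comparing c with log_b(a).

    For f(n) = Theta(n^c), the regularity condition of case 3 holds
    automatically, so the comparison alone decides the case.
    """
    crit = math.log(a, b)  # critical exponent log_b(a)
    if math.isclose(c, crit):
        return f"Theta(n^{c:g} log n)"   # case 2: every level costs the same
    elif c < crit:
        return f"Theta(n^{crit:g})"      # case 1: the leaves dominate
    else:
        return f"Theta(n^{c:g})"         # case 3: the root dominates

print(master_theorem(2, 2, 1))  # merge sort: Theta(n^1 log n)
print(master_theorem(8, 2, 2))  # e.g. naive recursive matrix multiply: Theta(n^3)
```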
How to Use the Master Theorem?
To apply the master theorem to an algorithm, we need to identify the values of a, b, and f(n) from its recurrence relation. Once we have these values, we can determine which case of the master theorem applies and compute the time complexity of the algorithm.
Example: Master Theorem for Merge Sort
T(n) = 2*T(n/2) + n
Here, a = 2, b = 2, and f(n) = n. The critical exponent is log_2(2) = 1, so n^(log_b(a)) = n. Since f(n) = Θ(n) matches this exactly, the master theorem's second case applies, and we can conclude that T(n) = Θ(n log n).
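The recurrence comes directly from the structure of Merge Sort. A minimal sketch (the function name `merge_sort` is my own) makes the a = 2, b = 2, f(n) = n correspondence explicit:

```python
def merge_sort(xs):
    # Base case: lists of length <= 1 are already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    # a = 2 recursive calls on subproblems of size n/b = n/2 ...
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # ... plus f(n) = Theta(n) work to merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1]))  # -> [1, 2, 5, 9]
```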
Master Theorem’s Limitations
The master theorem is a powerful tool for analyzing the time complexity of algorithms that follow a specific form of recurrence relation. However, it has some limitations. For instance, the master theorem assumes that all subproblems have equal sizes, which may not be the case in some algorithms. Furthermore, the master theorem only works for a limited class of recurrence relations and cannot be applied to all algorithms.
Extended Master Theorem
To address some of the limitations of the master theorem, researchers have developed extended versions of the theorem, the best known being the Akra–Bazzi method, which applies to more general recurrences in which the subproblems may have different sizes:

T(n) = a_1*T(n/b_1) + a_2*T(n/b_2) + ... + a_k*T(n/b_k) + g(n)

Here, g(n) represents the time it takes to divide the problem and combine the solutions of the subproblems.
Example: Extended Master Theorem for Quick Sort
Let us apply these ideas to the Quick Sort algorithm, which has the recurrence relation:

T(n) = T(k) + T(n-k-1) + n

Here, k is the size of one partition, n-k-1 is the size of the other, and the n term is the Θ(n) cost of partitioning around the pivot. Because the two subproblems can have different sizes, the standard master theorem does not apply directly. In the balanced case, where each pivot splits the input roughly in half (k ≈ n/2), the recurrence simplifies to

T(n) = 2*T(n/2) + n

which is the familiar master form with a = 2, b = 2, and f(n) = n. Since f(n) = Θ(n^(log_2(2))) = Θ(n), the second case of the master theorem applies, and we can conclude that T(n) = Θ(n log n). More general tools that allow unequal subproblem sizes confirm that this bound holds whenever the partitions remain reasonably balanced.
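The partition structure behind the Quick Sort recurrence can be sketched as follows. This is a minimal illustration (the name `quick_sort` and the first-element pivot choice are my own), not an optimized in-place implementation:

```python
def quick_sort(xs):
    # Base case: lists of length <= 1 are already sorted.
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    # Partitioning costs Theta(n) and splits the remaining n - 1
    # elements into a part of size k (less than the pivot) and a part
    # of size n - k - 1, matching T(n) = T(k) + T(n-k-1) + n.
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 2]))  # -> [1, 2, 3]
```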
- Q-2
- What do you understand by time complexity? What is Big O notation? Write the Quick Sort algorithm and discuss its complexity.
- Q-3
- What do you understand by dynamic programming? Discuss using the example of matrix chain multiplication.
- Q-4
- What is a Huffman code? Discuss the greedy algorithm for constructing the Huffman code.
- Q-5
- What is a minimum spanning tree? Discuss the steps for finding a minimum spanning tree using Prim's Algorithm.
- Q-6
- Discuss the Bellman-Ford algorithm, which computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph.
- Q-7
- What is a bitonic sequence? Discuss the bitonic sort algorithm and its time complexity.
- Q-8
- What is a sorting network? Discuss the structure of a bubble sorting network.