Recurrence relations often arise in calculating the time and space complexity of algorithms. The amount of time (or space) that a program takes, T(k), is typically a function of the size, k, of the input data. A recurrence relation arises when T(k) is defined in terms of T(k') for some smaller k' < k.
Logarithmic Complexity
The following recurrence
    T(k) = T(k/2) + a    if k != 1
         = b             if k = 1
has the solution
    T(n) = a*log2(n) + b
when n is a power of 2. The solution should be no surprise because when the problem size is doubled from k/2 to k the resource used only increases by a constant, a. Repeated doubling, i.e. 1, 2, 4, 8, 16, 32, ..., reaches any limit n in about log2(n) steps. Unless otherwise stated, logarithms are to base two.
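As a quick sanity check (not part of the original derivation), the recurrence can be evaluated directly and compared with the closed form. In the Python sketch below the constants a = 3 and b = 5 are arbitrary illustrative values.

    from math import log2

    def T(k, a, b):
        """Evaluate T(k) = T(k/2) + a with T(1) = b, for k a power of 2."""
        return b if k == 1 else T(k // 2, a, b) + a

    # The recurrence agrees with the closed form a*log2(n) + b
    # at each power of two.
    a, b = 3, 5
    for n in (1, 2, 4, 8, 16, 32):
        assert T(n, a, b) == a * log2(n) + b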
The mathematical proof of the solution by induction has two stages. First, the base case k=1 is shown to be a particular case of the general solution. Second, assuming the solution holds for n/2, it is shown to hold for the next value, n.
    (i)  T(1) = a*log(1) + b
              = b
    (ii) T(n) = T(n/2) + a
              = (a*log(n/2) + b) + a
              = a*(log(n) - log(2)) + b + a
              = a*log(n) + b,    since log(2) = 1
The binary-search algorithm over n (sorted) objects has this time complexity. It is a very good complexity to have because T(n) grows only slowly as n gets large.
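A minimal iterative sketch of binary search is given below; the names binary_search, xs and target are illustrative, not taken from the original. Each iteration halves the range still under consideration, which is exactly the T(n) = T(n/2) + a behaviour above.

    def binary_search(xs, target):
        """Return an index of target in the sorted list xs, or None if absent.
        Each step halves the remaining range: T(n) = T(n/2) + a."""
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    # e.g. binary_search([1, 3, 5, 7, 9, 11], 7) returns 3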
Linear Complexity
The solution of the following recurrence
    T(k) = b + T(k-1)    if k != 1
         = a             if k = 1
can be found as follows
    T(n) = b + T(n-1)
         = b + (b + T(n-2))
         = 2b + T(n-2)
         ...
         = (n-1)*b + T(1)
         = (n-1)*b + a
         = n*b + (a-b)
Many linear algorithms, whether linearly recursive or iterative, have this time complexity. See the section on linear search. While not as good as logarithmic complexity, linear complexity is in a sense "fair": double the size of the problem and the algorithm uses roughly twice as much resource.
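A sketch of linear search in the same style follows; again the names are illustrative. Checking one element and then dealing with the remaining n-1 elements gives the T(n) = b + T(n-1) recurrence in the worst case.

    def linear_search(xs, target):
        """Return the index of the first occurrence of target in xs, or None.
        Examining one element and continuing with the rest gives
        T(n) = b + T(n-1), i.e. linear worst-case time."""
        for i, x in enumerate(xs):
            if x == target:
                return i
        return None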
Exponential Complexity
A third class of algorithms has complexity given by the following recurrence relation.
    T(k) = 2*T(k-1) + a    if k != 0
         = b               if k = 0
The solution for T(n) is:
    T(n) = (a+b)*2^n - a
The proof is again by induction:
    (i)  T(0) = (a+b)*2^0 - a
              = (a+b) - a
              = b
    (ii) T(n) = 2*T(n-1) + a
              = 2*((a+b)*2^(n-1) - a) + a
              = (a+b)*2^n - 2a + a
              = (a+b)*2^n - a
Many binary-recursive procedures have this time complexity; see for example the (slow) Fibonacci routine. Exponential complexity grows very rapidly indeed with the size of the problem: adding just one to the problem size causes the running time to roughly double. If a problem requires algorithms with this complexity and the problem size is large, then its solution by computer may be possible in principle but infeasible in practice.
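For illustration, a naive doubly-recursive Fibonacci might look like the Python sketch below (the particular routine referred to in the text is not reproduced here). Strictly its recurrence is T(n) = T(n-1) + T(n-2) + a rather than 2*T(n-1) + a, but it is still exponential and the running time roughly doubles for each increase of one in n.

    def fib(n):
        """Naive binary-recursive Fibonacci; the two recursive calls make
        the running time exponential in n."""
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)

    # fib(30) already makes well over a million calls;
    # fib(60) is infeasible to compute this way.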
Notation
- f is O(g) iff
  - there are positive constants k and m s.t. 0 ≤ f(n) ≤ k*g(n) for all n ≥ m;
  i.e. f grows no more rapidly than g, in the long run, give or take a multiplicative constant.
- f is Ω(g) iff
  - there are positive constants k and m s.t. f(n) ≥ k*g(n) ≥ 0 for all n ≥ m; equivalently g(n) ≤ (1/k)*f(n), so g is O(f);
  i.e. f grows at least as quickly as g, in the long run, give or take a multiplicative constant.
- f is Θ(g) iff
  - f is O(g) and f is Ω(g), or equivalently f is O(g) and g is O(f);
  i.e. f and g grow at the same rate, in the long run, give or take a multiplicative constant.
- f is o(g) iff
  - f is O(g) and f is not Θ(g);
  i.e. f grows strictly more slowly than g, in the long run, give or take a multiplicative constant.
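For example, f(n) = 3n + 5 is O(n): taking k = 4 and m = 5 gives 0 ≤ 3n + 5 ≤ 4n for all n ≥ 5. It is also Ω(n) (take k = 1 and m = 1), and hence Θ(n). On the other hand, 3n + 5 is o(n^2), because it is O(n^2) but not Θ(n^2).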