Convergence of Algorithms
Many numerical algorithms converge to a solution, meaning they produce better and better approximations to a solution. We’ll let $x_k$ denote the approximation at step $k$ of an algorithm, and $x^*$ denote the true solution, so the error at step $k$ is $\epsilon_k = \|x_k - x^*\|$ (in a norm appropriate to the problem).

In practice, we typically don’t know the value of $x^*$ (if we did, we wouldn’t need an algorithm to approximate it), so we can’t compute $\epsilon_k$ directly. Alternatively, we can monitor the difference between successive iterates, $\epsilon_k = \|x_{k+1} - x_k\|$, which we can always compute and which also tends to zero as the iteration converges.
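For example, here’s a minimal sketch (the iteration and variable names are just for illustration, not an algorithm from this class) using Newton’s iteration for $\sqrt{2}$, where we happen to know $x^* = \sqrt{2}$ and can compare the true error to the difference between successive iterates:

import numpy as np

xstar = np.sqrt(2.0)  # true solution (known here, but usually not)
x = 1.0               # initial guess
for k in range(6):
    xnew = (x + 2.0/x) / 2.0                 # Newton step for x^2 - 2 = 0
    print(k, abs(x - xstar), abs(xnew - x))  # true error vs. successive difference
    x = xnew

Both columns shrink together, which is why the difference between iterates is a usable stand-in for the true error.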
In this class, we aren’t going to worry too much about proving that algorithms converge. However, we do want to be able to verify that an algorithm is converging, measure the rate of convergence, and generally compare two algorithms using experimental convergence data.
Rate of Convergence
There are a variety of ways in which the rate of convergence is defined. Mostly, we’re interested in the ratio

$$\lim_{k\to\infty} \frac{\epsilon_{k+1}}{\epsilon_k^q} = C$$

for some constant $C > 0$ and some power $q \ge 1$.

$q = 1$ with $C < 1$ is called linear convergence, and $q = 2$ is called quadratic convergence. Larger values of $q$ correspond to faster convergence.

Additionally, we have the following terms: $q = 1$ with $C = 1$ is called sublinear (slower than linear) convergence, and $q = 1$ with $C = 0$ is superlinear (faster than linear) convergence.

Clearly, if $q > 1$, the convergence is also superlinear, since $\epsilon_{k+1}/\epsilon_k = (\epsilon_{k+1}/\epsilon_k^q)\,\epsilon_k^{q-1} \to C \cdot 0 = 0$.
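As a quick check of these definitions: the sequence $\epsilon_k = 2^{-k}$ converges linearly with $C = 1/2$, while $\epsilon_k = 1/k$ converges sublinearly, since

$$\frac{2^{-(k+1)}}{2^{-k}} = \frac{1}{2}, \qquad \frac{1/(k+1)}{1/k} = \frac{k}{k+1} \to 1.$$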
Measuring the Rate of Convergence
Let’s say we have a sequence $\epsilon_k \to 0$, and we’d like to estimate the rate of convergence $q$ from the data.

Assume we look at large enough $k$ that the sequence is in its asymptotic regime, so $\epsilon_{k+1} \approx C \epsilon_k^q$, or equivalently $\log \epsilon_{k+1} \approx \log C + q \log \epsilon_k$.

Let’s look at the ratio of successive differences of $\log \epsilon_k$:

$$\frac{\log \epsilon_{k+2} - \log \epsilon_{k+1}}{\log \epsilon_{k+1} - \log \epsilon_k} \approx \frac{q(\log \epsilon_{k+1} - \log \epsilon_k)}{\log \epsilon_{k+1} - \log \epsilon_k} = q.$$

This means that if $q = 1$, the differences $\log \epsilon_{k+1} - \log \epsilon_k$ are roughly constant, so $\log \epsilon_k$ decreases along a straight line; linear convergence appears as a straight line on a semilog plot.

If $q > 1$, the differences grow by a factor of $q$ at each step, so on a semilog plot the curve bends downward, dropping faster and faster.
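As a quick sanity check of this ratio (a sketch with made-up numbers, using the same recurrence as the quadratic example below), the ratios of successive differences of $\log \epsilon_k$ should all come out close to $q = 2$:

import numpy as np

eps = [0.2]
for i in range(8):
    eps.append(2.0 * eps[-1]**2)  # quadratic: eps_{k+1} = 2 * eps_k^2
d = np.diff(np.log(eps))          # differences of log(eps)
print(d[1:] / d[:-1])             # each ratio should be close to q = 2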
import numpy as np
import matplotlib.pyplot as plt

# generates a linearly converging sequence
n = 50
C = 0.5
q = 1
eps1 = [1.0]
for i in range(n):
    eps1.append(C * eps1[-1]**q)

plt.semilogy(eps1)
plt.xlabel('k')
plt.ylabel(r"$\epsilon$")
plt.title("linear convergence")
plt.show()

# generates a quadratically converging sequence: eps_{k+1} = C * eps_k^2
# n = 9 keeps eps above the smallest positive double; one more step would underflow to 0
n = 9
C = 2.0
q = 2
eps2 = [0.2]
for i in range(n):
    eps2.append(C * eps2[-1]**q)

plt.semilogy(eps2)
plt.xlabel('k')
plt.ylabel(r"$\epsilon$")
plt.title("quadratic convergence")
plt.show()

# generates a sublinear (logarithmic) converging sequence
n = 50
eps3 = []
for k in range(1, n):
    eps3.append(1.0/k)

plt.semilogy(eps3)
plt.xlabel('k')
plt.ylabel(r"$\epsilon$")
plt.title("sublinear convergence")
plt.show()

# all three in the same plot
plt.semilogy(eps1, label='linear')
plt.semilogy(eps2, label='superlinear')  # quadratic convergence is a special case of superlinear
plt.semilogy(eps3, label='sublinear')
plt.xlabel('k')
plt.ylabel(r"$\epsilon$")
plt.ylim(1e-16, 1e0)
plt.title("types of convergence")
plt.legend()
plt.show()
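On the shared semilog plot, the three behaviors are easy to tell apart: linear convergence traces a straight line, the quadratic (superlinear) sequence bends downward and drops off faster and faster, and the sublinear sequence flattens out.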

Estimating q
There are a variety of ways to estimate $q$ from a sequence. Since the differences $\Delta_k = \log \epsilon_{k+1} - \log \epsilon_k$ satisfy $\Delta_{k+1} \approx q \Delta_k$ in the asymptotic regime, we have $|\Delta_k| \approx q^k |\Delta_0|$, so $\log |\Delta_k|$ is approximately linear in $k$ with slope $\log q$. One approach is to fit a line to $\log |\Delta_k|$ and exponentiate its slope:
def estimate_q(eps):
    """
    estimate rate of convergence q from sequence eps
    """
    x = np.arange(len(eps) - 1)
    y = np.log(np.abs(np.diff(np.log(eps))))
    line = np.polyfit(x, y, 1)  # fit degree 1 polynomial: y ~ (log q) * x + b
    q = np.exp(line[0])         # the slope is log(q)
    return q
print(estimate_q(eps1)) # linear
print(estimate_q(eps2)) # quadratic
print(estimate_q(eps3)) # sublinear
1.0
2.0
0.9466399163308074
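The line fit above is just one option. As a sketch of an alternative (the function name estimate_q_pointwise is made up for this example), we can use the ratio of successive log-differences directly at each $k$ and average:

def estimate_q_pointwise(eps):
    """estimate q as the average ratio of successive differences of log(eps)"""
    d = np.diff(np.log(eps))
    return np.mean(d[1:] / d[:-1])

print(estimate_q_pointwise(eps1))  # should be close to 1
print(estimate_q_pointwise(eps2))  # should be close to 2
print(estimate_q_pointwise(eps3))  # slightly below 1, like the fit

The fit tends to be more robust to noise in real convergence data, since it uses all the points at once rather than local ratios.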
Exercises
1. Generate a converging sequence with $C = 1.5$, $q = 1.5$, and estimate $q$ using the above method.
2. Generate sublinear sequences with $\epsilon_k = 1/k$ (as above) for lengths 50, 100, 500, 1000, 5000, and 10000. How does our estimate of $q$ change with the length of the sequence?
# generates a converging sequence with C = 1.5, q = 1.5
n = 9
C = 1.5
q = 1.5
eps2 = [0.2]
for i in range(n):
    eps2.append(C * eps2[-1]**q)

plt.semilogy(eps2)
plt.xlabel('k')
plt.ylabel(r"$\epsilon$")
plt.title("q = {} convergence".format(q))
plt.show()

print(estimate_q(eps2))  # should be close to q = 1.5

def gen_sublinear(n):
    # epsilon_k = 1/k for k = 1, ..., n-1
    eps = []
    for k in range(1, n):
        eps.append(1.0/k)
    return eps

ns = [50, 100, 500, 1000, 5000, 10000]
for n in ns:
    eps = gen_sublinear(n)
    print(estimate_q(eps))
0.9466399163308074
0.9720402367424484
0.994118748919332
0.9970337839193604
0.9994017352310425
0.9997004753046203
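Note that the estimate approaches $q = 1$ from below as the sequence gets longer: for $\epsilon_k = 1/k$, the differences of $\log \epsilon_k$ are $-\log(1 + 1/k) \approx -1/k$, so $\log|\Delta_k| \approx -\log k$, which flattens as $k$ grows. The fitted slope therefore tends to $0$, and the estimate $q = e^{\text{slope}}$ tends to $1$.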