We derived the golden section search method today; see also the class notes.
What is described below was distributed in class and is available for download in PostScript format and as a PDF file.
Derivation of the method of the golden section search to find the minimum of a function f(x) over the interval [a,b].
We assume that f is continuous on [a,b] and that f has only one minimum in [a,b].
Note that the method applies as well to finding the maximum.
The conditions above remind us of the bisection method, and we
will apply a similar idea: narrow the interval that contains
the minimum by comparing function values.
In designing the method we seek to satisfy two goals: only one new function evaluation in each step, and a constant reduction factor for the size of the search interval.

One choice inspired by the bisection method would be to compute the midpoint m = (a+b)/2 and to evaluate f at x1 and x2, defined by x1 = m - eps/2 and x2 = m + eps/2, for some small value eps for which f(x1) ~= f(x2). If f(x1) < f(x2), then the minimum must lie in [a,x2]; otherwise [x1,b] is our new search interval. While this almost halves the search interval in each step, we must perform two new function evaluations in each step. This is not optimal.

So we only want to perform one new function evaluation in each step. Furthermore, we want a constant reduction factor, say c, for the size of the interval.
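This bisection-inspired scheme can be sketched as follows (a Python sketch for illustration only; the name naive_search and the tolerances eps and delta are ad hoc choices, not part of the handout):

```python
import math

def naive_search(f, a, b, eps=1e-2, delta=1e-6):
    """Bisection-style minimization: evaluate on both sides of the
    midpoint and keep the part that must contain the minimum.
    Costs TWO new function evaluations per step."""
    evals = 0
    while b - a >= eps:
        m = 0.5 * (a + b)
        x1, x2 = m - delta / 2, m + delta / 2
        f1, f2 = f(x1), f(x2)
        evals += 2
        if f1 < f2:
            b = x2          # minimum lies in [a, x2]
        else:
            a = x1          # minimum lies in [x1, b]
    return a, b, evals

# Try it on f(x) = e^x - 2*x, which has its minimum at x = ln 2.
a, b, evals = naive_search(lambda x: math.exp(x) - 2 * x, 0.0, 1.0)
print(a, b, evals)
```

The bracket shrinks by (roughly) a factor of two per step, but the evaluation count grows by two per step, which is what the golden section search avoids.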
For x1 and x2 somewhere in [a,b], the two candidate intervals are [a,x2] and [x1,b]; requiring that each has length c*(b-a) gives:

  x2 - a = c*(b-a)  =>  x2 = a + c*b - c*a  =>  x2 = (1-c)*a + c*b

  b - x1 = c*(b-a)  =>  -x1 = c*b - c*a - b  =>  x1 = c*a + (1-c)*b
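These placement formulas are easy to check numerically; a small Python sketch (the bracket [2,5] and the variable names are illustrative choices):

```python
import math

a, b = 2.0, 5.0                     # an arbitrary bracket
c = (math.sqrt(5) - 1) / 2          # any c in (1/2, 1) would do here
x1 = c * a + (1 - c) * b
x2 = (1 - c) * a + c * b

# Both candidate intervals [a, x2] and [x1, b] have length c*(b-a),
# so either outcome of the comparison gives the same reduction factor.
left, right = x2 - a, b - x1
print(x1, x2, left, right)
```

Note that x1 < x2 holds precisely when c > 1/2.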
Thus, once we know c, we know the locations of x1 and x2. Without loss of generality, we focus on the case f(x1) < f(x2). For ease of calculation, take [a,b] = [0,1], so that x1 = 1-c and x2 = c.

If f(x1) < f(x2), then the new search interval is [0,c] and we recycle x1 = 1-c; we have to determine where to evaluate next, either at the left or at the right of 1-c. There are two cases.
If the new evaluation is at the left of 1-c, then 1-c plays the role of x2 in [0,c]:

  1 - c = (1-c)*0 + c*c  =>  c^2 + c - 1 = 0

The positive root leads to c = (-1 + sqrt(5))/2, which equals approximately 0.6180.
If the new evaluation is at the right of 1-c, then 1-c plays the role of x1 in [0,c]:

  1 - c = c*0 + (1-c)*c  =>  (1-c)^2 = 0

The (double) root of this equation is c = 1, which gives no reduction, so the only right choice is to place the new function evaluation at the left of the recycled point.
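The algebra above can be verified numerically; a small Python check (illustrative, not part of the handout):

```python
import math

c = (math.sqrt(5) - 1) / 2   # ~0.6180, the reduction factor

# c is the positive root of c^2 + c - 1 = 0, so with [a,b] = [0,1]
# the x2-formula on the new interval [0,c] gives (1-c)*0 + c*c = c^2,
# which coincides exactly with the recycled point 1 - c.
root_residual = c**2 + c - 1
recycle_residual = (1 - c) - c**2
print(c, root_residual, recycle_residual)
```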
Below is a simple MATLAB function (save as gss.m) to run the golden section search method:
function [a,b] = gss(f,a,b,eps,N)
%
% Performs golden section search on the function f.
% Assumptions: f is continuous on [a,b]; and
% f has only one minimum in [a,b].
% No more than N function evaluations are done.
% When b-a < eps, the iteration stops.
%
% Example :
%   [a,b] = gss('myfun',0,1,0.01,20)
%
c = (-1+sqrt(5))/2;
x1 = c*a + (1-c)*b;  fx1 = feval(f,x1);
x2 = (1-c)*a + c*b;  fx2 = feval(f,x2);
fprintf('------------------------------------------------------\n');
fprintf('     x1         x2        f(x1)      f(x2)      b - a\n');
fprintf('------------------------------------------------------\n');
fprintf('%.4e %.4e %.4e %.4e %.4e\n', x1, x2, fx1, fx2, b-a);
for i = 1:N-2
   if fx1 < fx2
      b = x2;  x2 = x1;  fx2 = fx1;
      x1 = c*a + (1-c)*b;
      fx1 = feval(f,x1);
   else
      a = x1;  x1 = x2;  fx1 = fx2;
      x2 = (1-c)*a + c*b;
      fx2 = feval(f,x2);
   end;
   fprintf('%.4e %.4e %.4e %.4e %.4e\n', x1, x2, fx1, fx2, b-a);
   if (abs(b-a) < eps)
      fprintf('succeeded after %d steps\n', i);
      return;
   end;
end;
fprintf('failed requirements after %d steps\n', N);
To run it, we need a function (save as myfun.m):
function y = myfun(x)
%
% returns y = e^x - 2*x
%
y = exp(x) - 2*x;
Then we use gss.m in a MATLAB session like
>> [a,b] = gss('myfun',0,1,0.01,20)
------------------------------------------------------
     x1         x2        f(x1)      f(x2)      b - a
------------------------------------------------------
3.8197e-01 6.1803e-01 7.0123e-01 6.1921e-01 1.0000e+00
6.1803e-01 7.6393e-01 6.1921e-01 6.1884e-01 6.1803e-01
7.6393e-01 8.5410e-01 6.1884e-01 6.4106e-01 3.8197e-01
7.0820e-01 7.6393e-01 6.1393e-01 6.1884e-01 2.3607e-01
6.7376e-01 7.0820e-01 6.1408e-01 6.1393e-01 1.4590e-01
7.0820e-01 7.2949e-01 6.1393e-01 6.1504e-01 9.0170e-02
6.9505e-01 7.0820e-01 6.1371e-01 6.1393e-01 5.5728e-02
6.8692e-01 6.9505e-01 6.1374e-01 6.1371e-01 3.4442e-02
6.9505e-01 7.0007e-01 6.1371e-01 6.1375e-01 2.1286e-02
6.9194e-01 6.9505e-01 6.1371e-01 6.1371e-01 1.3156e-02
6.9002e-01 6.9194e-01 6.1372e-01 6.1371e-01 8.1306e-03
succeeded after 10 steps
a = 0.6869
b = 0.6950
>>
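As a sanity check (not part of the original handout): f'(x) = e^x - 2 vanishes at x = ln 2, which is approximately 0.6931, so the exact minimizer indeed lies in the final bracket [0.6869, 0.6950]. A Python sketch of the same iteration (the name gss_sketch is illustrative) reproduces the bracket:

```python
import math

def gss_sketch(f, a, b, eps=0.01, n=20):
    """Python sketch of the golden section search in gss.m:
    one new function evaluation per step, reduction factor c."""
    c = (math.sqrt(5) - 1) / 2
    x1, x2 = c * a + (1 - c) * b, (1 - c) * a + c * b
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:
            b, x2, f2 = x2, x1, f1      # keep [a, x2], recycle x1
            x1 = c * a + (1 - c) * b
            f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2      # keep [x1, b], recycle x2
            x2 = (1 - c) * a + c * b
            f2 = f(x2)
        if abs(b - a) < eps:
            break
    return a, b

a, b = gss_sketch(lambda x: math.exp(x) - 2 * x, 0.0, 1.0)
print(a, b)
```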