Thread starter: ReneeBK

Optimization, Rajesh Kumar Arora (2015)

11
NewOccidental, posted 2015-12-14 09:45:43

Newton's Method in Optimization

[Figure caption: A comparison of gradient descent (green) and Newton's method (red) for minimizing a function with small step sizes; Newton's method uses curvature information to take a more direct route.] In calculus, Newton's method is an iterative method for finding the roots of a differentiable function f (i.e., solutions to the equation f(x) = 0). In optimization, Newton's method is applied to the derivative f′ of a twice-differentiable function f to find the roots of the derivative (solutions to f′(x) = 0), also known as the stationary points of f.
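As a minimal one-dimensional sketch of this idea (illustrative, not taken from the book), the iteration below applies Newton's method to f'(x) of an assumed function f(x) = x^4 - 3*x^2 + 2, using both the slope f'(x) and the curvature f''(x):

% Minimal 1-D Newton's method sketch on an assumed function (not the book's code)
% f(x) = x^4 - 3*x^2 + 2,  f'(x) = 4*x^3 - 6*x,  f''(x) = 12*x^2 - 6
f1 = @(x) 4*x^3 - 6*x;      % first derivative
f2 = @(x) 12*x^2 - 6;       % second derivative (curvature)
x = 2;                      % illustrative starting point
for k = 1:20
    step = f1(x)/f2(x);     % Newton step applied to f'
    x = x - step;
    if abs(step) < 1e-10
        break;
    end
end
x                           % stationary point of f near 1.2247 (a local minimum)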
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB code modified_newton.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% n_of_var -> number of design variables
% x = [-1.5 1.5] -> starting value of x
% epsilon1, epsilon2 -> constants used for terminating
%                       the algorithm
% delx -> required for gradient computation
% falpha_prev -> function value at first / previous iteration
% deriv -> gradient vector
% sec_deriv -> hessian matrix
% search -> search direction (vector)
clear all
clc
n_of_var = 2;
x = [-3 2];
epsilon1 = 1e-7;
epsilon2 = 1e-7;
delx = 1e-3;
f_prev = func_multivar(x);
fprintf('Initial function value =  %7.4f\n ',f_prev)
fprintf(' No.       x-vector      f(x)      Deriv \n')
fprintf('__________________________________________\n')

for i = 1:20
    falpha_prev = func_multivar(x);
    deriv = grad_vec(x,delx,n_of_var);
    sec_deriv = hessian(x,delx,n_of_var);
    search = -inv(sec_deriv)*deriv';
    [alpha,falpha] = golden_funct1(x,search');
    if abs(falpha-falpha_prev)<epsilon1 || norm(deriv)<epsilon2
        break;
    end
    falpha_prev = falpha;
    x = x + alpha*search';
    f = func_multivar(x);
    fprintf('%3d %8.3f %8.3f  % 8.3f  %8.3f    \n',i,x,falpha,norm(deriv))
end

fprintf('%3d %8.3f %8.3f  % 8.3f  %8.3f  \n',i,x,falpha,norm(deriv))
fprintf('__________________________________________\n')
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
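The listings in this thread call helper routines func_multivar, grad_vec, hessian and golden_funct1 that are not reproduced here. The sketch below is only a guess at the first three, using central differences and an illustrative Rosenbrock-type objective; the book's own versions (and its golden-section line search golden_funct1) may look different. Each function would be saved in its own .m file.

% Hedged reconstruction of the helper routines assumed by modified_newton.m
% and steep_des.m (illustrative only; not the book's code).

function f = func_multivar(x)
% example objective: Rosenbrock function
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

function deriv = grad_vec(x, delx, n_of_var)
% central-difference approximation of the gradient (row vector)
deriv = zeros(1, n_of_var);
for k = 1:n_of_var
    xp = x;  xm = x;
    xp(k) = xp(k) + delx;
    xm(k) = xm(k) - delx;
    deriv(k) = (func_multivar(xp) - func_multivar(xm)) / (2*delx);
end

function sec_deriv = hessian(x, delx, n_of_var)
% central-difference approximation of the Hessian, built from gradients
sec_deriv = zeros(n_of_var);
for k = 1:n_of_var
    xp = x;  xm = x;
    xp(k) = xp(k) + delx;
    xm(k) = xm(k) - delx;
    sec_deriv(k,:) = (grad_vec(xp,delx,n_of_var) - grad_vec(xm,delx,n_of_var)) / (2*delx);
end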



12
NewOccidental, posted 2015-12-14 09:47:26

The Nelder–Mead method (also called the downhill simplex method or amoeba method) is a commonly applied numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is applied to nonlinear optimization problems for which derivatives may not be known. However, the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points[1] on problems that can be solved by alternative methods.[2] The Nelder–Mead technique was proposed by John Nelder and Roger Mead (1965).[3]
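For comparison with the listing below, note that MATLAB's built-in fminsearch also implements the Nelder–Mead simplex method; a minimal usage sketch on an assumed quadratic objective (not the book's test function) is:

% Nelder-Mead via MATLAB's built-in fminsearch (illustrative objective)
objective = @(x) (x(1) - 1)^2 + 2*(x(2) + 0.5)^2;
x0 = [-1 1];                         % fminsearch builds its initial simplex around x0
[xmin, fmin] = fminsearch(objective, x0)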

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB code neldermead.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% n_of_var -> number of design variables
% lb, ub -> lower/upper bound of variable
% (optional, for generating initial feasible points randomly)
% ybest -> best value of the objective function in the iteration
% ysecondworst -> second worst value of the objective function
% yworst -> worst value of the objective function in the iteration
% xworst -> corresponding value of the variable for yworst
% xc -> centroid of the polygon
% fcentroid -> function value at xc
% deviation -> sum square deviation of function values from centroid
% xr -> reflected point
% freflec -> function value at reflected point
% xe -> expanded point
% fexp -> function value at expanded point
% xcont -> contracted point
% fcont -> function value at contracted point
%
clear all
clc
n_of_var = 2;
epsilon = 1e-4;
alpha = 1;
gamma = 2;
rho = -0.5;
lb = [-1 -1];
ub = [1 1];
fprintf(' Iteration   Deviation        f(x)   \n')
fprintf('__________________________________________\n')
for JJ = 1:50
for i = 1:length(lb)
    for j = 1:n_of_var+1
        a(i,j) = lb(i) + (ub(i)-lb(i))*rand;
    end
end
if JJ~=1
    a = x';
end
for i = 1:n_of_var+1
    for j = 1:n_of_var
        x(i,j) = a(j,i);
    end
    fval(i) = func_multivar(x(i,:));
end
[yworst,I] = max(fval);
[ybest,II] = min(fval);

% compute centroid
for i = 1:length(lb)
    sum(i) = 0;
    for j = 1:n_of_var+1
        if (j ~= I)
            sum(i) = sum(i) + a(i,j);
        else
            xworst(:,i) = a(i,j);
        end
    end
end
xc = sum./n_of_var;
fcentroid = func_multivar(xc);
sum1 = 0;
for i = 1:n_of_var+1
    sum1 = sum1 + (fcentroid-fval(i))^2;
end
deviation = sqrt(sum1/(n_of_var+1));

if deviation < epsilon
    break;
end

fval(I) = [];
[ysecondworst,Isw] = max(fval);
xr = xc + alpha*(xc-xworst);
freflec = func_multivar(xr);
if freflec < ybest
    % expansion
    xe = xc + gamma*(xc-xworst);
    fexp = func_multivar(xe);
    if fexp < freflec
        x(I,:) = xe;
    else
        x(I,:) = xr;
    end
else
    if freflec < ysecondworst
        x(I,:) = xr;
    else
        xcont = xc + rho*(xc-xworst);
        fcont = func_multivar(xcont);
        if fcont < yworst
            x(I,:) = xcont;
        end
    end
end

fprintf('%3d %15.4f %15.3f    \n',JJ,deviation,ybest)

end
fprintf('__________________________________________\n')
xc
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


13
NewOccidental, posted 2015-12-14 10:01:37

Gradient Descent Algorithms

Gradient descent is a first-order optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.

Gradient descent is also known as steepest descent, or the method of steepest descent. However, gradient descent should not be confused with the method of steepest descent for approximating integrals.
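As a minimal illustration of the update rule x_{k+1} = x_k - alpha*grad f(x_k) (separate from the book's steep_des.m listing below, which picks the step length alpha with a golden-section line search), a fixed-step sketch on an assumed quadratic objective is:

% Fixed-step gradient descent sketch (illustrative objective, not the book's)
f     = @(x) (x(1)-2)^2 + 5*(x(2)+1)^2;
gradf = @(x) [2*(x(1)-2), 10*(x(2)+1)];
x = [-3 2];
alpha = 0.05;                        % fixed step size (small enough to converge here)
for k = 1:500
    g = gradf(x);
    if norm(g) < 1e-6
        break;
    end
    x = x - alpha*g;                 % step along the negative gradient
end
x, f(x)                              % approaches the minimizer [2 -1]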

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB code steep_des.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% n_of_var -> number of design variables
% x = [-1.5 1.5] -> starting value of x
% epsilon1, epsilon2 -> constants used for terminating the algorithm
% delx -> required for gradient computation
% falpha_prev -> function value at first / previous iteration
% deriv -> gradient vector
% search -> search direction (set to negative of gradient)
%
clear all
clc
n_of_var = 2;
x = [-3 2];
epsilon1 = 1e-6;
epsilon2 = 1e-6;
delx = 1e-3;

falpha_prev = func_multivar(x);
fprintf('Initial function value =  %7.4f\n ',falpha_prev)
fprintf(' No.       x-vector      f(x)      Deriv \n')
fprintf('__________________________________________\n')

for i = 1:3000
    deriv = grad_vec(x,delx,n_of_var);
    search = -deriv;
    [alpha,falpha] = golden_funct1(x,search);
    if abs(falpha-falpha_prev)<epsilon1 || norm(deriv)<epsilon2
        break;
    end
    falpha_prev = falpha;
    x = x + alpha*search;
    fprintf('%3d %8.3f %8.3f  % 8.3f  %8.3f  \n',i,x,falpha,norm(deriv))
end

fprintf('__________________________________________\n')
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

14
Lisrelchen, posted 2015-12-14 10:07:00
In mathematical optimization theory, duality means that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem (the duality principle). The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem.[1] However, in general the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.
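As a concrete illustration of the zero duality gap for a linear program, the sketch below solves both the primal and the dual of the problem that dual.m encodes, using linprog; it assumes the Optimization Toolbox is available and is not part of the book's code.

% Primal/dual check with linprog (sketch; requires the Optimization Toolbox).
% Primal LP behind dual.m:  min 9*x1 + 8*x2
%   s.t. x1 >= 3, x2 >= 4, 2*x1 + x2 >= 25, x1 + 3*x2 >= 26, x >= 0
c = [9; 8];
A = [1 0; 0 1; 2 1; 1 3];
b = [3; 4; 25; 26];
[xp, fprimal] = linprog(c, -A, -b, [], [], zeros(2,1));   % ">=" rows negated for linprog

% Dual LP:  max b'*y  s.t.  A'*y <= c, y >= 0   (posed as a minimization)
[yd, negdual] = linprog(-b, A', c, [], [], zeros(4,1));
fdual = -negdual;

% For this convex (linear) problem the gap is zero up to solver tolerance
fprintf('primal = %g, dual = %g, gap = %g\n', fprimal, fdual, fprimal - fdual)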

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB code dual.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The matrix A and vector b correspond to the equation Ax=b
% c -> vector of cost coefficients
% basic_set -> set of basic variables
% nonbasic_set -> set of nonbasic variables
% B -> matrix containing basic variable columns of A
% N -> matrix containing nonbasic variable columns of A
% xb -> basic variables
% y -> simplex multipliers
% cb -> cost coefficients of basic variables
% cn -> cost coefficients of nonbasic variables
%
clear all
clc
format rational
format compact

A = [-1  0 1 0 0 0;
      0 -1 0 1 0 0;
     -2 -1 0 0 1 0;
     -1 -3 0 0 0 1];
b = [-3;-4;-25;-26];
c = [9; 8; 0; 0; 0; 0];
basic_set = [3 4 5 6];
nonbasic_set = [1 2];
for i=1:length(basic_set)
    B(:,i) = A(:,basic_set(i));
    cb(i) = c(basic_set(i));
end

for i=1:length(nonbasic_set)
    N(:,i) = A(:,nonbasic_set(i));
    cn(i) = c(nonbasic_set(i));
end

cn_cap = cn;
cb_ini = cb;
b_cap = b;
zz = 0;
fprintf('\n ________________________________________\n')
basic_set
nonbasic_set
Initial_Table = [B N b_cap]
Cost = [cb cn_cap zz]

for i=1:4
    [minvalue leaving_basic_variable] = min(b_cap);
    mat1 = inv(B)*N;
    entering_row = mat1(leaving_basic_variable,:);
    ratios = -1*(cn_cap'./entering_row');
    [min_ratio entering_basic_variable] = min(ratios);

    while min_ratio<0
        ratios(entering_basic_variable) = inf;
        [min_ratio entering_basic_variable] = min(ratios);
    end

    temp_basic_set = basic_set;
    temp_nonbasic_set = nonbasic_set;
    temp_cb = cb;
    temp_cn = cn;
    basic_set(leaving_basic_variable) = temp_nonbasic_set(entering_basic_variable);
    nonbasic_set(entering_basic_variable) = temp_basic_set(leaving_basic_variable);
    cb(leaving_basic_variable) = temp_cn(entering_basic_variable);
    cn(entering_basic_variable) = temp_cb(leaving_basic_variable);
    aa(nonbasic_set) = cn;
    cn = aa(sort(nonbasic_set));
    nonbasic_set = sort(nonbasic_set);

    for ii=1:length(basic_set)
        B(:,ii) = A(:,basic_set(ii));
    end

    for ii=1:length(nonbasic_set)
        N(:,ii) = A(:,nonbasic_set(ii));
    end

    xb = inv(B)*b;
    y = cb*inv(B);
    cn_cap = cn - y*N;
    b_cap = xb;
    zz = cb*xb;
    fprintf('\n ________________________________________\n')
    basic_set
    nonbasic_set
    Table = [eye(length(B)) inv(B)*N  b_cap]
    Cost = [cb_ini cn_cap -zz]
    if b_cap >= 0
        break;
    end

end
fprintf('\n ------FINAL SOLUTION------\n')
basic_set
xb
zz
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


17
Lisrelchen, posted 2015-12-14 10:13:06
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.[1][2][3][4][5] The journal Computing in Science and Engineering listed it as one of the top 10 algorithms of the twentieth century.[6]

The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin.[7] Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint.[8][9][10][11] The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.
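The simplex.m listing below works on the LP min -6*x1 - 7*x2 subject to 3*x1 + x2 <= 10, x1 + 2*x2 <= 8, x1 <= 3, x >= 0, written in standard form with slack variables x3..x5. A quick cross-check of that problem with linprog (a sketch assuming the Optimization Toolbox, not part of the book's code) would be:

% Cross-check of the LP encoded by simplex.m using linprog (sketch only)
c = [-6; -7];
A = [3 1; 1 2; 1 0];
b = [10; 8; 3];
[x, fval] = linprog(c, A, b, [], [], zeros(2,1))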
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB code simplex.m
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% The matrix A and vector b correspond to the equation Ax=b
% c -> vector of cost coefficients
% basic_set -> set of basic variables
% nonbasic_set -> set of nonbasic variables
% B -> matrix containing basic variable columns of A
% N -> matrix containing nonbasic variable columns of A
% xb -> basic variables
% y -> simplex multipliers
% cb -> cost coefficients of basic variables
% cn -> cost coefficients of nonbasic variables
%
clear all
clc
format rational
format compact

A = [3 1 1 0 0;
     1 2 0 1 0;
     1 0 0 0 1];
b = [10;8;3];
c = [-6;-7;0;0;0];
basic_set = [3 4 5];
nonbasic_set = [1 2];

for i=1:length(basic_set)
    B(:,i) = A(:,basic_set(i));
    cb(i) = c(basic_set(i));
end

for i=1:length(nonbasic_set)
    N(:,i) = A(:,nonbasic_set(i));
    cn(i) = c(nonbasic_set(i));
end

cn_cap = cn;
cb_ini = cb;
b_cap = b;
zz1 = 0;
fprintf('\n ________________________________________\n')
basic_set
nonbasic_set
Initial_Table = [B N b_cap]
Cost = [cb cn_cap -zz1]

for i=1:3
    [minvalue entering_basic_variable] = min(cn_cap);
    entering_column = inv(B)*A(:,nonbasic_set(entering_basic_variable));
    ratios = b_cap'./entering_column';
    [min_ratio leaving_basic_variable] = min(ratios);
    while min_ratio<0
        ratios(leaving_basic_variable) = inf;
        [min_ratio leaving_basic_variable] = min(ratios);
    end
    temp_basic_set = basic_set;
    temp_nonbasic_set = nonbasic_set;
    temp_cb = cb;
    temp_cn = cn;
    basic_set(leaving_basic_variable) = temp_nonbasic_set(entering_basic_variable);
    nonbasic_set(entering_basic_variable) = temp_basic_set(leaving_basic_variable);
    cb(leaving_basic_variable) = temp_cn(entering_basic_variable);
    cn(entering_basic_variable) = temp_cb(leaving_basic_variable);
    aa(nonbasic_set) = cn;
    cn = aa(sort(nonbasic_set));
    nonbasic_set = sort(nonbasic_set);
    for ii=1:length(basic_set)
        B(:,ii) = A(:,basic_set(ii));
    end
    for ii=1:length(nonbasic_set)
        N(:,ii) = A(:,nonbasic_set(ii));
    end
    xb = inv(B)*b;
    y = cb*inv(B);
    cn_cap = cn - y*N;
    b_cap = xb;
    zz = zz1 + cb*xb;
    fprintf('\n ________________________________________\n')
    basic_set
    nonbasic_set
    Table = [eye(length(B)) inv(B)*N  b_cap]
    Cost = [cb_ini cn_cap -zz]
    if cn_cap >= 0
        break;
    end

end
fprintf('\n ------SOLUTION------\n')
basic_set
xb
zz
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

18
Lisrelchen, posted 2015-12-14 10:27:03
% ---------------------------------------------------------
% Roulette Wheel Selection Algorithm. A set of weights
% represents the probability of selection of each
% individual in a group of choices. It returns the index
% of the chosen individual.
% Usage example:
% fortune_wheel([1 5 3 15 8 1])
%    most probable result is 4 (weight 15)
% ---------------------------------------------------------
function choice = fortune_wheel(weights)
  accumulation = cumsum(weights);
  p = rand() * accumulation(end);
  chosen_index = -1;
  for index = 1 : length(accumulation)
    if (accumulation(index) > p)
      chosen_index = index;
      break;
    end
  end
  choice = chosen_index;
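A short, hypothetical driver (not from the source above) that calls fortune_wheel many times and compares the empirical selection frequencies with the normalized weights:

% Hypothetical driver for fortune_wheel: empirical frequencies vs. weights
weights = [1 5 3 15 8 1];
counts = zeros(size(weights));
for trial = 1:10000
    idx = fortune_wheel(weights);
    counts(idx) = counts(idx) + 1;
end
empirical = counts / sum(counts)     % should roughly match...
expected  = weights / sum(weights)   % ...the normalized weights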

http://onlinelibrary.wiley.com/doi/10.1002/9780470106280.app1/pdf

19
Lisrelchen, posted 2015-12-14 10:30:21
Minimization Using Simulated Annealing Algorithm

This example shows how to create and minimize an objective function using Simulated Annealing in the Global Optimization Toolbox.

A Simple Objective Function

We want to minimize a simple function of two variables

   min  f(x) = (4 - 2.1*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-4 + 4*x2^2)*x2^2
    x

The above function is known as 'cam', as described in L.C.W. Dixon and G.P. Szego (eds.), Towards Global Optimisation 2, North-Holland, Amsterdam, 1978.

Coding the Objective Function

We create a MATLAB file named simple_objective.m with the following code in it:

   function y = simple_objective(x)
   y = (4 - 2.1*x(1)^2 + x(1)^4/3)*x(1)^2 + x(1)*x(2) + ...
       (-4 + 4*x(2)^2)*x(2)^2;

The Simulated Annealing solver assumes the objective function will take one input x, where x has as many elements as the number of variables in the problem. The objective function computes the scalar value of the objective and returns it in its single return argument y.

Minimizing Using SIMULANNEALBND

To minimize our objective function using the SIMULANNEALBND function, we need to pass in a function handle to the objective function as well as specify a start point as the second argument.

ObjectiveFunction = @simple_objective;
X0 = [0.5 0.5];   % Starting point
[x,fval,exitFlag,output] = simulannealbnd(ObjectiveFunction,X0)

Optimization terminated: change in best function value less than options.TolFun.

x =
   -0.0896    0.7130

fval =
   -1.0316

exitFlag =
     1

output =
     iterations: 2948
      funccount: 2971
        message: 'Optimization terminated: change in best function value l...'
       rngstate: [1x1 struct]
    problemtype: 'unconstrained'
    temperature: [2x1 double]
      totaltime: 4.5000

The first two output arguments returned by SIMULANNEALBND are x, the best point found, and fval, the function value at the best point. A third output argument, exitFlag, returns a flag corresponding to the reason SIMULANNEALBND stopped. SIMULANNEALBND can also return a fourth argument, output, which contains information about the performance of the solver.

Bound Constrained Minimization

SIMULANNEALBND can be used to solve problems with bound constraints. The lower and upper bounds are passed to the solver as vectors. For each dimension i, the solver ensures that lb(i) <= x(i) <= ub(i), where x is a point selected by the solver during simulation. We impose the bounds on our problem by specifying a range -64 <= x(i) <= 64 for x(i).

lb = [-64 -64];
ub = [64 64];

Now, we can rerun the solver with lower and upper bounds as input arguments.

[x,fval,exitFlag,output] = simulannealbnd(ObjectiveFunction,X0,lb,ub);

fprintf('The number of iterations was : %d\n', output.iterations);
fprintf('The number of function evaluations was : %d\n', output.funccount);
fprintf('The best function value found was : %g\n', fval);

Optimization terminated: change in best function value less than options.TolFun.
The number of iterations was : 2428
The number of function evaluations was : 2447
The best function value found was : -1.03163

How Simulated Annealing Works

Simulated annealing mimics the physical annealing process to solve an optimization problem. It uses a temperature parameter that controls the search. The temperature parameter typically starts off high and is slowly "cooled", or lowered, at every iteration. At each iteration a new point is generated, and its distance from the current point is proportional to the temperature. If the new point has a better function value, it replaces the current point and the iteration counter is incremented. It is possible to accept and move forward with a worse point; the probability of doing so depends directly on the temperature. This unintuitive step sometimes helps identify a new search region in the hope of finding a better minimum.
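As an illustration of that acceptance step, a generic Metropolis-style rule is sketched below; this is the textbook formulation, and simulannealbnd's actual default acceptance function may differ in detail. The variables xnew, fnew, fcurrent and T are hypothetical names for the candidate point, its objective value, the current objective value and the temperature.

% Generic Metropolis-style acceptance test (illustrative sketch only)
delta = fnew - fcurrent;                 % change in objective at the candidate
if delta < 0 || rand < exp(-delta/T)
    x = xnew;                            % accept: better point, or a worse point
    fcurrent = fnew;                     % accepted with temperature-dependent probability
end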

An Objective Function with Additional Arguments

Sometimes we want our objective function to be parameterized by extra arguments that act as constants during the optimization. For example, in the previous objective function, say we want to replace the constants 4, 2.1, and 4 with parameters that we can change to create a family of objective functions. We can re-write the above function to take three additional parameters to give the new minimization problem.

   min  f(x) = (a - b*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-c + c*x2^2)*x2^2
    x

a, b, and c are parameters to the objective function that act as constants during the optimization (they are not varied as part of the minimization). One can create a MATLAB file called parameterized_objective.m containing the following code.

   function y = parameterized_objective(x,a,b,c)
   y = (a - b*x(1)^2 + x(1)^4/3)*x(1)^2 + x(1)*x(2) + ...
       (-c + c*x(2)^2)*x(2)^2;

Minimizing Using Additional Arguments

Again, we need to pass in a function handle to the objective function as well as a start point as the second argument.

SIMULANNEALBND will call our objective function with just one argument x, but our objective function has four arguments: x, a, b, c. We can use an anonymous function to capture the values of the additional arguments, the constants a, b, and c. We create a function handle 'ObjectiveFunction' to an anonymous function that takes one input x, but calls 'parameterized_objective' with x, a, b and c. The variables a, b, and c have values when the function handle 'ObjectiveFunction' is created, so these values are captured by the anonymous function.

a = 4; b = 2.1; c = 4;    % define constant values
ObjectiveFunction = @(x) parameterized_objective(x,a,b,c);
X0 = [0.5 0.5];
[x,fval] = simulannealbnd(ObjectiveFunction,X0)

Optimization terminated: change in best function value less than options.TolFun.

x =
    0.0898   -0.7127

fval =
   -1.0316
