Optimization Technique

119_Modified Newton Method

elif 2024. 3. 28. 14:33

The Modified Newton Method is a numerical approach for solving optimization problems, designed to overcome the Newton Method's inability to guarantee convergence under certain conditions.

The Newton Method is used to find the extremum of a function and works effectively when the Hessian of the function is continuous and positive definite. In each iteration, it uses the gradient and the Hessian evaluated at the current estimate to calculate the next estimate.
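
Concretely, one Newton iteration updates the current estimate $x^{(k)}$ using the gradient $\mathbf{c}^{(k)} = \nabla f\left(x^{(k)}\right)$ and the Hessian $H^{(k)}$:

$$x^{(k+1)} = x^{(k)} - \left[H^{(k)}\right]^{-1}\mathbf{c}^{(k)}$$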

The Modified Newton Method adopts strategies such as using an approximate inverse of the Hessian or adding a small value to the Hessian to ensure it remains positive definite. This makes it useful in situations where calculating the Hessian is challenging or where the Hessian is singular or indefinite.
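
As a rough sketch of the second strategy, the Hessian can be shifted by a multiple of the identity until a Cholesky factorization succeeds. This is a minimal illustration, not a fixed prescription; the function name, the initial shift `tau`, and the doubling schedule are my own assumptions.

```python
import numpy as np

def modified_newton_direction(hessian, gradient, tau=1e-4, max_tries=50):
    """Return the direction d solving (H + lam*I) d = -c, increasing the
    shift lam until H + lam*I is positive definite."""
    n = hessian.shape[0]
    lam = 0.0
    for _ in range(max_tries):
        try:
            # Cholesky succeeds only for a positive definite matrix.
            L = np.linalg.cholesky(hessian + lam * np.eye(n))
            # Solve (H + lam*I) d = -c via the two triangular systems.
            y = np.linalg.solve(L, -gradient)
            return np.linalg.solve(L.T, y)
        except np.linalg.LinAlgError:
            lam = max(2.0 * lam, tau)  # grow the shift and retry
    raise RuntimeError("could not make the Hessian positive definite")
```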

Let us walk through the process of finding the minimum of a given function using the Modified Newton Algorithm with an example.

The starting point is $x^{(0)} = (5, 10)$ and the stopping criterion is $\varepsilon = 0.0001$.

First, calculate the gradient vector $\mathbf{c}^{(0)}$ at the starting point and find its norm.
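
The concrete numbers depend on the cost function; as an illustrative assumption (chosen so that the run below matches the one-iteration convergence described later), take $f(x_1, x_2) = 3x_1^2 + 2x_1x_2 + 2x_2^2 + 7$. Then

$$\mathbf{c} = \nabla f = \begin{pmatrix} 6x_1 + 2x_2 \\ 2x_1 + 4x_2 \end{pmatrix}, \qquad \mathbf{c}^{(0)} = \begin{pmatrix} 50 \\ 50 \end{pmatrix}, \qquad \left\lVert \mathbf{c}^{(0)} \right\rVert = 50\sqrt{2} \approx 70.71 > \varepsilon.$$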

Since the convergence criterion is not satisfied, the process continues: calculate the Hessian at the starting point.
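
For the assumed quadratic, the Hessian is constant:

$$H = \begin{pmatrix} 6 & 2 \\ 2 & 4 \end{pmatrix}.$$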

Since all eigenvalues of the Hessian are positive, it is positive definite. This means that the Newton direction satisfies the descent condition at each iteration. Therefore, the Hessian matrix and the gradient vector are used to determine the direction of the design change.
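
With the assumed Hessian, the eigenvalues are $\lambda = 5 \pm \sqrt{5} \approx 7.24,\ 2.76$, both positive, so no modification is needed and the direction of design change is the plain Newton direction:

$$\mathbf{d}^{(0)} = -H^{-1}\mathbf{c}^{(0)} = -\frac{1}{20}\begin{pmatrix} 4 & -2 \\ -2 & 6 \end{pmatrix}\begin{pmatrix} 50 \\ 50 \end{pmatrix} = \begin{pmatrix} -5 \\ -10 \end{pmatrix}.$$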

Next, calculate the step size that minimizes the value of the function $f$ along this direction.
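
For the assumed quadratic, this one-dimensional minimization has a closed form:

$$f\left(x^{(0)} + \alpha\,\mathbf{d}^{(0)}\right) = f(5 - 5\alpha,\ 10 - 10\alpha) = 375(1 - \alpha)^2 + 7,$$

which is minimized at the step size $\alpha^{(0)} = 1$.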

Using the calculated step size and the design change direction, calculate the next point.
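
Continuing with the assumed numbers:

$$x^{(1)} = x^{(0)} + \alpha^{(0)}\mathbf{d}^{(0)} = \begin{pmatrix} 5 \\ 10 \end{pmatrix} + 1 \cdot \begin{pmatrix} -5 \\ -10 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$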

Calculate the cost function and its gradient at the next point.
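
For the assumed quadratic these evaluate to

$$f\left(x^{(1)}\right) = 7, \qquad \mathbf{c}^{(1)} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad \left\lVert \mathbf{c}^{(1)} \right\rVert = 0 < \varepsilon.$$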

Since the magnitude of the gradient is less than $\varepsilon$, we know that the Newton method has found the solution in just one iteration. If it were greater than $\varepsilon$, the above process would be repeated until the convergence condition is satisfied, continuing the search for the minimum.

Through this process, we can see how the Modified Newton Algorithm effectively and quickly finds the minimum value of the given function.
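
The whole procedure condenses into a short script. This is a minimal sketch under the same assumed quadratic; the helper names and the grid-based line search are illustrative choices, not part of the original example.

```python
import numpy as np

def f(x):
    # Assumed illustrative cost function.
    return 3*x[0]**2 + 2*x[0]*x[1] + 2*x[1]**2 + 7

def gradient(x):
    return np.array([6*x[0] + 2*x[1], 2*x[0] + 4*x[1]])

def hessian(x):
    return np.array([[6.0, 2.0], [2.0, 4.0]])  # constant for a quadratic

def modified_newton(x0, eps=1e-4, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        c = gradient(x)
        if np.linalg.norm(c) < eps:  # stopping criterion
            return x, k
        H, lam, n = hessian(x), 0.0, len(x)
        # Shift the Hessian until it is positive definite (Cholesky test).
        while True:
            try:
                np.linalg.cholesky(H + lam * np.eye(n))
                break
            except np.linalg.LinAlgError:
                lam = max(2.0 * lam, 1e-4)
        d = np.linalg.solve(H + lam * np.eye(n), -c)  # search direction
        # Coarse grid scan as a simple stand-in for an exact line search.
        alphas = np.linspace(0.0, 2.0, 2001)
        alpha = alphas[np.argmin([f(x + a * d) for a in alphas])]
        x = x + alpha * d
    return x, max_iter

x_star, iters = modified_newton([5.0, 10.0])
print(x_star, iters)  # expected: approximately [0. 0.] after 1 iteration
```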
