THE GRADIENT (STEEPEST DESCENT) METHOD
Keywords:
Gradient method, steepest descent, iterative method, symmetric matrix, positive definiteness, functional, optimization, Python, algorithm, error vector.

Abstract
This paper discusses the steepest descent variant of the gradient method for solving systems of linear algebraic equations. The procedure for finding the solution of the system by minimizing a quadratic functional is explained step by step. The theoretical foundations of the iterative approach, based on the gradient and error vectors, are presented. As an example, a system of equations in four variables is solved in the Python programming environment, and the convergence rate and accuracy of the algorithm are demonstrated. The results show that the gradient method is a simple and efficient computational tool suitable for solving large-scale linear systems.
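The iterative scheme summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact program: the specific four-variable system solved in the paper is not reproduced here, so the matrix A and right-hand side b below are hypothetical, chosen only to be symmetric positive definite so that minimizing the functional f(x) = ½xᵀAx − bᵀx is equivalent to solving Ax = b.

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Solve Ax = b for symmetric positive definite A by minimizing
    f(x) = 0.5 * x^T A x - b^T x with the steepest descent method."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x                  # residual = negative gradient of f at x
        rr = r @ r
        if np.sqrt(rr) < tol:          # stop when the residual norm is small
            break
        alpha = rr / (r @ (A @ r))     # exact line-search step along r
        x = x + alpha * r              # move downhill along the gradient
    return x

# Hypothetical 4x4 symmetric positive definite system (illustrative only)
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = steepest_descent(A, b)
```

Each iteration moves along the residual direction with the step length that exactly minimizes the functional along that line; for symmetric positive definite matrices this guarantees convergence, with the rate governed by the condition number of A.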