UID:
almahu_9947362860302882
Format:
XX, 782 p., online resource.
ISBN:
9781461206637
Series Statement:
Applied Mathematical Sciences, 124
Content:
This book deals with optimality conditions, algorithms, and discretization techniques for nonlinear programming, semi-infinite optimization, and optimal control problems. The unifying thread in the presentation consists of an abstract theory, within which optimality conditions are expressed in the form of zeros of optimality functions, algorithms are characterized by point-to-set iteration maps, and all the numerical approximations required in the solution of semi-infinite optimization and optimal control problems are treated within the context of consistent approximations and algorithm implementation techniques. Traditionally, necessary optimality conditions for optimization problems are presented in Lagrange, F. John, or Karush-Kuhn-Tucker multiplier forms, with gradients used for smooth problems and subgradients for nonsmooth problems. We present these classical optimality conditions and show that they are satisfied at a point if and only if this point is a zero of an upper semicontinuous optimality function. The use of optimality functions has several advantages. First, optimality functions can be used in an abstract study of optimization algorithms. Second, many optimization algorithms can be shown to use search directions that are obtained in evaluating optimality functions, thus establishing a clear relationship between optimality conditions and algorithms. Third, establishing optimality conditions for highly complex problems, such as optimal control problems with control and trajectory constraints, is much easier in terms of optimality functions than in the classical manner. In addition, the relationship between optimality conditions for finite-dimensional problems and semi-infinite optimization and optimal control problems becomes transparent.
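As a minimal illustration of the idea (not part of the record's abstract, and restricted to the unconstrained smooth case): for a continuously differentiable function f, an optimality function of the kind described above can be taken as
\[
\theta(x) \;=\; \min_{h}\Big\{ \langle \nabla f(x), h \rangle + \tfrac{1}{2}\|h\|^{2} \Big\} \;=\; -\tfrac{1}{2}\|\nabla f(x)\|^{2},
\]
so that \(\theta(x) \le 0\) for all x, \(\theta\) is continuous (hence upper semicontinuous), and \(\theta(x) = 0\) if and only if \(\nabla f(x) = 0\), i.e., exactly when the classical first-order condition holds; moreover, the minimizing \(h = -\nabla f(x)\) is the gradient-descent search direction, illustrating how evaluating an optimality function also yields a search direction for an algorithm.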
Note:
Contents: Unconstrained Optimization -- Optimality Conditions -- Algorithm Models and Convergence Conditions I -- Gradient Methods -- Newton's Method -- Methods of Conjugate Directions -- Quasi-Newton Methods -- One Dimensional Optimization -- Newton's Method for Equations and Inequalities -- Finite Minimax and Constrained Optimization -- Optimality Conditions for Minimax -- Optimality Conditions for Constrained Optimization -- Algorithm Models and Convergence Conditions II -- First-Order Minimax Algorithms -- Newton's Method for Minimax Problems -- Phase I - Phase II Methods of Centers -- Penalty Function Algorithms -- An Augmented Lagrangian Method.
In:
Springer eBooks
Additional Edition:
Printed edition: ISBN 9781461268611
Language:
English
DOI:
10.1007/978-1-4612-0663-7
URL:
http://dx.doi.org/10.1007/978-1-4612-0663-7