## Gegenbauer collocation integration methods: advances in computational optimal control theory

2017-03-01T03:21:21Z (GMT)
The analytic solutions of simple optimal control problems may be found using classical tools such as the calculus of variations, dynamic programming, or the minimum principle. In practice, however, a closed-form expression of the optimal control is difficult or even impossible to determine for general nonlinear optimal control problems, so such problems must be solved numerically. The numerical solution of optimal control problems has been studied extensively since the last century; yet determining the optimal control to high precision remains very challenging in many applications. Direct orthogonal collocation methods and direct pseudospectral methods are currently among the most elegant numerical methods for solving nonlinear optimal control problems, and they offer several advantages over many other popular discretization methods in the literature. The key idea of these methods is to transcribe the infinite-dimensional continuous-time optimal control problem into a finite-dimensional nonlinear programming problem. They are based on spectral collocation methods, which have been extensively applied and actively used in many areas. Many polynomial approximations and various discretization points have been introduced and studied in the literature for the solution of optimal control problems using control and/or state parameterizations. The basis polynomials commonly used in direct orthogonal collocation and direct pseudospectral methods are the Chebyshev and Legendre polynomials, and the collocation points are typically chosen to be points of Gauss or Gauss-Lobatto type. The integral operation in the cost functional of an optimal control problem is usually approximated by the well-known Gauss quadrature rules. 
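As a small illustration of the quadrature step described above, the sketch below approximates a Lagrange-type cost functional $J = \int_0^1 (x^2 + u^2)\,dt$ by Gauss-Legendre quadrature. The state and control trajectories here are hypothetical stand-ins chosen only so that the exact integral is known; they are not taken from the dissertation.

```python
import numpy as np

# Hypothetical trajectories for illustration only (not from the dissertation).
x = lambda t: np.exp(-t)           # assumed state trajectory
u = lambda t: -np.exp(-t)          # assumed control trajectory

# Gauss-Legendre nodes/weights on [-1, 1], mapped affinely to [0, 1].
n = 8
tau, w = np.polynomial.legendre.leggauss(n)
t = 0.5 * (tau + 1.0)
w = 0.5 * w

# Quadrature approximation of J = ∫_0^1 (x² + u²) dt.
J = np.sum(w * (x(t) ** 2 + u(t) ** 2))

# Exact value: ∫_0^1 2 e^{-2t} dt = 1 - e^{-2}.
exact = 1.0 - np.exp(-2.0)
print(abs(J - exact))              # error near machine precision
```

With only eight nodes the error is already near machine precision, which is the spectral accuracy that makes Gauss quadrature attractive for cost functionals with smooth integrands.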
The differentiation operations are frequently calculated by multiplying a constant differentiation matrix, known as the spectral differentiation matrix, by the matrix of function values at certain discretization/collocation nodes. Thus, the cost functional, the dynamics, and the constraints of the optimal control problem are approximated by a set of algebraic equations. Unfortunately, two salient limitations are associated with typical direct orthogonal collocation and direct pseudospectral methods: (i) Spectral differentiation matrices, especially those of higher order, are widely known to be ill-conditioned; the numerical computations may therefore be very sensitive to round-off errors. In fact, for a higher-order spectral differentiation matrix, the ill-conditioning becomes so severe that the development of efficient preconditioners is a necessity. (ii) The spectral differentiation matrix employed frequently in the literature of direct orthogonal collocation and direct pseudospectral methods is a square, dense matrix. Therefore, to obtain higher-order approximations in a direct pseudospectral method, one usually has to increase the number of collocation points, which in turn increases the number of constraints and the dimensionality of the resulting nonlinear programming problem; likewise, increasing the number of collocation points in a direct orthogonal collocation method increases the number of constraints of the reduced nonlinear programming problem. Eventually, the growth in the size of the spectral differentiation matrix leads to larger nonlinear programming problems, which may be computationally expensive and time-consuming to solve. The research goals of this dissertation are to furnish an efficient, accurate, rapid, and robust optimal control solver, and to produce a significantly smaller-scale nonlinear programming problem using considerably fewer collocation points. 
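The ill-conditioning noted in point (i) is easy to observe numerically. The sketch below uses the standard first-order Chebyshev differentiation matrix (Trefethen's well-known construction, serving here as a stand-in for the spectral differentiation matrices discussed above) to differentiate $e^x$ one, two, and three times; round-off is amplified with every application of the matrix, so the error of higher-order derivatives grows rapidly with the matrix order even though the truncation error is negligible.

```python
import numpy as np

def cheb_diff_matrix(n):
    """First-order Chebyshev spectral differentiation matrix on the
    n+1 Chebyshev-Gauss-Lobatto points (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                            # negative-sum diagonal trick
    return D, x

# Differentiate e^x repeatedly; (e^x)^(k) = e^x, so the exact values are known.
errs = {}
for n in (16, 64):
    D, x = cheb_diff_matrix(n)
    u = np.exp(x)
    Dk = np.eye(n + 1)
    for k in (1, 2, 3):
        Dk = D @ Dk                                        # k-th order matrix D^k
        errs[(n, k)] = np.max(np.abs(Dk @ u - u))
        print(n, k, errs[(n, k)])
```

The printed errors grow by several orders of magnitude from the first to the third derivative at the larger matrix order, which is exactly the round-off sensitivity that motivates working with integration operators instead.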
To this end, we introduce a direct optimization method based on a novel Gegenbauer collocation integration scheme, which draws upon the power of well-developed nonlinear programming techniques and computer codes, and upon the well-conditioning of numerical integration operators. This technique adopts two principal elements to achieve the research goals: (i) The discretization of the optimal control problem is carried out within the framework of a complete integration environment to take full advantage of the well-conditioned numerical integral operators. (ii) The integral operations included in the components of the optimal control problem are approximated through a novel numerical quadrature that is optimal in a certain measure. The introduced numerical quadrature outperforms classical spectral quadratures in accuracy, and can be established efficiently through the multiplication of a constant rectangular spectral integration matrix by the vector of the integrand's values at some optimal Gegenbauer-Gauss interpolation nodes, which usually differ from the employed integration/collocation nodes. The work presented in this dissertation shows clearly that the rectangular form of the developed numerical integration matrix is essential for achieving very precise solutions without affecting the size of the reduced nonlinear programming problem. Chapter 1 is an introductory chapter that highlights the strengths and weaknesses of various solution methods for optimal control problems and provides the motivation for the present work. The chapter concludes with a general framework for using Gegenbauer expansions to solve optimal control problems and an overview of the remainder of the dissertation. Chapter 2 presents some preliminary mathematical background and basic concepts relevant to the solution of optimal control problems. 
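To illustrate why an integration-based discretization is attractive, the following sketch builds a rectangular Chebyshev integration matrix by exactly antidifferentiating the interpolant of each unit vector. This is a simplified stand-in for the Gegenbauer operators developed in the dissertation, not the dissertation's own construction, but it exhibits the two relevant properties: spectral accuracy and moderate conditioning.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_integration_matrix(n):
    """Rectangular integration matrix Q on the n+1 ascending
    Chebyshev-Gauss-Lobatto points: (Q f)[i] ≈ ∫_{-1}^{x_{i+1}} f(s) ds,
    built by antidifferentiating the Chebyshev interpolant exactly."""
    x = np.cos(np.pi * np.arange(n + 1) / n)[::-1]
    Q = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        e = np.zeros(n + 1)
        e[j] = 1.0
        coef = C.chebfit(x, e, n)                 # interpolate the j-th unit vector
        Q[:, j] = C.chebval(x, C.chebint(coef, lbnd=-1))
    return Q[1:, :], x                            # drop the trivial row at x = -1

# Spectral accuracy: cumulative integral of e^x is e^x - e^{-1}.
Q, x = cheb_integration_matrix(16)
err = np.max(np.abs(Q @ np.exp(x) - (np.exp(x[1:]) - np.exp(-1.0))))
print(err)

# Conditioning of the rectangular operator stays moderate as n grows,
# in contrast to the explosive growth for differentiation matrices.
conds = {n: np.linalg.cond(cheb_integration_matrix(n)[0]) for n in (16, 64)}
print(conds)
```

Dropping the row corresponding to the left endpoint (where the cumulative integral is identically zero) is also what makes the matrix rectangular, echoing the role the rectangular form plays in the dissertation.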
In particular, the chapter introduces key concepts of the calculus of variations, optimal control theory, direct optimization methods, Gegenbauer polynomials, and Gegenbauer collocation, in addition to some other essential topics. Chapter 3 presents an article published in the Journal of Computational and Applied Mathematics titled “Optimal Gegenbauer quadrature over arbitrary integration nodes.” In this chapter, we introduce a novel optimal Gegenbauer quadrature for efficiently approximating definite integrals numerically. The scheme introduces the idea of exploiting the strengths of the Chebyshev, Legendre, and Gegenbauer polynomials through a unified approach and a single numerical quadrature. In particular, it employs the Gegenbauer polynomials to achieve rapid convergence of the quadrature for small numbers of spectral expansion terms. For large numbers of expansion terms, the quadrature has the advantage of converging to the optimal Chebyshev and Legendre quadratures in the $L^\infty$-norm and the $L^2$-norm, respectively. The developed Gegenbauer quadrature can approximate integrals over arbitrary sets of integration nodes. Moreover, exact integrations are obtained for polynomials of arbitrary degree $n$ whenever the number of columns in the developed Gegenbauer integration matrix is greater than or equal to $n$. The error formula for the Gegenbauer quadrature is derived, and a study of the error bounds and the convergence rate shows that the optimal Gegenbauer quadrature converges faster than any finite power of the number of Gegenbauer expansion terms. Two efficient computational algorithms are presented to construct the Gegenbauer quadrature optimally and to maintain the robustness and rapid convergence of the discrete approximations. 
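A minimal sketch of the exactness property: choosing quadrature weights that integrate the interpolating polynomial exactly over $[-1, 1]$, with the interpolation placed at Gegenbauer-Gauss nodes (computed here with SciPy's `roots_gegenbauer`), reproduces the integral of any polynomial of degree below the number of nodes. The construction below, via a monomial moment system, is purely illustrative and is not the dissertation's algorithm; the parameter value $\alpha = 0.7$ is an arbitrary choice.

```python
import numpy as np
from scipy.special import roots_gegenbauer

def interpolatory_weights(nodes):
    """One row of an interpolatory integration matrix: weights w with
    sum(w * f(nodes)) = ∫_{-1}^{1} (interpolant of f)(x) dx, exact for
    all polynomials of degree < len(nodes)."""
    m = len(nodes)
    V = np.vander(nodes, m, increasing=True).T      # V[k, j] = nodes[j] ** k
    # Monomial moments: ∫_{-1}^{1} x^k dx = 2/(k+1) for even k, 0 for odd k.
    moments = np.array([(1 - (-1) ** (k + 1)) / (k + 1) for k in range(m)])
    return np.linalg.solve(V, moments)

alpha = 0.7                                         # illustrative Gegenbauer parameter
nodes, _ = roots_gegenbauer(6, alpha)               # 6 Gegenbauer-Gauss nodes
w = interpolatory_weights(nodes)

# Exact for any polynomial of degree <= 5 (coefficients low-to-high):
p = np.array([3.0, 0.0, -2.0, 1.0, 0.5, -1.0])
approx = w @ np.polyval(p[::-1], nodes)
exact = sum(c * (1 - (-1) ** (k + 1)) / (k + 1) for k, c in enumerate(p))
print(abs(approx - exact))                          # near machine precision
```

The same idea extends row by row to a rectangular matrix whose column count, rather than the number of collocation points, controls the polynomial degree that is integrated exactly.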
We illustrate the high-order approximations of the optimal Gegenbauer quadrature through extensive numerical experiments, including comparisons with conventional Chebyshev, Legendre, and Gegenbauer polynomial expansion methods. The present method is broadly applicable and represents a strong addition to the arsenal of numerical quadrature methods. Chapter 4 presents an article published in Advances in Computational Mathematics titled “On the optimization of Gegenbauer operational matrix of integration.” The chapter focuses on the intriguing question of “which value of the Gegenbauer parameter $\alpha$ is optimal for a Gegenbauer integration matrix to best approximate the solutions of various dynamical systems and optimal control problems?” It highlights those methods in the literature that recast the aforementioned problems into unconstrained/constrained optimization problems and then add the Gegenbauer parameter $\alpha$ associated with the Gegenbauer polynomials as an extra unknown variable to be optimized. The theoretical arguments presented in this chapter prove that this naive policy is invalid, since it violates the discrete Gegenbauer orthonormality relation and may in turn produce false optimization-problem analogs of the original problems with poor solution approximations. Chapter 5 presents an article published in the Journal of Computational and Applied Mathematics titled “Solving boundary value problems, integral, and integro-differential equations using Gegenbauer integration matrices.” The chapter resolves the issues raised in the previous chapter by introducing a hybrid Gegenbauer collocation integration method for solving various dynamical systems, such as boundary value problems and integral and integro-differential equations. 
The proposed method recasts the original problems into their integral formulations, which are then discretized into linear systems of algebraic equations using a hybridization of the Gegenbauer integration matrices developed in Chapter 3. (...)sertation including some suggestions for future research.
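The integral-reformulation idea can be sketched on a toy initial value problem. Recasting $u' = -u$, $u(0) = 1$ in integral form as $u(t) = 1 - \int_0^t u(s)\,ds$ and collocating yields a single linear system $(I + Q)\,u = \mathbf{1}$. The cumulative-integration matrix $Q$ below is a low-order trapezoidal one standing in for the hybrid Gegenbauer matrices, so this is a sketch of the reformulation only, not of the method's spectral accuracy.

```python
import numpy as np

# Collocate u(t) = 1 - ∫_0^t u(s) ds on a uniform grid over [0, 1].
n = 200
t = np.linspace(0.0, 1.0, n + 1)
h = t[1] - t[0]

# Trapezoidal cumulative-integration matrix: (Q f)[i] ≈ ∫_0^{t_i} f(s) ds.
Q = np.zeros((n + 1, n + 1))
for i in range(1, n + 1):
    Q[i, 0] = h / 2
    Q[i, 1:i] = h
    Q[i, i] = h / 2

# (I + Q) u = 1 discretizes the integral equation; exact solution is e^{-t}.
u = np.linalg.solve(np.eye(n + 1) + Q, np.ones(n + 1))
err = np.max(np.abs(u - np.exp(-t)))
print(err)                      # O(h²) for the trapezoidal stand-in
```

Note that no derivative of the unknown is ever discretized: the differential equation enters only through a well-conditioned integration operator, which is the structural point of the reformulation.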
