ON THE CONVERGENCE OF A METHOD FOR SOLVING TWO POINT BOUNDARY VALUE PROBLEMS BY OPTIMAL CONTROL

Abstract. Using the idea of the least squares method, a nonlinear two point boundary value problem is transformed into an optimal control problem, which is then solved by the gradient method. The convergence of the method is investigated and numerical results are reported.

Sokolowski, Matsumura and Sakawa [12] used optimal control methods to solve two point boundary value problems of the form

-(d/dt)[ a(t, y(t), dy/dt) dy/dt ] + q y(t) = f(t),  t ∈ [0, 1],  y(0) = y(1) = 0.

Nonlinear two point boundary value problems and optimal control problems are closely connected: necessary optimality conditions, such as Pontryagin's maximum principle, lead for some optimal control problems to a nonlinear two point boundary value problem of the form (3)-(5).
In our case, the derived OCP may be solved efficiently using the gradient method. The application of the gradient method to optimal control problems is well known: see Polak [10], Polak and Klessig (1973), Fedorenko (1978) and Miele [9].
Although the NTPBVP (1)-(2) does not have a very general form, owing to its boundary conditions, our approach singles out a class of NTPBVPs that may be solved efficiently using optimization techniques.

Consider the NTPBVP (1)-(2).
We assume that the NTPBVP has a unique solution and that f is continuous together with its first and second order partial derivatives. If x(t) is the solution of the NTPBVP (1)-(2), then the pair (u(t), x(t)) is the solution of the OCP (6): minimize the cost functional subject to the differential constraint and the boundary conditions (2).

Denoting x_1(t) = x(t), x_2(t) = x'(t), ..., x_m(t) = x^(m-1)(t), u(t) = x^(m)(t) and x = (x_1, x_2, ..., x_m), the above problem may be written as an OCP for a first order differential system: (8) minimize the cost functional subject to the linear system (9) and the boundary condition (10).

For a given u, the solution of the linear system (9) is expressed in terms of a constant vector c. Using the shooting method, in order to satisfy the boundary condition (10), the vector c is taken as the solution of an algebraic system. We suppose that the matrix R = A + BH(b) is nonsingular. It then follows that the representation (12) of x holds.

To solve the OCP (8)-(10) by the gradient method, one constructs the sequence u_{k+1} = u_k - µ_k I'(u_k). The descent parameter µ_k is usually computed as the solution of a one dimensional optimization problem. The Gâteaux derivative of the cost functional is expressed through functions δx and δu which satisfy a linear boundary value problem; from (12) an explicit form for δx follows, and hence the expression of the gradient.

For the problem (3)-(5) the gradient of the cost functional may be computed by (13), where p_1^u and p_2^u are the solutions of the following two point boundary value problem (the co-state system). In this case, for the control function u, the corresponding trajectory is given in closed form, and the required expressions then follow from (14)-(17).
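As an illustration of the gradient iteration described above (a sketch under assumed names, not the authors' implementation), the update u_{k+1} = u_k - µ_k I'(u_k) with a one dimensional search for the descent parameter can be written generically; a small quadratic cost stands in for the OCP functional I:

```python
import numpy as np

def gradient_method(I, grad_I, u0, mus=np.linspace(0.0, 2.0, 201),
                    tol=1e-8, max_iter=500):
    """Generic gradient method u_{k+1} = u_k - mu_k * I'(u_k).

    The descent parameter mu_k is chosen here by a crude grid search
    over `mus`, standing in for the one dimensional minimization of
    mu -> I(u_k - mu * I'(u_k)) described in the text.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g = grad_I(u)
        if np.linalg.norm(g) < tol:
            break
        # pick the step that most decreases the cost along -g
        mu = min(mus, key=lambda m: I(u - m * g))
        u = u - mu * g
    return u

# Toy stand-in for the cost functional: I(u) = 0.5 * ||A u - b||^2,
# whose gradient is A^T (A u - b); the minimizer solves A u = b.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
I = lambda u: 0.5 * np.sum((A @ u - b) ** 2)
grad_I = lambda u: A.T @ (A @ u - b)

u_star = gradient_method(I, grad_I, np.zeros(2))
```

In the paper the evaluation of I and I' requires integrating the state and co-state systems; the quadratic cost above only isolates the outer iteration.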

THE CONVERGENCE RESULT
We state a convergence result for the method considered above, applied to the problem (3)-(5).
If u_0 ∈ L^2[0, T] we denote by M_{I(u_0)} the set defined by

and we introduce the assumption:

(H) For any u, v ∈ M_{I(u_0)} there exists C > 0 such that

for any t ∈ [0, T] and k ∈ {1, 2}.
If the function f has continuous and bounded first and second order partial derivatives then the assumption (H) is satisfied.
We state some preliminary results.
Theorem 1. There exist positive constants C_1 and C_2 such that

for any u, v ∈ L^2[0, T] and any t ∈ [0, T].
Proof. From (18) we find

It follows that

Thus

Theorem 2. If the assumption (H) is valid, then there exist positive constants C_3 and C_4 such that

for any u, v ∈ M_{I(u_0)}.
Proof. (i) First, from

using the assumption (H) we deduce that

Then, from

using the above inequalities we obtain

Hence

Let U be a Hilbert space and J : U → R a Gâteaux differentiable functional. We now establish a convergence theorem suited to the gradient method used to solve the optimization problem min_{u ∈ U} J(u).

(1) J is Gâteaux differentiable and bounded below;
(2) There exists L > 0 such that

then there exists δ ∈ (0, 1/L) such that the sequence (u_k)_{k∈N}, defined by

Proof. First we prove that there exists δ ∈ (0, 1/L) such that, for any u ∈ M_{J(u_0)}, any µ ∈ E_δ and any t ∈ [0, µ], we have u - tJ'(u) ∈ M_{J(u)}.
Let us suppose, by contradiction, that for any δ ∈ (0, 1/L) there exists

The following relations are then valid:

Consequently, the assertions of the theorem follow from the inequalities

Because the functional I is Gâteaux differentiable, bounded below and satisfies the Lipschitz property (Theorem 3.2), as a consequence of Theorem 3.3 we obtain the following result.
Using the formulas (18), (19), (20) and (6), the quantities x^k_{1,h}, x^k_{2,h}, p^k_{2,h} and I(u^k_h) were computed with the trapezoidal rule of integration. If, for i = 0, 1, . . ., n,

then, using a one dimensional optimization algorithm based on parabolic interpolation, µ_k is found as

The next approximation is given by

The stopping condition is

The results are presented in Table 1. The values of the cost functional I(u^k_h) and the error are presented in Table 2.
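A minimal sketch of the composite trapezoidal rule used for these quadratures (the grid and the test integrand are illustrative; t - t^2 happens to be the exact solution of Example 2 below):

```python
import numpy as np

def trapezoid(y, h):
    """Composite trapezoidal rule on a uniform grid with spacing h:
    integral ≈ h * (y_0/2 + y_1 + ... + y_{n-1} + y_n/2)."""
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Example: the integral of t - t^2 over [0, 1] equals 1/6.
n = 1000
t = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
approx = trapezoid(t - t**2, h)
```

The quadrature error is O(h^2); for a concave integrand such as t - t^2 the trapezoidal rule slightly underestimates the integral.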
Example 2. Consider the equation

with the solution x(t) = t - t^2 (Sokolowski, Matsumura and Sakawa [12]). In this case

The results are presented in Table 3 and Table 4, respectively.

Table 1 .
The discrete solution.

Table 2 .
The evolution of the cost functional.

Table 3 .
The discrete solution.

Table 4 .
The evolution of the cost functional.