A NEW TECHNIQUE FOR SOLVING MULTIVARIATE GLOBAL OPTIMIZATION PROBLEMS

In this paper, we propose an algorithm based on the Branch and Bound method to underestimate the objective function, together with a reductive transformation that converts multivariate functions into univariate ones. We also propose and analyze several quadratic lower bounding functions that are tighter than those reported in the literature. Our experimental results confirm this advantage on a variety of nonconvex functions.


INTRODUCTION
In convex optimization, a local solution (on a wide enough neighborhood) suffices to determine the optimal solution [1,2,20], whereas the objective of global optimization is to find the globally best solution of possibly nonlinear models, in the possible or known presence of multiple local optima. Formally, global optimization seeks global solutions of a constrained optimization model [19]. Nonlinear models are ubiquitous in many applications, such as advanced engineering design, biotechnology, data analysis, environmental management, financial planning, process control, risk management, and scientific modeling. Their solution often requires a global search approach [18,4,22,3,21]. A variety of adaptive partition strategies has been proposed to solve global optimization models. They are based upon partition, sampling, and subsequent lower and upper bounding procedures, applied iteratively to the collection of active subsets within the feasible set. In this connection, several works have been proposed. Adjiman et al. [5] presented a detailed implementation of the αBB approach and computational studies on process design problems such as heat exchanger networks, reactor-separator networks, and batch design under uncertainty. Akrotirianakis and Floudas [7] presented computational results for a new class of convex underestimators embedded in a branch-and-bound framework for box-constrained NLPs. They also proposed a hybrid global optimization method that includes the random-linkage stochastic approach with the aim of improving computational performance. Caratzoulas and Floudas [9] proposed novel convex underestimators for trigonometric functions. In recent years, univariate global optimization problems have attracted considerable attention, since they arise in many real-life applications and the obtained results can be easily generalized to the multivariate case [6,8,13,16].
In this paper, we propose two approaches for finding a global minimum: one for univariate objective functions and one for multivariate functions. We present the two techniques in turn.

Piecewise Quadratic Underestimations (KBBm):
The main idea is to construct piecewise quadratic underestimators close to the given nonconvex function f on successively reduced intervals [a_k, b_k], with their minima given explicitly. This avoids using a single quadratic that lies far from the objective function [15], or one whose minimum must be determined by a local method [5]. We propose an explicit quadratic relaxation for building lower bounds in global optimization problems with bounded variables. The construction is based on the work of the authors of [15] and uses quadratic splines. The generated quadratic programs have explicit optimal solutions on each subinterval, and the objective is underestimated by several quadratic splines that yield reliable lower bounds.
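As an illustration, the construction can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes the standard interpolant-minus-correction form of the quadratic splines, where K is any upper bound on f'' over [a, b] (here a hand-computed, illustrative value).

```python
import math

def piecewise_quadratic_underestimator(f, a, b, n, K):
    """Build q with q(x) <= f(x) on [a, b], assuming f'' <= K there.

    On each subinterval [x_i, x_{i+1}] the underestimator is the linear
    interpolant of f minus the correction (K/2)(x - x_i)(x_{i+1} - x);
    by the interpolation error formula this lies below f when f'' <= K.
    """
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    fs = [f(x) for x in xs]

    def q(x):
        i = min(int((x - a) / h), n - 1)      # subinterval index of x
        x0, x1 = xs[i], xs[i + 1]
        lin = fs[i] * (x1 - x) / h + fs[i + 1] * (x - x0) / h
        return lin - 0.5 * K * (x - x0) * (x1 - x)

    return q

# A classical nonconvex test function; f''(x) = -sin x - (100/9) sin(10x/3),
# so K = 12.2 is a valid bound on f'' over the whole interval.
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
q = piecewise_quadratic_underestimator(f, 2.7, 7.5, n=16, K=12.2)
```

At every point q stays below f, and the two coincide at the grid nodes, which is what makes the piecewise bound tight.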

Coupling the Branch and Bound with the Alienor Method:
The main idea of this technique is to transform the multivariate optimization problem into a univariate one by a reductive transformation, so that the proposed algorithm can be applied in a single dimension [24]. We thus address multivariate global optimization problems and generalize the previous techniques used in the Branch and Bound algorithm through underestimation of the objective: the search is restricted to a single direction, while the enormous advantage of the underestimation, namely the explicit solutions of the generated quadratic problems, is preserved. In this way, we combine the advantages of the two methods by coupling the Alienor method [23] with Branch and Bound. First, this adaptation reduces the dimension, leaving a single direction in which to search for the minimum of f. Second, it allows us to apply the most effective methods designed for one-dimensional global optimization. The rest of the paper is structured as follows: Section 2 presents the two underestimators proposed in [15,5]. Section 3 constructs a new lower bound on the objective function and describes the proposed algorithm (KBBm) for solving the box-constrained univariate global optimization problem. Section 4 describes the coupling of (KBBm) with the Alienor method to solve multivariate global optimization problems. Section 5 presents numerical examples with different nonconvex objective functions, and we conclude in Section 6.

BACKGROUND
Consider the following global minimization problem: (P_u) min { f(x) : x ∈ X }, where f is a nonconvex, twice differentiable function on X.
In what follows, we give the two underestimators developed by the authors of [5] and [15], respectively.

Quadratic Underestimator in (KBB) Method
This quadratic underestimator satisfies the following properties:

Advantages and Disadvantages of the Two Methods.
(1) The advantage of αBB is the quality of the initial lower bound obtained; the underestimator is also close to the objective function (see Table 2, Table 3).
(2) The disadvantage of αBB is that it uses a local method to determine the values of the lower bounds.
(3) The advantage of KBB is that the values of the lower bounds are given explicitly.
(4) The disadvantage of KBB is that the initial lower bound is very far from the optimal solution; the underestimator is also far from the objective function (see Table 2, Table 3).

THE PROPOSED UNDERESTIMATOR (KBBm)
In this section, we present a new lower bound that merges the advantages of KBB and αBB.
Let X = [a, b] be a closed and bounded interval of the real numbers R, let f be a twice continuously differentiable function on X, let x_0 and x_1 be two real numbers in [a, b] with x_0 ≤ x_1, and let l_0, l_1 be the real-valued functions defined by l_0(x) = (x_1 − x)/(x_1 − x_0) and l_1(x) = (x − x_0)/(x_1 − x_0). The univariate function f(x) is to be underestimated on the interval [a, b]. Suppose that the nodes are equally spaced in [a, b], so that x_i = a + ih with h = (b − a)/n, i = 0, . . . , n. For every interval [x_i, x_{i+1}], we construct the corresponding local quadratic underestimator as
(6) p_i(x) = f(x_i) l_0(x) + f(x_{i+1}) l_1(x) − (K_i/2)(x − x_i)(x_{i+1} − x),
where K_i is an upper bound on the second derivative valid on [x_i, x_{i+1}]. Instead of considering one quadratic lower bound over [a, b], we construct a piecewise quadratic lower bound.
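For concreteness, assuming the local underestimator p_i takes the standard interpolant-minus-correction form used in KBB-type schemes (our reading of (6), with the basis functions l_0, l_1 taken over [x_i, x_{i+1}]), its minimum is explicit:

```latex
p_i(x) = f(x_i)\,\frac{x_{i+1}-x}{h} + f(x_{i+1})\,\frac{x-x_i}{h}
         - \frac{K_i}{2}\,(x-x_i)(x_{i+1}-x), \qquad p_i''(x) = K_i > 0,
\\[4pt]
p_i'(x) = \frac{f(x_{i+1})-f(x_i)}{h} - \frac{K_i}{2}\,\bigl(x_i+x_{i+1}-2x\bigr) = 0
\;\Longrightarrow\;
x_i^{*} = \frac{x_i+x_{i+1}}{2} - \frac{f(x_{i+1})-f(x_i)}{K_i\,h}.
```

Since p_i'' = K_i > 0, each p_i is convex, so x_i^* (projected onto [x_i, x_{i+1}] when it falls outside) gives the explicit minimum p_i(x_i^*); this convexity is what makes the minima of the piecewise bound explicit.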
In the following theorem, we show that the new lower bound is tighter than the lower bound constructed in [15]. On the other hand, it follows that the first inequality of (7) is verified. To justify the second inequality, we consider the function ϕ defined on [x_i, x_{i+1}] by (10) as follows.
which shows that ϕ is a concave function. Hence, we obtain the following inequality.
The second inequality of (7) is thus also proved. □ One computes a quadratic lower bounding underestimator of the objective function f on each subinterval [x_i, x_{i+1}], (i = 0, . . . , n), whose explicit minimizer is
(12) x*_i = (x_i + x_{i+1})/2 − (f(x_{i+1}) − f(x_i))/(K_i h), projected onto [x_i, x_{i+1}].
Now, we compute the values p_i(x*_i); to detect the best lower bound, we compare all lower bounds and keep the smallest one:
(13) LB_k = min_i p_i(x*_i).
The upper bound is calculated by evaluating the objective function at the different grid points and keeping the best value found so far (see Table 5). The different steps for solving the problem (P_u) are summarized in the following proposed algorithm:
Algorithm
Input:
• ε: The accuracy.
• f : The objective function.
• n: The number of quadratics.
Output:
• x*: The global minimum of f.
(1) Initialization step k = 0:
(a) for all i = 0, . . . , n compute x_i = a + ((b − a)/n) i, and set M =
(3) x* = x_k is the optimal solution, corresponding to the best UB_k found.
end algorithm
Theorem 3 (Convergence of the algorithm). The algorithm above satisfies one of the following: either the algorithm is finite, or it generates a bounded sequence {x_k}; in the latter case, any accumulation point of the sequence is a global optimal solution of (P_u), and UB_k ↘ α, LB_k ↗ α are satisfied.
Proof. Assume that the algorithm is infinite. Then it generates an infinite sequence of intervals {T_k} whose lengths h_i (with i = 1, . . . , n) decrease to zero, so the terms of the sequence {T_k} shrink to a singleton. Since the values UB_k are obtained by evaluating f(x) at different points of [a, b], the sequence {UB_k} is bounded below by α = min f(x). On the other hand, the values LB_k are minima of quadratics underestimating the objective function and therefore cannot exceed α; hence the sequence {LB_k} is bounded above by α. Consequently, LB_k ≤ α ≤ UB_k. It remains to prove that {UB_k} is a decreasing sequence and {LB_k} is an increasing one. From the description of the algorithm, the value UB_{k+1} is selected as the smaller of the current UB_k and the newly computed value, which gives UB_{k+1} ≤ UB_k for all k ≥ 0; hence {UB_k} is decreasing. Similarly, the value of the lower bound LB_{k+1} is the minimum of a quadratic located inside the region covered by the quadratic of the current interval [a_{k+1}, b_{k+1}], underestimating the objective on [a_{k+1}, b_{k+1}], which leads to LB_{k+1} ≥ LB_k for all k ≥ 0; hence {LB_k} is increasing on [a, b]. Therefore the theorem is proved. □
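The loop structure of the algorithm, and the monotone behavior of UB_k and LB_k used in the proof, can be sketched as follows. This is a minimal sketch under our assumptions (interpolant-minus-correction underestimators with explicit minima, bisection of the interval with the least lower bound), not the authors' C implementation; `fpp_bound` is a hypothetical helper returning an upper bound K on f'' over an interval.

```python
import math

def kbbm(f, fpp_bound, a, b, n=16, eps=1e-6, max_iter=1000):
    """Branch-and-bound sketch in the spirit of KBBm (illustrative).

    Each active interval carries a lower bound: the least of the explicit
    minima of its n quadratic underestimators.  The upper bound UB is the
    best objective value seen at grid points.  The interval with the least
    lower bound is bisected until UB - LB <= eps.
    """
    def bounds(lo, hi):
        h = (hi - lo) / n
        K = max(fpp_bound(lo, hi), 1e-12)
        lb, ub, xb = math.inf, math.inf, lo
        for i in range(n):
            x0, x1 = lo + i * h, lo + (i + 1) * h
            f0, f1 = f(x0), f(x1)
            # explicit (clipped) minimizer of the quadratic underestimator
            xs = 0.5 * (x0 + x1) - (f1 - f0) / (K * h)
            xs = min(max(xs, x0), x1)
            p = f0*(x1 - xs)/h + f1*(xs - x0)/h - 0.5*K*(xs - x0)*(x1 - xs)
            lb = min(lb, p)
            for x, fx in ((x0, f0), (x1, f1)):
                if fx < ub:
                    ub, xb = fx, x
        return lb, ub, xb

    lb0, UB, xbest = bounds(a, b)
    active = [(lb0, a, b)]                    # intervals still to explore
    for _ in range(max_iter):
        LB = min(t[0] for t in active)
        if UB - LB <= eps:
            break                             # bounds have closed on alpha
        j = min(range(len(active)), key=lambda k: active[k][0])
        _, lo, hi = active.pop(j)             # bisect the least-LB interval
        mid = 0.5 * (lo + hi)
        for s0, s1 in ((lo, mid), (mid, hi)):
            slb, sub, sx = bounds(s0, s1)
            if sub < UB:
                UB, xbest = sub, sx           # UB_k is non-increasing
            if slb <= UB:                     # prune dominated intervals
                active.append((slb, s0, s1))
        if not active:
            break
    return xbest, UB

# Demo: min of sin(x) + sin(10x/3) on [2.7, 7.5]; |f''| <= 1 + 100/9,
# so the constant 12.2 is a valid (if crude) second-derivative bound.
x_star, f_star = kbbm(lambda x: math.sin(x) + math.sin(10 * x / 3),
                      lambda lo, hi: 12.2, 2.7, 7.5)
```

Because LB_k ≤ α ≤ UB_k throughout, termination with UB − LB ≤ ε certifies the returned value to within ε of the global minimum.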

COUPLING THE ALIENOR METHOD WITH KBBm
The fundamental principle of the reductive transformation [12] is to perform a transformation that reduces the multidimensional problem to a one-dimensional one, so that the most effective optimization methods adapted to the single-variable case can be applied. The basic idea is to cover the feasible set by a continuous and fairly regular parametric (α-dense) curve; the multivariate function is thereby transformed into a function of a single variable. Our problem is thus reduced to an easier one, since there is only one direction to explore.
We combine the Alienor method, developed by the authors of [19], with our algorithm (KBBm).
Let f be a nonconvex, twice differentiable function on a box X = Π_{i=1}^n [a_i, b_i], and consider the corresponding global optimization problem (P). The idea is to transform a function of several variables into a function of a single variable, so that it becomes quite simple to determine the global minimum, with f*(θ) = f(h_1(θ), h_2(θ), . . . , h_n(θ)). We now apply the Alienor method to global optimization [23,24]. Let x_1, . . . , x_n be the n variables. The method expresses these variables in terms of a single one by densifying X = Π_{i=1}^n [a_i, b_i] with a simple curve; we therefore construct a parametric curve.
The transformation is easy to extend to the whole box: it suffices to set
(16) x_i = h_i(θ), i = 1, . . . , n.
Hence, the original minimization problem (P) is approximated by the one-dimensional minimization problem
(17) min f*(θ), 0 ≤ θ ≤ θ_max,
for f*(θ) = f(h_1(θ), . . . , h_n(θ)), where θ_max is taken as the largest value needed for (x_1, x_2, . . . , x_n) to cover the box. Now, we can use our algorithm (KBBm) to obtain θ*, the optimal solution of the transformed problem. Finally, we apply the "step back" to obtain the approximate solution of the original problem:
(18) x*_i = h_i(θ*), i = 1, . . . , n.
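The transformation and the step back can be sketched as follows. This is a minimal sketch, not the authors' construction: the cosine curve with incommensurate frequencies is one common Alienor-style α-dense family, and a plain grid search over θ stands in for KBBm; `omegas`, `theta_max`, and `m` are illustrative parameters.

```python
import math

def alienor_curve(bounds, omegas):
    """h(theta) maps a single parameter into the box Prod [a_i, b_i] via
    h_i(theta) = midpoint_i + halfwidth_i * cos(omega_i * theta); with
    rationally independent omegas the curve becomes alpha-dense in the
    box as theta grows."""
    def h(theta):
        return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(w * theta)
                for (a, b), w in zip(bounds, omegas)]
    return h

def minimize_via_alienor(f, bounds, omegas, theta_max, m):
    """Reduce min f(x) over the box to a univariate search in theta;
    the 'step back' x* = h(theta*) recovers a point of the box."""
    h = alienor_curve(bounds, omegas)
    f_star = lambda t: f(*h(t))               # univariate surrogate f*(theta)
    thetas = [theta_max * j / m for j in range(m + 1)]
    t_best = min(thetas, key=f_star)          # grid search stands in for KBBm
    return h(t_best), f_star(t_best)

# Demo on the 2-D Rastrigin function (global minimum 0 at the origin).
def rastrigin(x, y):
    return 20 + x*x - 10*math.cos(2*math.pi*x) \
              + y*y - 10*math.cos(2*math.pi*y)

x_best, val = minimize_via_alienor(rastrigin, [(-5.12, 5.12)] * 2,
                                   omegas=[1.0, math.pi / 2],
                                   theta_max=200.0, m=200000)
```

Note that the univariate surrogate f*(θ) inherits the multimodality of f, which is exactly why a guaranteed univariate method such as KBBm is wanted in place of the grid search above.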

COMPUTATIONAL ASPECTS AND RESULTS
To measure the performance of our KBBm algorithm, we carried out a comparative study with KBB and αBB. These algorithms are implemented in the C programming language with double-precision floating point, running on a computer with an Intel(R) Core(TM) i3-311MCP4 CPU at 2.40 GHz. Numerical tests are performed in three parts on the set of test functions. In the first experiment, we compare the performance of the KBB, αBB and KBBm algorithms on a set of 10 functions. Here, we include a method that computes the positive numbers α and K [14]. The number of quadratic functions used in KBBm at each iteration is fixed to n = 16, and the accuracy to ε = 10^−6. In the second experiment, we test the KBBm algorithm with respect to the initial lower bound obtained for different numbers of quadratic functions, on a set of 20 functions. In the third experiment, we compare the performance of our approach with the generalization to multivariate functions developed by the authors of [17], using the Rastrigin function, which has special features [11].
In our results, we use the following notation in the tables:
• f* is the optimum obtained,
• LB_0 is the initial lower bound,
• T_CPU is the execution time in seconds,
• m is the total number of intervals,
• m_e is the number of intervals eliminated,
• LM is the number of local minima,
• GM is the number of global minima,
• (*): an asterisk denotes that the bound is equal to the known global optimum f* within six decimal digits of accuracy.
The execution time required to reach the optimal value is a reliable criterion for the algorithm's performance. The numerical results are summarized in Table 3 and Table 4: the performance of the proposed method is clearly better than that of the KBB method. The quality of the initial lower bound obtained remains an important criterion for measuring the validity of the underestimator. Table 2, Table 3 and Table 4 report the comparative study of the quality of the initial lower bound found by the three algorithms; they show that our method is better than the other two. Table 5 confirms the competence of our method when the number of quadratics is doubled: the values of the lower bound improve. Table 6 and Table 7 clearly show that our approach is better than the method developed in [17] in terms of execution time for multivariate global optimization. The two-dimensional results are encouraging and promising in terms of execution time as well.

CONCLUSION
We presented a method for underestimating a nonconvex objective based on piecewise quadratic functions whose minima are explicit. The comparison of the lower bounds favors such quadratics over the alternatives, which guarantees the underestimation of the objective. This approach is validated within a deterministic Branch and Bound framework, fully detailed above, which certifies a bracketing of the value of the global minimum at termination. The extension of this technique to the multidimensional case still requires preserving the benefits already obtained. The coupling of the Alienor method with the Branch and Bound reduces the dimension of the problem and makes it easier to solve; this coupling has already been applied to two-dimensional problems and proved very effective. Many numerical experiments were performed and confirmed the effectiveness of this new acceleration technique. The performance of the proposed procedure depends on the quality of the chosen lower bound of f; in this respect, our piecewise quadratic lower bounding functions are better than the two underestimators presented in [15,5].