Complexity Analysis of Interior Point Methods for Convex Quadratic Programming Based on a Parameterized Kernel Function

abstract: Kernel functions play an important role in improving the computational complexity of interior-point algorithms. In this paper, we present a primal-dual interior-point algorithm for solving convex quadratic programming based on a new parametric kernel function. The proposed kernel function is neither logarithmic nor self-regular. We analyze the large- and small-update versions of the algorithm based on this new kernel function, and we obtain the best known iteration bound for large-update methods, which significantly improves the complexity results obtained so far. This result is the first to reach this goal with a kernel function of this type.


Introduction
Convex quadratic programs (CQP) appear in many areas of applications, for example in finance, agriculture, economics, optimal control, geometric problems and also as sub-problems in sequential quadratic programming.
There is a variety of solution approaches for CQP, which have been studied intensively. Among them, interior point methods (IPMs) have gained more attention than other methods. Feasible primal-dual path-following methods are the most attractive IPMs [13,14]. The derived algorithms achieve important results such as polynomial complexity and numerical efficiency. In practice, however, these methods cannot always find a strictly feasible centered point from which to start; it is therefore worth analyzing the case where the starting point is not centered. This leads to a technique based on finding an initial strictly feasible point that is not necessarily centered.
Primal-dual interior point methods based on a kernel function were studied extensively by many authors for linear optimization (LO). Bai et al. [1] presented a large class of eligible kernel functions, which is fairly general and includes the classical logarithmic functions and the self-regular functions, as well as many non-self-regular functions as special cases. For some other related kernel function, we refer to [3,4,5,6,7,8,9,12].
In 2001, Peng et al. [9] introduced a new paradigm for primal-dual interior-point algorithms for LO, with $O\left(qn^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\right)$ iteration complexity for large-update methods with $q > 1$.
In 2002, Bai et al. [2] proposed a new parametric kernel function for LO, with $O\left(qn\log\frac{n}{\varepsilon}\right)$ iteration complexity for large-update methods with $q > 1$.
In this paper, we propose a primal-dual interior-point method for solving CQP based on a new parametric kernel function. This function is used both to determine the new search directions and to measure the distance between a given iterate and the µ-center. We present complexity results for the generic algorithm and prove that the bound for large-update methods enjoys $O\left((p+2)n^{\frac{p+3}{2(p+2)}}\log\frac{n}{\varepsilon}\right)$ iteration complexity.

The paper is organized as follows. In section 2, the statement of the problem is presented and we recall the basic concepts of IPMs for CQP. Section 3 contains some properties of kernel functions. An analysis of the interior-point algorithm is described in section 4, including several properties and the growth behavior of the barrier function, the estimate of the step size, and the decreasing behavior of the barrier function. We derive the complexity bound of the algorithm in section 5. In section 6, we present a new kernel function and its properties. Section 7 contains some numerical experiments and comments. In section 8, a conclusion is stated.
The following notations are used throughout the paper. Let $\Re^{n}$ be the n-dimensional Euclidean space with the inner product $\langle \cdot,\cdot\rangle$ and the 2-norm $\|\cdot\|$. $\Re^{n}_{+}$ and $\Re^{n}_{++}$ denote the nonnegative orthant and the positive orthant, respectively. For $x, z \in \Re^{n}$, $x_{\min}$ and $x_i z_i$ denote the smallest component of the vector $x$ and the componentwise product of the vectors $x$ and $z$, respectively. $X = \mathrm{diag}(x)$ denotes the diagonal matrix whose diagonal entries are the components of the vector $x \in \Re^{n}$, and $e$ denotes the n-dimensional vector of ones. For $f, g : \Re_{++} \to \Re_{++}$, we write $f(x) = \Theta(g(x))$ if $C_2\, g(x) \le f(x) \le C_3\, g(x)$ for some positive constants $C_2$ and $C_3$.

Preliminaries
We consider the standard primal convex quadratic programming problem
$$(P)\qquad \min\left\{c^{T}x+\frac{1}{2}x^{T}Qx \;:\; Ax=b,\ x\geq 0\right\},$$
where $Q$ is a given $n \times n$ real symmetric positive semidefinite matrix, $A$ is a given $m \times n$ real matrix, $c \in \Re^{n}$, $b \in \Re^{m}$, and $x \in \Re^{n}$. The dual problem of (P) can be formulated as
$$(D)\qquad \max\left\{b^{T}y-\frac{1}{2}x^{T}Qx \;:\; A^{T}y+z-Qx=c,\ z\geq 0\right\},$$
where $z \in \Re^{n}$ and $y \in \Re^{m}$.
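To make the primal-dual pair concrete, the following sketch (Python/NumPy; the toy instance and the optimal point are our own illustrative assumptions, not taken from the paper) checks primal feasibility, dual feasibility, and complementarity at a known optimal primal-dual pair:

```python
import numpy as np

# Toy CQP: minimize c^T x + 0.5 x^T Q x  subject to  A x = b, x >= 0
Q = np.eye(2)                     # symmetric positive semidefinite
A = np.array([[1.0, 1.0]])        # full row rank, as in assumption (H1)
b = np.array([2.0])
c = np.zeros(2)

# Optimal primal-dual point for this instance (x > 0, hence z = 0)
x = np.array([1.0, 1.0])
y = np.array([1.0])
z = np.zeros(2)

print(np.allclose(A @ x, b))                 # primal feasibility
print(np.allclose(A.T @ y + z - Q @ x, c))   # dual feasibility
print(np.allclose(x * z, 0))                 # complementarity
```

All three checks print True for this instance, illustrating the optimality conditions that the interior-point method targets.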

Central path for CQP
Throughout the paper, we make the following assumptions: (H1) The matrix A has full row rank (rank(A) = m < n). (H2) (P) and (D) satisfy the interior point condition (IPC), i.e., there exists $(x^{0}, y^{0}, z^{0})$ such that $Ax^{0} = b$, $x^{0} > 0$, $A^{T}y^{0} + z^{0} - Qx^{0} = c$, $z^{0} > 0$.

It is well known that finding an optimal solution of (P) and (D) is equivalent to solving the nonlinear system
$$(2.2)\qquad Ax = b,\ x \ge 0,\qquad A^{T}y + z - Qx = c,\ z \ge 0,\qquad xz = 0.$$
The basic idea of primal-dual IPMs is to replace the complementarity condition $xz = 0$ in (2.2) by the parameterized equation $xz = \mu e$, which yields the perturbed system
$$(2.3)\qquad Ax = b,\ x > 0,\qquad A^{T}y + z - Qx = c,\ z > 0,\qquad xz = \mu e,$$
where µ is a positive parameter. It is shown that, under our assumptions, the system (2.3) has a unique solution $(x(\mu), y(\mu), z(\mu))$ for each µ > 0; $x(\mu)$ and $(y(\mu), z(\mu))$ are called the µ-centers of (P) and (D), respectively. The set of all µ-centers forms the so-called central path of (P) and (D).
The principal idea of IPMs is to follow this central path and approach the optimal set of CQP as µ goes to zero.
From a theoretical point of view, the IPC can be assumed without loss of generality. In fact, we may assume that $\mu_0 = 1$ and $x(1) = z(1) = e$ to simplify the theoretical analysis (see [13]).

The search directions determined by kernel function
Applying Newton's method to (2.3) at a given feasible point (x, y, z), the Newton direction $(\Delta x, \Delta y, \Delta z)$ at this point is the unique solution of the following linear system of equations:
$$(2.4)\qquad A\Delta x = 0,\qquad A^{T}\Delta y + \Delta z - Q\Delta x = 0,\qquad z\Delta x + x\Delta z = \mu e - xz.$$
In this paper we follow [3] and reformulate the Newton direction in a different way. Let us introduce the scaling vector and the scaled search directions $d_x$ and $d_z$:
$$(2.5)\qquad v := \sqrt{\frac{xz}{\mu}},\qquad d_x := \frac{v\Delta x}{x},\qquad d_z := \frac{v\Delta z}{z}.$$
Note that if x is primal feasible and z is dual feasible, then the pair (x, z) coincides with the µ-center $(x(\mu), z(\mu))$ if and only if $v = e$. With these scaled directions, system (2.4) can be rewritten as
$$(2.6)\qquad \bar{A}d_x = 0,\qquad \bar{A}^{T}\frac{\Delta y}{\mu} + d_z - \bar{Q}d_x = 0,\qquad d_x + d_z = v^{-1} - v,$$
where $\bar{A} := AV^{-1}X$, $\bar{Q} := \frac{1}{\mu}V^{-1}XQXV^{-1}$, $V := \mathrm{diag}(v)$, and $X := \mathrm{diag}(x)$. It is not difficult to verify that the right-hand side of the third equation in (2.6) equals minus the gradient of the classical logarithmic barrier function $\Phi(v) : \Re^{n}_{++} \to \Re_{+}$, defined as
$$(2.7)\qquad \Phi(v) := \sum_{i=1}^{n}\psi(v_i),\qquad \psi(t) := \frac{t^{2}-1}{2} - \log t.$$
Moreover, we call ψ the kernel function of the logarithmic barrier function Φ. Replacing the classical kernel by a general kernel function ψ, the system (2.6) can be rewritten as
$$(2.8)\qquad \bar{A}d_x = 0,\qquad \bar{A}^{T}\frac{\Delta y}{\mu} + d_z - \bar{Q}d_x = 0,\qquad d_x + d_z = -\nabla\Phi(v).$$
We use $\Phi(v)$ as the proximity function to measure the distance between the current iterate and the µ-center for a given µ > 0. We also define the norm-based proximity measure $\delta(v) : \Re^{n}_{++} \to \Re_{+}$ as
$$(2.9)\qquad \delta(v) := \frac{1}{2}\|\nabla\Phi(v)\| = \frac{1}{2}\|d_x + d_z\|.$$
The result of a Newton step with step size α is denoted by
$$(2.10)\qquad x_+ := x + \alpha\Delta x,\qquad y_+ := y + \alpha\Delta y,\qquad z_+ := z + \alpha\Delta z,$$
where α satisfies $0 < \alpha \le 1$.
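For illustration, the following sketch (Python/NumPy; the data are random and all names are ours) solves the Newton system (2.4) directly at an arbitrary positive point and verifies that the scaled directions satisfy $d_x + d_z = v^{-1} - v$, i.e., the third equation of (2.6) for the classical logarithmic kernel. This identity follows from the third equation of (2.4) alone, so the point need not be feasible:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, mu = 3, 1, 0.5
Q = np.eye(n)
A = np.ones((m, n))
x = rng.uniform(0.5, 1.5, n)      # any positive point (feasibility not needed here)
z = rng.uniform(0.5, 1.5, n)
v = np.sqrt(x * z / mu)

# Newton system (2.4):  A dx = 0,  -Q dx + A^T dy + dz = 0,  z dx + x dz = mu e - x z
K = np.zeros((2 * n + m, 2 * n + m))
K[:m, :n] = A
K[m:m + n, :n] = -Q
K[m:m + n, n:n + m] = A.T
K[m:m + n, n + m:] = np.eye(n)
K[m + n:, :n] = np.diag(z)
K[m + n:, n + m:] = np.diag(x)
rhs = np.concatenate([np.zeros(m + n), mu * np.ones(n) - x * z])
sol = np.linalg.solve(K, rhs)
dx, dz = sol[:n], sol[n + m:]

# Scaled directions (2.5) and the third equation of (2.6)
d_x, d_z = v * dx / x, v * dz / z
print(np.allclose(d_x + d_z, 1.0 / v - v))   # True
```

Dividing the third equation of (2.4) by $\sqrt{\mu x z}$ componentwise gives exactly $d_x + d_z = v^{-1} - v$, which the script confirms.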

The generic interior-point algorithm for CQP
It is clear from the above description that the closeness of (x, z) to $(x(\mu), z(\mu))$ is measured by the value of $\Phi(v)$, with τ > 0 as a threshold value. If $\Phi(v) \le \tau$, then we start a new outer iteration by performing a µ-update; otherwise, we enter an inner iteration by computing the search directions at the current iterates with respect to the current value of µ and apply (2.10) to get new iterates. If necessary, we repeat the procedure until we find iterates that are in the neighborhood of $(x(\mu), z(\mu))$. Then µ is again reduced by the factor $1 - \theta$ with $0 < \theta < 1$, and we apply Newton's method targeting the new µ-centers, and so on. This process is repeated until µ is small enough, say until $n\mu \le \varepsilon$. At this stage, we have found an ε-approximate solution of CQP.
The parameters τ, θ, and the step size α should be chosen so that the algorithm is optimized in the sense that the number of iterations required by the algorithm is as small as possible. We now give the generic form of the algorithm.
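The outer/inner scheme above can be sketched as follows (Python/NumPy). This is a minimal illustration under our own assumptions: it uses the classical logarithmic kernel, a plain backtracking step size instead of the default step size analyzed later, and a tiny hand-made instance; it is not the paper's implementation.

```python
import numpy as np

def newton_direction(Q, A, x, z, mu):
    """Solve the Newton system (2.4) for (dx, dy, dz) at a feasible point."""
    n, m = Q.shape[0], A.shape[0]
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A
    K[m:m + n, :n] = -Q
    K[m:m + n, n:n + m] = A.T
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(z)
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([np.zeros(m + n), mu * np.ones(n) - x * z])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]

def phi(v):
    """Barrier Phi(v) = sum psi(v_i), classical kernel psi(t) = (t^2-1)/2 - ln t."""
    return np.sum((v ** 2 - 1) / 2 - np.log(v))

def generic_ipm(Q, A, b, x, y, z, eps=1e-6, theta=0.5, tau=3.0):
    n = Q.shape[0]
    mu = x @ z / n
    while n * mu > eps:
        mu *= 1 - theta                          # outer iteration: mu-update
        v = np.sqrt(x * z / mu)
        for _ in range(200):                     # inner iterations (safeguard cap)
            if phi(v) <= tau:
                break
            dx, dy, dz = newton_direction(Q, A, x, z, mu)
            a = 1.0                              # damp: keep positivity, decrease Phi
            while (np.any(x + a * dx <= 0) or np.any(z + a * dz <= 0)
                   or phi(np.sqrt((x + a * dx) * (z + a * dz) / mu)) >= phi(v)):
                a *= 0.5
            x, y, z = x + a * dx, y + a * dy, z + a * dz
            v = np.sqrt(x * z / mu)
    return x, y, z

# Toy instance: minimize 0.5 x^T x s.t. x1 + x2 = 2, x >= 0; optimum x* = (1, 1)
Q, A, b = np.eye(2), np.array([[1.0, 1.0]]), np.array([2.0])
x, y, z = generic_ipm(Q, A, b, np.array([1.0, 1.0]), np.array([0.0]), np.ones(2))
print(np.round(x, 4))   # close to the optimum (1, 1)
```

Since the Newton direction is a descent direction for Φ (its first derivative at α = 0 is $-2\delta^2 < 0$), the backtracking always finds an acceptable step, and the loop terminates once $n\mu \le \varepsilon$.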

Kernel functions and its properties
We call $\psi : \Re_{++} \to \Re_{+}$ a kernel function if ψ is twice differentiable and satisfies the following conditions: $\psi(1) = \psi'(1) = 0$; $\psi''(t) > 0$ for all $t > 0$; and $\lim_{t \downarrow 0}\psi(t) = \lim_{t \to \infty}\psi(t) = +\infty$. We call ψ eligible if and only if it additionally satisfies: $t\psi''(t) + \psi'(t) > 0$ for $t < 1$; $t\psi''(t) - \psi'(t) > 0$ for $t > 1$; $\psi'''(t) < 0$ for all $t > 0$; and $2\psi''(t)^{2} - \psi'(t)\psi'''(t) > 0$ for $t < 1$. The following lemma plays an important role in the analysis of the algorithm.
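As a quick numerical sanity check (Python; our own sketch, not part of the paper's analysis), the eligibility conditions can be verified on a grid for the classical logarithmic kernel $\psi(t) = (t^2-1)/2 - \ln t$, whose derivatives are written out by hand below:

```python
import numpy as np

# Classical log kernel psi(t) = (t^2 - 1)/2 - ln t and its derivatives
psi1 = lambda t: t - 1.0 / t            # psi'
psi2 = lambda t: 1.0 + 1.0 / t ** 2     # psi''  (> 0: strict convexity)
psi3 = lambda t: -2.0 / t ** 3          # psi''' (< 0: psi'' is decreasing)

t = np.linspace(0.05, 10.0, 2000)
checks = {
    "t*psi'' + psi' > 0 for t < 1": np.all((t * psi2(t) + psi1(t))[t < 1] > 0),
    "t*psi'' - psi' > 0 for t > 1": np.all((t * psi2(t) - psi1(t))[t > 1] > 0),
    "psi''' < 0": np.all(psi3(t) < 0),
    "2*psi''^2 - psi'*psi''' > 0": np.all(2 * psi2(t) ** 2 - psi1(t) * psi3(t) > 0),
}
print(all(checks.values()))   # True
```

A grid check of this kind is of course no substitute for a proof, but it is a convenient way to screen candidate kernel functions before attempting the analysis.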

Upper bound of Φ(v) after each outer iteration
During the course of the algorithm the largest values of Φ(v) occur just after the updates of µ. In this subsection we derive an estimate for the effect of a µ-update on the value of Φ(v).
We now state an important theorem, which is valid for all kernel functions that satisfy (3.5).
Now let v be the variance vector of (x, z) with respect to µ. Then one easily sees that the variance vector $v_+$ of (x, z) with respect to $\mu_+ = (1-\theta)\mu$ is given by $v_+ = \frac{v}{\sqrt{1-\theta}}$.

Decrease of the barrier function during an inner iteration
In this subsection, we compute a default step size α and the resulting decrease of the barrier function. After a damped step (2.10) we have
$$x_+ = x\left(e + \alpha\frac{d_x}{v}\right),\qquad z_+ = z\left(e + \alpha\frac{d_z}{v}\right).$$
Using (2.5), we obtain
$$v_+^{2} = \frac{x_+ z_+}{\mu} = (v + \alpha d_x)(v + \alpha d_z).$$
So the quantity
$$f(\alpha) := \Phi(v_+) - \Phi(v)$$
is the difference of proximities between a new iterate and the current iterate for fixed µ. By (3.3) and Lemma 3.1, we have
$$\Phi(v_+) = \Phi\left(\sqrt{(v+\alpha d_x)(v+\alpha d_z)}\right) \le \frac{1}{2}\left[\Phi(v+\alpha d_x) + \Phi(v+\alpha d_z)\right].$$
Therefore, $f(\alpha) \le f_1(\alpha)$, where
$$(4.2)\qquad f_1(\alpha) := \frac{1}{2}\left[\Phi(v+\alpha d_x) + \Phi(v+\alpha d_z)\right] - \Phi(v).$$
Obviously, $f(0) = f_1(0) = 0$. Taking the first two derivatives of $f_1(\alpha)$ with respect to α, we have
$$f_1'(\alpha) = \frac{1}{2}\sum_{i=1}^{n}\left[\psi'(v_i+\alpha d_{x_i})\,d_{x_i} + \psi'(v_i+\alpha d_{z_i})\,d_{z_i}\right],$$
$$(4.3)\qquad f_1''(\alpha) = \frac{1}{2}\sum_{i=1}^{n}\left[\psi''(v_i+\alpha d_{x_i})\,d_{x_i}^{2} + \psi''(v_i+\alpha d_{z_i})\,d_{z_i}^{2}\right].$$
Using (2.8) and (2.9), we have $f_1'(0) = \frac{1}{2}\nabla\Phi(v)^{T}(d_x+d_z) = -2\delta(v)^{2}$. For convenience, we denote $v_{\min} = \min_i(v_i)$, $\delta = \delta(v)$ and $\Phi = \Phi(v)$. The next lemma is valid for all kernel functions that satisfy (3.1) and is the same as the corresponding lemma in the LO case (see [1]).

Lemma 4.2.
Let $f_1(\alpha)$ be as defined in (4.2) and δ be as defined in (2.9). Then we have
$$f_1''(\alpha) \le 2\delta^{2}\,\psi''(v_{\min} - 2\alpha\delta).$$
Proof. From system (2.8) and the fact that, for CQP, $d_x^{T}d_z \ge 0$, we observe that $\|d_x + d_z\| = 2\delta$, hence $\|d_x\| \le 2\delta$ and $\|d_z\| \le 2\delta$, so that $v_i + \alpha d_{x_i} \ge v_{\min} - 2\alpha\delta$ and $v_i + \alpha d_{z_i} \ge v_{\min} - 2\alpha\delta$ for all i. Since $\psi''$ is strictly decreasing by (3.1), combining these bounds with (4.3) gives the stated inequality. From this stage on, we can apply exactly the same arguments as in the LO case to obtain the following results, which require no further proof.

Iteration bound
We need to count how many inner iterations are required to return to the situation where $\Phi \le \tau$. Let $\Phi_0$ denote an upper bound on $\Phi(v_+)$ during the process of the algorithm, and let the subsequent values in the same outer iteration be denoted by $\Phi_k$, $k = 1, 2, \ldots, K$, where K is the total number of inner iterations in the outer iteration.
According to Lemma 4.6 with $\alpha \le \bar{\alpha}$, we suppose that there exist $\kappa > 0$ and $\gamma \in (0, 1]$ such that
$$(5.1)\qquad \Phi_{k+1} \le \Phi_k - \kappa\,\Phi_k^{1-\gamma},\qquad k = 0, 1, \ldots, K-1.$$
Lemma 5.1. ([10]) Let $t_0, t_1, \ldots, t_K$ be a sequence of positive numbers satisfying
$$t_{k+1} \le t_k - \kappa\,t_k^{1-\gamma},\qquad k = 0, 1, \ldots, K-1,$$
with $\kappa > 0$ and $\gamma \in (0, 1]$. Then $K \le \left\lceil \frac{t_0^{\gamma}}{\kappa\gamma} \right\rceil$.
According to (5.1) and using Lemma 5.1 with $t_k = \Phi_k - \tau > 0$, we can bound K as follows:
$$(5.2)\qquad K \le \left\lceil \frac{\Phi_0^{\gamma}}{\kappa\gamma} \right\rceil.$$
The number of outer iterations is bounded above by $\frac{1}{\theta}\log\frac{n}{\varepsilon}$ (see [13]). Multiplying the number of outer iterations by the number of inner iterations and using (4.1), we obtain an upper bound for the total number of iterations, namely
$$(5.3)\qquad \frac{\Phi_0^{\gamma}}{\kappa\gamma\,\theta}\log\frac{n}{\varepsilon}.$$
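Lemma 5.1 can be checked numerically (Python; our own sketch, with arbitrary illustrative values of $t_0$, κ, γ): simulate the recurrence with equality, $t_{k+1} = t_k - \kappa t_k^{1-\gamma}$, and compare the number of steps needed to drive $t_k$ to zero against the bound $\lceil t_0^{\gamma}/(\kappa\gamma)\rceil$:

```python
import math

# Lemma 5.1: if t_{k+1} <= t_k - kappa * t_k^(1-gamma), kappa > 0, gamma in (0, 1],
# then t_k reaches zero in at most ceil(t0^gamma / (kappa * gamma)) steps.
def count_steps(t0, kappa, gamma):
    t, k = t0, 0
    while t > 0:
        t -= kappa * t ** (1 - gamma)   # worst case: inequality holds with equality
        k += 1
    return k

t0, kappa, gamma = 100.0, 0.1, 0.5
bound = math.ceil(t0 ** gamma / (kappa * gamma))
steps = count_steps(t0, kappa, gamma)
print(steps <= bound)   # True
```

The continuous analogue $dt/dk = -\kappa t^{1-\gamma}$ integrates to $t(k) = (t_0^{\gamma} - \kappa\gamma k)^{1/\gamma}$, which vanishes exactly at $k = t_0^{\gamma}/(\kappa\gamma)$; the discrete sequence decreases at least as fast.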

Remark 5.2. From the complexity analysis and inequality (5.3) (the same inequality as in the linear case; see inequality 2.4.3 in [15]), it is clear that any given kernel function ψ yields the same complexity result in the CQP case as in the LO case.
We summarize the complexity result of large-update methods using some polynomial kernel functions in Table 1.

New kernel function
We define a new kernel function ψ(t) as follows:

Eligibility of the new kernel
We give the first three derivatives of ψ with respect to t. The next lemma serves to prove some eligibility properties of our new kernel function (6.1).

Lemma 6.8. Let $\bar{\alpha}$ be as defined in Lemma 4.4.

Complexity of algorithm
Our aim is to compute iteration bounds for large- and small-update methods based on our new kernel function. Large-update methods are characterized by $\tau = O(n)$ and $\theta = \Theta(1)$, and small-update methods by $\tau = O(1)$ and $\theta = \Theta\left(\frac{1}{\sqrt{n}}\right)$. Using (6.19) and (5.1), we have $\kappa = \frac{1}{36(p+4)}$. Using (6.16) for small-update methods, we distinguish two cases. For large-update methods, the resulting iteration complexity is $O\left((p+2)\,n^{\frac{p+3}{2(p+2)}}\log\frac{n}{\varepsilon}\right)$; taking $p = \frac{\log n}{2} - 2$ minimizes this bound, and we obtain the best known complexity bound for large-update methods, namely $O\left(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\right)$.
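The choice $p = \frac{\log n}{2} - 2$ can be motivated by a short calculation (Python; our own sketch): writing $g(p) = (p+2)\,n^{\frac{p+3}{2(p+2)}}$ for the p-dependent factor of the large-update bound, setting $\frac{d}{dp}\log g(p) = \frac{1}{p+2} - \frac{\log n}{2(p+2)^2} = 0$ gives $p + 2 = \frac{\log n}{2}$, which the script confirms numerically:

```python
import math

# Large-update bound factor g(p) = (p+2) * n^{(p+3)/(2(p+2))} (log(n/eps) omitted)
n = 10 ** 6
g = lambda p: (p + 2) * n ** ((p + 3) / (2 * (p + 2)))

p_star = math.log(n) / 2 - 2                 # candidate minimizer
nearby = [p_star + d for d in (-2, -1, -0.5, 0.5, 1, 2)]
print(all(g(p_star) <= g(p) for p in nearby))   # True
```

At the minimizer, $n^{\frac{p+3}{2(p+2)}} = \sqrt{n}\cdot n^{\frac{1}{2(p+2)}} = O(\sqrt{n})$ and $p+2 = O(\log n)$, which together give the $O\left(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\right)$ bound.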

Numerical results
To demonstrate the effectiveness of our new kernel function and evaluate its effect on the behavior of the algorithm, we conducted comparative numerical tests against kernel functions 1 and 2, those of Peng et al. [9] and Bai et al. [2], respectively. Our new kernel function is denoted by B.
In the tables of results, m is the number of constraints and n the number of variables; Itr(A) and time A(s) denote, respectively, the number of inner iterations and the computation time in seconds required to obtain the optimal solution with kernel function A. For each example, we report the initial strictly feasible interior point and the obtained primal-dual optimal solution. The results are given in Tables 2 and 3, and those of Example 5 in Tables 4 and 5.

Comments
The numerical experiments show the effectiveness of our new kernel function. We note that as the dimension of the problem grows, the gap between our new kernel function and those of Peng et al. [9] and Bai et al. [2] widens in terms of both the number of inner iterations and the computation time. These numerical results consolidate and confirm our theoretical results.

Conclusion
In this paper, we proposed a new kernel function, with a polynomial function in its barrier term, defined by (6.1) for primal-dual interior point methods for convex quadratic programming. A simple analysis of the primal-dual IPMs based on the proximity function induced by the new kernel function is provided. The proposed kernel function is neither logarithmic nor self-regular. We proved that the iteration bound of the interior point method based on this function for large-update methods is $O\left(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\right)$, the best known bound for such methods.