The Linear Complementarity Problem and a Modified Newton’s Method to Find its Solution

In this paper, we present a new interior-point method of convergence order six for solving the linear complementarity problem. Its computational efficiency in general form is discussed, and the efficiency of the proposed method is compared with that of existing ones. The performance is tested through numerical experiments on some test problems and on a practical example of a bio-economic equilibrium model.


Introduction
The linear complementarity problem, denoted by LCP(q, M), is to find a column vector $z \in \mathbb{R}^n$ such that $0 \leq z \perp (Mz + q) \geq 0$, or to show that no such vector exists, where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$ are given data. More concretely, it is to find a column vector $z \in \mathbb{R}^n$ satisfying
$$z \geq 0, \qquad Mz + q \geq 0, \qquad z^{T}(Mz + q) = 0 \qquad\qquad (1)$$
or to show that no such vector exists. The linear complementarity problem, introduced by Cottle [1], is one of the most widely studied mathematical programming problems. Solving LCP(q, M) for an arbitrary matrix $M$ is NP-complete [2], while there are several classes of matrices $M$ for which the associated LCPs can be solved efficiently. For details of the theory of LCPs, see the books of Cottle et al. [3], Murty [4] and El Foutayeni et al. [5][6][7][8].
Using this shorter notation, the linear complementarity problem defined above is written LCP(q, M). From the constraints $z \geq 0$, $w = Mz + q \geq 0$, and $z_i w_i = 0$, it follows that $z$ and $w$ are required to be nonnegative and that at least one component of each pair $(z_i, w_i)$ must be zero; that is, if $z_i > 0$ then $w_i = 0$, and if $w_i > 0$ then $z_i = 0$. The notation $x \perp y$ means that $x^{T} y = 0$. Inequalities involving vectors are understood to hold component-wise.
In this paper, we assume that $M$ is a P-matrix; recall that a matrix $M \in \mathbb{R}^{n \times n}$ is a P-matrix if the determinants of all its principal submatrices are positive. The feasible set and the strictly feasible set of LCP(q, M) are denoted, respectively, by $S = \{z \in \mathbb{R}^n : z \geq 0,\ Mz + q \geq 0\}$ and $\mathring{S} = \{z \in \mathbb{R}^n : z > 0,\ Mz + q > 0\}$. We assume that the strictly feasible set of LCP(q, M) is nonempty. We wish to find a column vector $z^*$ in $S$ such that $z^* \perp (Mz^* + q)$; such a vector is the unique solution of LCP(q, M). Let $F(z) = zw = (z_1 w_1, z_2 w_2, \ldots, z_n w_n)^{T}$, where $w = Mz + q$. Our aim is to construct an interior-point method for solving LCP(q, M); more precisely, the goal is to build a sequence $z^{(k)} \in \mathring{S}$ such that $\|F(z^{(k)})\| \leq \epsilon$, where $\epsilon$ is given; this means that $z^{(k)}$ converges to $z^*$.
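For concreteness, the quantities just defined translate into a few lines of MATLAB (an illustrative sketch only, not the code of the paper's appendices):

% Residual map F(z) = (z_1*w_1, ..., z_n*w_n)' with w = M*z + q,
% and a strict-feasibility test for membership in the interior of S.
F = @(M, q, z) z .* (M*z + q);
isStrictlyFeasible = @(M, q, z) all(z > 0) && all(M*z + q > 0);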
In the second section, we give the main result of this paper: we present a three-step iterative method of convergence order six for solving the linear complementarity problem. In the third section, we discuss the efficiency and convergence behavior of the method and verify the theoretical results through numerical experiments on some test problems and a practical example of a bio-economic equilibrium model. The last section contains the concluding remarks.

The main result
The continuously rising success of interior-point techniques applied to linear programming (LP) has stimulated research in various related fields. One possible line of generalization consists in looking at the linear complementarity problem. This type of generalization is studied in the present paper.
As is customary in this area, we use $Z$ to denote the diagonal matrix with diagonal $z$, and we employ analogous notation for other quantities. We also use $I$ to denote the identity matrix, whose dimension will vary with the context. A point $z \in \mathbb{R}^n$ is said to be feasible for LCP(q, M) if $z \geq 0$ and $Mz + q \geq 0$.
Therefore, $Z^{(k)}$ and $W^{(k)}$ are positive semidefinite diagonal matrices; and since $M$ is a P-matrix, it follows that $F'(z^{(k)}) = W^{(k)} + Z^{(k)}M$ is nonsingular. This completes the proof of the Lemma.
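For the reader's convenience, the form of the Jacobian used above follows directly from the definition of $F$:
$$F_i(z) = z_i\Big(\sum_{j=1}^{n} M_{ij} z_j + q_i\Big) \quad\Longrightarrow\quad \frac{\partial F_i}{\partial z_j}(z) = \delta_{ij}\,w_i + z_i\,M_{ij}, \quad\text{i.e.}\quad F'(z) = W + ZM.$$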
The error $r^{(k)} = z^{(k)} - z^*$ is the basis on which we establish the convergence theorems. Proof. We use Taylor's expansion of the vector function $F$ about $z^{(k)}$:
$$F(z) = F(z^{(k)}) + \sum_{i=1}^{p} \frac{1}{i!}\,F^{(i)}(z^{(k)})\,(z - z^{(k)})^{i} + O(\|z - z^{(k)}\|^{p+1}), \qquad\qquad (2)$$
where $F^{(i)}$ denotes the $i$-th Fréchet derivative of the function $F$.

Then, setting $z = z^*$ in (2) and using the fact that $F(z^*) = 0$, we obtain an expansion of $F(z^{(k)})$ in powers of the error $r^{(k)}$, where $I$ denotes the identity matrix; the inverse of $B_0$ is obtained by expanding it in the same powers. Now, from (4) and (7), the claimed estimate follows. This completes the proof of the Lemma. Expanding in the same way, we obtain the analogous estimate. This completes the proof of the Lemma.
Let $r_y^{(k)} = y^{(k)} - z^*$. Using the previous Lemma, it follows that $r_y^{(k)}$ is of second order in $r^{(k)}$; using (8), the same bound holds for the intermediate quantities. Expanding $F(y^{(k)})$ about $z^*$, we obtain the corresponding expansion. This completes the proof of the Lemma.
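If the first step is the classical Newton step $y^{(k)} = z^{(k)} - F'(z^{(k)})^{-1}F(z^{(k)})$ (an assumption consistent with a modified Newton method; the displayed scheme is not reproduced here), this estimate takes the standard form
$$r_y^{(k)} = C_2\,(r^{(k)})^2 + O(\|r^{(k)}\|^{3}), \qquad C_j = \frac{1}{j!}\,F'(z^*)^{-1}F^{(j)}(z^*).$$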
Theorem. The iteration sequence $\{z^{(k)}\}_{k=0}^{+\infty}$ generated by the proposed three-step method converges to the zero $z^*$ of the function $F$ with convergence order six.

Then the error equation (12), on using (8) and (11), becomes $r^{(k+1)} = O(\|r^{(k)}\|^{6})$, which shows the sixth order of convergence. This completes the proof of the Theorem.
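Although the intermediate displays are not reproduced above, the order count follows the usual composition argument. As a sketch, assuming a Newton predictor $y^{(k)}$, a second Newton step producing a hypothetical intermediate iterate $t^{(k)}$ with error $r_t^{(k)}$, and a final step reusing the Jacobian $F'(y^{(k)})$, the errors compose as
$$\|r_y^{(k)}\| = O(\|r^{(k)}\|^{2}), \qquad \|r_t^{(k)}\| = O(\|r_y^{(k)}\|^{2}) = O(\|r^{(k)}\|^{4}), \qquad \|r^{(k+1)}\| = O(\|r_y^{(k)}\|\,\|r_t^{(k)}\|) = O(\|r^{(k)}\|^{6}).$$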
Remark that to prove $\lim_{k \to +\infty} z^{(k)} = z^*$, the solution of LCP(q, M), it is equivalent to demonstrate the convergence of the iteration sequence $\{z^{(k)}\}_{k=0}^{+\infty}$ generated by the previous Theorem and to show that, for all $k \in \mathbb{N}$,
$$z^{(k)} > 0 \quad \text{and} \quad w^{(k)} = Mz^{(k)} + q > 0. \qquad\qquad (14)$$
The relationship (14) is guaranteed by the following theorem.
Proof. We aim to show the assertion for all $k \geq 0$. (A) Applying Lemma 2, we obtain the first bound. (B) Applying Lemmas 3 and 4, we obtain the second bound. (C) Applying Lemma 4 and Theorem 5, we obtain $z^{(k+1)} > 0$. (D) Using (17) and the fact that $w^{(k+1)} = Mz^{(k+1)} + q$, we obtain $w^{(k+1)} > 0$. This completes the proof of the Theorem.
Hence the sequence $\{z^{(k)}\}_{k=0}^{+\infty}$ converges to $z^*$, the solution of the Linear Complementarity Problem LCP(q, M).
Now, we give the following algorithm for solving the linear complementarity problem.
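As a minimal illustration of such a three-step scheme, consider the following MATLAB sketch; the particular sixth-order composition used here (a Newton step, a second Newton step, then a correction reusing the second Jacobian) and the name modified_newton_lcp are illustrative assumptions, not the paper's own algorithm listing.

% Sketch of a three-step Newton-type iteration of order six for F(z) = 0,
% where F(z) = (z_i * (M*z + q)_i). The composition Newton / Newton /
% frozen-Jacobian correction is one classical way to reach order six.
function [z, iter] = modified_newton_lcp(M, q, z, tol, maxit)
    F  = @(z) z .* (M*z + q);               % residual map
    Fp = @(z) diag(M*z + q) + diag(z)*M;    % Jacobian F'(z) = W + Z*M
    iter = 0;
    while norm(F(z)) > tol && iter < maxit
        y  = z - Fp(z) \ F(z);              % step 1: Newton step
        Jy = Fp(y);                         % Jacobian at y, reused twice
        t  = y - Jy \ F(y);                 % step 2: Newton step at y
        z  = t - Jy \ F(t);                 % step 3: frozen-Jacobian step
        iter = iter + 1;
    end
end

Each sweep solves three linear systems but factors only two Jacobians, which keeps the cost per iteration moderate.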

Numerical tests and discussion
In this section, we provide numerical examples to demonstrate the efficiency of our method. To do so, we conducted numerical experiments on some test problems. In the following, we implement our algorithm in MATLAB 7.2 and run it on a personal computer with a 2.60 GHz CPU and 3.5 GB of memory. We stop the iterations when the condition $\|F(z^{(k)})\| \leq 10^{-6}$ is satisfied. To this end, we implement the MATLAB code of the functions $F(z)$ and $F'(z)$ and the ModifiedNewton function for solving $F(z) = 0$ (see Appendix A). Now, let us consider some known examples of LCPs.

Example 1. Let us consider the following linear complementarity problem LCP(q, M): find a vector $z$, where $M = (m_{ij})_{1 \leq i,j \leq n}$ has its prescribed nonzero entries for $i = 1, \ldots, n$ and zero in the rest, and $q = (q_i)_{1 \leq i \leq n}$ with $q_i = -1$.
This example is used by C. Geiger [9]. To compute the optimal solution of this problem with the suggested method, we implement a MATLAB program that records the number of iterations, the execution time, and the error norm (see Appendix B). Table 1 lists the number of iterations (denoted "Iter"), the time taken (denoted "Time"), and the error norm (denoted "ErrorNorm") when solving Example 1. Let us plot the variation of the number of iterations (Fig. 1), the execution time (Fig. 2), and the relative residual (Fig. 3) as functions of the size of the problem. Computations are carried out with the following parameters: the problem size $n$ varies from 1 up to 300 with an increment of 2, and $z^{(0)} = (10^{-2}\,nI - M^{-1})\,q$. The tolerance $\epsilon$ is set to $10^{-6}$. We create a script file containing the code given in Appendix C; when we run the file, MATLAB displays the plots in Figures 1-3. Determining the number of iterations: in simulation, one major question is how many iterations are needed to reach the optimal solution. Simulation as a tool provides an approximation of the actual relationship between the input and output variables, and the precision of the approximation depends on the number of iterations performed: more iterations lead to greater precision, although the relationship between iterations and precision depends on the relationship between the variables involved. In addition, the analyst must decide which output variable is of interest and what degree of precision is required. The next step is to determine the number of iterations required to achieve a solution of Example 1.
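A minimal sketch of such an experiment script is given below, reusing the modified_newton_lcp sketch above; build_example1 is a hypothetical placeholder for the definition of $M$, and the starting point follows the formula quoted in the text.

% Experiment sketch (not Appendix C): record iterations, CPU time and
% error norm as the problem size n grows.
nn = 1:2:300;  tol = 1e-6;  maxit = 100;
results = zeros(numel(nn), 3);
for k = 1:numel(nn)
    n  = nn(k);
    M  = build_example1(n);                 % hypothetical constructor for M
    q  = -ones(n, 1);
    z0 = (1e-2*n*eye(n) - inv(M)) * q;      % starting point from the text
    t0 = cputime;
    [z, iter] = modified_newton_lcp(M, q, z0, tol, maxit);
    results(k, :) = [iter, cputime - t0, norm(z .* (M*z + q))];
end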
In the first figure, the y-axis shows the number of iterations, which varies between 1 and 5 (except in panel (a), where it varies between 1 and 13), and the x-axis is the problem size, which varies between 1 and 300 with an increment of 1. Note that these graphs show that the number of iterations is uniformly bounded for every value of $n \in \{1, 2, \ldots, 300\}$. Moreover, the number of iterations necessary to reach the optimal solution does not exceed 3 when the size of the problem varies between 40 and 70, does not exceed 4 when $n$ varies between 70 and 250, and does not exceed 5 when the size of the problem varies between 250 and 300, as shown in Fig. 1. Analyzing the algorithm's performance: the stopwatch timer function cputime enables us to get information on how our algorithm is performing and helps us make interpretations. It is useful for measuring relative execution time and identifying specific performance bottlenecks in the algorithm; the cputime function provides a robust measurement of the time required for algorithm execution. Fig. 2 shows the execution time required to reach a solution of Example 1. Note that the cputime function returns the total CPU time (in seconds) used by the algorithm from the time it was started; this number can overflow the internal representation and wrap around. Fig. 2 shows that the execution time always varies between 0 and about 0.35 sec when the size of the problem varies between 1 and 300 with an increment of 1; most of the time, between 0 and 0.015 sec is required. It can be noted that, for problem size n = 300, the CPU execution time is lower than 0.30 sec. Recall that although we can measure performance using the timeit or tic and toc functions, the cputime function is better suited for this purpose. Generally, for CPU-intensive calculations run on Microsoft(R) Windows(R) machines, the elapsed times measured by cputime and by tic and toc are close in value, ignoring any first-time costs. There are cases, however, that show a significant difference between these methods; for example, on a Pentium 4 with hyperthreading running Windows, there can be a significant difference between the values returned by cputime and those returned by tic and toc.
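As an illustration of the two timing idioms discussed above (a sketch, with M, q and z0 as in the previous experiment):

% CPU time versus wall-clock time for one solver call.
t0 = cputime;
[z, iter] = modified_newton_lcp(M, q, z0, 1e-6, 100);
cpuSeconds = cputime - t0;   % total CPU time consumed by the call

tic;
[z, iter] = modified_newton_lcp(M, q, z0, 1e-6, 100);
wallSeconds = toc;           % elapsed wall-clock time for the same call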

From Fig. 3, a moderate increase of the error norm can be observed as $n$ increases: the y-axis shows the error norm, which varies between 0 and $2.5 \times 10^{-16}$ as shown in Fig. 3 (a), but most of the time it varies between 0 and $1.0 \times 10^{-28}$ as shown in Fig. 3 (b)-(d). The x-axis is the problem size, which varies between 1 and 300 with an increment of 1. Now we compare the results obtained by our method with those obtained by the Yu method [10] and the CHKS method [11]. In order to do this, we implement the MATLAB program for calculating the optimal solution z*, the final values F(z*), the number of iterations, and the time (see Appendix D). The results are summarized in Tables 2-4, where Time denotes the total cost time (in seconds) for solving the problem; z* and F(z*) denote the final values of the iteration point and of the function F; and Iter denotes the iteration number when the algorithm terminates. From Tables 2-4, we can see that our method is comparable with the Yu method and the CHKS method in terms of the iteration number and the CPU times, which shows that the performance of our method is effective.
Table 3 Numerical results of the Yu method (first example).
Table 4 Numerical results of the CHKS method (first example).
Example 2. This example is used by Z. Yu [10]. To illustrate the convergence behavior and computational efficiency, we compare four methods that work directly on the linear complementarity problem: the Lemke method, denoted (LM); the Chen method [11], denoted (CHKSM); the Yu method [10], denoted (YQM); and the method proposed in this paper, denoted (EKM). We stop the iterations when the condition $\|F(z^{(k)})\| \leq 10^{-6}$ is satisfied. For every method, we analyze the number of iterations needed to converge to the solution. In the comparison of the performance of the methods, we also include the CPU time utilized in the execution, where Iter denotes the iteration number when the algorithm terminates and Time denotes the total cost time (in seconds) for solving LCP(q, M). Note that the exact solution of LCP(q, M) is given by $z_i = n/i$. Firstly, we study the performance of the proposed method for problem sizes n = 4 and n = 8. In particular, we compare the results obtained by our method with those obtained by the Yu method [10] and the CHKS method [11]. In order to do this, we implement the MATLAB program for calculating the optimal solution z*, the final values F(z*), the number of iterations, and the time (see Appendix E). The results are summarized in Tables 5-7, with the same column conventions as above. From Tables 5-7, we can see that our method is comparable with the Yu method and the CHKS method in terms of the iteration number and the CPU times, which shows that the performance of our method is effective.
Table 5 Numerical results of our method (second example).

Table 7 Numerical results of the CHKS method (second example).

        z*                               F(z*)                                Iter   Time(s)
n=4     (4, 2, 1.33333, 1)               (-6.11511E-13, -1.22324E-12,         5      0.016
                                          -1.83498E-12, -2.44638E-12)
n=8     (8, 4, 2.66667, 2,               (-3.16676E-10, -6.33363E-10,         7      0.031
         1.6, 1.33333, 1.14286, 1)        -9.50053E-10, -1.26674E-9,
                                          -1.58342E-9, -1.90009E-9,
                                          -2.21673E-9, -2.53336E-9)

Secondly, we study the performance of the proposed method for problem sizes n = 100, n = 500, and n = 1000. In particular, we compare the results obtained by our method with those obtained by the Lemke method, the Yu method [10], and the CHKS method [11]. The test results of this example are summarized in Table 8 (we run the same MATLAB program as in Appendix E, replacing "nn=[4,8]" with "nn=[100,500,1000]"). From Table 8, we can see that our method is comparable with the Lemke method, the Yu method, and the CHKS method in terms of the iteration number and the CPU times. Moreover, our method can solve the problem in cases where the Lemke method and the CHKS method fail to solve it, which shows that the performance of our method is effective. Example 3. We now provide a practical example to illustrate the effectiveness of the proposed method. To do so, we consider a bio-economic equilibrium model that describes the dynamics of a fish population fished by several fishermen seeking to maximize their profits. Specifically, we consider a bio-economic equilibrium model of a fish population exploited by several fishermen represented by their fishing efforts $(E_i)_{i=1,\ldots,n}$, where $n$ is the number of fishermen. For more details, we refer to Y. EL Foutayeni et al. [12]. The objective is to find the fishing effort $E_i^*$ maximizing each fisherman's profit $\pi_i^*(E^*)$ at biological equilibrium, without any consultation among the fishermen. All of them have to respect two constraints: the first one is the sustainable management of the resources, and the second one is the preservation of the biodiversity.
Each fisherman strives to maximize his profit by choosing a fishing effort strategy. With these considerations, our problem leads to the following Generalized Nash Equilibrium Problem (GNEP): each fisherman $i \in \{1, \ldots, n\}$ must solve problem $(P_i)$, where $p$ is the price of the fish population; $r$ is the intrinsic growth rate; $K$ is the area's environmental carrying capacity, or saturation level, for the given fish population; $q_i$ is the catchability coefficient of fisherman $i$; and $c_i$ is the harvesting cost per unit of fishing effort employed by fisherman $i$. For more details and a description of the variables, refer to Y. EL Foutayeni et al. [12].
We recall that the Generalized Nash Equilibrium Problem (GNEP) is an extension of the Nash Equilibrium Problem (NEP) in which each fisherman's strategy (fishing effort) set depends on the rival fishermen's strategies. Mathematically, $(E_1^*, \ldots, E_n^*)$ is called a generalized Nash equilibrium point if and only if $E_i^*$ is a solution of problem $(P_i)$ when $(E_j^*)_{j=1,\ldots,n;\,j \neq i}$ are given. The fishing effort $E^* = (E_1^*, \ldots, E_n^*)$ depends on: (a) the catchability coefficients $q_i$; (b) the costs of fishing $c_i$; and (c) the price of the fish population $p$. It was shown (see Y. EL Foutayeni et al. [12]) that solving the Generalized Nash Equilibrium Problem (GNEP) is equivalent to solving the linear complementarity problem (18): find a column vector $z$ built from the terms $q_iE_i$ satisfying the complementarity conditions. To solve the linear complementarity problem (18), we first show that the matrix $M$ is a P-matrix. To do so, we denote by $(M_i)_{i=1,\ldots,n+1}$ the principal submatrices of $M$; then we have $\det(M_i) = i + 1 > 0$ for all $i = 1, \ldots, n$ and $\det(M) = n + 1$. So the matrix $M$ is a P-matrix, and therefore LCP(q, M) admits one and only one solution.
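The P-matrix property can also be checked numerically; the following brute-force MATLAB sketch (the name is_p_matrix is illustrative) tests every principal minor and is therefore only suitable for small instances:

% Brute-force P-matrix test: all principal minors must be positive.
% Exponential in n; intended only for small verification instances.
function tf = is_p_matrix(M)
    n  = size(M, 1);
    tf = true;
    for mask = 1:(2^n - 1)                % every nonempty index subset
        idx = find(bitget(mask, 1:n));    % rows/columns of the submatrix
        if det(M(idx, idx)) <= 0
            tf = false;
            return;
        end
    end
end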
In the following, we implement the proposed method in MATLAB 7.2. To illustrate the convergence behavior and computational efficiency, we compare our method, denoted (EKM), with the existing methods of Lemke, denoted (LM), Chen [11], denoted (CHKSM), and Yu [10], denoted (YQM). We stop the iterations when the condition $\|F(z^{(k)})\| \leq 10^{-6}$ is satisfied. For every method, we analyze the number of iterations needed to converge to the solution. In the comparison of the performance of the methods (see Table 9), we also include the CPU time utilized in the execution. When looking for an approximation with six significant digits, the proposed method requires few iterations and little CPU time; by contrast, the other methods require many arithmetic operations, and it is therefore difficult, time consuming, and expensive to obtain with them an approximation of the exact solution. In the particular case when n = 4 (four different fishermen), the fishing effort which maximizes the profit is given in Y. EL Foutayeni et al. [12]. Table 9 compares the performance of the proposed method with the existing methods of Lemke, Chen, and Yu. Column 2 shows the approximation to the fishing effort which maximizes the profit; column 3 reports the number of iterations; and column 4 reports the CPU time, where Iter denotes the iteration number when the algorithm terminates and Time denotes the total cost time (in seconds) for solving the linear complementarity problem (18). The numerical results reported in Table 9 show that the proposed method performs well for this example: it produces an approximate solution with fewer iterations and a shorter CPU time, and it can solve large-scale problems. From this table, one can see that the proposed method (EKM) converges very rapidly relative to (LM), (CHKSM), and (YQM) in terms of the iteration number and the CPU times, which shows that the performance of our method is effective.

Conclusion
In this paper, we have presented an interior-point method for solving linear complementarity problems. This method converges very rapidly compared to the other methods considered. The results of numerical experiments on some test problems and on a practical example show that the method works well for solving LCPs. In view of the nice theoretical properties of our algorithm, the reported computational results are very encouraging, and we expect the algorithm to solve large-scale problems well too. One point we wish to emphasize is that, in future work, we intend to show that this new method can also be used for solving nonlinear complementarity problems.