A New Class of Fredholm Integral Equations of the Second Kind with Nonsymmetric Kernel: Solving by a Wavelet Method

Abstract: In this paper, we introduce an efficient modification of the wavelet method to solve a new class of Fredholm integral equations of the second kind with nonsymmetric kernel. The method is based on an orthonormal wavelet basis; as a consequence, three systems are obtained: a Toeplitz system and two systems with condition number close to 1. Since the preconditioned conjugate gradient normal equation residual (CGNR) and preconditioned conjugate gradient normal equation error (CGNE) methods are applicable, we can solve the systems in O(2n log n) operations by using the fast wavelet transform and the fast Fourier transform.


Introduction
Integral equations play an effective role in many fields of science and engineering. Recently, many orthonormal basis functions have been used to find approximate solutions; we mention Fourier functions [2], Legendre polynomials [21] and wavelets [10,12,13,16,17,19,20,26]. Wavelet bases are among the most interesting bases, especially for large-scale problems, in which the kernel can be represented by a sparse matrix. We recall that it is usually difficult to construct the exact solution of linear and nonlinear Fredholm integral equations via the well-known methods. Many different useful methods have been developed to approximate the solutions of these equations. For instance, collocation methods are studied in [15,24], spectral methods are given in [14,18], transform methods are introduced in [1,3,23], and the homotopy perturbation method is presented in [8], among others. More recently, multiresolution analysis has been considered by many researchers (see [11,12,17,19,20,28]). We mention that wavelet methods play a key role in finding the unique solution of some Fredholm integral equations. In the present paper, we use a wavelet basis to find the approximate solution of the following Fredholm integral equation of the second kind:

u(s) − ∫_0^{+∞} k(s − t) u(t) dt = f(s),  s ≥ 0,  (1.1)

where u(·) is the unknown function, f(·) is the known function and k(s − t) is a nonsymmetric kernel. A considerable part of this work is based on a study by Jin and Yuan (1998), in which the authors focused on a new class of equations of the first kind with symmetric kernel. In contrast to their work, we focus on the second kind with nonsymmetric kernel. Since symmetry is a necessary condition for applying the conjugate gradient method, and our matrices lack this property, we deal instead with two equivalent systems that do have it.
The outline of the paper is as follows: In section 2, we describe the basic formulation of wavelets and preliminary which are necessary for our development. Section 3 is devoted to the discretization of the integral equation. In section 4, we study the condition number of the matrix operator and we give the operation cost to solve the systems.

Wavelet bases
The basic tool of our method for approximating the solution of (1.1) is a wavelet basis. For the convenience of the reader, we recall here some basic concepts and well-known results concerning multiresolution analysis (MRA for short). As in [7,11], let us consider a function ϕ ∈ L²(R), called the father wavelet (or scaling function), with compact support [0, a], a > 0. We assume that the integer translates

ϕ(· − k), k ∈ Z,  (2.1)

form an orthonormal sequence in L²(R). Let V₀ be the closed linear subspace of L²(R) generated by (2.1), and set V_j := {v(2^j ·) : v ∈ V₀}, j ∈ Z. The multiresolution analysis (MRA) generated by ϕ(·) consists of the conditions:
(i) V_j ⊂ V_{j+1} for all j ∈ Z;
(ii) ∪_{j∈Z} V_j is dense in L²(R);
(iii) ∩_{j∈Z} V_j = {0};
(iv) the sequence (2.1) forms a Riesz basis of V₀.
Let W_j be the orthogonal complement of V_j in V_{j+1}, i.e., V_{j+1} = V_j ⊕ W_j. According to the above definition, we have L²(R) = ⊕_{j∈Z} W_j. Following [6,11,22], there exists at least one function ψ ∈ W₀ such that {ψ(· − k) : k ∈ Z} is an orthonormal basis of W₀. The function ψ is called the mother wavelet. A wavelet φ ∈ L²(R) is called orthonormal if the family of functions generated from φ by φ_{j,k}(s) = 2^{j/2} φ(2^j s − k), j, k ∈ Z, is orthonormal, that is, ⟨φ_{j,k}, φ_{m,n}⟩ = δ_{j,m} δ_{k,n}. Let us introduce the following two wavelet sequences: ϕ_{j,k}(s) = 2^{j/2} ϕ(2^j s − k), j, k ∈ Z, and ψ_{j,k}(s) = 2^{j/2} ψ(2^j s − k), j, k ∈ Z. We recall that ⟨ψ_{m,k}, ϕ_{m,l}⟩ = ⟨ψ_{n,k}, ϕ_{n,l}⟩ for all m, n, k, l ∈ Z.
Therefore, the wavelet sequence {ψ j,k } forms a Riesz basis of H s (R) for s ≥ 0.
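As a concrete illustration of the orthonormality relation ⟨ψ_{j,k}, ψ_{m,n}⟩ = δ_{j,m}δ_{k,n}, the following sketch checks it numerically for the Haar wavelet. The Haar choice is for illustration only; the paper works with a general compactly supported scaling function.

```python
import numpy as np

def haar_psi(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
           np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def psi_jk(t, j, k):
    """Dilated/translated wavelet psi_{j,k}(t) = 2^{j/2} psi(2^j t - k)."""
    return 2.0**(j / 2) * haar_psi(2.0**j * t - k)

# Fine quadrature grid covering the supports used below
t = np.linspace(-2, 4, 600001)
dt = t[1] - t[0]
ip = lambda f, g: np.sum(f * g) * dt  # L2 inner product by the rectangle rule

# <psi_{j,k}, psi_{m,n}> = delta_{j,m} delta_{k,n}
print(ip(psi_jk(t, 0, 0), psi_jk(t, 0, 0)))  # ~1 (same j, k)
print(ip(psi_jk(t, 0, 0), psi_jk(t, 1, 0)))  # ~0 (different scale)
print(ip(psi_jk(t, 0, 0), psi_jk(t, 0, 1)))  # ~0 (different shift)
```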
Assume that B₁ and B₂ are two bases of V_n, with B₁ := {ϕ_{n,k} : k ∈ Z} and B₂ := {ψ_{p,q} : −∞ < p < n, q ∈ Z}. We note that B₁ and B₂ follow from the father wavelet ϕ and the mother wavelet ψ, respectively.
Definition 2.2 (Discrete wavelet transform). The discrete wavelet transform is the change of basis from B₁ to B₂; its matrix W maps the coefficients of a function in V_n with respect to B₁ to its coefficients with respect to B₂.

Condition number
The condition number of a matrix measures how close that matrix is to being singular.
The condition number of an n × n invertible matrix A is defined as the ratio of its maximum singular value to its minimum singular value, that is, κ(A) = σ_max(A)/σ_min(A).
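The definition above can be checked numerically. The sketch below computes κ(A) from the singular values of a small hypothetical matrix and compares it with NumPy's built-in routine:

```python
import numpy as np

# Condition number kappa(A) = sigma_max / sigma_min, computed from the SVD
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
kappa = sigma.max() / sigma.min()
print(kappa)              # ratio of extreme singular values
print(np.linalg.cond(A))  # NumPy's built-in 2-norm condition number agrees
```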

Preconditioning and diagonal scaling
A preconditioner P of a matrix A is a matrix such that P⁻¹A has a smaller condition number than A itself. Preconditioners are used with numerous iterative methods for solving linear systems of the form Ax = b: as the condition number of the matrix decreases, the rate of convergence of many iterative linear system solvers increases. Hence, preconditioning is a very effective tool for reducing the condition number of the matrix A.
Diagonal scaling (DS) is a special case of preconditioning and an efficient tool for reducing the condition number of a matrix A, ensuring the convergence and accuracy of the method. In our case, to reduce the condition number of the matrix A we apply a diagonal matrix D, so as to speed up the method.
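A minimal sketch of diagonal scaling, using a hypothetical badly scaled symmetric positive definite matrix, shows how the symmetric scaling D⁻¹AD⁻¹ with D = diag(A)^{1/2} shrinks the condition number:

```python
import numpy as np

# A symmetric positive definite matrix with badly scaled rows/columns
A = np.array([[1e6, 2e2],
              [2e2, 1.0]])

# Symmetric diagonal scaling: D = diag(A)^{1/2}, M = D^{-1} A D^{-1}
d = np.sqrt(np.diag(A))
M = A / np.outer(d, d)    # scaled matrix has unit diagonal

print(np.linalg.cond(A))  # very large (~1e6)
print(np.linalg.cond(M))  # small after scaling
```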

Conjugate gradient method
The conjugate gradient (CG) method is used to solve linear systems of the form Ax = b with symmetric positive definite A; it converges quickly when κ(A) is small.
Generally, the conjugate gradient method is used for solving large problems, attaining modest accuracy in a reasonable number of iterations.
2.5.1. Conjugate gradient normal equation residual and error. The conjugate gradient method can be applied to solve the normal equations. The CGNR and CGNE methods are important variants of this approach; they are among the simplest methods for nonsymmetric or indefinite systems, since other methods for such systems are in general rather more complicated than the conjugate gradient method. These methods transform a linear system into a symmetric positive definite one, to which the conjugate gradient method can be applied. CGNR solves the normal equations AᵀAx = Aᵀb, while CGNE solves AAᵀy = b and then sets x = Aᵀy.
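The CGNR iteration just described can be sketched as follows; the 2 × 2 test system is hypothetical, and only matrix-vector products with A and Aᵀ are used, as required for large problems:

```python
import numpy as np

def cgnr(A, b, tol=1e-10, maxit=1000):
    """CGNR: conjugate gradient applied to the normal equations
    A^T A x = A^T b, using only products with A and A^T."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                 # residual of the original system
    z = A.T @ r                   # residual of the normal equations
    p = z.copy()
    znorm2 = z @ z
    for _ in range(maxit):
        w = A @ p
        alpha = znorm2 / (w @ w)
        x += alpha * p
        r -= alpha * w
        z = A.T @ r
        znorm2_new = z @ z
        if np.sqrt(znorm2_new) < tol:
            break
        p = z + (znorm2_new / znorm2) * p
        znorm2 = znorm2_new
    return x

# A small nonsymmetric system (hypothetical test data)
A = np.array([[3.0, 1.0],
              [0.5, 2.0]])
b = np.array([1.0, 2.0])
x = cgnr(A, b)
print(np.allclose(A @ x, b))  # True
```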

Discretization of integral equation
Let H^s(R) and H^t(R) be two Sobolev spaces, with s ≥ t ≥ 0. We assume that k is a continuous nonsymmetric kernel such that the integral operator K from H^s(R) into H^t(R), defined by (Ku)(s) := ∫_0^{+∞} k(s − t)u(t) dt, is compact. Eq. (1.1) can be rewritten in operator form as (I − K)u = f. We assume that 1 is not a spectral value of K. Hence, the equivalent variational form follows: find u such that ⟨u, v⟩ − ⟨Ku, v⟩ = ⟨f, v⟩ for all v, where ⟨Ku, v⟩ is a continuous bilinear form on H^t(R) × H^s(R). We assume that ⟨(I − K)u, v⟩ is a continuous elliptic bilinear form on H^s(R) × H^s(R). 3.1. Projection of (I − A) with respect to B₁ and B₂. • Let (I − A_n) be the matrix relative to the basis B₁, which is the projection of the matrix (I − A) on the subspace V_n.
The elements of the matrix (I − A_n) are given as follows:

t_{p,q} := ⟨ϕ_{n,p}, ϕ_{n,q}⟩ − ⟨Kϕ_{n,p}, ϕ_{n,q}⟩.  (3.3)

For all u, v ∈ H^s(R), we assume that u_n, v_n are the projections of u, v on V_n, respectively, which implies that (3.2) becomes

⟨u_n, v_n⟩ − ⟨Ku_n, v_n⟩ = ⟨f, v_n⟩.  (3.4)

Writing

u_n = Σ_q x_q ϕ_{n,q}, and v_n = ϕ_{n,p}, p ∈ Z,  (3.5)

and substituting (3.5) into (3.4), we get a linear system

(I − T_∞)x = b,  (3.6)

where (I − T_∞)_{p,q} = t_{p,q} is given by (3.3), and x = (x_q)ᵀ, b = (b_p)ᵀ with b_p := ∫_0^{+∞} f(s)ϕ_{n,p}(s) ds. We mention that ϕ has compact support [0, a], which, together with the convolution form of the kernel k(s − t), leads to t_{p,q} = t_{p−q}.
Hence (I − T ∞ ) is a Toeplitz matrix.
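The Toeplitz property t_{p,q} = t_{p−q} can be observed numerically. The sketch below uses the box (Haar) scaling function ϕ = 1 on [0, 1) and a hypothetical convolution kernel k(u) = e^{−|u|}; the Galerkin entries then agree along diagonals:

```python
import numpy as np

# Convolution kernel k(s - t) (hypothetical example)
k = lambda u: np.exp(-np.abs(u))

def entry(p, q, m=400):
    """Approximate <K phi_p, phi_q> = int_q^{q+1} int_p^{p+1} k(s-t) dt ds
    for box functions phi_p = 1 on [p, p+1), via the midpoint rule."""
    s = np.linspace(q, q + 1, m, endpoint=False) + 0.5 / m
    t = np.linspace(p, p + 1, m, endpoint=False) + 0.5 / m
    S, T = np.meshgrid(s, t)
    return k(S - T).mean()  # midpoint rule over the unit square

# Entries depend only on p - q, i.e. are constant along diagonals
print(entry(0, 3), entry(1, 4), entry(2, 5))
```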
• The matrix representation of (I − A_n) relative to the basis B₂ has the elements given as follows:

a_{p,q,i,j} := ⟨ψ_{p,q}, ψ_{i,j}⟩ − ⟨Kψ_{p,q}, ψ_{i,j}⟩,  (3.7)

for −∞ < p, i < n and −∞ < q, j < +∞. Writing

u_n = Σ_{p,q} x_{p,q} ψ_{p,q}, and v_n = ψ_{p,q}, −∞ < p < n, for all q ∈ Z,  (3.8)

and substituting (3.8) into (3.4), we obtain the linear system

(I − A_∞)x = d,  (3.9)

where (I − A_∞)_{p,q,i,j} = a_{p,q,i,j}, given by (3.7), is nonsymmetric, and x = (x_{p,q})ᵀ and d = (d_{p,q})ᵀ are vectors with d_{p,q} := ∫_0^{+∞} f(s)ψ_{p,q}(s) ds.

Condition number
From the previous section we obtained two different linear systems: the Toeplitz system (3.6) (relative to B₁) and the system (3.9) (relative to B₂).
Let us focus on studying the condition number of the latter linear system. Actually, we will develop the idea of Zhang [28]. To do that, we first present the following lemma, which plays an important role in reducing the condition number of the matrix.
Lemma. Let 0 ≤ s < r, where r is the regularity of the MRA. Since {ψ_{j,k}} is a Riesz basis of H^s(R), every f = Σ_{j,k} w_{j,k} ψ_{j,k} ∈ H^s(R) satisfies

C₁ Σ_{j,k} 2^{2sj} |w_{j,k}|² ≤ ‖f‖²_{H^s} ≤ C₂ Σ_{j,k} 2^{2sj} |w_{j,k}|²,  (4.1)

where C₂ ≥ C₁ > 0 are constants.
Secondly, we know that (I − A_∞) in system (3.9) is nonsymmetric. Then, system (3.9) becomes the symmetric system

(I − A_∞)ᵀ(I − A_∞)x = (I − A_∞)ᵀd.

Now, let φ ∈ V_n with φ = Σ_{j,k} w_{j,k} ψ_{j,k}, where w := (w_{j,k})ᵀ is a vector. By assumption, ⟨(I − K)u, v⟩ is a continuous elliptic bilinear form on the space H^s(R) × H^s(R). Since φ ∈ V_n, we get φ ∈ H^s(R). Consequently, by using (4.1), we obtain, for some constants C₅ ≥ C₆ > 0, a two-sided bound on the quadratic form of (I − A_∞)ᵀ(I − A_∞). By using a diagonal scaling D, where ‖·‖ is the L²-norm, the condition number of D⁻¹(I − A_∞)ᵀ(I − A_∞)D⁻¹ is close to 1. 4.1.2. Condition number of system (4.5). From (4.5) and (4.6), and by following the same steps as for the previous system, we obtain that the condition number of (I − A_∞)(I − A_∞)ᵀ after a diagonal scaling is also close to 1.

Operation cost of the corresponding systems
In order to solve system (3.6) numerically, we restrict it to a finite section. For this reason, let us consider the finite section T_n of T_∞. Thus, the Toeplitz system (3.6) becomes an n-by-n system:

(I − T_n)x = b.  (4.7)

Now, we introduce the relation between (I − T_n) and (I − A_n), similar to the one given by the authors of [11]:

(I − A_n) = W_n (I − T_n) W_n^{-1},

where (I − A_n) is the finite section of (I − A_∞) and W_n is a finite section of W, the wavelet transform matrix between the two orthonormal wavelet bases B₁ and B₂. Hence, we solve the Toeplitz system (4.7) by solving its equivalent form

(I − A_n)x̃ = b̃,  (4.8)

where x̃ := W_n x and b̃ := W_n b. Now, we are going to solve system (4.8). However, the matrix (I − A_n) does not have a small condition number, so we would like to apply the PCG method with diagonal preconditioner D_n in order to obtain a new matrix with a smaller condition number. Unfortunately, (I − A_n) is not symmetric, which means the PCG method cannot be applied directly. Thus, two systems with the symmetry property are obtained.
(I − A_n)ᵀ(I − A_n)x̃ = (I − A_n)ᵀb̃,  (4.9)
(I − A_n)(I − A_n)ᵀỹ = b̃, with x̃ = (I − A_n)ᵀỹ,  (4.10)

where (I − A_n)ᵀ(I − A_n) and (I − A_n)(I − A_n)ᵀ are symmetric. Now, in order to solve system (4.8), we solve its two equivalent systems (4.9) and (4.10). We know that the matrices (I − A_n)ᵀ(I − A_n) and (I − A_n)(I − A_n)ᵀ do not have a small condition number. Thus, we apply the conjugate gradient normal equation residual (CGNR) method to (4.9) and the conjugate gradient normal equation error (CGNE) method to (4.10), with diagonal preconditioner D_n, in order to obtain new matrices with a smaller condition number.
More precisely, by applying the diagonal preconditioner to (4.9), we obtain the preconditioned system

D_n^{-1}(I − A_n)ᵀ(I − A_n)D_n^{-1} y₁ = D_n^{-1}(I − A_n)ᵀ b̃, with y₁ := D_n x̃,  (4.11)

whose condition number is close to 1. Applying the diagonal preconditioner again, this time to (4.10), we get the preconditioned system

D_n^{-1}(I − A_n)(I − A_n)ᵀ D_n^{-1} y₂ = D_n^{-1} b̃, with y₂ := D_n ỹ,  (4.12)

whose condition number is also close to 1. Hence, we can solve (4.11) by applying the CGNR method and (4.12) by applying the CGNE method, which gives a linear convergence rate (see [9]).
After solving the preconditioned systems by these CG-type iterations, we recover x̃ and ỹ from the diagonal systems D_n x̃ = y₁ and D_n ỹ = y₂, respectively. For solving the above systems, we use the algorithm presented in [9].
• For the case of the product (I − A_n)ᵀv₁: setting u₁ := W_nᵀ v₁, and by using the FWT, we can compute u₁ in O(n) operations ([4,25]).
In addition, by using the FFT, we can then compute (I − T_n)u₁ in O(n log n) operations ([5,27]).
In the end, to compute (I − A_n)v₁ = (W_n^{-1})ᵀ(I − T_n)u₁ we use the FWT and Strang's algorithm given in [27]. Therefore, the operation cost is reduced to O(n log n).
Regarding the system D_n x̃ = y₁, we need only O(n) operations.
Hence, the cost per iteration for (4.9) is O(n log n).
• For the case (I − A_n)v₂, in a similar way to the above, the cost per iteration for (4.10) is also O(n log n).
Consequently, the total cost per iteration is O(2n log n). Finally, since the number of iterations required is independent of n, we can solve the systems (4.7) and (4.8) in O(2n log n) operations.
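The O(n log n) Toeplitz matrix-vector product underlying the per-iteration cost above can be sketched via the standard circulant embedding and the FFT. The matrix below is a random test example, not the paper's (I − T_n):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix by x in O(n log n): embed the matrix in a
    2n-by-2n circulant, then use the FFT (circulant * vector = convolution).
    c: first column, r: first row (c[0] == r[0])."""
    n = len(x)
    # First column of the circulant: [c_0..c_{n-1}, 0, r_{n-1}..r_1]
    col = np.concatenate([c, [0.0], r[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(col) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Compare against the dense product for a random Toeplitz matrix
rng = np.random.default_rng(0)
n = 8
c, r = rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
print(np.allclose(T @ x, toeplitz_matvec(c, r, x)))  # True
```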