Efficient Procedure to Generate Generalized Gaussian Noise Using Linear Spline Tools

In this paper, we propose a simple method to generate generalized Gaussian noise using the inverse transform of the cumulative distribution function. This inverse is expressible by means of the inverse incomplete Gamma function. Since the implementation of Newton's method for approximating the inverse incomplete Gamma function is rather simple, we propose a new and better initial value that exploits the close relationship between the incomplete Gamma function and its piecewise linear interpolant. The numerical results highlight that the proposed method simulates univariate and bivariate generalized Gaussian noise well.

The study and application of univariate and multivariate generalized Gaussian distributions (GGD) is an active field of research in theoretical and applied statistics. This class of processes has merited considerable attention from many scientists for two fundamental reasons: firstly, because it captures the heavy tails observed in data; secondly, because the probability density that characterizes this class has a fully parametric form. Recently, GGD have been widely adopted in modeling various physical phenomena [3,4,19,21]. The generation of multivariate distributions has not been investigated extensively, and the literature on Monte Carlo techniques for synthesizing multivariate random processes is sparse, apart from the work done by Johnson [14]. Recently, new initiatives have targeted this area of research in order to better understand and control multivariate random processes [6,23]. There is no universal method for simulating a random variable or random vector from a known distribution with Monte Carlo methods when studying the behavior of statistics with unknown sampling distributions. In general, the generation of multivariate distributions is more complicated to implement, because the usual method based on the inverse of the cumulative distribution function, used for univariate distributions, cannot be applied directly. One option to circumvent the problem of multivariate random generation is to use multivariate extensions of generation methods for univariate random variables, such as the ratio-of-uniforms method, acceptance-rejection (AR) methods, or transformation methods. Another way to generate random variates in the multidimensional setting is to use the conditional distribution method [14]. These standard simulation methods suffer from limitations and become hopelessly inefficient when applied to realizations of multidimensional stochastic processes, see [23].
When the computation of the conditional distribution is difficult, a transformation of the vector can be considered instead. However, for several multivariate distributions, it is hard to define transformations of random vectors from which samples can be easily obtained.
In this contribution, we introduce the multivariate generalized Gaussian distribution, which is a special case of the larger class of elliptical distributions. For the sake of simplicity we deal with one and two dimensions, but the results can be easily extended to higher dimensions. We therefore present a computational algorithm to generate univariate and bivariate generalized Gaussian distributions using the transformation approach. Gómez, Gómez-Villegas and Marín [13] have proved that the generation of multivariate generalized Gaussian variates using polar coordinates is very easy, provided we can simulate sequences from the Gamma distribution. We use a classical Monte Carlo procedure to simulate sampling from the Gamma distribution. In this case, we must invert the incomplete Gamma function. This special function has no elementary inverse in its argument, so a numerical evaluation must be performed to obtain an approximation of this inverse.
The regularized upper and lower incomplete Gamma functions are defined by the integrals [20]

Q(a, x) = (1/Γ(a)) ∫_x^∞ t^(a−1) exp(−t) dt,   P(a, x) = (1/Γ(a)) ∫_0^x t^(a−1) exp(−t) dt,

where we assume that a > 0 and x ≥ 0, and Γ(a) = ∫_0^∞ t^(a−1) exp(−t) dt is the well-known Gamma function. The problem of inverting the functions Q(a, x) and P(a, x) is one of the central problems in statistical analysis and applied probability, with various applications [8,2].
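For reference, both ratios are exposed by common numerical libraries; the sketch below (which assumes SciPy is available, an assumption about the reader's toolchain rather than part of the paper) checks the complementarity P(a, x) + Q(a, x) = 1 implied by the definitions:

```python
# Quick numerical check of the definitions above.
# scipy.special.gammainc is the regularized lower ratio P(a, x);
# scipy.special.gammaincc is the regularized upper ratio Q(a, x).
from scipy.special import gammainc, gammaincc

a, x = 2.5, 3.0
p = gammainc(a, x)   # P(a, x)
q = gammaincc(a, x)  # Q(a, x)

# By definition the two ratios are complementary: P(a, x) + Q(a, x) = 1.
assert abs(p + q - 1.0) < 1e-14

# P(a, .) behaves as a cumulative distribution function in x:
# it starts at 0 and increases toward 1.
assert gammainc(a, 0.0) == 0.0
assert gammainc(a, 1.0) < gammainc(a, 10.0)
```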
However, it is difficult to compute the inverse P⁻¹(a, ·) of the function P(a, ·) because its shape depends on the parameter a in a significant way. Several approaches are available in the literature for computing P⁻¹(a, ·). One possibility is to use Newton's method to compute x when p = P(a, x) is given, see [7,12]. However, this approach has some practical difficulties. If the initial value is not good enough, the method might diverge. This leads to the question of how to find a good starting value, which is tricky in itself. More recently, in [12], the analytical approach from earlier literature was summarized and new initial estimates were derived for starting a fourth-order Newton method. This method is not, however, as flexible as is desirable, because computing the starting values requires dividing the domain of computation and combining Taylor expansions, continued fractions, uniform asymptotic expansions, asymptotic inversion, and inversion of the complementary error function. On the other hand, it is rare to use fourth- and higher-order formulas for solving a single nonlinear equation in practical computations, see Section 9.4 of [22].
Furthermore, in the present case, the function P(a, ·) and its derivatives may have singularities near the endpoint 0 for some values of a. As an alternative, we take care of the first point by starting from a sufficiently precise approximate solution provided by the piecewise linear interpolant of P(a, ·) on special graded grids. Computational experience indicates that a few iterations of Newton's method are then sufficient to achieve a double-precision solution in a reasonable time frame.
On the other hand, the numerical inversion techniques require the evaluation of the incomplete Gamma function P(a, x). As explained in Chapter 5 of [11], numerical quadrature can be an important tool for evaluating special functions, in particular when suitable integral representations of these functions are selected. In this paper, we show that the numerical quadrature for weakly singular integrals based on nonlinear spline approximations proposed in [15] is particularly efficient for computing P(a, x) numerically. Theoretical results and numerical experiments indicate that these methods perform adequately whether the Gamma probability density function is smooth or has weak singularities.
The main advantage of the proposed numerical procedure for evaluating the incomplete Gamma function and its inverse is that it is very simple to implement, does not require any elaborate special-function software, and can be straightforwardly coded in any programming language with standard algebra.
The contents of this paper are as follows. Section 2 introduces the definition of Generalized Elliptical Distributions, with a presentation of the most famous examples. We also give the procedures for synthesizing univariate and bivariate generalized Gaussian processes. In Section 3, we apply the Gauss-Legendre-type quadrature to compute the incomplete Gamma function ratios for positive values of a and x. The procedure for computing x when a and P(a, x) are given is described in Section 4. Simulations for univariate and bivariate generalized Gaussian noise are provided and the results are interpreted in Section 5. Section 6 concludes the paper.

Elliptical Distributions
As is well known, the multivariate normal distribution has enjoyed a significant role in many practical applications, such as in the behavioral and social sciences, biometrics, econometrics, environmental sciences, and finance. However, real data are often not normally distributed, especially when the tails are thicker or thinner than those of normal distributions. Nonnormal distributions are often chosen because they are thought to be more realistic representations of distributions encountered in the behavioral sciences. Many alternative methods exist when the normality assumption is not justifiable. One choice is the elliptical family of distributions, which includes the normal distribution and shares many of its flexible properties. This class of distributions has received increasing attention in the statistical literature, particularly because it includes important distributions such as the Student-t, Generalized Gaussian, Logistic, and Laplace distributions, among others, with heavier or lighter tails than the normal one. This class of distributions was introduced by Kelker [16] and was widely discussed in Fang et al. [9]. Elliptical distributions are very often used, particularly in risk and financial mathematics [24].
The random vector X = (X₁, X₂, …, X_p)ᵀ is said to have an elliptical distribution with parameter vector ν (p × 1) and positive definite symmetric matrix Λ (p × p) if its characteristic function can be expressed as

φ_X(t) = exp(i tᵀν) Φ(tᵀΛt)

for some scalar function Φ, called the characteristic generator of X, where tᵀ = (t₁, t₂, …, t_p). It should be mentioned that, in general, an elliptically distributed random vector does not necessarily have a density, but in this work we consider a class of elliptical distributions having an analytic expression for the density. This density function is given by (see Fang et al. [10, p. 35])

f(x) = |Λ|^(−1/2) g((x − ν)ᵀ Λ⁻¹ (x − ν)),

where the density generator g is such that
• Λ is positive definite;
• ∫_(−∞)^∞ ⋯ ∫_(−∞)^∞ g(yᵀy) dy₁ dy₂ ⋯ dy_p = 1.
If X has this distribution and Y = L⁻¹(X − ν), where LLᵀ = Λ, then Y has probability density function (abbr. pdf) g(yᵀy), which is spherically contoured. A nice property of an elliptical distribution is that its multivariate density function may be expressed via the density generator.

Simulation of elliptical distributions
Let y = (y₁, …, y_p)ᵀ be a p-dimensional vector; every y_i can be written in polar coordinates in terms of a radius r and angles θ₁, …, θ_(p−1).
The Jacobian determinant of this transformation contains the factor r^(p−1). If the pdf of Y is g(y′y), then the joint pdf of (R, Θ) factorizes into a radial part and an angular part, so R and Θ are independent, and the marginal pdf of R is

f_R(r) = (2π^(p/2) / Γ(p/2)) r^(p−1) g(r²),  r ≥ 0.

Note that the factor 2π^(p/2)/Γ(p/2), which reduces to 2π when p = 2, is the surface area of the unit hypersphere and corresponds to the uniform density of the angular part.
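The factorization above says a spherically contoured vector can be generated as a radius times a uniform direction. The following sketch (illustrative, not the paper's algorithm; all names are ours) checks this for the Gaussian generator g(t) = exp(−t/2) in p = 2 dimensions, where R² follows a chi-squared law with 2 degrees of freedom:

```python
# Stochastic representation Y = R * U of a spherical distribution, p = 2:
# U is uniform on the unit circle, R has the stated radial marginal.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 2

theta = rng.uniform(0.0, 2.0 * np.pi, size=n)        # uniform angle
u = np.column_stack((np.cos(theta), np.sin(theta)))  # uniform direction on S^1
r = np.sqrt(rng.chisquare(df=p, size=n))             # radius for g(t) = exp(-t/2)
y = r[:, None] * u                                   # spherical samples

# In the Gaussian case both coordinates should be approximately
# standard normal and uncorrelated.
assert abs(y.mean()) < 0.02
assert abs(y.var() - 1.0) < 0.02
```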

Description of some elliptical distributions
In what follows, we discuss two families of elliptical distributions: the normal distribution, which is the most famous member of this family, and the Generalized Gaussian distribution, which generalizes it.
2.2.1. Multivariate Gaussian distribution. The Gaussian distribution, also called normal, is a special case of the larger class of elliptical distributions, with density generator g(t) = exp(−t/2). If a random vector X = (X₁, X₂, …, X_p)ᵀ follows a multivariate normal distribution, symbolically denoted by X ∼ N_p(µ, Σ), then the joint density of X is given by [13]

f(x; Σ, µ) = (2π)^(−p/2) |Σ|^(−1/2) exp(−(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ)).

Synthesizing multivariate normal data is relatively easy and fast. It has therefore been used for many purposes and in a vast number of applications. In many applications, however, the multivariate data that arise in practice are not well approximated by a multivariate Gaussian distribution, especially when they exhibit heavy-tailed behavior.
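A minimal sketch of the standard way to synthesize N_p(µ, Σ) data: draw standard normal vectors Z and apply the affine map X = µ + LZ, where LLᵀ = Σ is the Cholesky factorization (the concrete µ and Σ below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

L = np.linalg.cholesky(sigma)               # L @ L.T == sigma
z = rng.standard_normal(size=(200_000, 2))  # i.i.d. N(0, 1) draws
x = mu + z @ L.T                            # rows are N_p(mu, Sigma) samples

# Sample moments should match the requested parameters.
assert np.allclose(x.mean(axis=0), mu, atol=0.02)
assert np.allclose(np.cov(x.T), sigma, atol=0.03)
```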

Multivariate Generalized Gaussian distribution.
An elliptical vector X is said to have a multivariate Generalized Gaussian distribution if its density generator can be expressed as g(t) = exp(−αt^β), for α, β > 0. Let X = (X₁, X₂, …, X_p)ᵀ be a p-dimensional random vector distributed as a multivariate generalized Gaussian; its probability density function has the form [21]

f(x) = η_p |Σ|^(−1/2) exp(−α [(x − µ)ᵀ Σ⁻¹ (x − µ)]^β),   (2.4)

where α and β are two parameters which determine the spherical shape of the model, and η_p is a normalizing constant defined by α, β and the dispersion matrix Σ, which is symmetric and positive definite. Note that the multivariate Gaussian pdf is recovered by setting β = 1; we call the distribution with β = 1/2 the multivariate Laplace distribution. If β < 1, the distribution (2.4) has heavier tails than the multivariate Gaussian distribution.
If we suppose that the random vector X has zero mean and identity covariance matrix, called in this case the standard generalized Gaussian, the probability density function (2.4) becomes

f(x) = η_p exp(−α (xᵀx)^β).   (2.5)

We restrict our attention to this case, knowing that an arbitrary mean µ = E[X] and covariance matrix Σ can then be obtained by an affine transformation of the standard vector. We note that multivariate distributions with circular contours are a class of distributions useful for modelling heavy-tailed multivariate data. In [1], the case of circular contours defined by polar coordinates is analyzed, including a simulation procedure based on a rejection method to simulate a Gamma random variable. However, the rejection method has some important limitations.
Another way of obtaining draws from a density of interest is to use the inverse transform method. This method is considered useful and effective, especially for sampling from univariate distributions. The method also preserves monotonicity and correlation. Further, with the inversion method, small perturbations of the distribution parameters cause small changes in the produced variates. This effect is useful for sensitivity analysis; contrast this with the rejection method, see [17]. It must be mentioned that some researchers avoid using this method for simulation because, for many distribution functions, there is no explicit expression for the inverse of the cumulative distribution function. We resort to numerical computation to resolve this problem.
For simplicity we will focus on the univariate and bivariate cases, but the results can be easily generalized to p dimensions.
Univariate case: The probability density function of generalized Gaussian noise is defined as

f_α(x) = (α / (2λ Γ(1/α))) exp(−|x/λ|^α),   (2.6)

for x ∈ ℝ, α > 0 (shape parameter), and λ = (Γ(1/α)/Γ(3/α))^(1/2). The cumulative distribution function (cdf) corresponding to (2.6) is (see, e.g., Monir [18])

F_α(x) = (1/2) [1 + sign(x) (1 − Q(1/α, |x/λ|^α))].   (2.7)

It is well known that the inverse F_α⁻¹ can be expressed in terms of Q⁻¹ (the inverse of the regularized incomplete Gamma function), i.e.,

F_α⁻¹(u) = sign(u − 1/2) λ [Q⁻¹(1/α, 2 min(u, 1 − u))]^(1/α).   (2.8)

Equation (2.8) solves the problem of synthesizing generalized Gaussian noise with shape parameter α from a uniform process, provided we are able to handle the numerical evaluation of the function Q⁻¹.
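The univariate generator can be sketched directly from the inverse cdf, with SciPy's gammainccinv standing in for the paper's own spline-based inversion of Q (a substitution for illustration, not the paper's numerical scheme):

```python
import numpy as np
from scipy.special import gamma, gammainccinv

def ggd_inverse_cdf(u, alpha):
    """Inverse cdf of the zero-mean, unit-variance generalized Gaussian."""
    lam = np.sqrt(gamma(1.0 / alpha) / gamma(3.0 / alpha))
    tail = 2.0 * np.minimum(u, 1.0 - u)          # argument 2*min(u, 1-u)
    r = gammainccinv(1.0 / alpha, tail) ** (1.0 / alpha)
    return np.sign(u - 0.5) * lam * r

rng = np.random.default_rng(2)
x = ggd_inverse_cdf(rng.uniform(size=500_000), alpha=0.75)

# The choice of lambda targets zero mean and unit variance for every alpha.
assert abs(x.mean()) < 0.01
assert abs(x.var() - 1.0) < 0.03
```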
Bivariate case: We consider a random pair (X₁, X₂) distributed as 2D generalized Gaussian; we can assume that the variables X₁ and X₂ are zero mean and uncorrelated. The joint probability density function is given by

f(x₁, x₂) = (β / (π α² Γ(1/β))) exp(−[(x₁² + x₂²)/α²]^β),   (2.9)

where β > 0 is the shape parameter and α > 0 is a scale parameter. Every x = (x₁, x₂)ᵀ can be written in polar coordinates: let X₁ = R cos(Θ) and X₂ = R sin(Θ), where R ≥ 0 and 0 ≤ Θ < 2π. Then the joint pdf in polar form is given by

f_(R,Θ)(r, θ) = (β / (π α² Γ(1/β))) r exp(−(r/α)^(2β)).   (2.10)

It is clear that the variable Θ has a uniform distribution, Θ ∼ U[0, 2π], while the marginal pdf of R is obtained as

f_R(r) = (2β / (α² Γ(1/β))) r exp(−(r/α)^(2β)),   (2.11)

i.e., R follows a generalized Gamma distribution, R ∼ GG(α, 2, 2β). The cumulative distribution function of R is

F(r) = P(1/β, (r/α)^(2β)),   (2.12)

where P(·, ·) is the regularized lower incomplete Gamma function. We want to invert F(r) of (2.12) and show that this inverse can be expressed in terms of P⁻¹(a, ·). Since F = P(1/β, (r/α)^(2β)), inversion yields

r = α [P⁻¹(1/β, F)]^(1/(2β)).   (2.13)

If we know how to evaluate P⁻¹(a, ·), equation (2.13) solves the problem of synthesizing the variable R with marginal density f_R from a uniform variable. Once we have solved equation (2.13), we can easily generate independent points (X₁, X₂) from the joint standard generalized Gaussian distribution with shape parameter β as follows:
1. Generate two independent random numbers U₁ and U₂ from the U[0, 1] distribution.
2. Set Θ = 2πU₁ and R = α [P⁻¹(1/β, U₂)]^(1/(2β)).
3. Then set X₁ = R cos(Θ) and X₂ = R sin(Θ).
This simulation scheme is particularly simple in the case of circular contours and provides a general method for simulating the resulting families for different values of the shape parameter β.
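The steps above can be sketched as follows, with SciPy's gammaincinv standing in for the paper's P⁻¹ routine (a substitution for illustration) and the scale taken as α = 1, i.e. the standard case:

```python
import numpy as np
from scipy.special import gammaincinv

def bivariate_ggd(n, beta, alpha=1.0, rng=None):
    """Draw n samples of the bivariate generalized Gaussian, shape beta."""
    rng = rng or np.random.default_rng()
    u1, u2 = rng.uniform(size=(2, n))                    # step 1
    theta = 2.0 * np.pi * u1                             # step 2: angle
    r = alpha * gammaincinv(1.0 / beta, u2) ** (1.0 / (2.0 * beta))  # radius
    return r * np.cos(theta), r * np.sin(theta)          # step 3

x1, x2 = bivariate_ggd(200_000, beta=0.5, rng=np.random.default_rng(3))

# The components are zero mean and uncorrelated by construction
# (circular contours).
assert abs(np.corrcoef(x1, x2)[0, 1]) < 0.02
assert abs(x1.mean()) < 0.05 and abs(x2.mean()) < 0.05
```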
It must be mentioned that in both cases we have to evaluate the inverse of the incomplete Gamma function. We now address this problem. For α > −1 and a nonnegative integer k, define Type(α, k, 0), see [15], as the set of all functions u ∈ C^k(]0, d]) such that |u^(j)(x)| ≤ c x^(α−j) for all x ∈ ]0, d] and j = 0, 1, …, k.

Numerical evaluation of the incomplete Gamma function
The parameter α is called the index of singularity. The modulus of continuity of a function u ∈ C[0, d] is defined by

ω(u, δ) = sup{ |u(x) − u(y)| : x, y ∈ [0, d], |x − y| ≤ δ }.

The collection of Hölder continuous real-valued functions with exponent α is denoted by H^α: we say u ∈ H^α if there is a constant M > 0 so that |u(x) − u(y)| ≤ M |x − y|^α for all x, y ∈ [0, d]. We first rewrite P(a, x) as

P(a, x) = I(f_a)(x) = ∫_0^x f_a(t) dt,   (3.1)

where the Gamma probability density function f_a is given by

f_a(t) = t^(a−1) exp(−t) / Γ(a).

A classical and efficient technique for the numerical evaluation of integrals of the form (3.1), where f_a is a smooth or nonsmooth function, is to approximate I(f_a)(x) by I(f̃_a)(x), where f̃_a is a spline approximant of f_a for which I(f̃_a)(x) can be easily calculated. In [15], the authors considered the case where f̃_a is the continuous piecewise Lagrange interpolant of degree k − 1 of f_a, and they studied the convergence order when f_a is smooth or has weak singularities. A more detailed discussion of this quadrature can be found in [15].
For given integers n and k, let

π_a : 0 = x_0 < x_1 < ⋯ < x_n = x,   x_j = x (j/n)^r,   j = 0, …, n,

be a (strict) partition of the finite interval [0, x], where r ≥ 1 is a real number which depends upon the index of singularity of f_a. Set

S^(−1)_(k−1)(π_a) = { u : u restricted to [x_(j−1), x_j] belongs to P_(k−1), j = 1, …, n },

where P_(k−1) denotes the set of polynomials of degree not exceeding k − 1. Note that the elements of S^(−1)_(k−1)(π_a) may have jump discontinuities at the interior points of the grid π_a.
Define a piecewise polynomial f̃_(a,n,k) over [0, x] with knots π_a by interpolating f_a at k suitably chosen points of each subinterval [x_(j−1), x_j]. Then we use I(f̃_(a,n,k))(x) to approximate I(f_a)(x), the integral of the piecewise polynomial being computed exactly. If a > 1, for any integer k we have f_a ∈ Type(α, 2k, 0), see [15]. Furthermore, since α > 0, it follows that f_a is an α-Hölder continuous function. In this case, I(f_a)(x) is approximated by I(f̃_(a,n,k))(x) on the graded grid with exponent r = 2k₁/α, k₁ ≤ k, and we have

|I(f_a)(x) − I(f̃_(a,n,k))(x)| = O(n^(−2k₁)).

Finally, if a > 2k + 1 or a is a nonnegative integer, then f_a ∈ C^(2k)[0, x]. In that case we use the classical composite Gauss numerical integration scheme to approximate P(a, x) with r = 1, and we have

|I(f_a)(x) − I(P_(n,k) f_a)(x)| = O(n^(−2k+1)).
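The quadrature idea can be sketched as follows: integrate f_a(t) = t^(a−1) e^(−t)/Γ(a) over [0, x] by a composite k-point Gauss–Legendre rule on the graded mesh x_j = x(j/n)^r, which clusters nodes near the weak singularity at t = 0. The grading exponent r = 2k/a used below is a simple choice in the spirit of the rule above, not necessarily the paper's tuned value:

```python
import numpy as np
from math import gamma
from scipy.special import gammainc  # used here only for reference values

def P_quad(a, x, n=64, k=8):
    """Composite k-point Gauss-Legendre approximation of P(a, x) on a graded mesh."""
    r = max(1.0, 2.0 * k / a)                           # grading exponent (simple choice)
    nodes, weights = np.polynomial.legendre.leggauss(k)  # rule on [-1, 1]
    grid = x * (np.arange(n + 1) / n) ** r               # x_j = x (j/n)^r
    total = 0.0
    for lo, hi in zip(grid[:-1], grid[1:]):
        t = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)    # map nodes to [lo, hi]
        total += 0.5 * (hi - lo) * np.dot(weights, t ** (a - 1.0) * np.exp(-t))
    return total / gamma(a)

# Accurate both for the weakly singular case a < 1 and for smooth cases.
for a in (0.5, 1.5, 2.5):
    assert abs(P_quad(a, 3.0) - gammainc(a, 3.0)) < 1e-7
```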
A high-precision evaluation of P(a, x) is available in Matlab using the function gammainc. In Tables 1-2, for several values of a and x, we show the quadrature errors between the values of P(a, x) obtained by the Gauss numerical integration schemes and the Matlab function gammainc. The primary region of difficulty for computing P(a, x) has been when a is large and x ∼ a. The results obtained, see Table 2, confirm the stability and accuracy of the Gauss numerical integration schemes used for computing P(a, x).

Table 1: Errors of the approximation of P(a, 100) using k = 8, k₁ = 4.

  n      a = 1/2        a = 3/2        a = 5/2
  2^3    8.4376e-006    8.4375e-006    6.8647e-009
  2^4    2.0134e-009    2.0134e-009    6.0085e-013
  2^5    8.8818e-016    8.8817e-016    6.6613e-016

Table 2: Errors of the approximation of P(10^5, x) using r = 1, k = 10 and n = 2^12, for several values of x/a.

Inversion approximations for incomplete Gamma function
For the numerical inversion we solve the equation q = Q(a, x) for x when q is given. When q is sufficiently small, we use the asymptotic expansion introduced in [7,12]: a good initial approximation x₀ > 0 of x(q, a) is obtained from an elementary equation, and higher approximations of x are then obtained by successive refinement.

Approximation for P⁻¹(a, ·)
For ε very close to 0, let x_ε := Q⁻¹(a, ε). Then x_ε is an approximation of P⁻¹(a, 1 − ε). Let φ_p(x) = P(a, x) − p, p ∈ ]0, 1 − ε[. We compute x = P⁻¹(a, p) at a given p ∈ ]0, 1 − ε] as the root of φ_p(x) = 0, using Newton's iterates

x_(m+1) = x_m − φ_p(x_m)/φ′_p(x_m) = x_m − (P(a, x_m) − p) Γ(a) x_m^(1−a) exp(x_m).   (4.1)

Since, in the general case, Newton's method converges only when x₀ is close enough to the solution, a good starting point is essential for convergence and efficiency. We obtain one by using the zeros of the piecewise linear interpolant of φ_p(x) as initial guesses for the zeros of φ_p(x). For the approximation of P(a, x), we introduce the special nonuniform grid

x_j = x_ε (j/n)^r,   j = 0, 1, …, n,

with r ≥ 1. If r = 1 the grid points are distributed uniformly; for r > 1, the grid points are more densely clustered near the left endpoint of the interval [0, x_ε].
The parameter r has several very important consequences, some of which we will explore in more detail below. It is easy to see that 0 < x j+1 − x j < rx ǫ n , j = 0, . . . , n − 1.
Clearly, P(a, x) is an increasing function of x. The simplest way to obtain a continuous shape-preserving approximation to a set of ordered data points is then to connect neighboring data points with straight lines. Moreover, the piecewise linear interpolant I_n, which interpolates P(a, ·) at the points x_j, has the nice property of being a local construction: the interpolant I_n P(a, ·) on an interval [x_j, x_(j+1)] is completely defined by the values of P(a, ·) at x_j and x_(j+1). This piecewise linear interpolant preserves the shape of the data extremely well: it preserves both concavity and monotonicity. However, if a ∈ ]0, 2[, the order of this method is lost in the regions where the derivatives of P(a, ·) have singularities. In order to remedy this, we use a piecewise linear interpolant on the graded grid, which eliminates the effect of the singular point on the approximation accuracy. Define S₁(x) to be the composite piecewise linear interpolant of P(a, ·) on the grid {x_j}. We now discuss the approximation accuracy of S₁(x).
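The graded-grid interpolant is easy to inspect numerically. In the sketch below (parameter values n, r and the variable names are illustrative; SciPy supplies P and Q⁻¹), the nodes x_j = x_ε (j/n)^r cluster near 0, where the derivative of P(a, ·) blows up for a < 1:

```python
import numpy as np
from scipy.special import gammainc, gammainccinv

a, n, r = 0.5, 64, 4.0
x_eps = gammainccinv(a, 1e-16)            # x_eps = Q^{-1}(a, eps), right endpoint
xj = x_eps * (np.arange(n + 1) / n) ** r  # graded grid
pj = gammainc(a, xj)                      # table of (x_j, P(a, x_j)) values

# The interpolant inherits monotonicity from the data (shape preservation) ...
assert np.all(np.diff(pj) > 0)

# ... and stays uniformly close to P(a, .) on [0, x_eps].
t = np.linspace(0.0, x_eps, 10_001)
err = np.abs(np.interp(t, xj, pj) - gammainc(a, t))
assert err.max() < 1e-2
```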
One obtains a uniform error bound for S₁ of the form C n^(−2) on [0, x_ε], where C is a positive constant which is independent of n.

This completes the proof. ✷
Let p ∈ ]0, 1 − ε[. The first step is to search the table to find the interval [x_i, x_(i+1)] such that p ∈ [P(a, x_i), P(a, x_(i+1))]. On the first iteration, the value x₀ is computed by linear interpolation between x_i and x_(i+1): a good initial approximation x₀ for the iterative procedure (4.1) is the zero of the linear segment connecting the two points (x_i, P(a, x_i)) and (x_(i+1), P(a, x_(i+1))), i.e., the solution of S₁(x₀) = p, which is given by the formula

x₀ = x_i + (p − P(a, x_i)) (x_(i+1) − x_i) / (P(a, x_(i+1)) − P(a, x_i)),

with ε = 10⁻¹⁶ and n = 8. The accuracy of the initial estimates is listed in Table 3. Numerical experiments indicate that a few iterations of Newton's method are sufficient to achieve a double-precision solution in a reasonable time frame.
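The complete inversion scheme can be sketched end to end: tabulate P(a, ·) on the graded grid, locate the bracketing interval, start from the zero of the linear segment, then refine with the Newton iteration (4.1). In this sketch SciPy supplies P and the reference inverse, and the grid parameters are illustrative:

```python
import numpy as np
from math import gamma
from scipy.special import gammainc, gammaincinv, gammainccinv

def inv_P(a, p, n=8, r=4.0, iters=8):
    """Approximate P^{-1}(a, p) by table lookup plus Newton refinement."""
    x_eps = gammainccinv(a, 1e-16)               # right endpoint x_eps
    xj = x_eps * (np.arange(n + 1) / n) ** r     # graded grid
    pj = gammainc(a, xj)
    i = np.searchsorted(pj, p) - 1               # p in [P(a, x_i), P(a, x_{i+1})]
    i = min(max(i, 0), n - 1)
    # Initial guess: zero of the linear segment through the bracketing points.
    x = xj[i] + (p - pj[i]) * (xj[i + 1] - xj[i]) / (pj[i + 1] - pj[i])
    for _ in range(iters):                       # Newton refinement, as in (4.1)
        fa = x ** (a - 1.0) * np.exp(-x) / gamma(a)   # f_a(x) = dP(a, x)/dx
        x -= (gammainc(a, x) - p) / fa
    return x

# Compare against SciPy's own inverse for several regimes of a and p.
for a in (0.5, 1.5, 2.5):
    for p in (0.1, 0.5, 0.9):
        assert abs(inv_P(a, p) - gammaincinv(a, p)) < 1e-10
```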

Simulating of univariate generalized Gaussian noise
As an application, we used the proposed procedure for computing Q⁻¹ in order to simulate generalized Gaussian noise with exponent α = 3/4. For illustration, a typical evolution of this process over an interval of time of length 10³ is shown in Figure 1. We have performed an estimation of its probability density function based on 10⁷ values drawn from the generalized Gaussian distribution and collected into bins of width Δx = 0.1. The estimated density presented in Figure 2 is compared with the theoretical model of (2.6) and shows very good agreement.

Simulation of bivariate generalized Gaussian noise
In this section, some simulations with different values of the shape parameter β are presented. Figure 3 shows realizations of the bivariate standard generalized Gaussian distribution, generated with the algorithm described above over an interval of time of length 10³. We have also performed an estimation of the joint probability density function (2.9) based on 10⁶ simulated values. Figures 4-6 show the simulated probabilities and the theoretical density (2.9). As can be seen in the figures, the probability density function is well approximated by the empirical density.

Conclusion and discussion
In this paper, we introduced the multivariate generalized Gaussian distribution along with a procedure to generate samples from the 1D and 2D standard generalized Gaussian distributions. The procedure is based on the inverse transform of the Gamma cumulative distribution, expressed in terms of the inverse of the regularized incomplete Gamma function. We proposed a simple method for computing the incomplete Gamma function ratios, based on the numerical quadrature for weakly singular integrals by nonlinear spline approximations proposed in [15]. To approximate P⁻¹(a, ·), we proposed a very simple and efficient method for computing a good starting point for the iterative procedure. The main advantages of this method are that it requires little computational time, is very simple to implement, does not require any elaborate special-function software, and can be straightforwardly coded in any programming language with standard algebra. Likewise, simulation becomes especially simple for families of distributions defined by circular contours in polar coordinates. The obtained results highlight that the proposed procedure correctly generates univariate and bivariate standard generalized Gaussian noise for a given shape parameter.
It would be interesting to investigate possible applications of univariate and bivariate generalized Gaussian distributions in modelling noise, especially in signal and image processing. This is left for future research.