Chaoticity Properties of Fractionally Integrated Generalized Autoregressive Conditional Heteroskedastic Processes

Fractionally integrated generalized autoregressive conditional heteroskedasticity (FIGARCH) arises in the modeling of financial time series. FIGARCH is essentially governed by a system of nonlinear stochastic difference equations. In this work, we study the chaoticity properties of FIGARCH(p, d, q) processes by computing mutual information, correlation dimensions, false nearest neighbours (FNN), and the largest Lyapunov exponents (LLE), for both the stochastic difference equation and the simulated financial time series, by applying Wolf's algorithm, Kantz's algorithm, and the Jacobian algorithm. Although Wolf's algorithm produced positive LLEs, Kantz's algorithm and the Jacobian algorithm, which were developed subsequently because of the shortcomings of Wolf's algorithm, consistently generated negative LLEs. Thus, in addition to confirming the inefficiency of Wolf's method pointed out earlier by Rosenstein (1993) and more recently by Dechert and Gençay (2000), the negative LLE results of Kantz's and the Jacobian algorithms lead us to conclude that FIGARCH(p, d, q) is not a deterministic chaotic process.
Bulletin of Mathematical Sciences and Applications, ISSN 2278-9634, Vol. 15, pp. 69-82. doi:10.18052/www.scipress.com/BMSA.15.69. Submitted: 2016-03-09; Revised: 2016-04-11; Accepted: 2016-04-15; Online: 2016-05-18. 2016 SciPress Ltd., Switzerland. SciPress applies the CC-BY 4.0 license to works we publish: https://creativecommons.org/licenses/by/4.0/

Since its discovery four decades ago, chaos theory has attracted a great deal of interest. The existence of chaotic behavior has been studied in many disciplines, ranging from atmospheric dynamics (e.g. Lorenz, 1969; Essex et al., 1987), geophysics (e.g. Hense, 1987; Wilcox et al., 1991; Lorenz, 1996; Sivakumar, 2004), medicine (e.g. Almog et al., 1990; Goldberger et al., 1988; Babloyantz, 1985; Sviridova et al., 2015), and turbulence (e.g. Abarbanel, 1994), to financial markets (e.g. Hsieh, 1991; DeCoster, 1992; Brooks, 1998; Frezza, 2014) and electrical circuits (e.g. Yim et al., 2004).


Introduction
Detection of chaotic behavior in financial and economic (both micro and macro) data has been the topic of numerous scientific studies (Dechert and Gençay, 2000; Das and Das, 2006-7; Moeni et al., 2007; Günay, 2015). The existence of chaos in data favors short-term predictability and controllability (Abarbanel, 1996) of the underlying difference equations, which draws the attention of scientific circles. However, when dealing with financial and economic data, one should always bear in mind that GARCH (generalized autoregressive conditional heteroscedasticity) models (Francq and Zaqoian, 2010) mimic the stylized facts. A GARCH model is a set of nonlinear stochastic difference equations; it is therefore quite a challenging idea to associate it with deterministic chaos. Here we focus on the chaoticity properties of the FIGARCH (fractionally integrated GARCH) model by considering the correlation dimension and Lyapunov exponents.
The FIGARCH model was introduced by Baillie, Bollerslev and Mikkelsen (1996) by modifying the GARCH model to provide more persistence in the conditional variance. The model allows a slow hyperbolic rate of decay for the innovations in the conditional variance and is able to capture the long memory of conditional volatility. Recent research has found evidence of long-range dependence for a variety of financial assets and strong evidence of long memory in volatility (Cujaeiro et al., 2008).
Chaotic systems are deterministic systems that are unpredictable in the long term due to their sensitivity to even very small changes in initial conditions. In the deterministic picture, irregularity can be generated autonomously by the nonlinearity of the intrinsic dynamics. The most direct link between chaos theory and the real world is the analysis of time series data in terms of nonlinear dynamics. Chaos theory has inspired a new set of useful time series tools and provides a new language in which to formulate time series problems (Schreiber, 1999).
This paper investigates the existence of chaoticity in the nonlinear FIGARCH model by using both simulated time series and the nonlinear difference equation directly. As a starting point, FIGARCH is assumed to be a deterministic chaotic system and, as is common practice, the phase space is reconstructed by the delay coordinate embedding technique. Takens' theorem guarantees that, with an appropriate embedding dimension and delay time, the reconstructed phase space is a one-to-one image of the original system and has the same mathematical properties.
Accordingly, in the third section the embedding dimension and delay time are determined: the mutual information method is applied to estimate an appropriate time delay, and the embedding dimension is then determined by the false nearest neighbor method. The correlation dimension provides a tool to quantify self-similarity, so in the next section the correlation dimension is calculated using Grassberger and Procaccia's procedure. In the fifth section, the Lyapunov exponent is calculated to quantify the sensitivity to initial conditions, the most essential characteristic of chaos. For this purpose, different algorithms are employed: Wolf's and Kantz's algorithms are applied to the simulated time series, and a more direct method is used by constructing the dimensional map from the difference equation. The summary and conclusions of this paper are presented in Section 6.

Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity
Generalized Autoregressive Conditional Heteroscedastic (GARCH) and Integrated GARCH (IGARCH) models were developed by Bollerslev (1986) and Engle and Bollerslev (1986), respectively. The GARCH model suffers from several problems, such as the non-negativity constraints and its difficulty with leverage effects. Besides, the model does not allow for any direct feedback between the conditional variance and the conditional mean. On the other hand, in most empirical situations the IGARCH model seems too restrictive, as it implies infinite persistence of a volatility shock.
Inspired by these problems, the Fractionally Integrated GARCH (FIGARCH) model was introduced by Baillie, Bollerslev, and Mikkelsen (1996) as a new process generalizing the well-known GARCH to allow persistence in the conditional variance. It was developed to provide a more flexible class of processes for the conditional variance, better capable of explaining and representing the observed temporal dependencies in financial market volatility (Baillie, Bollerslev and Mikkelsen, 1996).
The FIGARCH model is obtained by replacing the first difference operator (1 − L) in the IGARCH model with the fractional differencing operator (1 − L)^d. FIGARCH(p, d, q) is then written as

φ(L)(1 − L)^d u_t^2 = ω + [1 − β(L)]ε_t, (1)

where 0 < d < 1, ε_t ≡ u_t^2 − σ_t^2, and all the roots of φ(L) and [1 − β(L)] lie outside the unit circle. FIGARCH(p, d, q) can be rearranged as

[1 − β(L)]σ_t^2 = ω + {[1 − β(L)] − φ(L)(1 − L)^d} u_t^2. (2)
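To make the definition concrete, the conditional variance can be expanded into a truncated ARCH(∞) representation, σ_t^2 = ω/(1 − β) + λ(L)u_t^2 with λ(L) = 1 − [1 − β(L)]^(-1) φ(L)(1 − L)^d, and simulated with Gaussian innovations. The Python sketch below does this for FIGARCH(1, d, 1); the parameter values, truncation length and burn-in are illustrative assumptions, not the settings used in the paper (which uses the MFE Toolbox and OxMetrics).

```python
import numpy as np

def figarch_weights(d, phi, beta, n_lags=1000):
    """ARCH(inf) weights lambda_1..lambda_n of FIGARCH(1,d,1):
    lambda(L) = 1 - (1 - beta L)^(-1) (1 - phi L)(1 - L)^d."""
    # fractional differencing coefficients of (1 - L)^d
    pi = np.empty(n_lags + 1)
    pi[0] = 1.0
    for j in range(1, n_lags + 1):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    # c(L) = (1 - phi L)(1 - L)^d
    c = pi.copy()
    c[1:] -= phi * pi[:-1]
    # psi(L) = c(L) / (1 - beta L), expanded by psi_k = c_k + beta * psi_{k-1}
    psi = np.empty(n_lags + 1)
    psi[0] = c[0]
    for k in range(1, n_lags + 1):
        psi[k] = c[k] + beta * psi[k - 1]
    return -psi[1:]                       # lambda_k = -psi_k for k >= 1

def simulate_figarch(n, d, phi=0.2, beta=0.4, omega=0.01, n_lags=1000, seed=0):
    """Simulate u_t = sigma_t z_t, z_t ~ N(0,1), with FIGARCH(1,d,1) variance
    sigma_t^2 = omega/(1 - beta) + sum_k lambda_k u_{t-k}^2 (truncated)."""
    rng = np.random.default_rng(seed)
    lam_rev = figarch_weights(d, phi, beta, n_lags)[::-1]
    base = omega / (1.0 - beta)
    u = np.zeros(n + n_lags)
    u2 = np.full(n + n_lags, base)        # pre-sample u^2 set to the base level
    for t in range(n_lags, n + n_lags):
        sigma2 = base + lam_rev @ u2[t - n_lags:t]
        u[t] = np.sqrt(sigma2) * rng.standard_normal()
        u2[t] = u[t] ** 2
    return u[n_lags:]                     # drop the burn-in

x = simulate_figarch(2000, d=0.4)
```

For 0 < d < 1 the weights λ_k decay hyperbolically rather than exponentially, which is exactly the long-memory behavior the model was designed to capture.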
Conrad and Haag (2006) also introduced a set of conditions that guarantee the non-negativity of the conditional variance in all situations. Moreover, Davidson (2004) showed that the FIGARCH model possesses more memory than a GARCH or IGARCH model.

Chaotic Behavior and FIGARCH
Chaotic behavior means irregular motion, unpredictability, and sensitivity to initial conditions. Our purpose is to identify the chaotic properties of the FIGARCH nonlinear stochastic difference equation in this sense. Its chaotic nature will give us an idea about its predictability horizon and forecasting quality.
In order to measure chaos in time series data, the Lyapunov exponent can be estimated, which is a measure of the average speed with which infinitesimally close states separate. To find the Lyapunov exponents of the data, one first needs to go from the scalar time series to the multivariate state (phase) space, which is required for chaotic motion to occur in the first place.
Reconstructing Phase Space. The answer to the question of how to go from a scalar to a multivariate state is the geometric embedding theorem attributed to Takens and Mane (1981). Abarbanel asserts that all variables in a nonlinear process are generically connected and influence each other.
Given a time series {x_0, x_1, ..., x_i, ..., x_n}, Takens implies that the reconstructed attractor of the original system can be written as the vector sequence

y(i) = [x_i, x_{i+T}, x_{i+2T}, ..., x_{i+(d−1)T}],

where T is the embedding delay and d is the embedding dimension. Takens also states that for a large enough d, many important properties of the original system are reproduced without ambiguity in the new space of vectors. In other words, the reconstructed attractor has the same mathematical properties (such as dimension and Lyapunov exponents) as the original system.
As seen from the formula, in order to reconstruct the attractor, proper values of the embedding delay and the embedding dimension must be determined.
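The reconstruction step itself is only a few lines of code. A minimal sketch, assuming a NumPy environment:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Matrix of delay vectors y(i) = [x_i, x_{i+tau}, ..., x_{i+(dim-1)tau}]."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau          # number of complete delay vectors
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

# a 3-dimensional embedding with delay 2 of a length-10 series
y = delay_embed(np.arange(10), dim=3, tau=2)   # shape (6, 3); y[0] = [0, 2, 4]
```

Each row of the returned matrix is one reconstructed state vector, so standard nearest-neighbor and dimension estimators can be applied directly to it.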
Data. In order to analyze FIGARCH data, several FIGARCH(1, d, 1) models are generated. The alpha, beta and omega coefficients are held fixed at 0.01 while d ranges from 0.05 to 0.9, with 8192 data points per series. For each d value, 50 random samples are generated and the results are tested. Each model is simulated using Kevin Sheppard's MFE Toolbox and then, for verification, the OxMetrics Garch package is applied as a second source.
Mutual Information. The embedding delay T is determined by looking for the first minimum of a nonlinear correlation function called the mutual information, introduced by Fraser and Swinney (1986) as a suitable quantity for determining T from the dependence between x_i and x_{i+T}.
There are two important principles in the estimation of T.
1. T has to be large enough that the information in x_{i+T} is significantly different from the information in x_i.
2. T should not be so large that x_{i+T} and x_i are completely independent in a statistical sense.
Given sets A = {a_i} and B = {b_j}, the mutual information between them, in bits, is written as

I_AB(a_i, b_j) = log2[ P_AB(a_i, b_j) / (P_A(a_i) P_B(b_j)) ],

where P_A(a_i) and P_B(b_j) are the individual probability densities, while P_AB(a_i, b_j) is the joint probability density of A and B.
If a_i and b_j are completely independent, P_AB(a_i, b_j) = P_A(a_i)P_B(b_j) and the mutual information is zero. The average over all measurements of this information statistic between the A and B measurements is written as

I_AB = Σ_{a_i, b_j} P_AB(a_i, b_j) log2[ P_AB(a_i, b_j) / (P_A(a_i) P_B(b_j)) ].
Mutual information measures the mutual dependence of two sets based on the notion of the information shared between them. So, for measurements s(n) at time n connected to the measurement s(n + T), we can rewrite the average mutual information as

I(T) = Σ P(s(n), s(n+T)) log2[ P(s(n), s(n+T)) / (P(s(n)) P(s(n+T))) ].

When T → ∞, I(T) → 0, since the correlation between s(n) and s(n + T) disappears (Abarbanel, 1996). Fraser suggests that, since I(T) acts as a kind of nonlinear autocorrelation function, it is appropriate to choose the time delay at the first minimum of the mutual information, although this prescription can sometimes be misleading.
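A minimal histogram-based estimate of I(T), together with the first-minimum rule, can be sketched as follows; the bin count and scan range are illustrative choices rather than the paper's settings:

```python
import numpy as np

def mutual_information(x, tau, bins=32):
    """Histogram estimate (in bits) of the mutual information I(T)
    between x_t and x_{t+tau}."""
    a, b = x[:-tau], x[tau:]
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab /= pab.sum()                      # joint probabilities
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    mask = pab > 0
    return float(np.sum(pab[mask] * np.log2(pab[mask] / np.outer(pa, pb)[mask])))

def first_minimum_delay(x, max_tau=40, bins=32):
    """Embedding delay chosen at the first local minimum of I(T);
    falls back to the global minimum if no local minimum is found."""
    mi = [mutual_information(x, t, bins) for t in range(1, max_tau + 1)]
    for t in range(1, len(mi) - 1):
        if mi[t] < mi[t - 1] and mi[t] <= mi[t + 1]:
            return t + 1, mi              # delays are 1-based
    return int(np.argmin(mi)) + 1, mi

# usage on a sample series
x = np.sin(0.3 * np.arange(4000))
tau, curve = first_minimum_delay(x, max_tau=20)
```

The plug-in histogram estimator is biased for small samples, which is one reason the first-minimum rule is treated as a heuristic rather than an exact criterion.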
For each model, in order to identify the proper embedding delay, the mutual information is computed; the results are given in Table 1.

False Nearest Neighbors. The nearest neighbor of y(i) in phase space is the vector

y^NN(i) = [x^NN_i, x^NN_{i+T}, ..., x^NN_{i+(d−1)T}].

If the vector y^NN(i) is a false neighbor of y(i), having arrived in its neighborhood by projection from a higher dimension because the present dimension d does not unfold the attractor, then by going to the next dimension d + 1 this false neighbor may be moved out of the neighborhood of y(i).
By examining every data point y(i) and asking at what dimension all false neighbors are removed, intersections of orbits projected from higher dimensions are sequentially eliminated until no intersections remain. At that point, the dimension d at which the attractor is unfolded has been identified.
Comparing the distance between the vectors y(i) and y^NN(i) in dimension d with the distance between the same vectors in dimension d + 1, it can easily be established which neighbors are true and which are false. One only needs to compare the added-coordinate distance |x_{i+dT} − x^NN_{i+dT}| with the Euclidean distance |y(i) − y^NN(i)| between nearest neighbors in dimension d.
If the additional distance is large compared to the distance in dimension d between nearest neighbors, then we have a false neighbor.
The square of the Euclidean distance between nearest-neighbor points as seen in dimension d is

R_d(i)^2 = Σ_{k=0}^{d−1} [x_{i+kT} − x^NN_{i+kT}]^2,

while in dimension d + 1 it is

R_{d+1}(i)^2 = R_d(i)^2 + [x_{i+dT} − x^NN_{i+dT}]^2.

The distance between the points seen in dimension d + 1 relative to the distance in dimension d is

|x_{i+dT} − x^NN_{i+dT}| / R_d(i).

When this quantity is larger than some threshold, we have a false neighbor (Kennel, Brown and Abarbanel, 1992). A plot of the percentage of false neighbors shows where the geometry unfolds and where no further unfolding occurs. With the correct choice of the dimension d, modelling the data with d dynamical degrees of freedom is adequate to capture the properties of the source.
Figure 2 shows the minimum embedding dimension, at which the percentage of false nearest neighbors goes to zero given some threshold r_tol. Here r_tol is the false-neighbor Euclidean distance tolerance and a_tol is the neighbor tolerance based on the attractor size. Neighbors are declared false when the ratio of the Euclidean distances between neighbor candidates in successive embedding dimensions exceeds r_tol; the disappearance of false neighbors indicates the minimum embedding dimension.
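The test described above can be sketched as follows (a brute-force O(N^2) version; the default thresholds r_tol = 15 and a_tol = 2 are commonly quoted values, assumed here rather than taken from the paper):

```python
import numpy as np

def fnn_fraction(x, dim, tau, r_tol=15.0, a_tol=2.0):
    """Fraction of false nearest neighbors at embedding dimension `dim`
    (brute-force version of the Kennel-Brown-Abarbanel test)."""
    x = np.asarray(x)
    n = len(x) - dim * tau                # x[i + dim*tau] is the (d+1)-th coordinate
    if n < 2:
        raise ValueError("series too short for this (dim, tau)")
    y = np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])
    r_a = x.std()                         # attractor size for the a_tol criterion
    n_false = 0
    for i in range(n):
        d2 = np.sum((y - y[i]) ** 2, axis=1)
        d2[i] = np.inf                    # exclude the point itself
        j = int(np.argmin(d2))            # nearest neighbor in dimension dim
        r_d = np.sqrt(d2[j])
        extra = abs(x[i + dim * tau] - x[j + dim * tau])
        if r_d > 0 and (extra / r_d > r_tol
                        or np.hypot(r_d, extra) / r_a > a_tol):
            n_false += 1
    return n_false / n
```

On white noise the fraction stays high at every dimension, while for a deterministic signal it drops toward zero once the attractor is unfolded; this qualitative difference is what Figure 2 exploits.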

Correlation Dimension
Correlation dimension is a widely used and accepted tool for analyzing the degree of complexity. Grassberger and Procaccia (1983) introduced a useful method to compute the correlation dimension, which characterizes an attractor through the contraction rate of a fractal measure in phase space. They defined the correlation sum, which approximates the probability of having a pair of points with a separation distance less than a given size ε, as

C(ε) = (2 / (N(N − 1))) Σ_{i<j} Θ(ε − |y(i) − y(j)|),

where Θ is the Heaviside step function, Θ(x) = 0 for x ≤ 0 and Θ(x) = 1 for x > 0. As N → ∞, for small values of ε, C follows a power law

C(ε) ∝ ε^{D_C},

where D_C is the correlation dimension. Therefore, D_C is defined as

D_C = lim_{ε→0} lim_{N→∞} ln C(ε) / ln ε.

Because of the slow convergence, the correlation dimension is estimated in this work as the slope of a least-squares straight-line fit in a plot of ln C(ε) vs. ln ε.
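The estimation procedure can be sketched directly from these definitions (brute-force pair counting; the ε grid in the usage below is an illustrative choice):

```python
import numpy as np

def correlation_sum(y, eps):
    """C(eps): fraction of point pairs in `y` closer than eps."""
    n = len(y)
    count = 0
    for i in range(n - 1):                          # brute-force pair counting
        d = np.sqrt(np.sum((y[i + 1:] - y[i]) ** 2, axis=1))
        count += np.count_nonzero(d < eps)
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(y, eps_values):
    """Least-squares slope of ln C(eps) against ln eps."""
    eps_values = np.asarray(eps_values, dtype=float)
    c = np.array([correlation_sum(y, e) for e in eps_values])
    good = c > 0                                    # drop empty scales
    slope, _ = np.polyfit(np.log(eps_values[good]), np.log(c[good]), 1)
    return slope
```

As a sanity check, points scattered along a line yield a slope near 1 and points filling a plane a slope near 2; the lack of such convergence for the FIGARCH simulations is what the text reports.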
The correlation dimension is calculated for each FIGARCH d-value simulation, for embedding delays from 1 to 20 and embedding dimensions from 1 to 20 (Figure 3).
For the embedding delay values determined by the mutual information estimation, the correlation dimension versus embedding dimension results appear in Figure 4. The correlation dimension does not converge in any of them, so the embedding dimension cannot be determined this way.
Then, assuming there might be convergence of the correlation dimension for other embedding delay values, correlation dimension values are calculated for embedding delays from 1 to 20 and embedding dimensions from 1 to 20. Again, no clear convergence is observed.
Regarding the choice between the correlation dimension and the false nearest neighbor method for determining the embedding dimensions of the FIGARCH simulations, Kostelich and Swinney suggest that both methods work well when applied to low-dimensional (3 or less) chaotic attractors, while the convergence of the nearest neighbor method is better for high-dimensional attractors. Considering that the attractors here are high-dimensional, the FNN results are taken as sufficient for further analysis.

Lyapunov Exponent
The Lyapunov exponent is a parameter characterizing the behavior of a dynamical system. It gives the average rate of exponential divergence from nearby initial conditions. If the Lyapunov exponent is positive, the system is suggested to be chaotic; if it is negative, the system will converge to a periodic state; and if it is zero, the system is at a bifurcation.
While there are several Lyapunov exponents, the largest Lyapunov exponent is the most widely used to test chaotic behavior. When the attractor is chaotic, the trajectories diverge, on average, at an exponential rate characterized by the largest Lyapunov exponent.
After obtaining the two essential inputs for an accurate computation of the Lyapunov exponent, the correct embedding delay and embedding dimension, in the final step the maximal Lyapunov exponent is calculated, first by using Wolf's algorithm:

λ = (1 / (M t_evolv)) Σ_{k=1}^{M} ln( L_evolv(k) / L_0(k) ),

where L_0 is the Euclidean distance between nearest neighbors of the initial point, t_evolv is a fixed evolution time of the same order of magnitude as the embedding delay, L_evolv is the final distance between the evolved points, and M is the total number of replacement steps.
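A stripped-down sketch of this estimate follows the nearest neighbor of each reference point for a fixed evolution time and averages the log stretching rates; the full algorithm's replacement step with its small-angle criterion is omitted here, and the logistic map is used as a stand-in test signal with a known exponent, not the FIGARCH series:

```python
import numpy as np

def wolf_lle(x, dim, tau, t_evolv=1, theiler=10):
    """Simplified Wolf-type estimate of the largest Lyapunov exponent:
    follow the nearest neighbor of each reference point for t_evolv steps
    and average the log stretching rates.  (The full algorithm's
    replacement step with a small angular error is omitted.)"""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    y = np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])
    logs = []
    for i in range(0, n - t_evolv, t_evolv):
        d2 = np.sum((y[: n - t_evolv] - y[i]) ** 2, axis=1)
        lo, hi = max(0, i - theiler), i + theiler + 1
        d2[lo:hi] = np.inf                 # Theiler window: skip close-in-time points
        j = int(np.argmin(d2))
        l0 = np.sqrt(d2[j])                # initial separation L_0
        l1 = np.linalg.norm(y[i + t_evolv] - y[j + t_evolv])   # evolved separation
        if l0 > 0 and l1 > 0:
            logs.append(np.log(l1 / l0))
    return float(np.mean(logs)) / t_evolv

# usage on the logistic map x -> 4x(1 - x), whose exact exponent is ln 2
z = np.empty(3000)
z[0] = 0.3
for k in range(1, 3000):
    z[k] = 4.0 * z[k - 1] * (1.0 - z[k - 1])
lle = wolf_lle(z, dim=1, tau=1)
```

Because the separation is renormalized only implicitly (by restarting from a fresh neighbor at each reference point), this sketch shares the known weakness of Wolf-type estimates on noisy data: noise inflates the apparent stretching and biases the exponent upward.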
As shown in Figure 6, the maximal Lyapunov exponents converge to positive values for all FIGARCH d values, suggesting extreme sensitivity to changes in initial conditions, which is an indication of chaotic behavior.

Kantz's Algorithm. Kantz (1994) and Rosenstein (1993) independently proposed a consistent estimator for the maximal Lyapunov exponent. In order to calculate it, all neighbors closer to a reference point than a given size ε are identified, and the average distance of all neighboring trajectories to the reference trajectory is followed as a function of time:

S(t) = (1/N) Σ_{i=1}^{N} ln[ (1/|Ω_i|) Σ_{p_j ∈ Ω_i} |p_{i+t} − p_{j+t}| ].
Here p_i are the embedding vectors and |Ω_i| is the number of neighbors in the neighborhood Ω_i of the reference state p_i. If S(t) exhibits a linear increase, the slope of the fitted line can be taken as an estimate of the maximal exponent.
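A compact sketch of the resulting S(t) computation is given below; the number of reference points, the neighborhood size ε and the horizon are illustrative choices, and the logistic map again serves as a stand-in test signal:

```python
import numpy as np

def kantz_curve(x, dim, tau, eps, t_max=8, n_ref=300, theiler=10):
    """Kantz's stretching curve S(t): for each reference state, average the
    distance of its eps-neighbors to the reference trajectory after t steps,
    take the log, and average over reference states."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    y = np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])
    m = n - t_max                          # states that can still be evolved t_max steps
    last = (dim - 1) * tau                 # offset of the last delay coordinate
    idx = np.arange(m)
    curves = []
    for i in np.linspace(0, m - 1, n_ref, dtype=int):
        d = np.sqrt(np.sum((y[:m] - y[i]) ** 2, axis=1))
        nbrs = idx[(d < eps) & (np.abs(idx - i) > theiler)]   # eps-neighborhood
        if len(nbrs) == 0:
            continue
        curves.append(np.log([np.mean(np.abs(x[nbrs + last + t] - x[i + last + t]))
                              for t in range(t_max + 1)]))
    return np.mean(curves, axis=0)

# usage on the chaotic logistic map (exact exponent ln 2)
z = np.empty(5000)
z[0] = 0.31
for k in range(1, 5000):
    z[k] = 4.0 * z[k - 1] * (1.0 - z[k - 1])
s = kantz_curve(z, dim=1, tau=1, eps=0.01)
slope = np.polyfit(np.arange(3), s[:3], 1)[0]
```

Averaging over whole neighborhoods rather than a single neighbor is what makes this estimator far more robust to noise than Wolf's, which is why its negative results for FIGARCH carry more weight.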
Contrary to the results of Wolf's algorithm, calculations with Kantz's algorithm produce negative maximal Lyapunov exponents for all FIGARCH d values, contradicting the hypothesis that FIGARCH is a deterministic chaotic system.

Direct Approach by Constructing the Dimensional Map. Finally, we compute the maximal Lyapunov exponent by using the dynamical rules of the map directly from the difference equation, rather than from the simulated data used in the calculations with Wolf's and Kantz's algorithms.
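As an illustration of the direct approach on a map whose exponent is known in closed form, the sketch below computes the exponent of a one-dimensional map from its derivative along the orbit; the logistic map is a stand-in example for the method, not the FIGARCH map used in the paper:

```python
import numpy as np

def lle_direct(f, dfdx, x0, n=100_000, burn=1000):
    """Largest Lyapunov exponent of a one-dimensional map computed directly
    from its derivative: lambda = (1/n) * sum ln|f'(x_k)| along the orbit."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += np.log(max(abs(dfdx(x)), 1e-300))   # guard against f'(x) = 0
        x = f(x)
    return total / n

# logistic map at r = 4: the exact exponent is ln 2 (about 0.6931)
lam = lle_direct(lambda x: 4.0 * x * (1.0 - x), lambda x: 4.0 - 8.0 * x, 0.3)
```

For multidimensional maps the same idea generalizes by multiplying Jacobian matrices along the orbit with periodic QR re-orthonormalization, which is the Jacobian algorithm referred to in the abstract.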
To sum up, the results here match the findings obtained with Kantz's algorithm by delivering negative maximal Lyapunov exponents (Figure 8 shows the result for FIGARCH d = 0.80).