Serial correlation of detrended time series

Abstract

A preliminary essential procedure in time series analysis is the separation of the deterministic component from the random one. If the signal is the result of superposing a noise over a deterministic trend, then one must first estimate and remove the trend from the signal to obtain an estimate of the stationary random component. The errors accompanying the estimated trend are transferred to the estimated noise as well, taking the form of detrending errors. Therefore the statistical errors of the estimators of the noise parameters obtained after detrending are larger than the statistical errors characteristic of the noise considered separately. In this paper we study the detrending errors by means of a Monte Carlo method based on automatic numerical algorithms for generating nonmonotonic trends and for constructing estimated polynomial trends similar to those obtained by subjective methods. For a first order autoregressive noise we show that on average the detrending errors of the noise parameters evaluated by means of the autocovariance and autocorrelation functions are almost uncorrelated with the statistical errors intrinsic to the noise and have comparable magnitude. For a real time series with significant trend we discuss a recursive method for computing the errors of the estimated parameters after detrending and we show that the detrending error is larger than half of the total error.

Authors

Călin Vamoş
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy

Maria Crăciun
Tiberiu Popoviciu Institute of Numerical Analysis, Romanian Academy

Keywords

Paper coordinates

C. Vamoş, M. Crăciun, Serial correlation of detrended time series, Physical Review E, Vol. 78 (2008) article id. 036707,
doi: 10.1103/PhysRevE.78.036707

About this paper

Journal

Physical Review E

Publisher Name

American Physical Society

Print ISSN

1539-3755

Online ISSN

1550-2376

Google Scholar profile

google scholar link

References

[1] G. M. Viswanathan, S. V. Buldyrev, E. K. Garger, V. A. Kashpur, L. S. Lucena, A. Shlyakhter, H. E. Stanley, and J. Tschiersch, Phys. Rev. E 62, 4389 (2000).
[2] W. Knospe, L. Santen, A. Schadschneider, and M. Schreckenberg, Phys. Rev. E 65, 056133 (2002).
[3] K. Kiyono, Z. R. Struzik, N. Aoyagi, S. Sakata, J. Hayano, and Y. Yamamoto, Phys. Rev. Lett. 93, 178103 (2004); K. Kiyono, Z. R. Struzik, N. Aoyagi, F. Togo, and Y. Yamamoto, ibid. 95, 058101 (2005).
[4] W. M. Macek, R. Bruno, and G. Consolini, Phys. Rev. E 72, 017202 (2005).
[5] K. Kiyono, Z. R. Struzik, and Y. Yamamoto, Phys. Rev. Lett. 96, 068701 (2006).
[6] H.-D. Xi, Q. Zhou, and K.-Q. Xia, Phys. Rev. E 73, 056312 (2006).
[7] P. Weber, F. Wang, I. Vodenska-Chitkushev, S. Havlin, and H. E. Stanley, Phys. Rev. E 76, 016109 (2007).
[8] K. Hu, P. Ch. Ivanov, Z. Chen, P. Carpena, and H. E. Stanley, Phys. Rev. E 64, 011114 (2001).
[9] Z. Chen, K. Hu, P. Carpena, P. Bernaola-Galvan, H. E. Stanley, and P. Ch. Ivanov, Phys. Rev. E 71, 011104 (2005).
[10] J. W. Kantelhardt, E. Koscielny-Bunde, H. H. A. Rego, S. Havlin, and A. Bunde, Physica A 295, 441 (2001).
[11] P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods (Springer-Verlag, New York, 1996).
[12] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting (Springer-Verlag, New York, 2003).
[13] J. D. Hamilton, Time Series Analysis (Princeton University Press, Princeton, NJ, 1994).
[14] D. Maraun, H. W. Rust, and J. Timmer, Nonlinear Processes Geophys. 11, 495 (2004).
[15] E.-J. Wagenmakers, S. Farrell, and R. Ratcliff, Psychon. Bull. Rev. 11, 579 (2004); T. L. Thorton and D. L. Gilden, ibid. 12, 409 (2005).
[16] J. Timmer, U. Schwarz, H. U. Voss, I. Wardinski, T. Belloni, G. Hasinger, M. van der Klis, and J. Kurths, Phys. Rev. E 61, 1342 (2000).
[17] S. Yue and P. Pilon, Water Resour. Res. 39, 1077 (2003).
[18] C. Stărică and C. Granger, Rev. Econ. Stat. 87, 495 (2005).
[19] J. Gao, J. Hu, W.-W. Tung, Y. Cao, N. Sarshar, and V. P. Roychowdhury, Phys. Rev. E 73, 016117 (2006).
[20] C. Vamoş, Phys. Rev. E 75, 036705 (2007).
[21] A. Carbone, G. Castelli, and H. E. Stanley, Phys. Rev. E 69, 026105 (2004).
[22] A. Carbone and H. E. Stanley, Physica A 384, 21 (2007).
[23] C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger, Phys. Rev. E 49, 1685 (1994).
[24] G. Box, G. Jenkins, and G. Reinsel, Time Series Analysis: Forecasting and Control, 3rd ed. (Prentice-Hall, Upper Saddle River, NJ, 1994).
[25] C. Vamoş, Ş. M. Şoltuz, and M. Crăciun, e-print arXiv:0709.2963.
[26] The observational data, not yet published, were kindly provided by V. V. Morariu.
[27] F. Brochard and J. F. Lennon, J. Phys. (Paris) 36, 1035 (1975).
[28] H. Strey, M. Peterson, and E. Sackmann, Biophys. J. 69, 478 (1995).
[29] H.-G. Döbereiner, G. Gompper, C. K. Haluska, D. M. Kroll, P. G. Petrov, and K. A. Riske, Phys. Rev. Lett. 91, 048301 (2003).
[30] S. Zhao and G. W. Wei, Comput. Stat. Data Anal. 42, 219 (2007).

Serial correlation of detrended time series

Călin Vamoş (cvamos@ictp.acad.ro) and Maria Crăciun (craciun@ictp.acad.ro)
T. Popoviciu Institute of Numerical Analysis, Romanian Academy, P.O. Box 68, 400110 Cluj-Napoca, Romania
(Received 15 January 2008; revised manuscript received 24 July 2008; published 23 September 2008)

A preliminary essential procedure in time series analysis is the separation of the deterministic component from the random one. If the signal is the result of superposing a noise over a deterministic trend, then one must first estimate and remove the trend from the signal to obtain an estimate of the stationary random component. The errors accompanying the estimated trend are transferred to the estimated noise as well, taking the form of detrending errors. Therefore the statistical errors of the estimators of the noise parameters obtained after detrending are larger than the statistical errors characteristic of the noise considered separately. In this paper we study the detrending errors by means of a Monte Carlo method based on automatic numerical algorithms for generating nonmonotonic trends and for constructing estimated polynomial trends similar to those obtained by subjective methods. For a first order autoregressive noise we show that on average the detrending errors of the noise parameters evaluated by means of the autocovariance and autocorrelation functions are almost uncorrelated with the statistical errors intrinsic to the noise and have comparable magnitude. For a real time series with significant trend we discuss a recursive method for computing the errors of the estimated parameters after detrending and we show that the detrending error is larger than half of the total error.

DOI: 10.1103/PhysRevE.78.036707    PACS number(s): 05.40.Ca, 02.60.Ed

I. INTRODUCTION

Many observed time series are a result of the superposition of independent phenomena. If their characteristics are different enough, then one can separate and individually analyze them. One of the most frequent situations occurring in practice is the superposition of a stationary noise over a deterministic trend. In this case the noise characteristics are determined after the detrending of the total signal. We illustrate the diversity of the phenomena requiring detrending by some of the recent applications: atmospheric radioactivity [1], the velocity of highway traffic [2], integrated heart interbeat intervals [3], solar wind velocity [4], log-returns of financial indices [5], mean wind in turbulent thermal convection [6], financial volatility [7], etc.

The trend estimation is accompanied by errors which are transferred to the estimated noise obtained by detrending. We call detrending errors the errors affecting the statistical estimators of the noise due to the differences between the estimated and the real trend. In applications it is important to evaluate both the total error and the part due to detrending. However, to our knowledge, such estimations are missing. In this paper we present a numerical method to evaluate the detrending errors of the functions describing the serial correlation of the noise and of the parameters derived from these functions.

We perform Monte Carlo numerical simulations on statistical ensembles composed of artificially generated time series

$$x_n = f_n + z_n, \qquad (1)$$

where $f_n$, $n = 1, 2, \dots, N$, is a discretized deterministic trend and $z_n$, $n = 1, 2, \dots, N$, is a realization of a stationary stochastic process.
Using only the values of the total signal $x_n$ we compute the estimated trend $\tilde{f}_n$ and then the estimated noise

$$\tilde{z}_n = x_n - \tilde{f}_n. \qquad (2)$$

As a rule, all the quantities affected by detrending errors are denoted with a tilde. Because the estimated trend is always different from the real trend, the estimated noise does not coincide with the real noise. For artificial series the detrending errors can be exactly computed because we know both $f_n$ and $z_n$. But for observed time series we do not know the two components and the detrending errors can only be estimated by an iterative method for building statistical ensembles containing artificial time series.

The results of such simulations are useful only if the members of the statistical ensemble have a diversity comparable with that of the real time series. On the other hand, if the number of the parameters controlling the individual time series is too large, then the analysis of the simulation results becomes intricate and nonintuitive. But a time series defined by Eq. (1) is characterized by at least four parameters: one for the trend, two for the noise (the variance and the serial correlation), and one for the ratio between the noise and trend amplitudes. In order to preserve the clarity of the presentation we are compelled to use models for trend and noise that are as simple as possible, while retaining the essential characteristics of a real time series.

The main difficulty in building the statistical ensembles is to generate realistic nonmonotonic trends. The generation of a large number of trends with a significant variability using a fixed functional form requires a large number of parameters. For example, a polynomial trend must have a large enough degree, hence the number of its coefficients is also large. If we consider the polynomial degree as the single parameter characterizing the trend, then the coefficients are chosen by
means of a random algorithm and the form of the generated trend is difficult to control. Usually the resulting trend has only a few parts with significant monotonic variation.

In Appendix A we present the generation of a trend by joining together monotonic semiperiods of a sinusoid, i.e., the parts limited by two successive extrema, with random amplitudes and lengths. In this way we obtain a large enough variability for the generated trend using the number of its monotonic parts as the single parameter. The trend shapes obtained by this algorithm are much more diverse than those used in similar Monte Carlo simulations. For example, the study of the effects of trends on detrended fluctuation analysis (DFA) has been performed only for monotonic (linear, power-law, exponential, and logarithmic) and periodic (sinusoidal) trends [8-10].

The noise in Eq. (1) is a realization of an AR(q) stochastic process (autoregressive of order q). The properties of the autoregressive processes have been studied in detail and they are the basis of the linear stochastic theory of time series [11-13]. In fact, for most of the numerical simulations we have limited ourselves to AR(1) processes because their serial correlation is described by a single parameter. The analysis of the detrending errors for models with more parameters is similar to that described in this paper, although more elaborate.

The AR(q) stochastic processes have short range correlation, while most of the present research in physics (including the works cited at the beginning of this section) models the noise with stochastic processes with long range correlation, usually of the 1/f type. There are very few cases when these two types of noise models are compared in order to establish the optimum one. For instance, such model selection has been performed for atmospheric temperature [14] and human reaction time [15]. It is worth noticing that the AR model has been successfully applied to phenomena in climatology [14], astrophysics [16], hydrology [17], finance [18], pattern recognition [19], etc.

Even if our method of evaluating errors using statistical ensembles of numerically generated time series can also be applied to noises with long range correlation or to chaotic signals, the results could be different from those presented in this paper. In the case of long-range correlation the separation of the real trend from the stochastic trend, i.e., the large scale variations of the noise with long-range correlation, is difficult. In Ref. [20] we have shown that for monotonic trends the polynomial fitting gives slightly better results than the moving average. The scaling properties of the stochastic trend for noise with long-range correlation have been studied in Ref. [21] and they have been used to compute the information entropy of the noise [22].

There are methods to analyze the noise in time series of the form given by Eq. (1) which avoid the explicit computation of the trend. For example, DFA polynomially detrends parts of variable length from the summed signal [23]. Such a method is unbiased only if the detrending errors are independent of the statistical errors intrinsic to the noise. In this paper we show that this property holds for AR noises, but for long range correlated noises it must also be checked.
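As an illustration of the DFA idea mentioned above (summation of the signal followed by piecewise polynomial detrending [23]), the following minimal sketch computes the fluctuation function F(s). It is a generic, textbook-style implementation written for this page, not the code used by the authors; the function name, the window scheme (non-overlapping windows), and the parameter values are our own choices.

```python
import numpy as np

def dfa_fluctuation(z, scales, order=1):
    """Detrended fluctuation analysis of a noise record z (a sketch).

    The signal is first summed (integrated), then split into non-overlapping
    windows of length s; in every window a polynomial of the given order is
    removed and the root-mean-square residual is averaged over the windows.
    """
    y = np.cumsum(z - np.mean(z))          # summed (profile) signal
    F = []
    for s in scales:
        n_win = len(y) // s
        res = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

# Example: F(s) for a white noise record at a few window lengths.
rng = np.random.default_rng(1)
print(dfa_fluctuation(rng.normal(size=3000), scales=[16, 32, 64, 128]))
```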
In the following section we present the serial correlation functions that we shall use. In Sec. III the characteristics of the artificial time series making up the statistical ensembles are described. Then we present the results obtained for the detrending errors of the estimated serial correlation functions using Monte Carlo simulations and the influence of these errors on the estimated parameters of the AR(1) noise. In Sec. V we study the detrending errors of an observed time series. The last section is dedicated to conclusions. The two appendixes contain the automatic algorithms for generating realistic nonmonotonic trends and for choosing visually acceptable estimated trends.

II. SERIAL CORRELATION FUNCTIONS

In order to illustrate the types of serial correlation functions we consider the fluctuations of the relative area of a human red blood cell freely floating in a fluid, $x_n$, $N = 968$ [Fig. 1(a)]. In Sec. V we describe the experimental method used to obtain this time series and we justify the existence of a deterministic trend independent of the cell fluctuations. Figure 1(a) also shows the estimated polynomial trends $\tilde{f}_n$ of degrees q = 1, 3, 7, 14. When the degree of the polynomial trend increases, the shape of the trend does not change monotonically. For the chosen degrees the trend has significant changes, while for the other polynomial degrees the shape remains practically unchanged. We use polynomial fitting because the polynomial trends are characterized by a single parameter, their degree, therefore an automatic algorithm for trend estimation is easier to design. A complete discussion of the detrending errors is laborious even in this simple case and the same method can be applied to study other, more sophisticated algorithms for trend estimation.

[FIG. 1. The fluctuations of the relative area of a human red blood cell freely floating in a fluid and the estimated polynomial trends of degrees q = 1, 3, 7, 14 (a). The estimated sample ACvF (b), ACrF (c), and RACF (d) for the estimated noise.]

The estimated serial correlation is measured by the estimated sample autocovariance function (ACvF)

$$\tilde{\gamma}(h) = \frac{1}{N-h}\sum_{n=1}^{N-h} \tilde{z}_n \tilde{z}_{n+h} \qquad (3)$$

or by the estimated sample autocorrelation function (ACrF)
$$\tilde{\rho}(h) = \frac{\tilde{\gamma}(h)}{\tilde{\gamma}(0)}.$$

These two quantities are represented for $h \le 10$ in Figs. 1(b) and 1(c). One can see that $\tilde{\gamma}(h)$ has greater variability with respect to the degree of the estimated trend than $\tilde{\rho}(h)$. Therefore, the detrending error of the noise serial correlation is smaller for ACrF than for ACvF.

The variability reduction of the estimated sample ACrF obtained by dividing the quantity in Eq. (3) by $\tilde{\gamma}(0)$ is not uniformly distributed with respect to h. In order to further reduce the dependence on the degree of the estimated trend we observe that $\tilde{\gamma}$ has similar shapes regardless of the degree q [Fig. 1(b)]. Therefore we define the estimated sample reverse autocovariance function (RACF)

$$\tilde{Q}(h) = \tilde{\gamma}(0) - \tilde{\gamma}(h),$$

which, for h = 0, has the fixed value $\tilde{Q}(0) = 0$. As shown in Fig. 1(d), this quantity has a smaller variability than $\tilde{\gamma}$ and $\tilde{\rho}$ for all values of h.

These estimated serial correlation functions must be compared with those calculated using the actual values of the noise. A measured noise $\{z_n\}$ is composed of the values taken by the random variables $Z_n$. These values are affected by random fluctuations and all the statistical estimators computed from them differ from the theoretical ones. Due to statistical errors, the sample ACvF

$$\hat{\gamma}(h) = \frac{1}{N-h}\sum_{n=1}^{N-h} z_n z_{n+h} \qquad (4)$$

computed with the observed values $z_n$ fluctuates around the theoretical ACvF $\gamma(h)$ computed using the stochastic process $Z_n$. We measure the difference of the first H values of the two functions by the formula

$$\varepsilon_{\hat{\gamma}}(H) = \frac{1}{H+1}\,\|\hat{\gamma} - \gamma\|_H, \qquad H \ge 0, \qquad (5)$$

where

$$\|\hat{\gamma} - \gamma\|_H = \left[\sum_{h=0}^{H} \big(\hat{\gamma}(h) - \gamma(h)\big)^2\right]^{1/2}$$

is the usual square norm. We divide the norm by H + 1 because we intend to compare the statistical error for different values of H. In accordance with the usual practice [24], we compute ACvF only for $H \le N/4$.

The estimated sample ACvF defined by Eq. (3) does not coincide with the sample ACvF given by Eq. (4) because it also contains the detrending errors besides the fluctuations due to the random nature of the noise. Analogously to Eq. (5) we define the detrending error of the estimated sample ACvF

$$\varepsilon_{\tilde{\gamma}}(H) = \frac{1}{H+1}\,\|\tilde{\gamma} - \hat{\gamma}\|_H. \qquad (6)$$

Because for ACrF and RACF the first value is fixed, the denominator in Eqs. (5) and (6) is equal to H. Our goal is to analyze the relation between these two types of errors using statistical ensembles of numerically generated time series.

III. ARTIFICIAL TIME SERIES

An AR(1) process is an infinite stationary stochastic process $\{Z_n,\ n = 0, \pm 1, \pm 2, \dots\}$, each random variable $Z_n$ satisfying the relation

$$Z_n = \phi Z_{n-1} + G_n, \qquad (7)$$

where $G_n$ are uncorrelated Gaussian random variables with zero mean and variance $\sigma_G^2$ and $\phi$ is a real parameter, $|\phi| < 1$. The theoretical ACvF deduced from the properties of the stochastic process $Z_n$ is $\gamma(h) = \sigma^2 \phi^h$, where $\sigma^2 = \sigma_G^2 (1 - \phi^2)^{-1}$ is the variance of the AR(1) process. For $\phi = 0$ the AR(1) process reduces to a white Gaussian noise and as $\phi$ increases the serial correlation becomes larger. The theoretical ACrF is $\rho(h) = \phi^h$ and the theoretical RACF is $Q(h) = \sigma^2 (1 - \phi^h)$.

To generate a numerical series which is the realization of a finite sample of an AR(1) process with given $\phi$ and $\sigma$, we proceed as follows. Using a random number generator we obtain a series $g_n$, $n = 1, 2, \dots, N$, as a realization of a white Gaussian noise with zero mean and variance $\sigma_G^2 = \sigma^2 (1 - \phi^2)$. The series $z_n$, $n = 1, 2, \dots, N$, is obtained by making the transformations $z_1 = (\sigma / \sigma_G)\, g_1$ and $z_n = \phi z_{n-1} + g_n$ for $n > 1$. Then it can be shown that the theoretical ACvF of this finite AR(1) process is equal to that of the infinite AR(1) process (7) [25]. Thus the transient region at the beginning of the numerically generated time series is eliminated. If $|\phi| < 1$, then the AR(1) process is causal, i.e., the random variable $Z_n$ can be expressed using only the previous terms $G_m$, $m \le n$.
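The generation recipe and the serial correlation functions of Eqs. (3)-(7) can be sketched in a few lines. The following code is a minimal illustration written for this page (the function names and the example parameter values are ours); it generates a finite AR(1) sample with the stationary initialization described above, computes the sample ACvF, ACrF, and RACF, and evaluates the statistical error of Eq. (5).

```python
import numpy as np

def ar1_series(N, sigma, phi, rng):
    """Finite AR(1) sample with the initialization used in the text, so that
    its theoretical ACvF equals that of the infinite process (no transient)."""
    sigma_g = sigma * np.sqrt(1.0 - phi ** 2)
    g = rng.normal(0.0, sigma_g, N)
    z = np.empty(N)
    z[0] = (sigma / sigma_g) * g[0]
    for i in range(1, N):
        z[i] = phi * z[i - 1] + g[i]
    return z

def sample_acvf(z, H):
    """Sample ACvF, Eq. (4): gamma_hat(h) = (1/(N-h)) sum z_n z_{n+h}."""
    N = len(z)
    return np.array([np.sum(z[:N - h] * z[h:]) / (N - h) for h in range(H + 1)])

rng = np.random.default_rng(2)
sigma, phi, N, H = 1.0, 0.6, 3000, 20
z = ar1_series(N, sigma, phi, rng)

gam_hat = sample_acvf(z, H)             # ACvF
rho_hat = gam_hat / gam_hat[0]          # ACrF
Q_hat = gam_hat[0] - gam_hat            # RACF

h = np.arange(H + 1)
gam_th = sigma ** 2 * phi ** h          # theoretical gamma(h) of the AR(1) noise

# Statistical error intrinsic to the noise, Eq. (5): square norm of the first
# H+1 differences divided by H+1.
eps_gamma = np.sqrt(np.sum((gam_hat - gam_th) ** 2)) / (H + 1)
print(eps_gamma)
```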
The time series obtained with the algorithm described in Appendix A are characterized by the following parameters: the series length N, the standard deviation of the AR(1) noise $\sigma$, the parameter describing the serial correlation of the noise $\phi$, the minimum number of points in a monotonic part $N_{min}$, the number of monotonic parts of the trend P, and the ratio r between the amplitude of the trend and that of the noise. Depending on the aim of the numerical test we generate statistical ensembles choosing different values for these parameters.

We impose an upper limit of 0.9 for the values of $\phi$ because the AR(1) process with $\phi$ close to unity has a special behavior, similar to the Brownian motion, which must be analyzed with special methods [13]. Also, we consider only positive values for $\phi$ because few of the phenomena of interest are characterized by an anticorrelated noise. Hence the maximum range for the serial correlation parameter is $\phi \in [0, 0.9]$. The noise standard deviation has the fixed value $\sigma = 1$.

The maximum number of monotonic parts of the generated trend is limited to 5 in order to avoid too large degrees of the estimated polynomial trends, allowing at the same time the numerically generated trends to take a large enough diversity of shapes. For the minimum number of points in a monotonic part of the trend we have chosen the value $N_{min} = 20$.
The time series is dominated by noise if $r \in (0, 1)$ or by trend if $r \in (1, \infty)$. Since for r = 0 the trend (A2) vanishes, we eliminate the small values of r and choose the minimum value of r equal to 0.25 because, as shown in Fig. 2(a), in this situation the shape of the total signal still allows us to assume the presence of a trend. The maximum value of r is 4 and corresponds to the signal in Fig. 2(b), in which the noise still has a large enough amplitude to allow its estimation. The interval chosen for the variation of r, [0.25, 4], contains more signals dominated by trend because we are interested first of all in applications for which the existence of the trend can easily be supposed, such that its removal should be necessary.

[FIG. 2. Numerically generated time series obtained by superposing an AR(1) noise with $\sigma = 1$ and $\phi = 0.9$ over a deterministic trend (continuous line) composed of P = 5 semiperiods of a sinusoid with random lengths and amplitudes. The ratio between the amplitude of the trend and that of the noise is r = 0.25 (a) and r = 4 (b). The dashed line is the estimated polynomial trend with the maximum resemblance to the real trend (minimum $\eta$) but visually unacceptable due to local misfits, as in the zoom-in window.]

In order to choose the value of N we have calculated the root mean square error (RMSE) $\varepsilon(\hat{\sigma})$ of the sample standard deviation $\hat{\sigma}$ on statistical ensembles of 1000 numerically generated AR(1) time series with $\sigma = 1$ for given values of $\phi$ and N. It depends significantly on the length of the series N, the statistical error being larger when the series is shorter (Fig. 3). Only for N = 3000 is the error smaller than 5% for all the values of $\phi$. Therefore we shall use in our numerical simulations time series with N = 3000; for shorter time series, even for the sample standard deviation, the statistical errors due to the noise are too large.

[FIG. 3. The RMSE of the standard deviation of an AR(1) time series for different lengths N in terms of the serial correlation parameter $\phi$.]

We call maximal statistical ensemble an ensemble containing time series with the variable parameters randomly chosen in the maximal intervals specified above. Most of the numerical tests in this paper are run on this maximal statistical ensemble. In the following we specify the intervals over which the parameters take values only if they are reduced.

In order to perform a Monte Carlo simulation evaluating the detrending errors we need an automatic algorithm to determine the estimated trend $\tilde{f}_n$. Such an automatic algorithm is available only for monotonic trends [20]. For this reason, in Appendix B we describe a numerical algorithm simulating a subjective method for estimating a nonmonotonic trend by introducing some quantitative criteria for choosing visually acceptable polynomial trends. Only the numerically generated time series satisfying these criteria are retained in the statistical ensembles. We generate enough time series that in the end all the statistical ensembles contain exactly 1000 time series.

IV. DETRENDING ERRORS OF ARTIFICIAL DATA

The detrending error (6) can be separated from the error intrinsic to the noise (5) using the algebraic identity

$$\|\tilde{\gamma} - \gamma\|_H^2 = \|\tilde{\gamma} - \hat{\gamma}\|_H^2 + \|\hat{\gamma} - \gamma\|_H^2 + 2 M_{\gamma}(H), \qquad (8)$$

where

$$M_{\gamma}(H) = \sum_{h=1}^{H} \big[\tilde{\gamma}(h) - \hat{\gamma}(h)\big]\big[\hat{\gamma}(h) - \gamma(h)\big].$$

The average on the maximal statistical ensemble of the last term in Eq. (8) is negligible, at most 3% in absolute value in comparison with the left side term. This result also holds for the other two functions: the last right side term is at most 4% for $\tilde{\rho}$ and 1% for $\tilde{Q}$. The average correlation coefficient of the two types of error lies within the interval (-0.07, 0.01), i.e., the two types of errors are uncorrelated
and the total error can be separated into the two parts of different origin.

Figure 4 shows the average of the statistical and detrending errors on the maximal statistical ensemble for different values of H. Except for a few small values of H, the statistical errors of all the functions are comparable. The larger values of $\varepsilon_{\hat{\gamma}}$ for $H \le 5$ are a consequence of the fact that ACvF does not have its first value fixed. The average detrending errors $\varepsilon_{\tilde{\gamma}}$ and $\varepsilon_{\tilde{\rho}}$ are larger, while $\varepsilon_{\tilde{Q}}$ is smaller than the intrinsic error of the noise. When H increases, $\varepsilon_{\tilde{\gamma}}$ and $\varepsilon_{\tilde{\rho}}$ decrease similarly to $\varepsilon_{\hat{\gamma}}$ and $\varepsilon_{\hat{\rho}}$, but $\varepsilon_{\tilde{Q}}$ has a reverse variation. For H < 100 the values of $\varepsilon_{\tilde{Q}}$ are smaller than the minimum values of $\varepsilon_{\tilde{\gamma}}$ and $\varepsilon_{\tilde{\rho}}$, which shows that $\tilde{Q}$ is less altered by the detrending errors.

[FIG. 4. The average of the statistical errors intrinsic to the noise (a) and the detrending error (b) of the functions describing the serial correlation using their first H values.]

Now we analyze the statistical errors intrinsic to the noise of the estimated AR(1) parameters in terms of the number H of serial correlation values used to compute them. An AR(1) model of a time series is given by the theoretical ACvF $\gamma(h) = \sigma^2 \phi^h$ most similar to the sample ACvF. The estimated values of $\sigma$ and $\phi$ are obtained using the first H values of $\hat{\gamma}(h)$ by imposing the condition that the function

$$F_{\gamma}(\sigma, \phi; H) = \big\|\hat{\gamma}(h) - \sigma^2 \phi^h\big\|_H$$

be minimum. Since $F_{\gamma}(\sigma, \phi; 0) = |\hat{\gamma}(0) - \sigma^2|$, for H = 0 we can compute only the parameter $\sigma$ of the AR(1) model. For H = 1 we obtain the estimated parameters $\tilde{\sigma} = \hat{\gamma}(0)^{1/2}$ and $\tilde{\phi} = \hat{\gamma}(1)/\hat{\gamma}(0)$, such that $F_{\gamma}(\tilde{\sigma}, \tilde{\phi}; 1) = 0$. For H > 1 the function $F_{\gamma}(\sigma, \phi; H)$ generally has a nonzero minimum. By minimizing the function

$$F_{\rho}(\phi; H) = \big\|\hat{\rho}(h) - \phi^h\big\|_H$$

we estimate only the value of $\phi$. Since $F_{\rho}(\phi; 0)$ vanishes identically, the first value provided by this function is obtained for H = 1 and coincides with that obtained from $F_{\gamma}$. For RACF we have to minimize the function

$$F_{Q}(\sigma, \phi; H) = \big\|\hat{Q}(h) - \sigma^2 (1 - \phi^h)\big\|_H.$$

In this case as well $F_{Q}(\sigma, \phi; 0) = 0$, but for H = 1 there is an infinity of values for which the function $F_{Q}(\sigma, \phi; 1)$ vanishes. The first nontrivial solution is obtained for H = 2, $\sigma^2 = \hat{Q}(1)^2 / [2\hat{Q}(1) - \hat{Q}(2)]$ and $\phi = \hat{Q}(2)/\hat{Q}(1) - 1$.

The RMSE of the parameters estimated with the fitting method described above is presented in Figs. 5(a) and 5(c). The smallest error for both parameters is obtained using ACvF with H = 1, so the information on the parameters of an AR(1) series is concentrated in the first two values of ACvF. Introducing more values causes an increase of the error of the estimated parameters. This property is used by the algorithms of the time series theory to determine the AR models using the first values of the ACvF [12]. Since the best estimation of $\sigma$ is the standard deviation $\hat{\sigma}$, we denote by $\hat{\phi} = \hat{\gamma}(1)/\hat{\gamma}(0)$ the best estimation of the parameter $\phi$. We shall use these two values as reference values for the evaluation of the detrending errors.

[FIG. 5. The statistical error of the noise standard deviation $\hat{\sigma}$ (a) and of the reference value of the serial correlation parameter $\hat{\phi}$ (c) for an AR(1) noise without trend, and the detrending error of the estimated noise standard deviation $\tilde{\sigma}$ (b) and of the estimated serial correlation coefficient $\tilde{\phi}$ (d), with respect to the number H of the values of the sample serial correlation used in the fitting computation.]
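The fitting described above can be sketched as follows. The code is our own illustration (a plain grid search is used for transparency; any numerical minimizer would do), and the H = 1 reference values are the closed forms $\hat{\sigma} = \hat{\gamma}(0)^{1/2}$, $\hat{\phi} = \hat{\gamma}(1)/\hat{\gamma}(0)$ given in the text.

```python
import numpy as np

def fit_ar1_from_acvf(gam_hat, H):
    """Estimate (sigma, phi) of an AR(1) model by minimizing
    F_gamma(sigma, phi; H) = || gam_hat(h) - sigma^2 phi^h ||_H
    over the first H+1 lags (grid search, for illustration only)."""
    h = np.arange(H + 1)
    best = (np.inf, None, None)
    for phi in np.linspace(0.0, 0.99, 200):
        for sig2 in np.linspace(0.5 * gam_hat[0], 1.5 * gam_hat[0], 200):
            F = np.sqrt(np.sum((gam_hat[:H + 1] - sig2 * phi ** h) ** 2))
            if F < best[0]:
                best = (F, np.sqrt(sig2), phi)
    return best[1], best[2]

def reference_ar1(gam_hat):
    """Reference values (H = 1): the exact minimizers of F_gamma."""
    return np.sqrt(gam_hat[0]), gam_hat[1] / gam_hat[0]

# Example with a slightly perturbed theoretical ACvF of an AR(1) noise.
rng = np.random.default_rng(4)
gam = 1.0 * 0.6 ** np.arange(11) + rng.normal(0.0, 0.01, 11)
print(reference_ar1(gam), fit_ar1_from_acvf(gam, H=5))
```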
The errors of the parameters estimated with RACF have a different behavior: they decrease when H increases, reaching a stationary value for H > 100. The minimum error of $\sigma$ is equal to that of $\hat{\sigma}$, but the minimum error of $\phi$ is greater than that of $\hat{\phi}$. Hence, for RACF the statistical information is also concentrated at the beginning of the time series, but over a much longer length. We also notice that the difference between the average parameters evaluated by means of the three functions and the value used to generate the time series is an order of magnitude smaller than the errors in Fig. 5 (result not presented here), so all the estimators described above are unbiased.

If we apply the same method to the estimated noise (2), and not to the real noise, then the error of the estimated parameters is larger because of the addition of the detrending error to that intrinsic to the noise. As shown above, the reference values with the smallest error for the noise without trend are given by the sample standard deviation $\hat{\sigma}$ and by $\hat{\phi} = \hat{\gamma}(1)/\hat{\gamma}(0)$. Then, for a given realization s from a statistical ensemble, we can separate the detrending error $\tilde{\sigma}_s(H) - \hat{\sigma}_s$,

$$\tilde{\sigma}_s(H) - \sigma = \big[\tilde{\sigma}_s(H) - \hat{\sigma}_s\big] + \big[\hat{\sigma}_s - \sigma\big], \qquad (9)$$

where $\sigma = 1$. In contrast with $\hat{\sigma}_s$ and $\sigma$, the estimated value $\tilde{\sigma}_s$ depends on the type of serial correlation function and on the number of values H used for fitting. By squaring and summing over the entire statistical ensemble, we obtain the decomposition of the mean square error

$$\varepsilon^2\big(\tilde{\sigma}(H)\big) = \varepsilon^2\big(\tilde{\sigma}(H); \hat{\sigma}\big) + \varepsilon^2(\hat{\sigma}) + 2 M_{\sigma}(H), \qquad (10)$$

where the detrending error of $\tilde{\sigma}$ is defined as

$$\varepsilon^2\big(\tilde{\sigma}(H); \hat{\sigma}\big) = \frac{1}{S}\sum_{s=1}^{S} \big[\tilde{\sigma}_s(H) - \hat{\sigma}_s\big]^2$$

and the last term is given by

$$M_{\sigma}(H) = \frac{1}{S}\sum_{s=1}^{S} \big[\tilde{\sigma}_s(H) - \hat{\sigma}_s\big]\big[\hat{\sigma}_s - \sigma\big].$$

As discussed above, the terms in Eq. (10) cannot be computed for $\rho$. The detrending and noise errors of $\tilde{\sigma}$ for the other two functions ($\gamma$ and Q) on the maximal statistical ensemble are represented in Fig. 5(b). In contrast with the noise intrinsic error, which has a monotonic variation with respect to H [Fig. 5(a)], the detrending error $\varepsilon(\tilde{\sigma}; \hat{\sigma})$ in Fig. 5(b) has a minimum for both functions. The minimum detrending error for Q is several times smaller than that for $\gamma$ and also than the error due to the noise.

In comparison with the left side term, the last term in Eq. (10) is on average at most 4%, hence the total error has the same form as the detrending error to which the constant value of the noise error is added. The absolute value of the correlation coefficient of the two types of errors is less than 0.1, showing that they are only slightly correlated and that the errors due to detrending are not influenced by the randomness introduced by the noise. We notice that the estimators $\tilde{\sigma}$ and $\hat{\sigma}$ are unbiased since the averages of the terms in Eq. (9) are several times smaller than the corresponding terms in Eq. (10) (results not presented here).

A relation similar to Eq. (10) is also valid for $\tilde{\phi}$, with the single modification that the actual value $\phi_s$ is not fixed but varies with the realization s, and the reference value is $\hat{\phi}_s = \hat{\gamma}_s(1)/\hat{\gamma}_s(0)$. In Fig. 5(d) the same analysis as in Fig. 5(b) is presented, but for $\tilde{\phi}$, which can be calculated by means of all three serial correlation functions, including $\rho$. The main conclusions drawn from Fig. 5(b) remain valid. For $\tilde{\phi}$ as well, the two types of error are only slightly correlated and the estimators $\tilde{\phi}$ and $\hat{\phi}$ are unbiased. In contrast with the detrending error of $\tilde{\sigma}$, for $\tilde{\phi}$ the detrending error calculated by means of $\gamma$ and $\rho$ has the same behavior as the error intrinsic to the noise [Fig. 5(c)]: their variation is monotonically increasing and the error for $\rho$ is smaller than that for $\gamma$. The minimum detrending error for Q is smaller than that for $\gamma$ and $\rho$.
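A minimal sketch of the decomposition (10) is given below. It is written for illustration only: a fixed smooth trend and a fixed polynomial degree replace the random trends of Appendix A, and, for brevity, $\tilde{\sigma}$ and $\hat{\sigma}$ are taken as the plain sample standard deviations of the detrended and of the true noise records rather than the H-dependent fitted values used in the paper.

```python
import numpy as np

def error_decomposition(S=200, N=3000, sigma=1.0, phi=0.6, q_deg=5, rng=None):
    """Split the mean square error of sigma_tilde after detrending into the
    detrending part, the part intrinsic to the noise, and a mixed term,
    following the structure of Eq. (10) (simplified stand-in ensemble)."""
    rng = rng or np.random.default_rng(7)
    n = np.arange(N)
    f = 2.0 * np.sin(2 * np.pi * n / N)          # stand-in deterministic trend
    sigma_g = sigma * np.sqrt(1.0 - phi ** 2)

    det, noi, mix = [], [], []
    for _ in range(S):
        g = rng.normal(0.0, sigma_g, N)
        z = np.empty(N)
        z[0] = (sigma / sigma_g) * g[0]
        for i in range(1, N):
            z[i] = phi * z[i - 1] + g[i]
        x = f + z
        z_tilde = x - np.polyval(np.polyfit(n, x, q_deg), n)
        sig_hat, sig_til = np.std(z), np.std(z_tilde)
        det.append(sig_til - sig_hat)            # detrending error
        noi.append(sig_hat - sigma)              # error intrinsic to the noise
        mix.append((sig_til - sig_hat) * (sig_hat - sigma))
    det, noi, mix = map(np.asarray, (det, noi, mix))
    total = np.mean((det + noi) ** 2)            # eps^2(sigma_tilde)
    return total, np.mean(det ** 2), np.mean(noi ** 2), 2 * np.mean(mix)

print(error_decomposition())
```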
The maximal statistical ensemble on which we have obtained the previous results contains time series depending on three parameters ($\phi$, r, and P) taking random values over their variation intervals. Therefore, it is necessary to analyze the dependence of the detrending error on the three parameters considered separately. From results not presented here it follows that the detrending error does not depend significantly on the number P of monotonic parts of the signal. We compare the magnitude of the detrending errors obtained using different serial correlation functions for particular values of $\phi$ and r on statistical ensembles containing 1000 time series with P random in $\{1, \dots, 5\}$. We estimate the AR(1) parameters with $H = H_{min}$, for which the detrending error is minimum. Because the value $\tilde{\phi}$ estimated by means of $\gamma$ is identical to that estimated using $\rho$, Fig. 6 shows the difference of the minimum detrending errors computed with $\gamma$ and Q,

$$\delta_{\sigma}(\gamma, Q) = \varepsilon_{\gamma}\big(\tilde{\sigma}(H_{min}); \hat{\sigma}\big) - \varepsilon_{Q}\big(\tilde{\sigma}(H_{min}); \hat{\sigma}\big).$$

The detrending error of $\tilde{\sigma}$ is always smaller for Q, but that of $\tilde{\phi}$ depends on the values of $\phi$ and r. Therefore, the two serial correlation functions are complementary and for every particular time series we have to establish which one is the most suited to model the noise.

V. DETRENDING ERRORS OF OBSERVATIONAL DATA

Unlike the case of numerically generated time series, for observational time series the trend and the stochastic process generating the noise are unknown. Therefore we cannot set up with the same precision the statistical ensemble used to evaluate the errors affecting the estimated parameters. Instead we have to resort to a method of successive approximations. In the following we exemplify the computation of the detrending and total errors in the case of the time series in Fig. 1(a) [26].

Cell membrane undulation is a common phenomenon in the world of living cells. By far, the red blood cell shape
fluctuations, also known as flickering [27-29], are the best known example. Such fluctuations consist of submicron, out-of-plane displacements of the cell membrane in the frequency range of 0.3–30 Hz. Usually the investigations are performed on red blood cells adhering firmly and irreversibly to a glass substratum. The mechanical restrictions imposed by the substratum can be eliminated if the cells are freely floating, but then the various motions of the cell induce nonstationary contributions on which the fluctuations of the flickering itself are superposed. The time series in Fig. 1(a) represents the time evolution of the relative area of a selected free erythrocyte photographed at a rate of ten images per second, the images being numerically processed.

The minimum flickering frequency $\nu_{min} = 0.3$ Hz implies a semiperiod of 16 time steps. From a visual inspection of Fig. 1(a) one notices that the time series has at least P = 7 monotonic parts lasting longer than a flickering semiperiod. These fluctuations are caused by phenomena of greater scale than the cell flickering, probably by the movement of the floating cell. Hence, they must be included in the deterministic trend, independent of the noise produced by the membrane undulations.

For trend estimation one could utilize various methods [30]; however, we limit ourselves to the polynomial fitting discussed in the previous sections. Only for degrees $\tilde{q} \ge 14$ do the polynomial trends have P = 7 well defined monotonic parts. For smaller degrees the maxima in the first half of the series are cut off. There are many situations when such indications on the nature of the analyzed phenomenon are not available, therefore we evaluate the detrending errors for all the estimated polynomial trends of degrees $\tilde{q} \in [1, 3P]$. For each degree $\tilde{q}$, the corresponding polynomial trend $\tilde{f}_n^{(0)}$ and the estimated noise $\tilde{z}_n^{(0)}$ are determined as the zero order approximation. We have applied the Durbin-Levinson algorithm [11] to determine the AR(10) model of the estimated noise. In all cases the first coefficient is the dominant term, its value being contained in the interval $\phi_1 \in (0.586, 0.654)$. The next most significant coefficient is $\phi_5 \in (0.109, 0.124)$, much smaller than $\phi_1$, proving that the AR(1) approximation is acceptable for the noise in the flickering data.

We compute the detrending and total errors of the AR(1) parameters for each polynomial trend in the same way as in Sec. IV. However, the results obtained on the maximal statistical ensemble presented in the previous sections characterize the average behavior of the detrending error, useful to optimize the automatic processing of a large number of time series having diverse characteristics. In order to obtain information for a single observational time series, the maximal statistical ensemble must be reduced and adapted to the measured data by limiting the variation intervals of the parameters of the numerically generated time series.

The statistical ensembles for the flickering data are composed of time series $\tilde{f}_n^{(0)} + z_n^{(1)}$, where the estimated trend $\tilde{f}_n^{(0)}$ is kept fixed for a given degree $\tilde{q}$. The noise $z_n^{(1)}$ is numerically generated as an AR(1) process with the same characteristics as the zero order estimated noise $\tilde{z}_n^{(0)}$, i.e., its parameters are equal to the reference values defined in Sec. III computed for $\tilde{z}_n^{(0)}$: $\tilde{\sigma}^{(0)} = \hat{\sigma}$ and $\tilde{\phi}^{(0)} = \hat{\phi}$. We also must take into account that the length of the observed series is N = 968, unlike N = 3000 in the previous simulations. Because the trend $\tilde{f}_n^{(0)}$ is kept fixed, the ratio r of the trend and noise amplitudes is no longer a variable for these statistical ensembles. We estimate the first order polynomial trend $\tilde{f}_n^{(1)}$, of the same degree $\tilde{q}$ as the zero order polynomial trend $\tilde{f}_n^{(0)}$, for each time series $\tilde{f}_n^{(0)} + z_n^{(1)}$. The two trends differ from each other because $\tilde{f}_n^{(1)}$ is influenced by the fluctuations of the numerically generated noise $z_n^{(1)}$. Because the ratio r and the parameter $\phi$ now have fixed values, the detrending errors of the first order approximation of the noise $\tilde{z}_n^{(1)} = \tilde{f}_n^{(0)} + z_n^{(1)} - \tilde{f}_n^{(1)}$ depend only on the number H of the values of the sample serial correlation used in the fitting computation.
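The first-order ensemble just described can be sketched as follows. This is a simplified illustration written for this page: the degree $\tilde{q}$ and the ensemble size are free parameters, and the standard deviations are the plain sample values rather than the H-dependent fits of Sec. IV.

```python
import numpy as np

def first_order_detrending_error(x, q_deg, S=100, rng=None):
    """Sketch of the first-order ensemble of Sec. V: the zero-order trend is
    kept fixed, AR(1) surrogates with the zero-order noise parameters are
    superposed on it and re-detrended with the same polynomial degree, and
    the RMS detrending error of the estimated standard deviation is returned."""
    rng = rng or np.random.default_rng(5)
    N = len(x)
    n = np.arange(N)

    # Zero-order approximation: estimated trend and noise, Eqs. (1)-(2).
    f0 = np.polyval(np.polyfit(n, x, q_deg), n)
    z0 = x - f0
    sigma0 = np.std(z0)
    gam1 = np.sum(z0[:-1] * z0[1:]) / (N - 1)
    phi0 = gam1 / np.var(z0)                    # reference phi of z0

    err = np.empty(S)
    sigma_g = sigma0 * np.sqrt(1.0 - phi0 ** 2)
    for s in range(S):
        # First-order surrogate noise with the zero-order parameters.
        g = rng.normal(0.0, sigma_g, N)
        z1 = np.empty(N)
        z1[0] = (sigma0 / sigma_g) * g[0]
        for i in range(1, N):
            z1[i] = phi0 * z1[i - 1] + g[i]
        x1 = f0 + z1
        f1 = np.polyval(np.polyfit(n, x1, q_deg), n)
        z1_tilde = x1 - f1                      # first-order estimated noise
        err[s] = np.std(z1_tilde) - np.std(z1)  # detrending error of sigma
    return np.sqrt(np.mean(err ** 2))
```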
Figure 7 shows the minimum with respect to H of the detrending errors $\varepsilon(\tilde{\sigma}^{(1)}; \hat{\sigma}^{(1)})$ and $\varepsilon(\tilde{\phi}^{(1)}; \hat{\phi}^{(1)})$ for ACvF and RACF. These errors measure the difference between the order one estimated and generated noise, $\tilde{z}_n^{(1)} - z_n^{(1)} = \tilde{f}_n^{(0)} - \tilde{f}_n^{(1)}$. For small degrees ($\tilde{q} < 14$) the order zero estimated trend $\tilde{f}_n^{(0)}$ cannot follow the fluctuations due to the cell movement and the detrending errors are strongly underestimated. Therefore the minimum detrending error is increasing with respect to $\tilde{q}$.

[FIG. 6. The difference between the minimum detrending errors of the estimated AR(1) parameters computed with ACvF and RACF for different values of the serial correlation $\phi$ and of the ratio r.]

According to Eq. (10), the total error is obtained by adding to the detrending error the error intrinsic to the noise and the term proportional
to the covariance of the two types of errors. For small degrees $\tilde{q}$ the long-range fluctuations due to the real trend are incorrectly ascribed to the estimated noise $\tilde{z}_n^{(0)}$, but the two additional terms in the total error formula could compensate the underestimated detrending error. Indeed, the minimum total error of the noise standard deviation, $\varepsilon(\tilde{\sigma}^{(1)}; \tilde{\sigma}^{(0)})$ in Fig. 7(a), becomes decreasing, but the variation of the minimum total error of the serial correlation parameter, $\varepsilon(\tilde{\phi}^{(1)}; \tilde{\phi}^{(0)})$ in Fig. 7(b), remains similar to that of the detrending error.

From information inaccessible through statistical methods we know that only for $\tilde{q} \ge 14$ does the polynomial trend describe all the fluctuations which cannot be attributed to the cell flickering. For these trends the minimum total errors reach an almost stationary value, except the minimum total error of $\phi$ computed with $\gamma$, which is increasing. If we assume that the best model for the flickering time series is given by the one with the minimum total error for the noise parameters, then the optimum degree of the estimated trend is $\tilde{q} = 14$. In this case the order one approximation of the noise standard deviation is obtained by RACF, which has a smaller total error than ACvF [Fig. 7(a)], $\varepsilon(\tilde{\sigma}^{(1)}) = 0.0015$. The smallest total error for the serial correlation parameter is obtained for ACvF [Fig. 7(b)], $\varepsilon(\tilde{\phi}^{(1)}) = 0.038$. Using the functions and the number of values H for which the total errors are minimum, we compute the estimated noise parameters from $\tilde{z}_n^{(1)}$ and we obtain $\tilde{\sigma}^{(1)} = 0.0354 \pm 0.0015$ and $\tilde{\phi}^{(1)} = 0.681 \pm 0.038$.

These estimations could be improved if more extended statistical ensembles were constructed, such that the number of monotonic parts of the trend P and the ratio r would be allowed to vary, or if higher order approximations were used. For example, Fig. 8 shows the total error of the two coefficients of an AR(5) process with only two nonzero terms,

$$Z_n = \phi_1 Z_{n-1} + \phi_5 Z_{n-5} + G_n,$$

applied to the flickering data detrended by a polynomial of degree $\tilde{q} = 14$. For higher orders of the AR processes the only difference from the method described above is that the algebraic formula of the serial correlation function becomes too long, therefore we use the numerical algorithm given in Ref. [13], p. 59. Unlike the AR(1) case, now the minimum total error is obtained for RACF, when H = 22 for $\phi_1$ and H = 14 for $\phi_5$. With this information we compute the noise parameters $\phi_1 = 0.694 \pm 0.039$ and $\phi_5 = 0.168 \pm 0.035$.

[FIG. 7. The minimum detrending (continuous line) and the minimum total (dashed line) errors of the estimated AR(1) parameters for the flickering time series in terms of the degree of the estimated polynomial trend.]

[FIG. 8. The total errors of the estimated AR(5) parameters for the flickering time series in terms of the number H of the values of the serial correlation function used in the calculations. The continuous line represents the error of $\phi_1$ and the dashed line that of $\phi_5$.]

VI. CONCLUSIONS

The detrending errors occur when we remove from a time series an estimated trend which does not coincide with the real one. We have analyzed these detrending errors for an AR(1) noise, whose serial correlation is described by a single parameter, and for an AR(5) process with two nonzero coefficients. When the method is applied to other stochastic models, such as autoregressive processes of higher order, the analysis becomes more elaborate because it is necessary to trace simultaneously the behavior of several coefficients of the serial correlation. The numerical algorithm by means of which we have generated the statistical ensembles provides time series with a diversity comparable to that occurring in practical applications. The trend estimation has been achieved automatically by an algorithm which simulates the subjective visual selection of a polynomial trend, allowing
Monte Carlo simulations on statistical ensembles large enough to obtain statistically relevant results.

The detrending errors for an AR(1) noise are almost uncorrelated with the statistical errors due to the noise randomness. As expected, the detrending errors are larger when the signal is dominated by the trend. The detrending error of the estimated standard deviation of the noise increases with its serial correlation, while the detrending error of the serial correlation parameter decreases.

In some situations the detrending errors can be reduced if, instead of the autocovariance function (ACvF) or the autocorrelation function (ACrF), the serial correlation is described by the reverse autocovariance function (RACF), defined as the difference between the noise variance and the autocovariance function. According to Fig. 6, the decrease of the detrending error is significant for time series dominated by trend (r > 1) and with small serial correlation ($\phi < 0.5$). The more the ratio r decreases and the parameter $\phi$ increases, the more the detrending error obtained by ACvF becomes smaller than the one obtained by RACF, the relationship between them depending as well on the series length. Therefore, for a particular time series it is necessary to determine which function provides the best accuracy.

As shown in Sec. V, the method for estimating the total and detrending errors can be applied to observed time series if the statistical ensembles are adapted to the analyzed series and to the chosen theoretical model of the noise. By means of this method we can determine the degree $\tilde{q}$ of the estimated polynomial trend, the serial correlation function, and the number H of the values of the function for which the noise parameters are computed with minimum error.

If we knew the real trend and the parameters of the stochastic process generating the noise, then we could build, for each type of serial correlation function and each value of H, an ideal statistical ensemble of artificial time series by means of which we could evaluate the total and detrending errors of the estimated noise parameters. In this way we could determine the serial correlation function and the value of H for which the total error is minimum, i.e., the function and the value allowing the estimation of the noise parameters with maximum precision.

But, since the trend and noise parameters of a real time series are unknown, these ideal statistical ensembles can only be approximated. The noise parameters used to generate the artificial time series do not have fixed values, but values randomly chosen in intervals characteristic of the considered time series. Obviously, in this case the error is imprecisely estimated and the minimum of the total error does not indicate the serial correlation function and the value of H which provide the maximum precision. At best, we reduce the variation intervals of the parameters of the artificial series such that we obtain a better approximation of the ideal statistical ensemble. So we can build an iterative method reducing the statistical ensembles until they no longer allow a precision improvement.

As the first step of the iteration process we estimate the polynomial trend for different degrees $\tilde{q}$ and apply to the estimated noise the theoretical model in the form used for noises without trend. For each noise parameter we obtain a variation range from the results obtained for different degrees $\tilde{q}$. The numerically generated trends can be a combination of the polynomial trends initially estimated or, as in Sec.
V, they can be kept fixed, and then the statistical ensembles depend additionally on $\tilde{q}$.

The main difficulty in generalizing this method is to find an automatic numerical algorithm for building the statistical ensembles. For example, trend estimation by means of a method other than polynomial fitting implies that, instead of the algorithm presented in Appendix B, another automatic algorithm realistically simulating the chosen method must be designed. The applications to 1/f noises or chaotic signals impose the choice of an explicit theoretical noise model to generate the artificial series in the statistical ensembles.

ACKNOWLEDGMENT

This work was supported by Grant No. 2-CEx06-11-96/19.09.2006.

APPENDIX A: AUTOMATIC GENERATION OF TRENDS

The automatic algorithm for the generation of a deterministic trend $f_n$, $n = 1, 2, \dots, N$, has two steps. First we generate P subintervals of random length and then on each subinterval we construct a monotonic semiperiod of a sinusoid, i.e., the part limited by two successive extrema, with random amplitude and with the variation opposite to that of the previous semiperiod.

The subinterval $p \le P$ contains the terms $f_n$ with the index n satisfying the condition $N_p \le n \le N_{p+1}$, where $N_p \le N$ are nonnegative integers. We denote $N_1 = 0$ and $N_{P+1} = N$, such that the number of time steps of any subinterval is $\Delta N_p = N_{p+1} - N_p$. The length of the subinterval p is a random number $d_p$ uniformly distributed within the interval $[d_{min}, 1]$, where $d_{min}$ is a parameter chosen such that $\Delta N_p \ge N_{min}$, with $N_{min}$ a given value. This condition assures that each sinusoidal part is described with an acceptable resolution. The union of all subintervals is an interval of length $d = \sum_{i=1}^{P} d_i$ which must be divided into N - 1 equal bins corresponding to the N values of the time series. Hence we choose for $p \ge 1$

$$N_{p+1} = 1 + \left\lfloor (N-1)\, d^{-1} \sum_{i=1}^{p} d_i \right\rfloor,$$

where $\lfloor \cdot \rfloor$ is the integer part function. Then the number of time steps of the part p is approximately equal to $\Delta N_p \approx d_p (N-1)/d$ and it is minimum if $d_p = d_{min}$ and $d_{p'} = 1$ for $p' \ne p$. From the condition $\min \Delta N_p = N_{min}$ it follows that

$$d_{min} = \frac{(P-1)\, N_{min}}{N - 1 - N_{min}}.$$

To each subinterval p we associate a semiperiod of a sinusoid with the amplitude equal to a random number $a_p \in [0, 1]$ with a uniform probability distribution. The preliminary value of the trend at a point n of the part p, $N_p \le n \le N_{p+1}$, is given by the recurrence relation
$$g_n = g_{N_p} + (-1)^p a_p \left\{1 - \sin\!\left[\frac{\pi}{2}\left(1 + 2\,\frac{n - N_p}{\Delta N_p}\right)\right]\right\}, \qquad (A1)$$

where the free term $g_{N_p}$ is equal to the last term of the previous part p - 1, and for p = 1 we choose $g_0 = 0$. The first sinusoidal part is decreasing and the monotony of the other parts alternates, such that the trend looks like a distorted sinusoid. The continuity between the successive parts is assured by the free term $g_{N_p}$.

In accordance with Eq. (1), we superpose over the trend $g_n$ the noise $z_n$ obtained using the algorithm described in Sec. III. The ratio between the amplitude of the trend and that of the noise is described by a new parameter r and we make the following transformation of the trend (A1):

$$f_n = r\, g_n\, \frac{\max z_n - \min z_n}{\max g_n - \min g_n}. \qquad (A2)$$

If r > 1, the signal (1) is dominated by trend and when r < 1 by noise. Finally, from the trend (A2) we subtract its mean.
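A compact sketch of this trend generator is given below. It follows the reconstruction of Eqs. (A1)-(A2) above with slightly simplified index conventions; the function name and the usage values are our own, and it is meant only as an illustration of the algorithm, not as the authors' implementation.

```python
import numpy as np

def generate_trend(N, P, N_min, r, z, rng):
    """Trend of Appendix A: P monotonic sinusoid semiperiods with random
    lengths and amplitudes, scaled by Eq. (A2) so that the ratio of the
    trend and noise amplitudes equals r, then centered (a sketch)."""
    d_min = (P - 1) * N_min / (N - 1 - N_min)
    d = rng.uniform(d_min, 1.0, P)              # random subinterval lengths
    # Breakpoints N_p from the cumulative lengths (integer part function).
    Np = np.zeros(P + 1, dtype=int)
    Np[1:] = 1 + np.floor((N - 1) * np.cumsum(d) / np.sum(d)).astype(int)
    Np[-1] = N

    g = np.zeros(N)
    a = rng.uniform(0.0, 1.0, P)                # random amplitudes
    for p in range(1, P + 1):
        lo, hi = Np[p - 1], Np[p]
        dN = hi - lo
        n_loc = np.arange(dN)
        # Recurrence (A1): a monotonic semiperiod of a sinusoid joined
        # continuously to the previous part; the monotony alternates with p.
        phase = 0.5 * np.pi * (1.0 + 2.0 * n_loc / dN)
        free = g[lo - 1] if lo > 0 else 0.0
        g[lo:hi] = free + (-1) ** p * a[p - 1] * (1.0 - np.sin(phase))
    # Amplitude scaling (A2) followed by mean removal.
    f = r * g * (z.max() - z.min()) / (g.max() - g.min())
    return f - f.mean()

# Example: a trend with P = 4 monotonic parts superposed on a noise record z.
rng = np.random.default_rng(8)
z = rng.normal(size=3000)
f = generate_trend(3000, 4, 20, 2.0, z, rng)
```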
APPENDIX B: AUTOMATIC ESTIMATION OF TRENDS

We take advantage of the fact that the trend $f_n$ of the numerically generated time series is known. The global resemblance of the shape of the estimated trend to the real one is quantified by the index

$$\eta = \frac{\|\tilde{f}_n - f_n\|}{\|f_n\|}.$$

The estimated trend $\tilde{f}_n$ is obtained by polynomial fitting and it depends only on the degree q of the polynomial trend. We denote by $q_0$ the degree for which $\eta$ has the minimum value. In Fig. 2 such estimated trends are represented with a dashed line; they reproduce as a whole the shape of the real trend, although they are not visually acceptable because in some regions they significantly move away from the real trend. Therefore the polynomial trend is considered acceptable only if it satisfies some additional local conditions.

The local difference between the estimated trend and the real one is quantified by the maximum of the absolute value of their difference,

$$\delta = \max_n \big|\tilde{f}_n - f_n\big|.$$

A first local condition limits the value of $\delta$ with respect to the amplitude of the entire signal, such that when the noise is dominant the estimated trend variation is limited to a fraction of the signal amplitude,

$$\delta \le c_1 \left(\max_n x_n - \min_n x_n\right). \qquad (B1)$$

The second local condition limits the value of $\delta$ with respect to the standard deviation of the noise $\sigma$, such that when the trend is dominant the estimated trend is contained within the boundaries of the time series,

$$\delta \le c_2\, \sigma. \qquad (B2)$$

The values chosen for the constants are $c_1 = 0.2$ and $c_2 = 1.5$, such that the percentage of rejected time series is smaller than half of the generated series. For example, the percentage of accepted polynomial trends is 70% when the trends have P = 5 monotonic parts, and for smaller values of P this percentage increases.

To complete the algorithm of the automatic selection of the estimated polynomial trend we must specify the interval $[q_{min}, q_{max}]$ in which we look for $q_0$. If this interval is too small, then for many generated time series $q_0$ cannot take its real value and it will be limited to the boundary $q_0 = q_{max}$. These errors are corrected by the conditions (B1) and (B2), because if the difference between the polynomial trend and the real one is too large, then the time series is eliminated from the statistical ensemble. However, such situations must be avoided as much as possible.

[FIG. 9. The average resemblance index of the estimated trend with the real one in terms of the degree of the estimated polynomial trend, for different numbers P of monotonic parts of the trend and for the extreme values of the ratio r: r = 0.25 (a) and r = 4 (b). The serial correlation parameter $\phi$ takes random values in its maximum range [0, 0.9]. The filled markers indicate the minimum values of $\langle\eta\rangle$. The dashed line links the maximum values $q_{max}$ of the degrees of the estimated polynomial trends.]

The minimum degree of a polynomial trend that can describe a function with P - 1 extrema, i.e., with P monotonic parts, is $q_{min} = P$. As the degree of the polynomial increases, the estimated trend describes the real trend more accurately, but for large degrees it begins to follow the fluctuations of
the noise. Hence we expect that for fixed P the average resemblance index $\langle\eta\rangle$ on a statistical ensemble of numerically generated series has a minimum at a value $q_0$, beyond which the influence of the fluctuations of the noise becomes more important than the ability of the polynomial trend to approximate the real trend.

For different values of the number P of trend monotonic parts and for the extreme values of the ratio r, with $\phi \in [0, 0.9]$, Fig. 9 shows the average resemblance index $\langle\eta\rangle$ for degrees $q \in [q_{min}, 6P]$. If the estimated trend with the optimal degree $q_0$ does not satisfy the conditions (B1) and (B2), then the time series is eliminated from the statistical ensemble. For the time series dominated by noise, $\langle\eta\rangle$ has a clear minimum for $q_0 < 3P$ [Fig. 9(a)]. When the noise is small [Fig. 9(b)], increasing the degree beyond 3P does not significantly improve the resemblance with the real trend, the graph of $\langle\eta\rangle$ has an almost stationary value, and the minimum of $\langle\eta\rangle$ is less clearly defined. In all cases, as expected, when the real trend becomes more complex (P increases), it is more difficult for the estimated trend to follow the shape of the real one and $\langle\eta\rangle$ increases. The graph of $\langle\eta\rangle$ also depends on $\phi$. For greater values of $\phi$ the noise becomes more similar to a deterministic trend and the estimated trend cannot distinguish one from the other. Therefore $\langle\eta\rangle$ becomes greater for larger values of q and the minimum of $\langle\eta\rangle$ occurs at smaller values $q_0$. The influence of $\phi$ is more important for small values of r.

Taking into account the behavior of $\langle\eta\rangle$, we choose the maximum degree of the estimated polynomial trend $q_{max} = 3P$, represented by a dashed line in Fig. 9. For time series dominated by noise this choice assures us that for the majority of them we can determine the actual optimal degree $q_0$. In other cases it is possible that $q_0 > q_{max}$, but then the difference between the polynomial trends of degrees $q_0$ and $q_{max}$ is small. In fact we are not interested in the precise value of the optimal degree $q_0$ because for real time series the trend is unknown, the index $\eta$ cannot be calculated, and the optimal degree $q_0$ cannot be determined. The degree $\tilde{q}$ of a subjectively estimated polynomial trend has a value somewhere about its optimal value $q_0$. Because we want to numerically simulate such a subjective procedure, we randomly choose $\tilde{q}$ within the interval $[q_0 - \Delta q, q_0 + \Delta q] \cap [P, 3P]$. For the numerical tests we use $\Delta q = 3$. The polynomial trend of degree $\tilde{q}$ is accepted only if it satisfies the two local conditions (B1) and (B2).
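The selection procedure of Appendix B can be sketched as follows for an artificial series whose real trend is known. The function name, return convention, and default constants reproduce the values quoted above ($c_1 = 0.2$, $c_2 = 1.5$, $\Delta q = 3$, degrees in [P, 3P]); it is an illustrative sketch, not the authors' code.

```python
import numpy as np

def accept_trend(x, f, sigma, P, c1=0.2, c2=1.5, dq=3, rng=None):
    """Appendix B sketch: find the polynomial degree q0 in [P, 3P] with the
    smallest resemblance index eta, test the local conditions (B1)-(B2),
    and return a randomly perturbed degree q_tilde simulating a subjective
    choice, together with the corresponding estimated trend."""
    rng = rng or np.random.default_rng(6)
    N = len(x)
    n = np.arange(N)
    q_min, q_max = P, 3 * P

    eta, trends = {}, {}
    for q in range(q_min, q_max + 1):
        f_t = np.polyval(np.polyfit(n, x, q), n)
        trends[q] = f_t
        eta[q] = np.linalg.norm(f_t - f) / np.linalg.norm(f)
    q0 = min(eta, key=eta.get)                     # degree of best global fit

    delta = np.max(np.abs(trends[q0] - f))         # largest local misfit
    if not (delta <= c1 * (x.max() - x.min()) and delta <= c2 * sigma):
        return False, None, None                   # series rejected

    # Subjective choice simulated by a random degree near q0.
    lo, hi = max(q_min, q0 - dq), min(q_max, q0 + dq)
    q_tilde = int(rng.integers(lo, hi + 1))
    f_tilde = np.polyval(np.polyfit(n, x, q_tilde), n)
    # The chosen degree must itself satisfy the local conditions (B1)-(B2).
    d2 = np.max(np.abs(f_tilde - f))
    ok = d2 <= c1 * (x.max() - x.min()) and d2 <= c2 * sigma
    return ok, q_tilde, f_tilde
```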
2008
