Scale Invariance

A Primer on Complex Systems

Part of the book series: Lecture Notes in Physics (LNP, volume 943)

Abstract

Scale-invariance is one of the concepts that appears most often in the context of complexity. The basic idea behind scale-invariance can be naively expressed as: ‘the system looks the same at every scale’, or, ‘if we zoom our view of the system in (or out), its features remain unchanged’. Although it is true that complex dynamics are often at work when a system exhibits scale-invariance, it is important to be aware that this is not always the case. For instance, the random walk [1] is a process that exhibits scale-invariance but whose underlying dynamics are far from complex in any sense of the word (see Chap. 5).


Notes

  1.

    Although we will use these two terms as synonyms throughout this book, mathematicians often distinguish between scale-invariant functions (i.e., those that satisfy Eq. 3.1) and self-similar functions. The latter correspond to those that are invariant under discrete dilations, such as any of the mathematical fractals that can be generated by iterative procedures. Throughout this book, we will also use the term self-similar to refer to objects or processes that can be broken down into smaller pieces that are similar (in the best case, identical; in other cases, only approximately or in a statistical sense) to the original. In other words, we will sometimes abuse the term self-similar and consider it a weaker (discrete, approximate or statistical) version of scale-invariant.

  2.

    In the case of Kleiber's law, it has sometimes been suggested that the observed scaling might be related to the increase in complexity of the fractal-like circulation system as animals get larger.

  3.

    We briefly discussed the mesorange in Chap. 1. In particular, in Sects. 1.2.3 and 1.3.2.

  4.

    The box-counting fractal dimension is closely related to the so-called Hausdorff dimension, but it is less precise from a mathematical point of view. Although both dimensions coincide for many real fractals, there are documented cases in which the two yield different results. The Hausdorff dimension predates the fractal concept. It was introduced as far back as 1918, by the German mathematician Felix Hausdorff. The idea is to consider the object immersed in a metric space of dimension n. Then, one counts the number of n-dimensional balls of radius at most r, N(r), required to cover the object. One then builds a measure of the size of the object in the form

    $$\displaystyle \begin{aligned} M_d(r) = \sum_{i=1}^{N(r)} A(r_i)r_i^d,\end{aligned} $$
    (3.10)

    where A(r) is a geometrical factor that depends on the metric chosen and the dimensionality. The Hausdorff dimension is then defined as the value d = d_H such that, in the limit r → 0, the measure diverges for all d < d_H and tends to zero for all d > d_H. In this sense, d_H marks the critical boundary between exponents d that are too small to tame the proliferation of covering balls (the measure diverges) and exponents that suppress their contribution completely (the measure vanishes).

  5.

    This simply follows from the fact that, if a number between zero and one is raised to a large positive power, the result becomes smaller the smaller the number.

  6.

    One can compute the amount of information associated to the covering of size l by calculating its Shannon entropy, \(S(l) = \sum _{i=1}^{N(l)} w_i \log w_i \). The entropy dimension is then defined after assuming:

    $$\displaystyle \begin{aligned} S(l) \sim l^{-D_{\mathrm{en}}} \rightarrow D_{\mathrm{en}} = - \lim_{l\rightarrow 0}\frac{\log S(l)}{\log l}.\end{aligned} $$
    (3.23)
  7.

    The correlation dimension of a set of N points is computed [21] by counting the total number of pairs of points, n_p, whose mutual distance is smaller than some 𝜖 > 0. For small 𝜖, the function known as the correlation integral scales as,

    $$\displaystyle \begin{aligned}C(\epsilon) = \lim_{N\rightarrow \infty} \frac{n_p}{N^2} \sim \epsilon^{D_{\mathrm{co}}},\end{aligned} $$
    (3.24)

    where D co is the correlation dimension. It can also be formulated by introducing the function \(C(l) = \sum _{i=1}^{N(l)} w_i^2\), and then defined after assuming:

    $$\displaystyle \begin{aligned} C(l) \sim l^{-D_{\mathrm{co}}} \rightarrow D_{\mathrm{co}} = - \lim_{l\rightarrow 0}\frac{\log C(l)}{\log l}.\end{aligned} $$
    (3.25)
  8.

    In the theory of mathematical functions, the Hölder exponent appears as a way of quantifying the degree of singularity of non-differentiable functions at a given point. Indeed, a function f that is differentiable at x satisfies, for small δ,

    $$\displaystyle \begin{aligned} \left|\,f(x + \delta) - f(x)\right| \propto \left| \delta \right|{}^1, \end{aligned} $$
    (3.27)

    which permits one to define its derivative at x in the usual way,

    $$\displaystyle \begin{aligned} \frac{df}{dx} (x)= \lim_{\delta\rightarrow 0 }\frac{ f(x + \delta) - f(x)}{\delta} \end{aligned} $$
    (3.28)

    On the other hand, a function with a bounded discontinuity at x (for instance, the Heaviside step function, that has a jump of one at x = 0), satisfies,

    $$\displaystyle \begin{aligned} \left|\,f(x + \delta) - f(x)\right| \propto \left| \delta \right|{}^0, \end{aligned} $$
    (3.29)

    so that its derivative at x does not exist.

    Continuous but non-differentiable functions behave in between these two cases,

    $$\displaystyle \begin{aligned} \left|\,f(x + \delta) - f(x)\right| \propto \left| \delta \right|{}^\alpha, ~~~~0 < \alpha < 1. \end{aligned} $$
    (3.30)

    The exponent α, in this case, quantifies the degree of the singularity of the function at x or, in plain words, how far it is from being differentiable. It is called the local Hölder exponent.

    How does this mathematical digression justify referring to α_i in Eq. 3.26 as a local “Hölder exponent”? Well, fractals are very irregular objects, usually non-differentiable. Thus, it is reasonable to expect that their local singularities could be described by some kind of non-integer exponent. The term “Hölder exponent” is then borrowed by mere association, although here it can take values larger than one. Indeed, for a monofractal we saw that α_i = D_BC, which can take any non-integer value within the interval (0, 3).

  9.

    It is quite reassuring to note that, in the limiting case in which the multifractal becomes a monofractal, only one value of α contributes to the sum (i.e., ρ(α) = δ(α − D bc )). On the other hand, f(α 0) = D bc , the BC fractal dimension of the monofractal. And if we take q = 0, we see that the integral nicely reduces to \(N(l) \sim l^{-D_{bc}}\), as it should be!

  10.

    Thanks in part to the advent of both laser technology and inexpensive high-speed high-resolution CCD cameras.

  11.

    If one compares fluctuation data measured at the edge of the same fusion device, but for different plasma discharges with similar conditions, they all look like different realizations of the same random process. The same happens for the runaway production signal shown in the right frame of Fig. 3.8. The different realizations would then correspond to runs done with the same parameters but initialized with a random generator.

  12.

    In addition, propagators will be used heavily in Chap. 5, since they play a dominant role in the theory of fractional transport.

  13.

    In the rest of this book, we will often drop the adjective “statistically” when referring to random processes, in order to make the discussion more agile. The implication should however not be forgotten.

  14.

    Clearly, p_2(y_1, t_1; y_2, t_2) does not exhaust all the statistical information that can be retrieved from the process. One could define a whole hierarchy of higher-order pdfs that connect up to n points, with n > 2 as large as desired, all of which are also joint probability distribution functions for the process. We will not consider any of these functions here, though.

  15.

    It should be remembered that, in probability theory, the conditional probability of an event A happening, assuming that another event B has already taken place, is defined as p(A|B) = p(A ∩ B)/p(B). That is, it is given by the joint probability of A and B happening, divided by the probability of B happening.

  16.

    The term “propagator” probably originates from the fact that G(y, t| y 0, t 0) satisfies the following property:

    $$\displaystyle \begin{aligned} G(y_2, t_2 |\,y_0, t_0) = \int dy_1\int dt_1 G(y_2, t_2 |\,y_1, t_1) G(y_1, t_1 |\,y_0, t_0) ,~~~~t_0 \leq t_1 \leq t_2. \end{aligned} $$
    (3.42)

    This relation simply expresses that the probability of reaching the value y_2 at time t_2, after having started from the initial value y_0, is the sum of the probabilities of “propagating” the solution through all the possible intermediate values y_1 at all intermediate times t_1. It is a direct consequence of the fact that the total probability must be conserved during the evolution of the process. The propagator has played an important role in many disciplines within Mathematics and Physics. In particular, it is one of the building blocks of Richard Feynman’s path-integral reformulation of Quantum Mechanics [29].

  17.

    The same result is obtained for any other choice for p(y 0), as long as it is positive everywhere and normalizable to one.

  18.

    In fact, Mandelbrot preferred instead the definition [12],

    $$\displaystyle \begin{aligned} y(t) = y(t') + \frac{1}{\varGamma\left(H + \frac{1}{2}\right)} \left[ \int_{-\infty}^t ds~ (t - s)^{H-1/2}\, \xi_2(s) - \int_{-\infty}^{t'} ds~(t' - s)^{H-1/2}\, \xi_2(s) \right], ~~~ H\in (0, 1] \end{aligned} $$
    (3.72)

    since he felt that Eq. 3.71 assigned too much significance to the initial time t_0, as will be made much clearer in Chap. 4, where we discuss memory in time processes. We have however preferred to stick with Eq. 3.71, since some t_0 must be chosen to simulate fBm processes numerically or to compare against experimental data (clearly, no code or measurement can be extended to t → −∞!). In any case, both definitions lead to very similar properties.

  19.

    Clearly, it satisfies Eq. 3.67 with the choice \(\varPhi = N_{[0,\hat \sigma _\xi ^2]}(x)\) for any H ∈ (0, 1].

  20.

    The lower limit comes from the fact that the Gaussian does not have finite moments for q < −1, since the integrand, |x|^q = 1/|x|^{|q|}, then has a non-integrable divergence at x = 0.

  21.

    In the case of time processes, we will use the symbol h_s to refer to the Hölder exponent, and reserve α for the tail-index of Lévy pdfs. This is in contrast to what is usually done when carrying out multifractal analysis on spatial objects, where the symbol α is reserved for the singularity exponent. That is why we adhered to the popular criterion when discussing spatial multifractals in Sect. 3.2.2.

  22.

    It is somewhat curious that, although H is a measure of a global property (i.e., rescaling) and h_s measures a local property (the degree of the local singularities), they do coincide for fBm. The situation is similar to what was found for spatial monofractals, where the fractal dimension (global property) and the Hölder exponent (local property) were also identical. It is for this reason that it is sometimes said that H-ss processes have a monofractal character. It is also interesting to remark that the Hölder exponent defined for fBm is identical to the one used in the theory of mathematical functions, whilst the one used for spatial fractals had a different definition (Eq. 3.26). As a result, the fractal dimension of fBm time traces is not equal to h_s, but given by D = 2 − h_s = 2 − H (see Problem 3.5). This is consistent with the fact that the smaller h_s is, the more irregular the trace becomes, and the more densely it fills its embedding space.

  23.

    As with fBm, another definition for fLm exists that avoids giving too much importance to the initial time. It is:

    $$\displaystyle \begin{aligned} y_{\alpha,H}^{\mathrm{fLm}}(t) = y(t') + \frac{1}{\varGamma\left(H - \frac{1}{\alpha} + 1\right)} \left[ \int_{-\infty}^t ds~ (t - s)^{H-1/\alpha}\, \xi_\alpha (s) - \int_{-\infty}^{t'} ds~(t' - s)^{H-1/\alpha}\, \xi_\alpha(s) \right], ~~~\mathrm{with} ~~~ 0 < H \leq \max\left( 1, \frac{1}{\alpha}\right), \end{aligned} $$
    (3.75)

    where t′ < t is again an arbitrary past reference time.

  24.

    It is worth mentioning that the theory of fLm processes using non-symmetric Lévy distributions has not been developed very much so far, in spite of the fact that there are some physical problems where it might be useful. We will discuss one such example in Sect. 5.5, when we investigate transport across the running sandpile.

  25.

    However, the terms subdiffusion and superdiffusion are not used to refer to any of these behaviours, as will be discussed in Chap. 5.

  26.

    Note that a(H, 2) = a(H), the function we introduced for fBm in Sect. 3.3.3. For that reason, the limit of fLm when α → 2 is fBm with the same value of H.

  27.

    As with the Gaussian, the lower limit comes from the fact that the symmetric Lévy pdf does not have finite moments for q < −1, since the integrand, |y − y_0|^q = 1/|y − y_0|^{|q|}, then has a non-integrable divergence at y = y_0.

  28.

    Or, in other words, the noise series used to integrate Eq. 3.64.

  29.

    Except for trivially self-similar processes such as the constant process.

  30.

    We will also show later that, due to the translational time-invariance of both fBm/fLm, things are not so bad as stated here, and that methods exist to improve the statistics when dealing with the integrated process directly even if few (or just one) realizations are available (see Sect. 3.4.1).

  31.

    As we also did when discussing scale-invariance, we will only consider two-point statistical information in this book.

  32.

    In theory, one might use Lamperti's theorem to test whether any time series is self-similar (or stationary). One would just need to apply the Lamperti transform to it and check whether the result is stationary (or self-similar). However, due to the exponentials appearing in Lamperti's formulation, this scheme is often difficult to use in practice.

  33.

    It is also common in the literature to define fGn simply as \(y_H^{\mathrm {fGn}}(t + h) -y_H^{\mathrm {fGn}}(t)\) instead, without the h^{−1} prefactor [36]. We have decided to adopt the definition that includes h^{−1} so that fGn can be more naturally regarded as a derivative of fBm, in spite of the latter being non-differentiable. The reason will become clearer when discussing methods to numerically generate fBm (see Appendix 1) using fractional derivatives. The only differences between the two choices are that \(\sigma ^2_h = h^{2H}\sigma ^2_\xi \) in Eq. 3.86, and that the factor r^{(H−1)} becomes r^H in Eqs. 3.87 and 3.88.

  34.

    Equation 3.87 applies for any arbitrary function Φ in Eq. 3.67, since it follows from scale-invariance alone. For example, it is also satisfied by fractional Lévy noise, to be introduced next.

  35.

    In fact, this is one of the conditions that sets the allowed values of the self-similarity exponent H to the interval \((0,\max (1,\alpha ^{-1})]\). The other one is the requirement of the propagator of the process being positive everywhere, at every time [37].

  36.

    Again, if one adopts the definition of fLn without the h −1 prefactor, σ h  = h H σ 𝜖 instead in Eq. 3.91. Also, the factor r (H−1) becomes r H in Eqs. 3.92 and 3.93.

  37.

    The interested reader may also refer to Appendix 3, that discusses an alternative technique based on the use of wavelets to characterize multifractality in time series.

  38.

    Non-normalized weights must be used in the case of time processes since, otherwise, the normalization, \(\sum _{i=1}^{N_b}|\,y(t_{i}+T) - y(t_i)|\) would eliminate the scaling of the weight for a monofractal (fBm or fLm). The reason is that the support of the time process is a regular line (the temporal axis) instead of a fractal with dimension D. As a result, N b  ∝ T −1, instead of T D.

  39.

    The name structure function originates from the theory of turbulence, back in the 1940s, when Kolmogorov formulated his famous law stating that the structure function of the turbulent velocity fluctuations scales as \(S_p(\textbf {x}) := \left \langle |V(\textbf {r}+ \textbf {x}) - V(\textbf {r})|{ }^p\right \rangle \propto |\textbf{x}|^{\xi (p)}\), with ξ(p) = p/3 [43]. In fact, much of multifractal analysis for time processes was originally developed and extensively applied later to the study of fluid turbulence [19, 44, 45].

  40.

    The partition (or structure) function is sometimes introduced a little bit differently from how we do it here. The main difference is that the factor T −1 that appears in Eq. 3.95 is omitted. As a result, the generalized Hurst exponent is defined as H(q) = ξ(q)/q. The equations that give the singularity spectrum (Eqs. 3.102 and 3.103) then become \(h_s^*(q) = d\xi /dq\) and \(f(h_s^*(q)) = qh_s^*(q) -\xi (q) + 1 \).

  41.

    Some authors label as multifractal any process for which H(q) is not independent of q. fLm is then considered to be multifractal, since H(q) ≠ H for q ≥ α. We do not adhere to this practice.

  42.

    One must however be careful, since the spectrum of discrete realizations of fBm and fLm also has a finite width, caused by their discreteness. We will come back to this issue in Sect. 3.4.3.

  43.

    For spatial objects, the box-counting procedures discussed in Sect. 3.2 work very well for calculating fractal dimensions and the multifractal spectrum. We will not illustrate them here, though.

  44.

    In particular, we will discuss techniques that determine self-similarity through the analysis of the autocorrelation function of the time process (Sect. 4.4.1), its power spectrum (Sect. 4.4.2) or the so-called rescaled range (R/S) analysis (Sect. 4.4.4).

  45.

    The determination of Φ is often useful in itself, since it allows us to classify the process under investigation even further, perhaps relating it to fBm (if Φ = Gaussian) or fLm (Φ = symmetric Lévy).

  46.

    One should always start by checking whether the provided dataset is stationary or not. If it is, the process cannot be self-similar, although it might well be the increment process of one that is. We will provide some techniques to test stationarity later in this section.

  47.

    If the process is not stationary, temporal and ensemble averaging are no longer equivalent!

  48.

    Sometimes, experimental data can be “contaminated” by slowly-varying trends that mask the stationarity of a fast-varying component. For instance, one could have a stationary process superimposed with one or several periodic processes. Think, for instance, of fluctuations in a turbulent fluid which is itself rotating! In order to be able to check for the stationarity of the faster process (or the self-similarity of its integrated process), the periodic trend must be removed first. This could be done by applying some (high-pass) digital filter in order to remove the lower frequencies associated with the rotation. In other cases, the trend is due to a slowly varying external drive. This often happens when measuring turbulent fluctuations in a tokamak reactor while the plasma profiles are ramping up or down. In this case, removing the trend requires the subtraction of the running average of the data, which should be calculated using a box with a size of the order of the characteristic time of the drive variation. In general, it may be difficult to differentiate between actual trends and features of the process of interest. This is, for example, one of the limitations of the DFA technique (see Appendix 2). As suggested by the previous examples, some knowledge of the dynamics is usually needed to guide our hand when dealing with trends.

  49.

    For instance, one could mention the Dickey-Fuller test, the Kwiatkowski-Phillips-Schmidt-Shin test or the Phillips-Perron test, that are widely used in fields such as econometrics [46]. Most of them are however applicable only to Gaussian random processes.

  50.

    In the case of fLm, one should always apply the stationarity test on a finite moment. That is, using \(\left <|x|{ }^q\right >\) with 0 < q < α!

  51.

    The scaling however begins to deteriorate, in this case, for moments higher than 8, due to the lack of sufficient statistics of the Gaussian tail within the 10,000 points available.

  52.

    Or, when examining the behaviour of the moments of its increments when the spacing is varied, one finds that e′(q) ≠ qH, contrary to what Eq. 3.114 predicts for a self-similar process.

  53.

    We will discuss these integro-differential operators and their physical meaning at length in Chap. 5, since they play an important role in the theory of fractional transport.

  54.

    Equation 3.123 is only exact if a = −∞ [47]. However, we will ‘abuse’ the formula and assume it to be valid when a = t_0, so that we can work out a reasonable algorithm to generate fGn/fLn series.

  55.

    All synthetic fGn/fLn and fBm/fLm series used in this chapter have been produced in this way.

  56.

    There are different orders of DFA, distinguished by the order of the polynomial used to remove the local trend. The one we just discussed is called DFA1, since it uses linear fits. DFAn, instead, uses polynomials of order n.

  57.

    The removal of the hidden trends is where the subtlety of applying DFA correctly lies (see Problem 3.9). Since in most cases trends are unknown, one could remove more (or less) than the physically meaningful trends, thus affecting the actual process under examination. As is the case with all the other methods in this book, DFA should be handled with care. Some knowledge of the underlying physics is always needed in order to be able to tell whether any trend that is removed is actually meaningful and not a feature of the process under study.

  58.

    Most DFA practitioners do not consider fLn to be a monofractal either, since M-DFA yields H(q) ≠ H for q ≥ α. For instance, for fLm with H = 1/α it is found that H(q) = 1/α for q < α and H(q) = 1/q for q ≥ α [51].

References

  1. Einstein, A.: Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Ann. Phys. 17, 549 (1905)

  2. Gutenberg, B., Richter, C.F.: Seismicity of the Earth and Associated Phenomena. Princeton University Press, Princeton (1954)

  3. Bak, P., Tang, C.: Earthquakes as a Self-organized Critical Phenomenon. J. Geophys. Res. 94, 15635 (1989)

  4. Shaw, B.E., Carlson, J.M., Langer, J.S.: Patterns of Seismic Activity Preceding Large Earthquakes. J. Geophys. Res. 97, 478 (1992)

  5. Hergarten, S.: Self-organized Criticality in Earth systems. Springer, Heidelberg (2002)

  6. Crosby, N.B., Aschwanden, M.J., Dennis, B.R.: Frequency Distributions and Correlations of Solar X-ray Flare Parameters. Sol. Phys. 143, 275 (1993)

  7. Frank, J., King, A., Raine, D.: Accretion Power in Astrophysics. Cambridge University Press, Cambridge (2002)

  8. Richardson, L.F.: Variation of the Frequency of Fatal Quarrels with Magnitude. Am. Stat. Assoc. 43, 523 (1948)

  9. Kleiber, M.: Body Size and Metabolism. Hilgardia 6, 315 (1932)

  10. Frisch, U.: Turbulence: The Legacy of A. N. Kolmogorov. Cambridge University Press, Cambridge (1996)

  11. Mandelbrot, B.B.: The Fractal Geometry of Nature. W.H. Freeman, New York (1982)

  12. Mandelbrot, B.B., van Ness, J.W.: Fractional Brownian Motions, Fractional Noises and Applications. SIAM Rev. 10, 422 (1968)

  13. Mandelbrot, B.B.: Fractals: Form, Chance, and Dimension. W.H. Freeman, New York (1977)

  14. Mandelbrot, B.B.: How Long Is the Coast of Britain?: Statistical Self-similarity and Fractional Dimension. Science 156, 636 (1967)

  15. Taylor, R.P.: The Art and Science of Foam Bubbles. Nonlinear Dynamics Psychol. Life Sci. 15, 129 (2011)

  16. Li, J., Ostoja-Starzewski, M.: The Edges of Saturn are Fractal. SpringerPlus 4, 158 (2014)

  17. Liu, H., Jezek, K.C.: A Complete High-Resolution Coastline of Antarctica Extracted from Orthorectified Radarsat SAR Imagery. Photogramm. Eng. Remote Sens. 70, 605 (2004)

  18. Feder, J.: Fractals. Plenum Press, New York (1988)

  19. Mandelbrot, B.B.: Intermittent Turbulence in Self-similar Cascades. J. Fluid Mech. 62, 331 (1977)

  20. Martinez, V.J., Paredes, S., Borgani, S., Coles, P.: Multiscaling Properties of Large-Scale Structure in the Universe. Science 269, 1245 (1995)

  21. Grassberger, P., Procaccia, I.: Measuring the Strangeness of Strange Attractors. Physica D 9, 189 (1983)

  22. Kinsner, W.: A Unified Approach to Fractal Dimensions. Int. J. Cogn. Inform. Nat. Intell. 1, 26 (2007)

  23. Hentschel, H.G.E., Procaccia, I.: The Infinite Number of Generalized Dimensions of Fractals and Strange Attractors. Physica D 8, 435 (1983)

  24. Halsey, T.C., Jensen, M.H., Kadanoff, L.P., Procaccia, I., Shraiman, B.I.: Fractal Measures and Their Singularities: The Characterization of Strange Sets. Phys. Rev. A 33, 1141 (1986)

  25. Wootton, A.J., Carreras, B.A., Matsumoto, H., McGuire, K., Peebles, W.A., Ritz, Ch.P., Terry, P.W., Zweben, S.J.: Fluctuations and Anomalous Transport in Tokamaks. Phys. Fluids B 2, 2879 (1990)

  26. Carreras, B.A.: Progress in Anomalous Transport Research in Toroidal Magnetic Confinement Devices. IEEE Trans. Plasma Sci. 25, 1281 (1997)

  27. van Milligen, B.P., Sanchez, R., Hidalgo, C.: Relevance of Uncorrelated Lorentzian Pulses for the Interpretation of Turbulence in the Edge of Magnetically Confined Toroidal Plasmas. Phys. Rev. Lett. 109, 105001 (2012)

  28. Fernandez-Gomez, I., Martin-Solis, J.R., Sanchez, R.: Perpendicular Dynamics of Runaway Electrons in Tokamak Plasmas. Phys. Plasmas 19, 102504 (2012)

  29. Feynman, R., Hibbs, A.R.: Quantum Mechanics and Path Integrals. McGraw-Hill, New York (1965)

  30. Langevin, P.: Sur la theorie du mouvement brownien. C.R. Acad. Sci. (Paris) 146, 530 (1908)

  31. Calvo, I., Sanchez, R.: The Path Integral Formulation of Fractional Brownian Motion for the General Hurst Exponent. J. Phys. A 41, 282002 (2008)

  32. Huillet, T.: Fractional Lévy Motions and Related Processes. J. Phys. A 32, 7225 (1999)

  33. Laskin, N., Lambadaris, I., Harmantzis, F.C., Devetsikiotis, M.: Fractional Lévy Motion and its Application to Network Traffic Modelling. Comput. Netw. 40, 363 (2002)

  34. Calvo, I., Sanchez, R., Carreras, B.A.: Fractional Lévy Motion Through Path Integrals. J. Phys. A 42, 055003 (2009)

  35. Lamperti, J.W.: Semi-stable Stochastic Processes. Trans. Am. Math. Soc. 104, 62 (1962)

  36. Samorodnitsky, G., Taqqu, M.S.: Stable Non-Gaussian Processes. Chapman & Hall, New York (1994)

  37. Mainardi, F., Luchko, Y., Pagnini, G.: The Fundamental Solutions for the Fractional Diffusion-Wave Equation. Appl. Math. Lett. 9, 23 (1996)

  38. Sassi, R., Signorini, M.G., Cerutti, S.: Multifractality and Heart Rate Variability. Chaos 19, 028507 (2009)

  39. Losa, G., Merlini, D., Nonnenmacher, T., Weiben, E.R.: Fractals in Biology and Medicine. Birkhauser, New York (2012)

  40. Peltier, R.F., Lévy-Véhel, J.: Multifractional Brownian Motion: Definition and Preliminary Results. Rapport de recherche de INRIA, No 2645 (1995)

  41. Lacaux, C.: Series Representation and Simulation of Multifractional Lévy Motions. Adv. Appl. Probab. 36, 171 (2004)

  42. Ayache, A., Lévy-Véhel, J.: The Generalized Multifractional Brownian Motion. Stat. Infer. Stoch. Process. 3, 7 (2000)

  43. Monin, A.S., Yaglom, A.M.: Statistical Fluid Mechanics. MIT Press, Boston (1985)

  44. Meneveau, C.: Analysis of Turbulence in the Orthonormal Wavelet Representation. J. Fluid Mech. 232, 469 (1991)

  45. Farge, M.: Wavelet Transforms and Their Applications to Turbulence. Annu. Rev. Fluid Mech. 24, 395 (1992)

  46. Davidson, R., MacKinnon, J.G.: Econometric Theory and Methods. Oxford University Press, New York (2004)

  47. Podlubny, I.: Fractional Differential Equations. Academic, New York (1998)

  48. Chechkin, A.V., Gonchar, V.Y.: A Model for Persistent Lévy Motion. Physica A 277, 312 (2000)

  49. Greene, M.T., Fielitz, B.D.: Long-Term Dependence in Common Stock Returns. J. Financ. Econ. 4, 339 (1977)

  50. Peng, C.K., Buldyrev, S.V., Havlin, S., Simons, M., Stanley, H.E., Goldberger, A.L.: On the Mosaic Organization of DNA Sequences. Phys. Rev. E 49, 1685 (1994)

  51. Kantelhardt, J.W., Zschiegner, S.A., Koscielny-Bunde, E., Havlin, S., Bunde, A., Stanley, H.E.: Multifractal Detrended Fluctuation Analysis of Nonstationary Time Series. Physica A 316, 87 (2002)

  52. Mallat, S.: A Wavelet Tour of Signal Processing. Academic, New York (1998)

  53. Grossmann, A., Morlet, J.: Decomposition of Hardy Functions into Square Integrable Wavelets of constant Shape. SIAM J. Appl. Anal. 15, 723 (1984)


Appendices

Appendix 1: Numerical Generation of Fractional Noises

Various algorithms have been proposed in the literature to generate numerical time series that approximate either fGn or fLn (and, upon integration, fBm or fLm) with arbitrary tail-index α, for a prescribed Hurst exponent H [36]. Here, we will discuss one that is based on rewriting fBm (Eq. 3.71) and fLm (Eq. 3.89) in the form,

$$\displaystyle \begin{aligned} y^{\mathrm{H},\alpha}(t) = y^{\mathrm{H},\alpha}(t_0) + \mbox{}_{t_0}D_t^{-(H - 1/\alpha + 1)}\xi_\alpha. \end{aligned} $$
(3.119)

This equation introduces a new type of operator known as a fractional operator [47]. Equation 3.119 will be formally introduced in Chap. 5 under the name of the fractional Langevin equation (see Sect. 5.3.2). It is straightforward to show that it reduces to fBm for α = 2, and yields all symmetric fLms if α < 2.

The operators that appear in the fractional Langevin equation are known as Riemann-Liouville fractional operators. They are integro-differential operators defined as [47],

$$\displaystyle \begin{aligned} \mbox{}_aD_t^p f(t) = \left\lbrace \begin{array}{ll} \displaystyle \frac{1}{\varGamma(k-p)}\frac{d^k}{dt^k}\left[\int_a^t (t-t')^{k - p - 1}f(t')\, dt' \right], & p > 0 \\[2ex] \displaystyle \frac{1}{\varGamma(-p)} \int_a^t (t-t')^{-(p + 1)}f(t')\, dt', & p < 0 \end{array} \right. \end{aligned} $$
(3.120)

where k is the integer satisfying k − 1 ≤ p < k. They are called fractional derivatives if p > 0, and fractional integrals if p < 0. An introduction to the basic features of fractional operators (Footnote 53) will be given in Appendix 2 of Chap. 5. For now, it suffices to say that fractional operators have many interesting properties. Among them, that for p = ±k, they coincide with the usual k-th order derivatives (+) or integrals (−) and, for p = 0, with the identity. They thus provide interpolations between the usual integrals and derivatives of integer order. Fractional operators were first considered by Gottfried Wilhelm Leibniz, as early as 1697, although the form given in Eq. 3.120 was not introduced until the middle of the nineteenth century by Bernhard Riemann and Joseph Liouville.
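To make Eq. 3.120 more tangible, the following short Python sketch (our own illustration, not taken from this book; the function name and the quadrature scheme are assumptions) evaluates the fractional-integral branch (p = −q < 0) by direct quadrature and compares it with the exact result {}_0D_t^{-q} t = t^{1+q}/Γ(2+q); setting q = 1 recovers the ordinary integral t²/2.

import numpy as np
from math import gamma

def rl_fractional_integral(f, t, q, n=100_000):
    # Riemann-Liouville fractional integral of order q > 0 (Eq. 3.120 with p = -q):
    # (1/Gamma(q)) * integral_0^t (t - s)**(q - 1) * f(s) ds, via a midpoint rule
    # that stays away from the integrable singularity at s = t.
    ds = t / n
    s = (np.arange(n) + 0.5) * ds
    return np.sum((t - s) ** (q - 1) * f(s)) * ds / gamma(q)

# Check against the exact result for f(s) = s at t = 1 and q = 1/2:
t, q = 1.0, 0.5
print(rl_fractional_integral(lambda s: s, t, q))   # ~ 0.75225
print(t ** (1 + q) / gamma(2 + q))                 # exact: 1/Gamma(2.5) ~ 0.75225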

We focus now on the properties of fractional operators that will allow us to generate synthetic fBm and fLm data, following a method introduced by A.V. Chechkin and V. Yu. Gonchar [48]. The first property is that, when composed on the left with an ordinary derivative, they satisfy (see Appendix 2 of Chap. 5, Eq. 5.135):

$$\displaystyle \begin{aligned} \frac{d^m}{dt^m}\cdot \mbox{}_aD_t^p f(t) = \mbox{}_aD_t^{p+m} f(t),\end{aligned} $$
(3.121)

for any positive integer m. Since fGn/fLn is essentially the derivative of fBm/fLm, fGn/fLn can be obtained by applying a normal derivative to Eq. 3.119 to get:

$$\displaystyle \begin{aligned} \varDelta y^{\mathrm{H},\alpha}:= \lim_{h\rightarrow 0}\varDelta_h y^{\mathrm{H,\alpha}}= \frac{dy^{\mathrm{H},\alpha}}{dt} = \mbox{}_{t_0}D_t^{-(H - 1/\alpha)}\xi_\alpha.\end{aligned} $$
(3.122)

The second property of interest has to do with the relation between the Fourier transform (see Appendix 1 of Chap. 2 for an introduction to Fourier transforms) of a function and that of its fractional derivative/integral (see Appendix 2 of Chap. 5, Eq. 5.139, and Footnote 54):

$$\displaystyle \begin{aligned} \mathrm{F}\left[ \mbox{}_{a}D_t^p f(t)\right] \simeq (-\i \omega)^p \hat f(\omega),\end{aligned} $$
(3.123)

where \(\i = \sqrt {-1}\) and ω stands for the frequency. Applying the Fourier transform to Eq. 3.122, one thus obtains:

$$\displaystyle \begin{aligned} \varDelta \hat y^{\mathrm{H},\alpha}(\omega) =\frac{\hat \xi_\alpha (\omega) }{(-\i \omega)^{H - 1/\alpha}}. \end{aligned} $$
(3.124)

Thus, in order to generate a synthetic fGn/fLn series with arbitrary Hurst exponent H and tail-index α (Footnote 55), a possible procedure is simply to (see Problem 3.6):

  • generate a random Gauss/Lévy noise sequence (see Appendix 2 of Chap. 2);

  • carry out a discrete Fast Fourier transform (FFT) of the noise series;

  • divide each Fourier harmonic by the corresponding factor (−ıω)^{H−1/α};

  • and Fourier invert the result to get the desired, approximated fGn/fLn series.

In addition, fBm/fLm synthetic series with arbitrary exponents H and α can easily be obtained by numerically integrating the fGn/fLn series generated with this method for the same exponents. Alternatively, one can reuse the procedure just described but divide the Fourier transform of the Gauss/Lévy noise series by (−ıω)^{H−1/α+1} instead.
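As an illustration, a minimal Python sketch of this recipe could look as follows (our own naming and implementation choices, not code from this book). It uses Gaussian deviates, so it produces fGn/fBm; for α < 2 one would substitute symmetric α-stable deviates (e.g., from scipy.stats.levy_stable). The filtering exponent follows Eq. 3.124, and the zero-frequency harmonic is left untouched.

import numpy as np

def synthetic_noise(N, H, alpha=2.0, dt=1.0, seed=None):
    # Sketch of the Appendix 1 recipe: spectrally filtered noise approximating fGn
    # (alpha = 2); with alpha-stable deviates substituted below, fLn.
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(N)                    # step 1: Gaussian noise sequence
    xi_hat = np.fft.rfft(xi)                       # step 2: FFT of the noise
    omega = 2.0 * np.pi * np.fft.rfftfreq(N, d=dt)
    p = H - 1.0 / alpha                            # filtering exponent of Eq. 3.124
    xi_hat[1:] = xi_hat[1:] / (-1j * omega[1:]) ** p   # step 3: divide each harmonic
    return np.fft.irfft(xi_hat, n=N)               # step 4: invert to get fGn/fLn

def synthetic_motion(N, H, alpha=2.0, dt=1.0, seed=None):
    # fBm/fLm as the numerical integral (cumulative sum) of the noise series.
    return np.cumsum(synthetic_noise(N, H, alpha, dt, seed)) * dt

Since the FFT implicitly assumes periodicity, it is prudent to generate a somewhat longer series and discard its ends before carrying out any scaling analysis.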

Appendix 2: Detrended Fluctuation Analysis

Detrended fluctuation analysis (or DFA) [49, 50] is a popular method that can be applied to test for self-similarity in both stationary and non-stationary signals. In order to introduce the method, let’s consider the series,

$$\displaystyle \begin{aligned} y = \left\lbrace y_n, ~~ n = 1, 2, \cdots, N \right\rbrace, \end{aligned} $$
(3.125)

that might be stationary or not. The method considers first the associated integrated motion,

$$\displaystyle \begin{aligned} Y_n = \sum_{i = 1}^n y_i - n\bar y, \end{aligned} $$
(3.126)

where the overall average of the series, \(\bar y = \sum y_i /N\), is removed.

Next, for every possible scale l > 0, the integrated motion is divided into (possibly overlapping) windows of size l. Inside each of these windows, a local least-squares fit to a polynomial is done to capture the local trend. That is, if a linear local trend is assumed (Footnote 56), the fit would be against a straight line, m i + b. The total squared error, χ², for each window k is then given by:

$$\displaystyle \begin{aligned} \chi^2_k = \frac{1}{l} \sum_{i\in W_k} \left(Y_i - im_k - b_k\right)^2,~~~ k = 1, N/l. \end{aligned} $$
(3.127)

where W_k stands for the k-th window. The fluctuation value at scale l, F(l), is given by the square root of the average of the squared error over all windows W_k of size l:

$$\displaystyle \begin{aligned} F^2(l) := \frac{l}{N}\sum_{k = 1}^{N/l} \chi^2_k.\end{aligned} $$
(3.128)

If the underlying process is self-similar once the trends have been properly removed (Footnote 57), it should happen that F(l) ∼ l^a, for some exponent a. The interesting thing is that, if DFA is applied to fractional Gaussian noise, which is stationary, it is found that

$$\displaystyle \begin{aligned} F(l) \sim l^H,\end{aligned} $$
(3.129)

for all scales l >> δ, with H being the Hurst exponent that defines the associated fBm. But, more interestingly, DFA could also have been applied directly to the fractional Brownian motion signal itself, which is non-stationary. Then, one would find that,

$$\displaystyle \begin{aligned} F(l) \sim l^{H+1},\end{aligned} $$
(3.130)

so that the Hurst exponent can be obtained directly from fBm.

DFA can also be extended to probe for multifractality. The procedure is often referred to as M-DFA [51]. The twist here is to calculate, instead of the squared error per window, the quantity,

$$\displaystyle \begin{aligned} F_q(l) = \left[\frac{l}{N}\sum_{k=1}^{N/l} \left| \chi_k^2\right|{}^{q/2}\right]^{1/q}.\end{aligned} $$
(3.131)

The generalized Hurst exponent is then defined by assuming the scaling,

$$\displaystyle \begin{aligned} F_q(l) \propto l^{H(q)},\end{aligned} $$
(3.132)

that, for a monofractal (Footnote 58) (i.e., fGn), reduces to H(q) = H, ∀q. The same techniques that were discussed in Sect. 3.3.7 for the estimation of the multifractal spectrum can be used here on the H(q) obtained from the application of M-DFA.
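A compact Python sketch of DFA1 and of its multifractal extension (Eqs. 3.126–3.131) might look as follows; the function name, the use of non-overlapping windows and the simple log-log fit are our own choices, not prescriptions from this book.

import numpy as np

def dfa(y, scales, q=2.0):
    # DFA1 / M-DFA sketch: returns F_q(l) for each integer window size l in `scales`,
    # following Eqs. 3.126-3.131 with non-overlapping windows and linear detrending.
    Y = np.cumsum(np.asarray(y, dtype=float) - np.mean(y))   # integrated motion, Eq. 3.126
    F = []
    for l in scales:
        n_win = len(Y) // l
        chi2 = np.empty(n_win)
        i = np.arange(l)
        for k in range(n_win):                               # local linear fit per window
            seg = Y[k * l:(k + 1) * l]
            m, b = np.polyfit(i, seg, 1)
            chi2[k] = np.mean((seg - (m * i + b)) ** 2)      # squared error, Eq. 3.127
        F.append(np.mean(chi2 ** (q / 2.0)) ** (1.0 / q))    # fluctuation, Eq. 3.131 (q = 2 gives Eq. 3.128)
    return np.array(F)

# H(q) is then estimated as the log-log slope of F_q(l) versus l, e.g.:
# H_q = np.polyfit(np.log(scales), np.log(dfa(y, scales, q=2.0)), 1)[0]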

Appendix 3: Multifractal Analysis Via Wavelets

Wavelets are another powerful technique to test for multifractal behaviour in time series. We will not discuss the basics of wavelets at any length, since there are some wonderful review papers and books available that discuss them much better than what we could do here [44, 45, 52]. Instead, we provide just a brief introduction to the topic, sufficiently long to clarify their relevance to the investigation of multifractal features (see also Problem 3.10).

Wavelets were introduced in the mid 1980s as an extension of Fourier analysis that permits the examination of properties that are local in time [53]. That is, while Fourier analysis expresses an arbitrary function as a linear combination of sines and cosines that are localized in frequency but not in time, the wavelet representation expresses the function as a linear combination of rescaled versions of a prescribed basis function Φ, also called the wavelet, which is localized both in frequency and in time.

The Φ-wavelet transform of a function, f(t), at a scale r and time t is defined by the integral [45]:

$$\displaystyle \begin{aligned} \hat f_\varPhi(r, t) =C^{-1/2} r^{-1/2}\int_{-\infty}^\infty \varPhi\left(\frac{t - t'}{r}\right)f(t')dt', \end{aligned} $$
(3.133)

where the constant C is defined by:

$$\displaystyle \begin{aligned} C := \int_{-\infty}^\infty \frac{|\hat\varPhi(\omega)|{}^2}{|\omega|} d\omega, \end{aligned} $$
(3.134)

where \(\hat \varPhi (\omega )\) is the Fourier transform (in time) of the wavelet basis function. Clearly, C < ∞ is required if Φ is to provide a valid wavelet, which in turn requires that \(\hat \varPhi (0) = 0\) (or, in other words, that Φ has zero mean). \(\hat f_\varPhi (r,t)\) can be interpreted as the part of f(t) that contributes at time t to the scale r. A nice property of \(\hat f_\varPhi \) is that, if:

$$\displaystyle \begin{aligned} |\,f(t + r) - f(t) | \sim r^\alpha,~~~r\rightarrow 0, \end{aligned} $$
(3.135)

then

$$\displaystyle \begin{aligned} \hat f_\varPhi(r, t) \sim r^\alpha. \end{aligned} $$
(3.136)

That is, the local Hölder exponent α of f(t) at any given time can be recovered from the scaling of the local wavelet spectrum with r at that same time.

Therefore, the wavelet analysis of a time process allows, in principle, the direct determination of the local Hölder exponent as a function of time, which opens up many new avenues for characterizing multifractality. In addition, wavelet multifractal analysis has been reported to be more robust (i.e., less sensitive to noise, for instance) than some of the methods that we have discussed in this chapter. The use of wavelets also has its own complications. A significant one is how to choose the most appropriate basis function, Φ, in order to best characterize singularities in time processes. Another one lies in the fact that the rescaled wavelets that enter the continuous formulation given in Eq. 3.133 do not form an orthonormal set. Therefore, they provide a redundant representation whose interpretation can sometimes be confusing. This problem can be partially resolved through the introduction of discrete, orthonormal wavelet representations [44].
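A minimal Python sketch of this idea is given below (our own illustration, not code from this book): it uses a Mexican-hat (Ricker) wavelet and a direct convolution, and it normalizes the rescaled wavelet by r⁻¹ rather than the r^{−1/2} of Eq. 3.133, so that for a local singularity of the type of Eq. 3.135 the coefficient scales directly as r^α. The local Hölder exponent at a chosen time is then read off a log-log fit over the smallest scales.

import numpy as np

def mexican_hat(t):
    # Mexican-hat (Ricker) wavelet: zero mean, symmetric, localized in time and frequency.
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

def wavelet_coefficients(f, scales, dt=1.0):
    # Wavelet transform by direct convolution (constant prefactor C omitted, r**-1
    # normalization). Assumes each rescaled wavelet support (about 10*r) fits inside f.
    rows = []
    for r in scales:
        t = np.arange(-5.0 * r, 5.0 * r + dt, dt)
        kernel = mexican_hat(t / r) / r
        rows.append(np.convolve(f, kernel, mode="same") * dt)
    return np.array(rows)

def local_holder(f, t_index, scales, dt=1.0):
    # Estimate the local Holder exponent at time index t_index from the small-r scaling
    # |f_Phi(r, t)| ~ r**alpha (cf. Eq. 3.136).
    W = np.abs(wavelet_coefficients(f, scales, dt))[:, t_index]
    return np.polyfit(np.log(scales), np.log(W), 1)[0]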

Problems

3.1 Scale Invariance: Power-Laws

Derive Eq. 3.4 from the Gutenberg-Richter law (Eq. 3.3).

3.2 Fractals: Cantor Set

Prove that the self-similar dimension of the Cantor Set is \(D_{\mathrm {ss}} = \log (2)/\log (3)\).

3.3 Fractals: The Mandelbrot Set

Consider the quadratic recurrence \(z_{k+1} = z_k^2 + z_0\) in the complex plane. The Mandelbrot set is composed of all the initial choices of z_0 for which the orbit predicted by the recurrence relation does not tend to infinity. Build a code to generate the Mandelbrot set. Determine its BC fractal dimension.

3.4 Time Processes: Brownian Motion

Write a numerical code to generate Brownian motion trajectories in two dimensions. Use the box-counting procedure and show that the BC fractal dimension of 2D Brownian motion is equal to 2. That is, if given enough time, Brownian motion would fill the whole plane.

3.5 Time Processes: Fractional Brownian Motion

Prove, by exploiting the self-similarity of fBm, that the fractal dimension of the time traces of fBm is given by D = 2 − H.

3.6 Generation of Synthetic fGn/fLn Series

Write a code that implements the algorithm described in Appendix 1 in order to generate synthetic series of fGn/fLn with arbitrary tail-index α and Hurst exponent, H. Implement the possibility of generating fBm/fLm as well.

3.7 Running Sandpile: Scaling Behaviour for Various Overlapping Regimes

Use the sandpile code (see Problem 1.5) to generate time series for the total mass of the sandpile over the SOC state using L = 1000, N_f = 30, Z_c = 200, N_b = 10, one for each of the following values of p_0: 10^{-6}, 10^{-5}, 10^{-3} and 10^{-2}. Repeat the scale-invariance analysis discussed in Sect. 3.5 for each case. What is the mesorange in each case? How does the monofractal behaviour change as a function of the figure-of-merit that controls avalanche overlapping, (p_0 L)^{-1}?

3.8 Running Sandpile: Scaling Behaviour Below the Mesorange

Use the sandpile code (see Problem 1.5) and generate a time series for the total mass of the sandpile over the SOC state using L = 1000, N_f = 30, Z_c = 200, N_b = 10 and p_0 = 10^{-4}. Then, repeat the scale-invariance analysis discussed in Sect. 3.5, but for the range of block sizes 1 < M < 30,000. How are the results different from what was obtained within the mesorange?

3.9 Advanced Problem: Detrended Fluctuation Analysis

Write a code that implements DFA1 (see Appendix 2). Then, use the code to estimate the Hurst exponent of fGn series generated with Hurst exponents H = 0.25, 0.45, 0.65 and 0.85 (see Problem 3.6). Compare the performance of DFA with the moment methods discussed in this chapter, both for the fGn series and their integrated fBm processes.

3.10 Advanced Problem: Wavelet Analysis

Write a code to perform the local determination of the Hölder exponent using wavelets (see Appendix 3). Then, use the code on a synthetic fBm series generated with nominal exponent H = 0.76 and show that the process is indeed monofractal, with an instantaneous local Hölder exponent that is constant and given by H. Refer to [45] to decide on the best option for the wavelet basis function.

Copyright information

© 2018 Springer Science+Business Media B.V.

Cite this chapter

Sánchez, R., Newman, D. (2018). Scale Invariance. In: A Primer on Complex Systems. Lecture Notes in Physics, vol 943. Springer, Dordrecht. https://doi.org/10.1007/978-94-024-1229-1_3
