Finely Tuned Models Sacrifice Explanatory Depth

Foundations of Physics

Abstract

It is commonly argued that an undesirable feature of a theoretical or phenomenological model is that salient observables are sensitive to values of parameters in the model. But in what sense is it undesirable to have such ‘fine-tuning’ of observables (and hence of the underlying model)? In this paper, we argue that the fine-tuning can be interpreted as a shortcoming of the explanatory capacity of the model: in particular it signals a lack of a particular type of explanatory depth. The aspect of depth that we probe relates most closely to a lack of sensitivity to changes in parameters associated with such models. In support of this argument, we develop a schema—for (a certain class of) models that arise broadly in physical settings—that quantitatively relates fine-tuning of observables to a lack of depth of explanations based on these models. We apply our schema in two different settings in which, within each setting, we compare the depth of two competing explanations. The first setting involves explanations for the Euclidean nature of spatial slices of the universe today: in particular, we compare an explanation provided by the big-bang model of the early 1970s (where no inflationary period is included) with an explanation provided by a general model of cosmic inflation. The second setting has a more phenomenological character, where the goal is to infer from a limited sequence of data points, using maximum entropy techniques, the underlying probability distribution from which these data are drawn. In both of these settings we find that our analysis favors the model that intuitively provides the deeper explanation of the observable(s) of interest. We thus provide an account that relates two ‘theoretical virtues’ of models used broadly in physical settings—namely, a lack of fine-tuning and explanatory depth—and argue that finely tuned models sacrifice explanatory depth.

Fig. 1

Notes

  1. In what follows (and given what we have just described) we will not need to make much of the distinction between theories and models, and so for the sake of generality we shall refer primarily to “models” (though we will revert to using “theories” in places where it is more natural to do so, such as in Sect. 5.1).

  2. For some background on theoretical virtues in the sciences, see [22] and [27]. For more recent work, see, for example, [18] and [30].

  3. The explanandum is, in effect, the values possessed by the observables. Our measure of depth takes into account all parameters that describe an observable—though the measure is most sensitive to parameters that render the observable ‘finely tuned’. Also, the inclusion of a larger number of parameters will generally decrease the depth of the corresponding explanation. In this way, a lack of depth is related to both a sensitive dependence of possessed values of an observable on (values of) parameters, as well as the total number of parameters.

  4. ‘Stability’ is often characterized as the invariance of a phenomenon under perturbations of the model that describes that phenomenon. The ‘Einstein static universe’ was famously shown by Eddington to be ‘unstable’ [13]. The equilibrium condition in this model-universe specifies a particular value of the mass density of matter (in terms of the cosmological constant). A slight increase in the mass density of matter away from this value leads to a runaway contraction of the universe, whereas a slight decrease leads to a runaway expansion. ‘Robustness’ is closely related to this characterization of stability, but depending on the particular context, it can have distinct or added features. One such feature, which arises for biological systems, is a “slow degradation of a system’s functions after damage, rather than catastrophic failure” [19, p. 1663]. ‘Naturalness’ is a concept that is commonly employed in particle physics settings. It describes a “prohibition of sensitive correlations between widely separated energy scales” [35, p. 82]. (See also [7] and [36] for recent foundational and philosophical accounts.) On the issue of how naturalness and fine-tuning (understood as akin to ‘stability’) may come apart, see [35, Sect. 3].

  5. The literature on the issue of the fine-tuning of life is (understandably) contentious: in particular, fine-tuning of life is not universally deemed to be in need of explanation. Earman [12, p. 314] contends that it is not “evident that puzzlement is the appropriate reaction” to the claim that small changes in relevant parameters would violate conditions necessary for life—though he does allow for ‘puzzlement’ as an option whilst calling into question some common resolutions of this puzzlement. Landsman [23] reaches a similar conclusion—wherein worries about such fine-tuning are deemed “misguided”; and Colyvan et al. [9] deny the inference from fine-tuning of the universe for carbon-based life to a low probability for this universe to arise. (See also Refs. [24] and [26] for further skeptical positions on the issue of the fine-tuning of life, as well as the response in [21].) To clarify the scope of the schema developed in this paper: we do, indeed, highlight a functional role for considerations of fine-tuning and explanatory depth—and an initial puzzlement over fine-tuning of life is certainly consistent with, and arguably encouraged by, our approach. Our schema does not, however, render a definitive verdict on whether such puzzlement is justified; owing to the nuances involved in this (thorny) issue, we leave a more complete treatment for future work.

  6. Other examples that, we believe, are amenable to such an analysis include: (i) Ptolemy’s geocentric model being supplanted by subsequent non-geocentric models (see, for some background, Weinberg [33], who describes Ptolemy’s model as finely tuned) and (ii) the development of quantum chromodynamics, which provided a unified framework—via an account of interactions between quarks and gluons—to understand the ‘zoo’ of hadrons that had been discovered by nuclear physicists from the mid-twentieth century onward.

  7. As we will discuss in Sect. 5.1, our schema is applicable to physical settings described more generally by effective field theories (and, in particular, dynamical systems derived from such theories). These theories come with an energy cutoff that delimits the regime of applicability of the theory.

  8. At first glance, this definition of a “significant change” may seem problematic, in that it will lead to scenarios in which the smaller the experimental bounds on some observable—namely, the more precisely we can pinpoint its value—the more finely tuned that observable will be. We wish to make two points about this issue. First, as pointed out in [39, p. 211], there is a tradeoff between precision (of the explanandum) and sensitivity: “This is simply because smaller causal deviations are needed to disrupt the dependency between the explanans and a fine-grained explanandum than coarser-grained ones”. Second, an important dimension of our account is the comparative role that our account of depth can play. In particular, comparing the depth of two different explanations (supported by two different underlying models), for some observable whose value is determined within certain experimental bounds, is an important pragmatic aspect of our account.

  9. Note that in the case where \({|\mathbf {O}({\varvec{p}}^{\prime })|}=0\) [so that \(O_{i}({\varvec{p}}^{\prime })=0\) for each i] one would need to modify the schema above. In particular, a “significant change” in the observables then arises when a shift to the point \({\varvec{p}}^{\prime }+{\varvec{v}}_{i}^{+}\) yields a value for \(|\mathbf {O}({\varvec{p}}^{\prime }+{\varvec{v}}_{i}^{+})|\) that is a significant fraction of the total distance that one could travel in the resultant direction in \(\mathbf {O}({\mathcal {P}})\), namely, in the image of the map \(\mathbf {O}\) [in Eq. (1)].

  10. We thank an anonymous referee for pressing us on this point.

  11. Note that there is recent work that has analyzed the flatness problem that, in effect, draws the opposite conclusion about such fine-tuning—so that there is, indeed, a lack of agreement in the literature about the nature and severity of the flatness problem. (See, for example, work by Carroll [8] and Holman [16].) A more thorough analysis of the flatness problem lies outside the intended scope of our argument, but note the following two comments—relevant to this lack of agreement—that are intended to clarify the scope of the example as we have developed it in the main text. (i) The approach presented in our paper is indeed in accord with the standard (historical) view; and it primarily serves to exemplify how our schema can be applied, as well as how our schema yields an answer that accords with what would be expected under the standard view. (And, of course, we take this as a positive feature of our schema.) (ii) One of the issues that underlies the lack of agreement relates to how one should understand the underlying parameter space. So, for example, whether or not it is justifiable to assume (in a probabilistic setting) a uniform probability distribution for the relevant density parameter (as is assumed, in effect, in the standard view). We contend that our schema for explanatory depth is sufficiently general to also provide appropriate judgements about depth for such alternate viewpoints.

  12. Note also that in later connecting these introductory remarks to our account of explanation, the value of the dimensionless density parameter at some initial time will correspond to the “parameter” in the corresponding explanation (what we have earlier called \({\varvec{p}^{\prime }}\)), whereas the value of the dimensionless density parameter today will correspond to the “observable” (what we have earlier called \(\mathbf {O}\)). In what follows—keeping with conventions in the cosmology literature—we will continue to refer to the function defined in Eq. (10) as a (dimensionless density) parameter.

  13. The computation of \(\varOmega _{\text{R},0}\) is a little subtle. To estimate this latter quantity from recent cosmological data we assume that the energy density in radiation today, \(\rho_{\text{R},0}\), can be related to the temperature, \(T_0\), of radiation today (in the cosmic microwave background). One finds that \(\varOmega _{\text{R},0}\propto T_{0}^{4}/H_{0}^{2}\).
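To make the proportionality \(\varOmega _{\text{R},0}\propto T_{0}^{4}/H_{0}^{2}\) concrete, here is a minimal numerical sketch. It is our illustration, not the paper's computation: it evaluates the photon-only density parameter from the CMB temperature and the Hubble constant, with the radiation constant, Newton's constant, and Planck-like input values as our own assumed inputs; neutrino contributions are omitted.

```python
import math

# Illustrative constants (SI units); these are our assumptions,
# not values taken from the paper.
A_RAD = 7.5657e-16   # radiation constant, J m^-3 K^-4
C = 2.99792458e8     # speed of light, m s^-1
G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
MPC = 3.0857e22      # metres per megaparsec

def omega_radiation(T0_kelvin, H0_km_s_Mpc):
    """Photon density parameter today from the CMB temperature T_0 and H_0."""
    H0 = H0_km_s_Mpc * 1e3 / MPC                # Hubble constant, s^-1
    rho_gamma = A_RAD * T0_kelvin**4 / C**2     # photon mass density, kg m^-3
    rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)  # critical density, kg m^-3
    return rho_gamma / rho_crit                 # manifestly ∝ T_0^4 / H_0^2

# With T_0 ≈ 2.725 K and H_0 ≈ 67.4 km/s/Mpc, the photon-only
# contribution comes out at roughly 5e-5.
print(omega_radiation(2.725, 67.4))
```

The final line of `omega_radiation` makes the scaling \(\varOmega _{\text{R},0}\propto T_{0}^{4}/H_{0}^{2}\) explicit: \(T_0\) enters only through \(\rho_{\gamma}\) and \(H_0\) only through \(\rho_{\text{crit}}\).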

  14. Note that increasing the number of e-folds of cosmic inflation well beyond the critical value (\(N_{\text{critical}}\approx 60\)) does not (and indeed cannot) lead to a noticeable increase in the depth of the explanation (that is, the green lines in Fig. 1 effectively lie on top of each other near the maximum value of unity). This is consistent with the fact that the observable universe today circumscribes a horizon corresponding to the (last) 60 e-folds of inflation.
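For orientation on why extra e-folds add so little: under the common idealization that H is approximately constant during inflation, \(|\varOmega -1|\propto 1/(aH)^{2}\propto e^{-2N}\) after N e-folds. The following toy calculation is our illustration of that standard suppression factor, not a reproduction of the paper's Eq. (10):

```python
import math

def flatness_suppression(N_efolds):
    """Factor by which |Omega - 1| is suppressed after N e-folds,
    assuming near-constant H during inflation (so a grows as e^N and
    |Omega - 1| falls as 1/(aH)^2 ~ e^(-2N))."""
    return math.exp(-2.0 * N_efolds)

# By N ~ 60, any plausible initial deviation from flatness is driven
# far below observable levels; further e-folds change nothing detectable.
for N in (10, 30, 60):
    print(N, flatness_suppression(N))
```

At N = 60 the suppression is of order 1e-52, which is why the depth curves in Fig. 1 saturate near unity rather than continuing to improve.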

  15. Note that there is another scenario that is often tied to an extension of cosmic inflation to higher energy scales—namely, eternal inflation—in which a multiverse is described where different conditions obtain in different “pocket universes”. (See Guth [15] for a review.) Such scenarios ostensibly furnish explanations (of the general type considered in this paper) of observables—and the following natural question then arises: are such explanations deep? Our notions of global fine-tuning and depth could, in principle, be used to analyze this type of question (whether or not one takes an expressly probabilistic approach). There are, however, a number of thorny open physical and conceptual problems whose resolution would significantly aid in adequately addressing this question—so we leave such an analysis for future work. (See, for example, Aguirre [1] for an account of some of the problems involved.)

  16. We refer here to maximum entropy distributions that are consistent with one- and two-point correlation functions derived from the data (we will indeed probe such scenarios in the main text—though where the underlying random variable is continuous). There are a variety of biophysical systems that have been explored in this way, including: neural systems, proteins, the immune system, and even aggregations of birds (see [32] for a review).
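For a continuous variable, the maximum entropy distribution consistent with fixed first and second moments is Gaussian (cf. [11]), so the maxent fit amounts to moment matching. A minimal sketch, with made-up sample values purely for illustration:

```python
import math

def maxent_gaussian(samples):
    """Moment-matched maxent fit for a continuous variable:
    returns (mean, variance, differential entropy). The maxent
    distribution with these two moments fixed is the Gaussian, whose
    differential entropy is (1/2) ln(2*pi*e*variance)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    entropy = 0.5 * math.log(2.0 * math.pi * math.e * var)
    return mean, var, entropy

# Illustrative data (not from any experiment discussed in the paper):
mean, var, h = maxent_gaussian([0.9, 1.1, 1.0, 0.8, 1.2])
print(mean, var, h)
```

Any other distribution with the same mean and variance has strictly lower differential entropy, which is what licenses the Gaussian as the least-committal model given one- and two-point data.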

  17. The difficulty here is in finding an objective way to restrict the range of parameters of the fundamental theory. One may look to argue for such a restricted range based on subjective and/or context-dependent considerations, but it is not clear what these might be.

  18. We thank an anonymous referee for a phrasing that we have used here and for pressing us on what will follow in the remainder of this subsection.

  19. Note that in considering effective theories, our schema properly applies to the comparison of two such theories that are thought to apply at similar energy scales. We leave for future work the question of whether and how our approach could be adapted to compare the depth of two explanations, where one explanation involves a theory that reduces, in some limit, to the other theory. (See [34] for a related discussion of this general issue.)

  20. One goal that such model building (and scientific theorizing more broadly) seems attuned to is the pursuit of parsimonious descriptions of previously collected data, with a view to accurate future predictions. This was a major accomplishment of quantum chromodynamics, which explained a large amount of phenomenological data on nuclear physics (as mentioned in Sect. 2); and quantum electrodynamics, which achieved similar successes for electromagnetic phenomena. This goal also underlies the effort to find a theory that unifies all the forces of nature.

  21. Of course, we need to make precise how one determines this value. The variable \(\mu\) is a stand-in for a ‘high value’ and precisely what this value is (or should be) is a determination best made in the specific context in which an explanation is being constructed. We will not need to specify such a value to describe our scheme.

  22. Here we have assumed a small shift in the point in parameter space determined by a shift solely in the ith direction. We have represented the new point thus obtained by \(\varvec{p}\) instead of \(\varvec{p}^{\prime }\). By convention, limits of the range \(\delta _i\) are determined by the first point in parameter space such that the probability of observables lying in \(\varDelta _{\mathbf {O}_{M}}\) is less than or equal to \(\mu\).

References

  1. Aguirre, A.: Making predictions in a multiverse: conundrums, dangers, coincidences. In: Carr, B. (ed.) Universe or Multiverse?, pp. 367–386. Cambridge University Press, Cambridge (2007). arXiv:astro-ph/0506519

  2. Albert, David Z.: Time and Chance. Harvard University Press, Cambridge (2000)

  3. Azhar, F., Butterfield, J.: Scientific realism and primordial cosmology. In: Saatsi, J. (ed.) The Routledge Handbook on Scientific Realism, pp. 304–320. Routledge, London (2018)

  4. Azhar, F., Loeb, A.: Gauging fine-tuning. Phys. Rev. D 98, 103018 (2018)

  5. Bialek, W.: Biophysics: Searching for Principles. Princeton University Press, Princeton (2012)

  6. Brawer, R.: Inflationary cosmology and the horizon and flatness problems: the mutual constitution of explanation and questions. M.Sc. Dissertation, Massachusetts Institute of Technology (1996)

  7. Butterfield, J.: Review of Hossenfelder, S. Lost in Math: How Beauty Leads Physics Astray. Basic Books, p. 304 (2018). Physics in Perspective 21, 63 (2019)

  8. Carroll, S.M.: In what sense is the early Universe fine-tuned? arXiv:1406.3057 [astro-ph.CO] (2014)

  9. Colyvan, M., Garfield, J.L., Priest, G.: Problems with the argument from fine tuning. Synthese 145, 325 (2005)

  10. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, New York (1991)

  11. Dowson, D.C., Wragg, A.: Maximum-entropy distributions having prescribed first and second moments. IEEE Trans. Inf. Theory 19, 689 (1973)

  12. Earman, J.: The SAP also rises: a critical examination of the Anthropic Principle. Am. Philos. Q. 24, 307 (1987)

  13. Eddington, A.S.: On the instability of Einstein’s spherical world. Mon. Not. R. Astron. Soc. 90, 668 (1930)

  14. Guth, A.H.: Inflationary universe: a possible solution to the horizon and flatness problems. Phys. Rev. D 23, 347 (1981)

  15. Guth, A.H.: Eternal inflation and its implications. J. Phys. A 40, 6811 (2007)

  16. Holman, M.: How problematic is the near-Euclidean spatial geometry of the large-scale Universe? Found. Phys. 48, 1617 (2018)

  17. Hitchcock, C., Woodward, J.: Explanatory generalizations, Part II: plumbing explanatory depth. Noûs 37, 181 (2003)

  18. Keas, M.N.: Systematizing the theoretical virtues. Synthese 195, 2761 (2018)

  19. Kitano, H.: Systems biology: a brief overview. Science 295, 1662 (2002)

  20. Kitcher, P., Salmon, W.C. (eds.): Scientific Explanation, vol. 13. Minnesota Studies in the Philosophy of Science. University of Minnesota Press, Minneapolis (1989)

  21. Koperski, J.: Should we care about fine-tuning? Br. J. Philos. Sci. 56, 303 (2005)

  22. Kuhn, T.S.: Objectivity, value judgment, and theory choice. In: The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press, Chicago, pp. 320–339 (1977)

  23. Landsman, K.: The fine-tuning argument: exploring the improbability of our existence. In: Landsman, K., van Wolde, E. (eds.) The Challenge of Chance: A Multidisciplinary Approach from Science and the Humanities, pp. 111–129. Springer, Cham (2016)

  24. Manson, N.A.: There is no adequate definition of 'fine-tuned for life'. Inquiry 43, 341 (2000)

  25. Maudlin, T.: Fine-tuned for what? Talk at the International Conference on the Physics of Fine-Tuning (2017). Accessed 22 April 2020

  26. McGrew, T., McGrew, L., Vestrup, E.: Probabilities and the fine-tuning argument: a sceptical view. Mind 110, 1027 (2001)

  27. McMullin, E.: Values in science. In: PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, vol. 1982, Volume Two: Symposia and Invited Papers. University of Chicago Press, Chicago, pp. 3–28 (1982)

  28. Penrose, R.: Difficulties with inflationary cosmology. Ann. N. Y. Acad. Sci. 571, 249 (1989)

  29. Planck Collaboration: Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 641, A6 (2020)

  30. Schindler, S.: Theoretical Virtues in Science: Uncovering Reality Through Theory. Cambridge University Press, Cambridge (2018)

  31. Skow, B.: Scientific explanation. In: Humphreys, P. (ed.) The Oxford Handbook of Philosophy of Science, pp. 524–543. Oxford University Press, Oxford (2016)

  32. Tkačik, G., Bialek, W.: Information processing in living systems. Annu. Rev. Condens. Matter Phys. 7, 89 (2016)

  33. Weinberg, S.: To Explain the World: The Discovery of Modern Science. Harper, New York (2015)

  34. Weslake, B.: Explanatory depth. Philos. Sci. 77, 273 (2010)

  35. Williams, P.: Naturalness, the autonomy of scales, and the 125 GeV Higgs. Stud. Hist. Philos. Mod. Phys. 51, 82 (2015)

  36. Williams, P.: Two notions of naturalness. Found. Phys. 49, 1022 (2019)

  37. Woodward, J.: Scientific explanation. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, Winter edn. (2019)

  38. Woodward, J., Hitchcock, C.: Explanatory generalizations, Part I: a counterfactual account. Noûs 37, 1 (2003)

  39. Ylikoski, P., Kuorikoski, J.: Dissecting explanatory power. Philos. Stud. 148, 201 (2010)

Acknowledgements

We thank Porter Williams for discussions as well as two anonymous referees for helping to significantly improve an earlier version of this paper. We acknowledge support from the Black Hole Initiative at Harvard University, which is funded through a Grant from the John Templeton Foundation and the Gordon and Betty Moore Foundation. FA also acknowledges support from the Faculty Research Support Program (FY2019) at the University of Notre Dame.

Author information

Correspondence to Feraz Azhar.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


Appendix 1: Observables from Probabilistic Maps

The schema we have developed in Sect. 3 can naturally be extended to the case where observables are probabilistically related to parameters (in a way that is distinct from our phenomenological account in Sect. 4.2). In particular, in the context of some physical model, \({\mathcal {M}}\), one can construct a probability density function for the observables given the parameters: \(P(\mathbf {O}| \varvec{p}, {\mathcal {M}})\). We assume also that we have access to a prior over parameters, namely, \(P(\varvec{p}|{\mathcal {M}})\). Instead of now restricting, to be finite, the domain over which \(\varvec{p}\) can take values, we assume, for the sake of simplicity, that \(\varvec{p}\in {\mathbb {R}}^{n}\) but that the prior, \(P(\varvec{p}|{\mathcal {M}})\), is nonzero only over a finite region. Similarly we assume that in principle, \(\mathbf {O}\in {\mathbb {R}}^{m}\), but \(P(\mathbf {O}| \varvec{p}, {\mathcal {M}})\) is nonzero over some finite region.

In this case, therefore, an explanation of the vector of observables taking the value \(\mathbf {O}_{M}\) (more specifically, taking a value in some small m-dimensional box, \(\varDelta _{\mathbf {O}_{M}}\), centered on \(\mathbf {O}_{M}\)) will comprise an argument in which the probability of the observables taking these values is larger than some fiducial value \(\mu\) (footnote 21). We can represent the resulting argument that comprises the explanation in the following summarized form:

$$\begin{aligned} {\bar{E}}\;{\text {:}}\;\varvec{p}^{\prime } \wedge [P(\mathbf {O}_{M}| \varvec{p}^{\prime }, {\mathcal {M}})\varDelta _{\mathbf {O}_{M}} > \mu ] \therefore \mathbf {O}_{M}. \end{aligned}$$
(57)

The depth of this explanation can be computed via the following steps.

  (i)

    For the ith parameter direction about the point \(\varvec{p}^{\prime }\), we construct the range, \(\delta _i\), of parameter values over which observables do not change significantly. That is, the range over which the probability that the vector of observables takes values in the same small m-dimensional box described above, centered on \(\mathbf {O}_{M}\), remains high: \(P(\mathbf {O}_{M}| \varvec{p}, {\mathcal {M}})\varDelta _{\mathbf {O}_{M}} > \mu\) (footnote 22).

  (ii)

    Next we find the probability of parameters lying in this range as gleaned from the appropriate marginal distribution:

    $$\begin{aligned} P_{i}(\delta _i)\equiv \left[ \prod _{k \ne i}\int _{{\mathbb {R}}}d{p_k}\right] \;\int _{\delta _{i}}dp_{i}\;P(\varvec{p}| {\mathcal {M}}). \end{aligned}$$
    (58)
  (iii)

    Finally we define a corresponding measure of global fine-tuning, \(\bar{{\mathcal {G}}}_{i}(\mathbf {O};\varvec{p}^{\prime })\), extending the treatment in Sect. 3 and in [4]:

    $$\begin{aligned} \bar{{\mathcal {G}}}_{i}(\mathbf {O};\varvec{p}^{\prime })\equiv \log _{10}\left( \frac{1}{P_{i}(\delta _i)}\right) . \end{aligned}$$
    (59)

    This measure is manifestly non-negative, with the minimum value (namely, zero) occurring when the probability of the observables lying in \(\varDelta _{\mathbf {O}_{M}}\), centered on \(\mathbf {O}_{M}\), is greater than the fiducial value \(\mu\), independent of the value of the ith parameter.

Our measure of the depth of the explanation in Eq. (57), which we denote by \({ {\bar{D}}_{I}^{{\bar{E}}}}(\mathbf {O}; {\varvec{p}}^{\prime })\) is then defined by the following:

$$\begin{aligned} { {\bar{D}}_{I}^{{\bar{E}}}}(\mathbf {O}; {\varvec{p}}^{\prime })\equiv \frac{1}{\displaystyle \prod _{i=1}^{n} \left[ 1+\bar{{\mathcal {G}}}_{i}(\mathbf {O}; {\varvec{p}}^{\prime })\right] }. \end{aligned}$$
(60)

This measure has analogous features to the measure described in Sect. 3.3 [see points (i)–(v) under Eq. (7)] with appropriate reassignments, for example, \({\mathcal {G}}\rightarrow \bar{{\mathcal {G}}}\) and \({ D_{I}^{E}}\rightarrow {{\bar{D}}_{I}^{{\bar{E}}}}\).
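To illustrate how steps (i)–(iii) and Eq. (60) fit together, here is a toy one-parameter implementation. It is a sketch under stated assumptions rather than the authors' implementation: the Gaussian likelihood, the noise scale, the uniform prior on [0, 1], the fiducial value \(\mu\), the box width, and the grid resolution are all our illustrative choices.

```python
import math

def likelihood(O, p, f, sigma=0.05):
    """Assumed Gaussian likelihood P(O | p) with mean f(p); sigma is
    an illustrative noise scale, not a quantity from the paper."""
    z = (O - f(p)) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def depth(O_M, f, mu=0.1, box=0.02, n_grid=100001):
    """Steps (i)-(iii) and Eq. (60) for a single parameter with a
    uniform prior on [0, 1], evaluated on a regular grid."""
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    # (i) the range delta over which P(O_M | p) * box stays above mu
    in_range = [p for p in grid if likelihood(O_M, p, f) * box > mu]
    # (ii) prior probability of that range (uniform prior => grid fraction)
    P_delta = len(in_range) / n_grid
    if P_delta == 0.0:
        return 0.0  # no parameter value renders O_M probable at this mu
    # (iii) global fine-tuning via Eq. (59), then depth via Eq. (60)
    G = math.log10(1.0 / P_delta)
    return 1.0 / (1.0 + G)

# A map that is insensitive to the parameter supports a deeper
# explanation than one that is steep (finely tuned) near the fit point:
d_flat = depth(0.5, lambda p: 0.5)              # observable ignores p
d_steep = depth(0.5, lambda p: 10.0 * p - 4.5)  # observable steep in p
print(d_flat, d_steep)  # d_flat exceeds d_steep
```

The comparison at the end mirrors the comparative use of the schema in the main text: the insensitive model attains the maximum depth of unity, while the steep model's narrow viable range \(\delta\) inflates \(\bar{{\mathcal {G}}}\) and so lowers the depth.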

Cite this article

Azhar, F., Loeb, A. Finely Tuned Models Sacrifice Explanatory Depth. Found Phys 51, 91 (2021). https://doi.org/10.1007/s10701-021-00493-2
