Abstract
Traditionally, a scientific model is thought to provide a good scientific explanation to the extent that it satisfies certain scientific goals taken to be constitutive of explanation (e.g. generating understanding, identifying mechanisms, making predictions, identifying high-level patterns, allowing us to control and manipulate phenomena). Problems arise when we realize that no individual scientific model can simultaneously satisfy all the scientific goals typically associated with explanation: a given model’s ability to satisfy some goals must always come at the expense of satisfying others. This has resulted in philosophical disputes regarding which of these goals are in fact necessary for explanation, and thus which types of models can and cannot provide explanations (e.g. dynamical models, optimality models, topological models, etc.). Explanatory monists argue that one goal is explanatory in all contexts, while explanatory pluralists argue that the relevant goal varies with pragmatic considerations. In this paper, I argue that such debates are misguided, and that both monists and pluralists are incorrect. Rather than any one goal taking explanatory priority over the others in a given context, the different goals are all deeply dependent on one another for their explanatory power. Any model that sacrifices some explanatory goals to attain others thereby undermines its own explanatory power in the process. And so when forced to choose between individual scientific models, there can be no explanatory victors. Given that no model can satisfy all the goals typically associated with explanation, no one model in isolation can provide a good scientific explanation. Instead, we must appeal to collections of models: a collection of models provides an explanation when it satisfies the web of interconnected goals that justify one another’s explanatory power.
Notes
These principles can be understood in terms of strict nomological laws, behavioural patterns, broad causal regularities, or true generalizations made about the system.
The list above should by no means be interpreted as an exhaustive inventory of the scientific goals that may be relevant to scientific explanation; additional goals may well be worth including. For the sake of brevity and simplicity, I will focus my attention on these five, given that each has been explicitly defended in recent years by philosophers of science for its explanatory power.
It should be noted that Craver is not suggesting that a given scientific model will always become better the more mechanistic details it includes (see: Craver and Kaplan, under review). The appropriate amount of mechanistic detail for a model to employ will vary based on our particular needs. Instead, he argues only that a model must always have some variables that map to structural/mechanistic features of the system in order to carry explanatory content (which optimality models do not have). A model which satisfies the other explanatory goals but fails to identify relevant mechanisms cannot be explanatory.
It is worth noting that the term “explanatory pluralism” is not always used consistently throughout the philosophy of science literature. As such, this pragmatic contextualist interpretation of explanatory pluralism may not correctly describe all those who self-identify as pluralists. For the sake of clarity, I have in mind here the sort of explanatory pluralism advocated by the likes of Chemero and Silberstein 2008, and Chirimuuta 2014 (among others).
One might object that this simply reflects an ambiguity in the term “understanding” as opposed to any deeper claim regarding the interdependence between the goal of understanding and the other explanatory goals (special thanks to a blind referee for pointing out this worry). While constraints on space limit my ability to address this problem at length here, it should be sufficient for my purpose to highlight the fact that almost every definition of understanding involves some sort of cognitive component in which the target phenomenon is made intelligible to the inquirer (for psychological studies that support this, see: Keil 2006; Braverman et al. 2012; Waskan et al. 2014a, b, c. See also: Potochnik 2015). This very minimal shared criterion of “understanding” is sufficient to show the interdependence between it and the other goals, as each of the other goals has been defended as essential for explanation on the grounds that the psychological intelligibility of the phenomenon is contingent on their attainment. That being said, this point is still contentious and may deserve greater exploration.
For a straightforward example of this sort of model, consider the use of large-scale graph-based models to characterize certain organizational features of complex biological mechanisms. Such models are often necessary for representing organizational features like complex feedback loops, but can only do so by idealizing away many of the structural and behavioural features of the system needed for both manipulation and prediction (for details and discussion, see Bechtel 2015).
Thanks to Natalia Washington for encouraging me to emphasize this distinction.
It is worth noting that Potochnik draws a very different conclusion from this interdependence between models than I do. While she grants that there is an epistemic interdependence between the different models, she insists that optimality models remain explanatorily independent of the other models. She argues that the model which identifies the high-level causal pattern is the best explanation for why a particular trait occurs. Other models, like those that identify essential evolutionary mechanisms, may be needed to effectively construct and apply an optimality model, but it is the optimality model that provides the explanation independently of those models.
Yet I propose that this interpretation is incorrect. The mechanistic details are essential to our explanation of the phenomenon, since the presence or absence of certain evolutionary mechanisms (such as epistasis and pleiotropy) is essential for the phenomenon to display the patterns represented in the optimality model. In other words, the explanation as to why the trait appears is not merely because it is locally optimal, it is because it is locally optimal in virtue of the presence or absence of certain key mechanistic facts. These facts are part of the explanation as to why the trait occurs as it does, and are only identified by the mechanistic model, not the optimality model. Thus the mechanistic model not only provides context for the optimality model, it provides relevant explanatory information as to why the optimal trait occurs. And so to suggest that the optimality model’s explanatory power is independent of the mechanistic model is extremely misleading.
What appears prima facie to be a case of explanatory independence is instead a case in which our pragmatic interests shift our attention from one model to another. This shift in attention should not be confused with a shift in explanatory content, however. Once the mechanistic model is used to identify the relevant evolutionary mechanisms, we shift our focus to the optimality model in order to satisfy explanatory goals that our mechanistic model could not provide. It only appears as though the optimality model is explanatorily independent of the mechanistic model because it seems as if the explanatory content is only available to us once we have the optimality model in hand, and not when we have the mechanistic model. But this perception is deceptive, since in order to generate the optimality model we must already have available to us the information from the mechanistic model. So by the time we apply the optimality model, the explanatory information available to us is being conveyed by both the mechanistic and optimality models together. It only seems like the optimality model is providing an independent explanation because the mechanistic information has been pushed into the background as we focus our attention on the optimality model, and so appears invisible. But it is only when the information from our optimality model is used to supplement the information from our mechanistic model that we begin to generate an explanation. The explanatory contents of the models are not independent, but deeply dependent on one another.
References
Achinstein P (1983) The nature of explanation. Oxford University Press, New York
Batterman R (2001) The devil in the details: asymptotic reasoning in explanation, reduction, and emergence. Oxford University Press, Oxford
Batterman R (2002) Asymptotics and the role of minimal models. Br J Philos Sci 53:21–38
Bechtel W (2008) Mental mechanisms: philosophical perspectives on cognitive neuroscience. Lawrence Erlbaum Associates, New York
Bechtel W (2015) Can mechanistic explanation be reconciled with scale-free constitution and dynamics? Stud Hist Philos Sci Part C: Stud Hist Philos Biol Biomed Sci. doi:10.1016/j.shpsc.2015.03.006
Bechtel W, Abrahamsen A (2005) Explanation: a mechanistic alternative. Stud Hist Philos Biomed Sci 36:421–441
Bogen J (2005) Regularities and causality; generalizations and causal explanations. Stud Hist Philos Sci Part C 36:397–420
Braverman M, Clevenger J, Harmon I, Higgins A, Horne Z, Spino J, Waskan J (2012) Intelligibility is necessary for explanation but accuracy may not be. In: Proceedings of the thirty-fourth annual conference of the cognitive science society
Bull JJ (2006) Optimality models of phage life history and parallels in disease evolution. J Theor Biol 241:928–938
Bull JJ, Pfennig DW, Wang I-N (2004) Genetic details, optimization and phage life histories. Trends Ecol Evol 19(2):76–82
Chemero A, Silberstein M (2008) After the philosophy of mind: replacing scholasticism with science. Philos Sci 75:1–27
Chirimuuta M (2014) Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese 191(2):127–153
Craver C (2006) When mechanistic models explain. Synthese 153(3):355–376
Craver C, Kaplan D (under review) Are more details better? On the norms of completeness for mechanistic explanations
Dretske F (1994) If you can’t make one, you don’t know how it works. Midwest Stud Philos 19(1):468–482
Eliasmith C (2010) How we ought to describe computation in the brain. Stud Hist Philos Sci Part A 41:313–320
Eliasmith C, Trujillo O (2014) The use and abuse of large-scale brain models. Curr Opin Neurobiol 25:1–6
Fitzhugh R (1960) Thresholds and plateaus in the Hodgkin-Huxley nerve equations. J Gen Physiol 43(5):867–896
Glennan S (2002) Rethinking mechanistic explanation. Philos Sci 69(S3):S342–S353
Gopnik A (2000) Explanation as orgasm and the drive for causal knowledge: the function, evolution, and phenomenology of the theory formation system. In: Keil F, Wilson R (eds) Cognition and explanation. MIT Press, Cambridge, pp 299–323
Gould S, Lewontin R (1979) The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme. Proc R Soc Lond B 205:581–598
Hempel C (1965) Aspects of scientific explanation. Free Press, New York
Hempel C, Oppenheim P (1948) Studies in the logic of explanation. Philos Sci 15:135–175
Hochstein E (2016a) One mechanism, many models: a distributed theory of mechanistic explanation. Synthese 193(5):1387–1407
Hochstein E (2016b) Giving up on convergence and autonomy: why the theories of psychology and neuroscience are codependent as well as irreconcilable. Stud Hist Philos Sci 56:135–144
Hodgkin AL (1992) Chance and design: reminiscences of science in peace and war. Cambridge University Press, Cambridge
Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544
Hoppensteadt FC, Izhikevich EM (1997) Weakly connected neural networks. Springer, New York
Huneman P (2010) Topological explanations and robustness in biological sciences. Synthese 177(2):213–245
Izhikevich E (2007) Dynamical systems in neuroscience: the geometry of excitability and bursting. MIT Press, Cambridge
Jackson F, Pettit P (1992) In defense of explanatory ecumenicalism. Econ Philos 8(1):1–21
Kaplan D, Bechtel W (2011a) Dynamical models: an alternative or complement to mechanistic explanations? Top Cogn Sci 3:438–444
Kaplan D, Craver C (2011b) The explanatory force of dynamical and mathematical models in neuroscience: a mechanistic perspective. Philos Sci 78(4):601–627
Keil F (2006) Explanation and understanding. Annu Rev Psychol 57:227–254
Lange M (2013) What makes a scientific explanation distinctively mathematical? Br J Philos Sci 64(3):485–511
Legare CH, Wellman HM, Gelman SA (2009) Evidence for an explanation advantage in naïve biological reasoning. Cogn Psychol 58:177–194
Levins R (1966) The strategy of model building in population biology. Am Sci 54:5
Lewontin R (1979) Fitness, survival, and optimality. In: Horn D, Stairs G, Mitchell R (eds) Analysis of ecological systems, third annual biosciences colloquium. Ohio State University Press, Columbus, pp 3–21
Lewontin R (1989) A natural selection. Nature 339:107
Lombrozo T, Carey S (2006) Functional explanation and the function of explanation. Cognition 99(2):167–204
Machamer P, Darden L, Craver CF (2000) Thinking about mechanisms. Philos Sci 67(1):1–25
Matthewson J, Weisberg M (2009) The structure of tradeoffs in model building. Synthese 170(1):169–190
Miłkowski M (2016) Unification strategies in cognitive science. Stud Log Gramm Rhetor 48(61):13–33
Mitchell S (2003) Biological complexity and integrative pluralism. Cambridge University Press, Cambridge
Nagumo J, Arimoto S, Yoshizawa S (1962) An active pulse transmission line simulating nerve axon. Proc Inst Radio Eng 50(10):2061–2070
Piccinini G (2015) Physical computation: a mechanist account. Oxford University Press, Oxford
Piccinini G, Craver C (2011) Integrating psychology and neuroscience: functional analyses as mechanism sketches. Synthese 183(3):283–311
Potochnik A (2007) Optimality modeling and explanatory generality. Philos Sci 74:680–691
Potochnik A (2010) Explanatory independence and epistemic interdependence: a case study of the optimality approach. Br J Philos Sci 61(1):213–233
Potochnik A (2015) The diverse aims of science. Stud Hist Philos Sci 53:71–80
Povich M (2016) Minimal models and the generalized ontic conception of scientific explanation. Br J Philos Sci. doi:10.1093/bjps/axw019
Rice C (2015) Moving beyond causes: optimality models and scientific explanation. Noûs 49(3):589–615
Ross L (2015) Dynamical models and explanation in neuroscience. Philos Sci 82(1):32–54
Salmon W (1984) Scientific explanation and the causal structure of the world. Princeton University Press, Princeton
Salmon W (1989) Four decades of scientific explanation. University of Minnesota Press, Minneapolis
Schwartz J (2002) Population genetics and sociobiology. Perspect Biol Med 45(2):224–240
Strevens M (2008) Depth: an account of scientific explanation. Harvard University Press, Cambridge
Trumpler M (1997) Techniques of intervention and forms of representation of sodium-channel proteins in nerve cell membranes. J Hist Biol 30(1):55–89
Wang IN, Dykhuizen DE, Slobodkin LB (1996) The evolution of phage lysis timing. Evol Ecol 10:545–558
Waskan J, Harmon I, Horne Z, Spino J, Clevenger J (2014a) Explanatory anti-psychologism overturned by lay and scientific case classifications. Synthese 191:1013–1035
Waskan J, Harmon I, Higgins A, Spino J (2014b) Three senses of ‘Explanation’. In: Bello P, Guarini M, McShane M, Scassellati B (eds) Proceedings of the 36th annual conference of the cognitive science society. Cognitive Science Society, Austin, TX, pp 3090–3095
Waskan J, Harmon I, Higgins A, Spino J (2014c) Investigating lay and scientific norms for using ‘Explanation’. In: Lissack M, Graber A (eds) Modes of explanation: affordances for action and prediction. Palgrave Macmillan, pp 198–205
Weber M (2008) Causes without mechanisms: experimental regularities, physical laws, and neuroscientific explanation. Philos Sci 75:995–1007
Weisberg M (2013) Simulation and similarity: using models to understand the world. Oxford University Press, New York
Woods J, Rosales A (2010) Virtuous distortion in model-based science. In: Magnani L, Carnielli W, Pizzi C (eds) Model-based reasoning in science and technology: abduction, logic and computational discovery. Springer, Berlin, pp 3–30
Woodward J (2000) Explanation and invariance in the special sciences. Br J Philos Sci 51:197–254
Woodward J (2003) Making things happen: a theory of causal explanation. Oxford University Press, Oxford
Woodward J (2017) Scientific explanation. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), https://plato.stanford.edu/archives/spr2017/entries/scientific-explanation/
Zednik C (2011) The nature of dynamical explanation. Philos Sci 78(2):238–263
Acknowledgements
There are many to whom I owe a great deal of thanks for assistance with earlier drafts of this paper, including Callie Philips, Anya Plutynski, Tim Kenyon, Doreen Fraser, Nathan Haydon, Ian McDonald, Mark Povich, Carl Craver, and Peter Blouw. Special thanks in particular go to Lauren Olin, Joseph McCaffrey and Natalia Washington for in-depth discussions, feedback, and encouragement. I would also like to thank the blind referees of this paper. Their feedback was not only constructive and insightful, but essential in helping to shape the paper.
Hochstein, E. Why one model is never enough: a defense of explanatory holism. Biol Philos 32, 1105–1125 (2017). https://doi.org/10.1007/s10539-017-9595-x