Emergence and the Philosophy of Science


In the philosophy of science, emergence first took shape as a response to so-called vitalist theories of biology. The traditional vitalist view was that living things possessed some kind of animating spirit or force that gave them purpose and made them distinct from inanimate matter. This idea became unfashionable in light of an increasing desire, inspired by 19th and early 20th century science, to find an explanation for all things that was rooted in the natural world and expressed in purely mechanical terms. Emergence was then a way of categorising what is different about living beings: rather than possessing some animating spirit they are ‘just’ extremely complex machines, so complex, in fact, that for thinkers such as C.D. Broad and Bertrand Russell the whole organism cannot in principle be understood merely by looking at the components that make it up. Hence living entities are ‘emergent’ from the entities that comprise them. Since then the idea of emergence has been extended beyond living organisms to include other natural phenomena. 

However, the early enthusiasm for this notion of emergence was beset by reductionist criticisms. The challenge for emergentists is to spell out in what way an emergent entity is distinct from the entities that comprise it and the laws that those comprising entities obey. One particular blow for emergentism came from investigations into complex non-linear systems. It was shown that very complex behaviour could be generated from a simple set of initial conditions and rules. So the presence of a complex and difficult-to-predict end result did not mean that a system was not entirely specified by more fundamental simple rules: the complexity in such cases is merely resultant, not emergent.
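The point can be sketched numerically. The following toy example, the logistic map (the code and function name are ours, added for illustration and not drawn from the sources discussed here), shows a one-line rule generating behaviour that is chaotic and practically unpredictable, yet entirely fixed by the rule and its initial condition:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). A single quadratic rule,
# iterated, produces behaviour that is complex and practically
# unpredictable, yet entirely specified by the rule and the initial
# condition: 'resultant' complexity, not emergence.

def logistic_trajectory(r, x0, n):
    """Iterate the logistic map n times, returning the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# For r = 2 the same rule is tame: every start converges to 0.5.
print(logistic_trajectory(2.0, 0.2, 50)[-1])

# For r = 4 it is chaotic: two trajectories differing by 1e-10 at the
# start soon disagree completely (sensitive dependence on initial
# conditions), though each remains fully determined by the simple rule.
a = logistic_trajectory(4.0, 0.2, 50)
b = logistic_trajectory(4.0, 0.2 + 1e-10, 50)
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))
```

Nothing here is emergent in any strong sense: given the rule and the starting value, every later value follows exactly.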

Recently, though, the challenge from reductionism has been met by a new breed of emergentists, inspired by many examples from condensed matter physics. They argue that a complex system may be emergent if it transcends its parts, in the sense that it has properties that cannot fully be explained by reference to the properties of those parts, considered individually or in combination. In such cases, as Philip Anderson has famously put it, ‘[t]he behaviour of large and complex aggregates of elementary particles … is not to be understood in terms of a simple extrapolation of the properties of a few particles’ (1972, 393), or in an even more familiar phrase, ‘the whole is … more than the sum of its parts’ (see for instance Davies 2006). The challenge is to spell out exactly how this transcending of parts comes about and what it means.

Emergence, Dependence and Novelty

Emergence can be thought of as a complex relationship between two kinds of properties, a relationship combining two simpler relations: dependence and novelty.

The first of these relations is dependence: roughly, that A arises from, or is grounded in, B. At the very least, A couldn’t exist without B. This might be thought of in terms of causation: if B is the necessary cause of A, then A couldn’t have happened unless B caused it to happen.

Consider, for example, a model house built from toy building blocks. The model is dependent on the building blocks: it couldn’t exist without them. But the model has novel qualities the bricks alone do not: it resembles a house, it can contain small objects, and its walls have a certain rigidity. How strongly distinct are these qualities from those of the individual blocks?

Dependence comes in different varieties, depending on how the word ‘couldn’t’ is interpreted. As well as causal dependence, there is ‘ontological’ dependence (ontology concerns what exists). This is a tighter connection between A and B than a causal connection. Some philosophers argue that to be gold (the stuff that is kept in ingots in Fort Knox) just is to be composed of atoms that have a nuclear charge of 79: that’s what makes something gold. This essentialist claim entails that gold couldn’t exist unless atoms with a nuclear charge of 79 existed: a world without atoms with a nuclear charge of 79 (or indeed a world without atoms) would necessarily be a world without gold.

The second relation is novelty. To say that A is causally maintained by B does not entail that A and B are the same property. Nor does saying that having B is what makes something A. Even where these dependencies hold there is room for A and B to be distinct. That is, for A to be something in addition to B, to be novel in some way with respect to B. That, really, is the central emergentist thought.

Novelty comes in a spectrum of strengths, and these account for the corresponding strengths of emergence. At the weak end of the spectrum, A and B might only seem different, perhaps because we are acquainted with them via different routes, or conceptualise them differently. 

In the case of weak novelty we might have good reason for thinking that B in fact maintains A, but lack the understanding to see how it does. In these cases we might be unable to reduce all high-level systems to fundamental physics, but this would be due to pragmatic limitations on how ‘smart’ we are, not to the addition at the higher level of something extra over and above the fundamental laws.

At the more substantial end of novelty comes the idea that instances of A, although dependent on, or arising from, instances of B, confer distinct causal powers. It is controversial whether this last idea, which is sometimes called “strong” emergence, applies to any real system, because it conflicts with a presupposition shared by many philosophers and physicists: that the only real causal powers are those conferred by fundamental physical properties.

Strong Emergence and the Unity of Science

Are any real properties or systems strongly emergent? Whether in the mental or in the (broadly) physical realm, that issue has always been associated closely with the question of the unity of science: how many different kinds of thing does science study? Does nature express just a few basic laws, which directly govern the most fundamental parts of nature, and through them, the more complex systems they compose? Or does nature display a complex and disunified patchwork of laws (Cartwright 1999)? The classical conception of the unity of science was provided by Oppenheim and Putnam (1958), who made explicit a widespread view of the sciences as hierarchically structured, and predicted their progressive explanatory unification, with the entities of sciences higher up being shown to be complexes constructed out of the entities of science lower down. 

The hierarchy of sciences (image: xkcd).

This hierarchy can be thought of as a ladder, and reduction as the ability to move between rungs. At the bottom level we have fundamental physics: theories such as General Relativity and Quantum Field Theory. Above these we have other areas of physics, such as condensed matter physics, and then, moving upwards, chemistry, biology and the social sciences. One way of formulating the emergentist idea is that as we move from one level to another we discover novelty, either in the objects of study at the new level or in the laws and theories themselves. The opposite of emergence is reductionism: the view that everything we want to say at one level can be said purely in terms of the lower level. Sometimes scientists look for ways of reducing one theory to another, such as reducing thermodynamics to statistical mechanics, or Newtonian gravity to General Relativity. In such cases there is, it is argued, nothing added by the reduced theory over and above the reducing theory.

This reduction can come in different forms. We can reduce all entities referred to at one level to a combination of entities at a lower level: so a person is made of cells, and cells are made of atoms, and atoms are made of quarks etc. Or the reduction can be in terms of what a theory can tell us about the world: so for instance we might say all that thermodynamics explains can also be explained in terms of statistical mechanics. In these cases we can say we have unified statistical mechanics and thermodynamics.

Nagel (1979) provided a formal model of this explanatory unification: the laws of the reducing theory deductively entail the laws of the reduced theory. Nagel’s scheme couldn’t be made to work in practice due to the sheer difficulty, in many cases, of deducing higher-level laws from lower ones. Moreover, for many, Nagel’s scheme is conceptually wrong, since explanation is not merely deduction, and what theory reduction requires is explanatory reduction. Even those examples that supposedly fit Nagel’s scheme are disputed. For example, the classic case of reducing temperature to mean molecular kinetic energy has always faced objections (see Sklar 1993 for a survey of the issue, and Needham 2009 for more recent criticisms of this case as an example of reduction). Yet temperamental reductionists have always been untroubled by these objections, and temperature is still widely claimed to be reducible to mean kinetic energy (see for instance Loewer 2001 and Papineau 2010). One common response to the failure of classical Nagelian reductionism is that, despite the inapplicability of Nagel’s derivational model, classical thermodynamics, chemistry, life and the mind are all ‘reducible in principle’ to fundamental physics: the Nagelian derivations are merely blocked by the sheer complexity of special-science systems, the mathematical intractability of the equations that describe them, or conceptual mismatch between physics and the special sciences. On this view it was the letter of Nagel’s utopian model that was at fault, rather than the spirit of reductionism more generally.

However, a significant (and growing) minority of emergentists, substance pluralists and sceptics reject this reductionist view. If Nagelian reductions are not to be expected, what exactly do the reductionist and the emergentist disagree about? What positive reason can the reductionist give for thinking that right is on their side of the debate? ‘Reducibility in principle’ is far too vague a notion for proper philosophical debate (Crane and Mellor 1990). The reductionist response is to argue that there are good general reasons for believing that in-principle reductions are possible; one of these is the assumption that physics is causally complete.

Causal Closure and the Completeness of Physics

The basic reductionist intuition is that worldly events and processes are governed by just a few basic properties, linked through just a few basic laws. These laws determine the passage of events, to the extent that they are determined.  (Or if the standard interpretations of quantum mechanics are correct, and we live in an indeterministic world, we can say instead that only the chances of events are determined.) 

Given this kind of presupposition, emergent properties could not, as strong emergence requires, confer causal powers over and above those conferred by their bases. This notion has been formulated in two slightly different ways: as the principle of the causal closure of the physical, and as the completeness of physics. These are sometimes used interchangeably, but there are potentially important differences. One formulation mentions causation, the other does not. One involves the specific scientific discipline of physics, the other a class of properties, entities or events, which may or may not be picked out by reference to the discipline of physics. For the purposes of this overview it will be convenient to use a single abbreviation (CCP) to name this kind of principle, but the plausibility of the principle must depend on just how it is formulated, so this is an issue the Durham Emergence Project will address.

CCP states either:

(1) that every event has a cause which can be formulated in terms of the physical; or

(2) that physics is explanatorily complete: whatever needs explaining can in principle be explained using physics (even if in practice we cannot, because it would be too time-consuming or too difficult).

The two often go hand in hand: if all causes are physical, and physics is the subject that tells us about fundamental causation, then all causes are articulable in terms of physics.

Just about every aspect of CCP is controversial. Some philosophers think it cannot be coherently formulated. Others think that, even if it can be coherently formulated, it is plausible only given an outmoded view of the metaphysics of causation. Some think it is so obviously true that it hardly needs arguing for: in fact, the burden of evidence is on those who reject it to prove it false. Others think it is a strong, and most probably false, claim. Some think it is supported by modern physics, and by its explanatory power with respect to the discoveries of ‘special’ sciences like chemistry and biology. Others think it is violated by modern physical theories like quantum mechanics, or that, although physical theories like quantum mechanics have deepened the explanations provided by special sciences like chemistry and biology, the detail of these explanations provides no evidence for CCP.

Evidence and Emergence

Assessing the scientific evidence for emergence presents many difficulties. One complication is the distinction, which is not universally appreciated, between (i) criteria for emergence which apply to the theoretical description of a phenomenon (or of the natural systems that display it); and (ii) criteria for emergence that apply directly to the phenomenon itself (or to the natural systems that display it). The problem is that the former are expressed in the mathematical language of theoretical physics, while the latter must be expressed in language abstract enough to have the generality required to cover all the candidate cases. Bridging that gap is a central task for the Durham Emergence Project, and involves integrated work across the philosophy of mind, philosophy of science and physics. The work is of two kinds: (a) critically examining philosophers’ criteria for emergence in order to see how far they can be made clearly applicable to real physical systems and to their theoretical descriptions; (b) considering the significance to the broader debate on emergence of specific mathematical structures arising in theories of physics, which appear on the face of it to be candidates for strong emergence, if it exists, within mathematical theories of the world (thereby addressing (i) above). 

There are reasons to hope that some form of emergence may be made precise in the mathematical sense that applies to models of real systems. By looking into the mathematical structure of functions, for example, we can identify phenomena that map onto the structure of emergent causality: the local structure of an analytic function determines its structure everywhere on the complex plane, but only as far as the nearest non-analytic singularity. Elsewhere in the plane, other structures must emerge. In both quantum field theory and statistical physics, the renormalisation group (e.g. Wilson 1975) creates singularities in the mathematical structure of a theory that correspond both to (at least weakly) emergent phenomena (such as second-order phase transitions) and to their non-determination by laws and properties at smaller scales (such as the irrelevance of “bare” coupling constants). 
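The claim about analytic functions can be illustrated with a minimal numerical sketch (an illustration added for this overview, using f(z) = 1/(1 - z), whose only singularity is at z = 1): the Taylor coefficients at the origin determine f everywhere inside the unit disc, but past the singularity that local data determines nothing.

```python
# The Taylor series of f(z) = 1/(1 - z) about z = 0 is 1 + z + z^2 + ...
# The purely local data (derivatives at 0) determine f everywhere in the
# disc |z| < 1, but the singularity at z = 1 bounds that determination:
# beyond it the series diverges, even though f itself is still defined.

def taylor_partial_sum(z, n_terms):
    """Partial sum of the geometric series for 1/(1 - z) expanded at 0."""
    return sum(z ** n for n in range(n_terms))

def f(z):
    return 1.0 / (1.0 - z)

# Inside the radius of convergence the local data suffices:
print(abs(taylor_partial_sum(0.5, 60) - f(0.5)))   # negligibly small
# Past the singularity the same local data determines nothing:
print(taylor_partial_sum(1.5, 60))                  # diverges: huge
```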

Similarly, “top-down” structures in mathematical physics are candidates for generating counterexamples to CCP, with or without renormalisation. Some cases are deceptively simple: the use of the macroscopic temperature to write down the statistical partition function of an ensemble of microscopic spins is an example. Temperature can only be defined at the level of the ensemble, yet it determines the likelihood of any configuration of a single spin variable. Doing this mathematically sets up a “causal loop” by which bottom-up physics (e.g. local coupling) is completed by top-down physics (e.g. coupling to a heat bath composed of the aggregate of the spins themselves); out of such a loop emerges structure which is demonstrably not implied by the smaller-scale physics on its own. 
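The spin example can be made concrete with a small sketch (a toy model of a single free spin in a field h, with units chosen so that the Boltzmann constant is 1; the names and parameters are ours, added for illustration): the temperature T is defined only for the ensemble or heat bath, yet it fixes the Boltzmann probability of each configuration of one spin.

```python
import math

# One spin with two states: 'up' (energy -h) and 'down' (energy +h).
# Its configuration probabilities come from Boltzmann weights,
# normalised by the two-state partition function. The temperature T
# appearing here is a property of the whole ensemble / heat bath,
# not of the single spin whose behaviour it nonetheless determines.

def spin_up_probability(h, T):
    """P(up) for a single spin coupled to a heat bath at temperature T."""
    w_up = math.exp(h / T)      # Boltzmann weight of the up state
    w_down = math.exp(-h / T)   # Boltzmann weight of the down state
    return w_up / (w_up + w_down)   # partition function in the denominator

# High T (an ensemble-level fact) washes out the field:
print(spin_up_probability(1.0, 100.0))   # close to 0.5
# Low T pins the single spin almost surely up:
print(spin_up_probability(1.0, 0.1))     # close to 1.0
```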

The question of whether there are cases of strong emergence presupposes that there can be such cases. Hence any investigation of strong emergence requires a thorough examination of CCP: its proper formulation, its consequences, what evidence there might be for (or against) it, and what evidence there actually is for (or against) it. It also requires a critical examination of the different criteria for emergence, and of attempts to formulate them in the mathematical language of physics. Lastly, such an investigation requires a critical examination of candidate cases of emergence: those provided by mental causation, by the application of physical theories to the discoveries of the special sciences, and by condensed matter physics, which concerns the new kinds of behaviour of matter that arise at various levels of complexity, and so deals very directly with emergence. 

Few philosophers of mind or science have thought it necessary to make explicit just where in science the evidence for physicalism should be sought, and even fewer have made a detailed case that the required evidence can actually be found there. Notable exceptions to this reticence are Brian McLaughlin (1992), Barry Loewer (2001), David Papineau (2002) and Andrew Melnyk (2004). In a generally sympathetic (and widely cited) discussion of ‘British emergentists’ like Samuel Alexander and C.D. Broad, McLaughlin argues that their position, although coherent, lacks scientific support. If British emergentism were true, then there would exist real systems moved by so-called ‘configurational’ forces, but, as he puts it, there is ‘not a scintilla of evidence’ that such forces exist. It is a common assumption of physicalist arguments for CCP that the burden of proof lies with the emergentist (Loewer and Melnyk argue in similar ways). On the other hand, Nancy Cartwright (1983), like other members of the ‘Stanford Group’ (such as Ian Hacking, John Dupré, Peter Galison and Patrick Suppes), has long argued that the evidential burden should be on the other side: the conclusion that some theory is applicable to some phenomenon requires a detailed predictive model of that phenomenon within that theory, rather than an argument-in-principle that covers a whole domain of phenomena yet is based on generalisation from just a few cases which may be unrepresentatively simple. Thus Cartwright would urge caution in moving to the conclusion that quantum mechanics alone is sufficient to explain the whole of chemistry, if this is based on just an elegant treatment of the hydrogen atom (a special case on account of its symmetry), or the hydrogen molecule (also a special case, like other diatomic molecules). 

If the debate on emergence is to move on, then the arguments for CCP must be subjected to critical examination, with one of the following results: (i) the arguments are found to be broadly correct; (ii) they are improved; (iii) the content of CCP is curtailed to match the (lack of) evidence for it; (iv) commitment to CCP is shown to be unjustified.

The philosophy of mind and philosophy of science sections of this project will both address the question of how CCP is best formulated. A central question for the philosophy of science is whether, under its best expression, the principle can plausibly be regarded as true, in the light of explanatory relationships between theories in two different areas: chemistry, and condensed matter physics. One issue will be whether the unique causal or explanatory role accorded to physical properties is best understood as involving CCP, or the logically weaker conjunction of two related theses: the ubiquity of physical properties (what is known in the philosophy of mind as token physicalism), and the strictness of physical laws governing those properties (see Hendry 2010). The cases from chemistry and condensed matter physics are uniquely salient in two ways. Firstly, detailed studies of reduction, emergence and intertheory relations have recently become available for these cases (see for instance Sklar 1993, Batterman 2002, Hendry 2006, 2010, forthcoming). Secondly, theories in these domains are expressed mathematically in ways that bear clear relationships to fundamental physical theories like quantum mechanics. (The contrast with supposed identities like ‘pain = c-fibres firing’ could not be starker.) Surely in these domains, if anywhere, the idea of emergence can be given clear expression and its existence tested. Surely here, if anywhere, the onus is on the reductionist side to provide recognizably reductive explanations, or to explain why, if they cannot be provided, reductionism is still tenable.

Another issue which requires examination by the Durham Emergence Project is how far recent literature in the philosophy of science on mechanisms can illuminate the issue of emergence in the relationship between fundamental physics on one hand and chemistry and condensed matter physics on the other. The literature on mechanisms, mostly focused within the philosophy of biology, argues that new features, and new principles or law-like relations among those features, arise from specific arrangements of objects with properties at a ‘lower’ level. William Bechtel, for instance, is one of the original advocates of the importance of mechanisms in biology. Bechtel’s recent work (2008, forthcoming) can be seen to support the claim that specific relationships, and especially feedback interactions, between parts of a biological mechanism such as neurons produce new kinds of behaviour that, although not inconsistent with the principles governing the lower-level features, are nevertheless not supervenient on any facts about the parts that involve only the features and relations figuring in lower-level principles. This dovetails neatly with recent work on powers in the philosophy of science, and with the developments described above in the philosophy of mind and metaphysics, which see the lower-level principles as descriptions of powers, and stress that what happens when systems with different powers interact is not a simple consequence of the lower-level principles, but depends crucially on particular configurations.


Anderson, P.W. 1972 ‘More is different’ Science 177, 393-396.

Anderson, P.W. 1984 Basic Notions of Condensed Matter Physics (Boulder, CO: Westview Press)

Batterman, Robert 2002 The Devil in the Details (New York: Oxford University Press)

Bechtel, William 2008 Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience (London: Routledge)

Bechtel, William (forthcoming) ‘Understanding biological mechanisms: using illustrations from circadian rhythm research’ in Kampourakis, K. (ed.) Philosophical Issues in Biology Education.

Broad, C.D. 1925 The Mind and Its Place in Nature (London: Routledge & Kegan Paul)

Cartwright, Nancy 1983 How the Laws of Physics Lie (Oxford: Clarendon Press)

Cartwright, Nancy 1999 The Dappled World: A Study in Complexity (Cambridge: Cambridge University Press)

Crane, Tim and D.H. Mellor, 1990 ‘There is no question of physicalism’ Mind 99 185-206.

Davies, Paul 2006 Preface to Philip Clayton and Paul Davies (eds.) The Re-Emergence of Emergence (Oxford: Oxford University Press), ix-xiv.

De Gennes, Pierre-Gilles 1980 Scaling Concepts in Polymer Physics (Ithaca, NY: Cornell University Press)

Oppenheim, Paul and H. Putnam 1958 ‘Unity of science as a working hypothesis’ in H. Feigl, M. Scriven and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science Volume II (Minneapolis: University of Minnesota Press), 3-36.

Hendry, Robin Findlay 2006 ‘Is there downward causation in chemistry?’ In Davis Baird, Lee McIntyre and Eric Scerri (eds) Philosophy of Chemistry: Synthesis of a New Discipline. (Dordrecht: Springer)

Hendry, Robin Findlay 2010 ‘Ontological reduction and molecular structure’ Studies in History and Philosophy of Modern Physics 41, 183–191.

Hendry, Robin Findlay (forthcoming) The Metaphysics of Chemistry (Oxford: Oxford University Press)

Humphreys, Paul 1997 ‘Emergence, not supervenience’ Philosophy of Science 64, S337-S345.

Kim, Jaegwon 1998 Mind in a Physical World (Cambridge, MA: MIT Press)

Kim, Jaegwon 2006 ‘Being realistic about emergence’ in Philip Clayton and Paul Davies (eds.) The Re-Emergence of Emergence (Oxford: Oxford University Press)

Koons, R.C. and G. Bealer (eds) 2010 The Waning of Materialism (Oxford: Oxford University Press)

Laughlin, R.B. 1999 ‘Nobel lecture: fractional quantization’ Reviews of Modern Physics 71, 863-874.

Loewer, Barry 2001 ‘From physics to physicalism’ in C. Gillett and B. Loewer (eds.) Physicalism and its Discontents (Cambridge: Cambridge University Press)

Lowe, E.J. 1996 Subjects of Experience (Cambridge: Cambridge University Press)

Lowe, E.J. 2008 Personal Agency: The Metaphysics of Mind and Action (Oxford: Oxford University Press)

McLaughlin, Brian 1992 ‘The rise and fall of British emergentism’ in A. Beckermann, H. Flohr, and J. Kim (eds.) Emergence or Reduction? (Berlin: de Gruyter)

Melnyk, Andrew 2004 A Physicalist Manifesto (Cambridge: Cambridge University Press)

Nagel, Ernest 1979 The Structure of Science Second Edition (Indianapolis: Hackett)

Needham, Paul 2009 ‘Reduction and emergence: a critique of Kim’ Philosophical Studies 146, 93-116.

O'Connor, Timothy 2000 Persons and Causes (Oxford: Oxford University Press)

Papineau, David 2002 Thinking about Consciousness (Oxford: Clarendon Press)

Papineau, David 2010 ‘Can any sciences be special?’ in Cynthia Macdonald and Graham Macdonald (eds.) Emergence in Mind (Oxford: Oxford University Press)

Sklar, Lawrence 1993 Physics and Chance: Philosophical Issues in the Foundations of Statistical Mechanics (Cambridge: Cambridge University Press)

Wilson, Kenneth 1975 ‘The renormalization group: critical phenomena and the Kondo problem’ Reviews of Modern Physics 47, 773-840.