Host 1 (Tim) (5:04)
Picking up on the philosophical ideas we were discussing with Andrew Lampinen from DeepMind last week: biologist Peter Corning asserted that this whole discussion rather misses the point. He said that wholes produce unique combined effects, but many of these effects may be co-determined by the context and the interactions between the whole and its environment. Now, weak emergence describes new properties arising in systems as a result of low-level interactions. These might be interactions between components of the system, or between components and their environment. Emergent properties are scale dependent, though, and can only be observed at a large enough system scale. One reason emergent behavior is hard to predict is that the number of interactions between a system's components increases exponentially with the number of components, thus allowing for many new and subtle types of behavior to emerge.

Emergence is often a product of particular patterns of interaction. Negative feedback introduces constraints that serve to fix structures or behaviors. In contrast, positive feedback promotes change, allowing local variations to grow into global patterns. On the other hand, merely having a large number of interactions is not enough by itself to guarantee emergent behavior. Many of the interactions may be negligible or irrelevant, or may cancel each other out in some cases. A large number of interactions can in fact hinder the emergence of interesting behavior by creating a lot of noise that drowns out any emerging signal. The system has to reach a combined threshold of diversity, organization, and connectivity before emergent behavior appears.

Mark Bedau said in his 1997 paper titled Weak Emergence that an innocent form of emergence, what he called weak emergence, is now commonplace in the thriving interdisciplinary nexus of scientific activity sometimes called the sciences of complexity.
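The combinatorics behind that claim can be made concrete with a quick back-of-the-envelope calculation (my own illustration, not something from the episode): pairwise interactions grow quadratically with component count, while the number of possible interaction patterns, that is, subsets of components that might interact at all, grows exponentially.

```python
from math import comb

# Pairwise interactions grow quadratically: n choose 2.
# Possible interaction *patterns* (which subsets of components
# interact at all) grow exponentially: 2^n subsets.
for n in (5, 10, 20):
    pairwise = comb(n, 2)  # n*(n-1)/2 two-way interactions
    subsets = 2 ** n       # all possible groupings of components
    print(f"{n} components: {pairwise} pairwise interactions, "
          f"{subsets} possible component subsets")
```

At 20 components the subset count already exceeds a million, which is one intuition for why emergent behavior resists prediction.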
Interestingly, he elected to put "the sciences of complexity" in air quotes for some reason. He said that this included connectionist modeling, nonlinear dynamics (now commonly known as chaos theory), and indeed artificial life. He gave two interesting hallmarks of emergent phenomena, in his opinion: one, emergent phenomena are somehow constituted by and generated from an underlying process; and two, emergent phenomena are somehow autonomous from the underlying process. So he said that emergence is a perennial philosophical puzzle, and at best the idea raises the specter of illegitimately getting something from nothing. He said that any defence of emergence should aim to explain, that is to say, explain away, the apparent illegitimate metaphysics, and indeed demonstrate emergence to be entirely compatible with materialism. He argued that emergence must be more than intellectual masturbation (putting words in his mouth here) and must actually demonstrate tangible value to the empirical sciences and be a constructive player in our understanding of the natural world. He argued that weak emergence meets these goals, but that stronger forms of emergence are entirely irrelevant. He said that the failings of strong emergence can be traced back to the idea of strong downward causation, which is the notion that things in the lower-resolution emergent domain can cause things in the high-resolution domain. Mark said that strong emergence is uncomfortably like magic: how does a supervenient but irreducibly downward causal power arise, since by definition it cannot be the result of the high-resolution domain? He said this would discomfort reasonable forms of materialism and pay homage to the idea that it's possible to get something from nothing. Mark concluded by saying that strong emergence is just a mystery which we don't need.
It's interesting to note his definition of weak emergence: macrostate P of system S with microdynamic D is weakly emergent if and only if P can be derived from D and S's external conditions, but only by simulation. So interestingly, his definition incorporates the necessity for computational irreducibility, but not the notion of whether it is effectively computable. One of the main hallmarks of weak emergence is this underivability except by finite simulation. The exponential divergence of trajectories, or indeed the so-called butterfly effect describing the sensitivity of a physical simulation to its starting parameters, is a well-known feature of chaotic systems. But Mark says that weak emergence is present in almost all complex systems, regardless of whether they produce chaotic dynamics, which leads to weak emergence being part of the definition of what it means to be a complex system.

The popular physics YouTuber Dr. Sabine Hossenfelder wrote a paper called The Case for Strong Emergence. She felt that weak emergence was too deterministic, an affront to free will, if you like. She used to think that we're all made of tiny particles which follow strict laws, and that human behavior is really just a consequence of these particles' laws. Needless to say, she's since changed her mind, and she thinks that you should as well. She led by saying reductionism works: large things are made of smaller things, and if you know what the smaller things do, you know what the larger things do. Physicists call this idea reductionism. Now, you might not like it, but it works pretty well. Arguably, reductionism allowed us to understand molecular bonds and chemical elements, atomic fission and fusion, the behavior of an atom's constituents and the constituents of those constituents, and who knows what the physicists will come up with next, she said. She admits that the best explanation for the world around us right now is almost certainly incomplete.
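Bedau's "derivable, but only by simulation" can be made concrete with a minimal sketch using Wolfram's elementary Rule 30 (my choice of example, not Bedau's): the macrostate after n steps is obtained by actually iterating the microdynamic n times, since no closed-form shortcut is known.

```python
def rule30_step(cells):
    """One step of elementary cellular automaton Rule 30.

    Each new cell depends on its left, centre, and right neighbours;
    the number 30 = 0b00011110 encodes the output for each of the
    eight possible three-cell neighbourhoods.
    """
    n = len(cells)
    return [
        (30 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Microdynamic D: rule30_step.  External condition: a single live cell.
state = [0] * 31
state[15] = 1

# The macrostate after 10 steps: no known closed form, so we simulate.
for _ in range(10):
    state = rule30_step(state)
print("live cells after 10 steps:", sum(state))
```

The irregular triangle Rule 30 produces is a textbook weakly emergent macrostate: fully determined by the rule and the initial condition, yet reached only by running the simulation.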
Sabine decided to discuss the concept of emergence in respect to physical theories and how fundamental they are. She said that a physical theory is a set of mathematically consistent axioms combined with an identification of some of the theory's mathematical structures with observables. If two physical theories give the same predictions for all possible observables, then they are physically equivalent. She displayed a figure depicting a directed graph of physical theories, where an edge between two theories meant that one was more fundamental than the other. She said that a physical theory A is more fundamental than B if B can be derived from A, but not the other way around. In this case, the theory B is weakly emergent from A. A physical theory is fundamental if it is, to the best current knowledge, not emergent from any other theory. So this is quite interesting: weakly emergent is the opposite of more fundamental. The idea is that the theory at low resolution is always weakly emergent; it can be derived, at least in principle, from the theory at high resolution.

Sabine also discussed the causal exclusion argument, which, roughly speaking, says that if a low-resolution effect can be derived from a theory at high resolution, then the effect cannot have another cause. The causal exclusion argument combined with effective field theory is the main reason why physicists believe that reductionism is correct and, in a sense, why strong emergence is not a thing. She also spoke about top-down causation, which is the idea that the laws of a system at low resolution can dictate the laws at high resolution. A good example of this is the mental states in our brain causing our bodies to perform physical actions. So it's important not to think of the emergent layers as being independent, or to assume that they could or should be modeled in isolation. Interestingly, though, in Sabine's article she denied that top-down causation even exists at all.
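Her ordering of theories can be read as a reachability relation on a directed graph. A small sketch, with hypothetical theory names and edges that are purely illustrative rather than taken from her figure:

```python
# Edge A -> B means "B can be derived from A", i.e. A is more
# fundamental and B is weakly emergent from A.  The names and edges
# below are illustrative assumptions, not Hossenfelder's actual graph.
derives = {
    "quantum field theory": ["quantum mechanics", "statistical mechanics"],
    "quantum mechanics": ["chemistry"],
    "statistical mechanics": ["thermodynamics"],
    "chemistry": [],
    "thermodynamics": [],
}

def weakly_emergent_from(b, a):
    """True if theory b is derivable (directly or transitively) from a."""
    stack = list(derives.get(a, []))
    seen = set()
    while stack:
        t = stack.pop()
        if t == b:
            return True
        if t not in seen:
            seen.add(t)
            stack.extend(derives.get(t, []))
    return False

def fundamental(t):
    """Fundamental = not weakly emergent from any other theory in the graph."""
    return not any(weakly_emergent_from(t, other) for other in derives if other != t)
```

On this toy graph, thermodynamics is weakly emergent from quantum field theory but not vice versa, matching her asymmetric definition of "more fundamental".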
In her conclusion, Sabine did a 180, and she decided that in fact there are many examples where there isn't a clear effective computational or functional path between physical theories. She gave a hypothetical example of a function which cannot be computed for negative values of x from a Taylor series expansion around zero. And she said that if there are any points where the coupling can't be continued between resolutions, you'll need new initial values which would need to be determined by measurement, and therefore strong emergence is viable. She said it's only fair on philosophers who believe that strong emergence exists that physicists first show the coupling constants of a quantum field theory can always be continued to low energies for physically realistic systems.

So what is emergence? Emergence is just the interpretation of a phenomenon from the perspective of a different scale, at least according to Professor David Chalmers. He wrote a paper called Strong and Weak Emergence, where he lamented the abuse of the term strong emergence by complex systems scientists and cognitive scientists. Echoing Mark Bedau before him, Chalmers says that it is strong emergence which is most common in the philosophical parlance of emergence, in particular as used by the British emergentists of the 1920s. He thought that we could say a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible, even in principle, from truths in the low-level domain. Now, I think deducible is a bit of a weasel word, but we'll talk more about that in a minute. He says that weak emergence does not yield the same sort of radical metaphysical expansion in our conception of the world as strong emergence, but it's no less interesting: you can think of weak emergence in terms of the ease of understanding of one level in terms of another level. Emergent properties are usually properties which are more easily understood in their own right than in terms of properties at a lower level, indicating that weak emergence appears to be an observer-relative property. Now, how interesting is this high-level phenomenon to an observer, and how difficult is it to deduce this phenomenon from the lower level? That is emergence. So Chalmers takes emergence in the general sense to mean a surprising, interesting, and indeed unexpected phenomenon, and he uses the strong versus weak designation to delineate a radical, paradigmatic surprise. He says that the emergence of high-level patterns in cellular automata, a paradigm of emergence in recent complex systems theory, provides a clear example. If one is given only the basic rules governing a cellular automaton, then the formation of complex high-level patterns, such as gliders, may well be unexpected. Therefore the patterns are weakly emergent, but the formation of those patterns is straightforwardly deducible from the rules and the initial conditions. He concedes that this might take a fair amount of computation, which he indicates as a reason why the emergent behavior wasn't obvious to start with. And I assume by the word obvious he kind of means it as an antonym of unexpected. Cellular automata are provably computationally irreducible, meaning that there are no analytical shortcuts to perform the effective calculation without resorting to running the sequential simulation in its entirety. Since the computational domain is exponentially large in the case of discrete cellular automata, and infinitely large in the case of continuous cellular automata, if you were trying to find the initial conditions and rules for a given behavior, or even if you had to recompute the simulation, we would argue that this constitutes at least a semi-strong designation of emergence, because of the effective computability. Right?
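Chalmers's glider example is easy to reproduce. A minimal sketch of Conway's Game of Life (my own code, not from his paper): nothing in the update rule mentions gliders, yet running the simulation shows the familiar pattern translating one cell diagonally every four generations.

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life over a set of live cells.

    B3/S23: a dead cell with exactly 3 live neighbours is born; a live
    cell with 2 or 3 live neighbours survives.
    """
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# Nothing in the rules above mentions "glider", yet this five-cell
# pattern moves itself one cell down-right every four generations.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})  # prints True
```

The glider's drift is straightforwardly deducible from the rules plus initial conditions, but only by stepping the system forward, which is exactly the weak-emergence pattern Chalmers describes.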
The effective computability must come into it. Professor Chalmers says that strong emergence has much more radical consequences than weak emergence. If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of the natural world would need to be revolutionized to accommodate them with new fundamental theories. Now, I find this a little bit strange. I mean, given that Class 4 cellular automata are Turing complete, which is to say that they can represent any computer program, it seems like a contentious point to claim that there's no possible output of a cellular automaton which would be paradigmatically surprising. Maybe I'm wrong. To be clear, Chalmers is a materialist, right? He's not subscribing to any kooky views by saying this. He's a computationalist in the sense that he agrees that if you replicated him atom by atom in the natural world according to our universe, then the replica would have consciousness.