The following is the first complete draft of this e-book, aimed at scientifically-minded people including young scientists. The contents are as follows:
Prelude: the unconscious brain
Chapter 1: What is lucidity? What is understanding?
Chapter 2: Mindsets, evolution, and language
Chapter 3: Acausality illusions, and the way perception works
Chapter 4: What is science?
Chapter 5: Music, mathematics, and the Platonic
Postlude: the amplifier metaphor for climate
With encouragement from a potential publisher, I'm keeping this draft in the public domain as I enter my twilight years because, plainly, the issues at stake are becoming more urgent than ever. Comments welcome!
Not least amongst the problems are our communication difficulties and the forces working against good science, of which young scientists need to be aware. What's amazing, though -- and to me inspirational -- is that good science continues to make progress despite all these problems. Contributing to good science while understanding both its power and its limitations, and its deep connections to other great human endeavours, seems to me one of the most worthwhile things anyone can do.
The e-book builds on ideas from the three Lucidity and Science articles published in Interdisciplinary Science Reviews 22, 199-216 and 285-303 (1997) and 23, 29-70 (1998) together with my keynote lecture to the 4th Kobe Symposium on Human Development published in Bull. Faculty Human Devel. (Kobe University, Japan), 7(3), 1-52 (2000). Corrected and updated copies of the Interdisciplinary Science Reviews articles can be downloaded via this index. See also the CORRIGENDUM, a slightly corrupted version of which was published in the December 1998 issue of Interdisciplinary Science Reviews.
Consider, if you will, the following questions.
Good answers are important to our hopes of a civilized future; and many of the answers are surprisingly simple. But a quest to find them soon encounters a conceptual and linguistic minefield -- witness, for example, the battering that ideas like `innateness' and `instinct' have taken for so long (e.g. Bateson and Martin 1999), and the way the selfish-gene metaphor, while still useful in its place, has persisted as a kind of Answer to Everything in evolutionary biology. But I think I can put us within reach of some good answers (with a small `a') by recalling, first, a few points about how our human and pre-human ancestors must have evolved according to today's best evidence -- in a way that differs, in crucial respects, from what popular culture, and popular books on evolution, tell us -- and, second, a few points about how we perceive and understand the world, especially points you can check for yourself with no special equipment.
Our mental apparatus for what we call perception and understanding, or cognition, and our language ability, must have been shaped by the way our ancestors survived over many millions of years. And the way they survived, though clearly a very `instinctive' business indeed, was not, the best evidence says, `all in the genes' as some people say. Nor was it all down to culture. Rather, genes and culture must have evolved together, as crucial parts of a multi-timescale process. In such a process -- of which countless other examples are well known and well understood, in many branches of science -- there is a strong, two-way interaction between very slow and very fast mechanisms. In this case they are genomic evolution and cultural evolution. Surprisingly, as I'll discuss in chapter 2, such interactions have often been neglected in the literature on evolutionary biology. They are, however, increasingly recognized today.
The way perception works -- and language -- will be a major theme in this book, including insights from the way music works.
It seems to me that one of the points missed in the sometimes narrow-minded debates about `nature versus nurture', `instinct versus learning', `genes versus culture', `selfishness versus altruism' and so on is that most of what's involved in perception and understanding, and in our general functioning, takes place well beyond the reach of conscious thought. Some people find this hard to accept. Perhaps they feel offended, in a personal way, to be told that the slightest aspect of their existence might, just possibly, not be under full and rigorous conscious control. A scientist I know took offence in exactly that way, in a recent discussion of unconscious assumptions in science, even though exposing such assumptions is the usual way in which scientific knowledge improves.
It's easy to show that plenty of things in our brains take place involuntarily, that is, entirely beyond the reach of conscious thought and conscious control. Kahneman (2011) gives many examples. My own favourite example is a very simple one, Gunnar Johansson's `walking lights' animation. Twelve moving dots in a two-dimensional plane are unconsciously assumed to represent a particular three-dimensional motion. When the dots are moving, everyone with normal vision sees -- has no choice but to see -- a person walking:
Figure 1: Gunnar Johansson's `walking lights' animation. The printed version of this book will show a QR code to display the animation on a smartphone, and will provide it as a page-flick movie. It can also be found by websearching for "Gunnar Johansson's walking lights". The walking-lights phenomenon is a well studied classic in experimental psychology and is one of the most robust perceptual phenomena known.
And again, anyone who has driven cars or flown aircraft will probably remember experiences in which accidents were narrowly avoided, ahead of conscious thought. The experience is often described as witnessing oneself take evasive action -- when faced, for instance, with a head-on collision -- with everything over by the time conscious thinking has begun. It has happened to me. I think such experiences are quite common.
Many years ago, the anthropologist-philosopher Gregory Bateson put the essential point rather well, in classic evolutionary cost-benefit terms:
No organism can afford to be conscious of matters with which it could deal at unconscious levels.
Gregory Bateson's point applies to us as well as to other living organisms. Why? There's a mathematical reason, combinatorial largeness. Every living organism has to deal all the time with a combinatorial tree, a combinatorially large number, of present and future possibilities. Being conscious of all those possibilities would be almost infinitely costly.
Combinatorially large means exponentially large, like compound interest over millennia, or the number of ways to shuffle a pack of cards. Each branching of possibilities multiplies, rather than adds to, the number of possibilities. Such numbers are unimaginably large. No-one can feel their magnitudes intuitively. For instance the number of ways to shuffle a pack of 53 cards is just over 4 x 10^69, or four thousand million trillion trillion trillion trillion trillion.
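For the numerically curious, the figure is easy to check. A minimal sketch in Python (my choice of language, purely for illustration):

```python
from math import factorial

# Each of the 53 positions in the pack multiplies the count of possible
# orderings: 53 x 52 x 51 x ... x 1, i.e. 53 factorial.
orderings = factorial(53)

print(f"{orderings:.3e}")  # about 4.275e+69
```

Adding one more card to the pack multiplies the count by 54; that multiplicative branching is exactly what makes such numbers outrun intuition.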
The `instinctive' avoidance of head-on collision in a car -- the action taken ahead of conscious thought -- was not, of course, something that came exclusively from genetic memory. Learning was involved as well. The same goes for the way we see the `walking lights' animation. But much of that learning was itself unconscious, stretching back to the (instinctive) infantile groping that discovers the world and helps the visual system to develop. At a biologically fundamental level, nurture is intimately part of nature. That intimacy stretches even further back, to the genome within the embryo `discovering', and then interacting with, its maternal environment both within and outside the embryo (Noble 2006, chapter 4). Jurassic Park is a great story, but scientifically wrong because you need complete dinosaur eggs as well as dinosaur DNA. (Who knows, though -- since birds are today's dinosaurs, someone might manage it with birds' eggs.)
Normal vision, by the way, is known not to develop in people who start life with a congenital cataract or opaque cornea. That fact has surprised many who've supposed that surgical removal of the opaque element in later life would `make the blind to see'. A discussion of typical case histories can be found in Sacks (1995).
As must be obvious by now, my approach to the foregoing questions will be that of a scientist. Scientific thinking is my profession. Although many branches of science interest me, my professional work has mainly been applied-mathematical research to understand the highly complex fluid dynamics of the Earth's atmosphere -- phenomena such as the great jetstreams and the air motion that shapes the Antarctic ozone hole. There are associated phenomena sometimes called the `world's largest breaking waves'. (Imagine a sideways breaker the mere tip of which is as big as the United States.) That in turn has helped us to understand the fluid dynamics and magnetic fields of the Sun's interior, in an unexpected way. But long ago I almost became a musician. Or rather, in my youth I was, in fact, a part-time professional musician and could have made it into a full-time career. So I've had artistic preoccupations too, and artistic aspirations. This book tries to get at the deepest connections between all these things.
It's obvious, isn't it, that science, mathematics, and the arts are all of them bound up with the way perception works. And common to all these human activities, including science -- whatever popular culture may say to the contrary -- is the creativity that leads to new understanding, the thrill of lateral thinking, and sheer wonder at the whole phenomenon of life itself and at the astonishing Universe we live in.
One of the greatest of those wonders is our own adaptability. Who knows, it might even get us through today's troubles, hopeless though that might seem just now. That's despite our being genetically similar to our hunter-gatherer ancestors, tribes of people driven again and again to migration and warfare in increasingly clever ways by, among other things, huge and rapid climate fluctuations -- the legendary years of famine and years of plenty. (How else did our species -- a single, genetically-compatible species with its single human genome -- spread around the globe in less than a hundred millennia?) In chapter 2 I'll point to recent hard evidence for the sheer rapidity, and magnitude, of those climate fluctuations, and to some important advances in our understanding of evolutionary biology and natural selection, and human nature -- advances that are hard to find in the popular-science literature. And here, by the way, as in most of this book, I lay no claim to originality. For instance the evidence on past climates comes from the painstaking work of many other scientists, including great scientists such as the late Nick Shackleton whom I had the privilege of knowing personally.
Our ancestors must have had not only language and lateral thinking -- and music, dance, poetry, and storytelling -- but also, no doubt, the mechanisms of greed, power games, scapegoating, genocide, ecstatic suicide and the rest. To survive, they must have had love and altruism too, consciously or unconsciously. The precise timescales and evolutionary pathways for these things are uncertain. But the timescales for at least some of them, including the growth of our language ability, must have been more like thousands than hundreds of millennia. As already suggested they must have depended on the co-evolution of genome and culture, very much a multi-timescale process as recognized long ago by Jacques Monod (1970) and Phillip Tobias (1971) even though often overlooked, it seems, in the literature on evolutionary biology. In fact multi-timescale processes are commonplace in nature. They show up in a vast range of phenomena. One of the simplest is air pressure, as when pumping up a bicycle tyre. Fast molecular collisions mediate slow changes in air pressure, and temperature. This is a strong, indeed crucial, two-way interaction across arbitrarily disparate timescales.
The opposing ideas -- that long and short timescales can't interact, that cultural evolution, being fast, is something completely separate, and that language suddenly started around a hundred millennia ago, or even more recently, as a purely cultural invention signalled by the appearance of beads, bracelets and other such durables in the archaeological record -- ideas still repeated from time to time (e.g. Trask et al. 1998, Pagel 2012) -- have never made sense to me. At a biologically fundamental level, they make no more sense than does the underlying false dichotomy, nature `versus' nurture. I'll return to these points in chapter 2. It's sometimes forgotten that language and culture can be mediated purely by sound waves and light waves and held in individuals' memories: the epic-saga phenomenon, if you will, as in the Odyssey or in a multitude of other oral traditions, including the `immense wealth' (van der Post 1972) of the unwritten literature of Africa. That's a very convenient, an eminently portable, form of culture for a tribe on the move. And sound waves and light waves are ephemeral things, with the annoying property of leaving no archaeological trace. But absence of evidence isn't evidence of absence.
And now, in a mere flash of evolutionary time, a mere few millennia, we've shown our adaptability in ways that seem to me more astonishing than ever. We no longer panic at the sight of a comet. Demons in the air have shrunk to a small minority of alien abductors. We don't burn witches and heretics, at least not literally. Most of us condemn human sacrifice as barbaric, a very recent development in societal norms (e.g. Ehrenreich 1997). The Pope dares to apologize for past misdeeds. Genocide was somehow avoided in South Africa -- very surprising, when you think about it. We even dare, sometimes, to tolerate individual propensities and lifestyles if they don't harm others. We dare to talk about astonishing new things called personal freedom, social justice, women's rights, and human rights, and sometimes even take them seriously despite the confusion they cause from time to time perhaps, as the philosopher John Gray reminded us recently, through hypercredulity -- through perceiving, or feeling, Human Rights as the Answer to Everything. And, most astonishing of all, since 1945 we've even had the good sense so far -- and very much against the odds -- to avoid the use of nuclear weapons.
We have space-based observing instruments, super-accurate clocks, and the super-accurate global positioning that cross-checks Einstein's gravitational theory -- yet again -- and now a further and completely different cross-check of consummate beauty, detection of the lightspeed spacetime ripples predicted by the theory. We have marvelled at the sight of our beautiful home planet in space, poised above the lunar horizon. We have the Internet, making old-style censorship much harder and bringing us new degrees of freedom and profligacy of information and disinformation. It gives us new opportunities, indeed imperatives, to exercise critical judgement and to build computational systems and artificial intelligences of unprecedented power -- exploiting the robustness and reliability growing out of the open-source software movement, `the collective IQ of thousands of individuals' (Valloppillil et al. 1998). We can read and write genetic codes, and thanks to our collective IQ are beginning, just beginning, to understand them (e.g. Wagner 2014). On large and small scales we're carrying out extraordinary new social experiments with labels like `free-market democracy', `free-market autocracy', `children's democracy' (Vaughan 2006), `microlending' conducive to population control (Yunus 1998), and now the social media, so called, and `citizen science' on the Internet. With the weaponization of the social media now upon us there's a huge downside, as with any new technology. But there's also a huge upside, and everything to play for.
One still hears it said that we live in an age devoid of faith, in the technically advanced societies at least. Well, I'm one of those who have a personal faith, a belief, a conviction, a passion, that the urge and the curiosity to increase our own self-understanding can evolve us toward societies that won't be utopian Answers to Everything but could be more hopeful, more spiritually healthy, and generally more civilized than today. That's at least a possibility, despite current trends. And we need such self-understanding in any case, if only to be better aware of its use, and would-be monopolization, by the technocrats of the social media.
When the Mahatma Gandhi visited England in 1930 he is said to have been asked by a journalist, `Mr Gandhi, what do you think of modern civilization?' The Mahatma is said to have replied, `That would be a good idea.' The optimist in me hopes you agree. Part of such a civilization would be not only a clearer recognition of the power and limitations of science -- including its power, and its limitations, in helping us to understand our own nature -- but also a further healing of the estrangement between science and the arts. It might even -- dare I hope for this? -- further reconcile science with the more compassionate, less dogmatic, less politicized, less violent forms of religion and other belief systems that are important for the mental and spiritual health of so many people.
I have dared to hint at the `deepest' connections amongst all these things. In a peculiar way, some of the connections can be seen not only as deep but also as simple -- provided that one is willing to think on more than one level, and willing to maintain a certain humility -- a willingness to admit that even one's own best idea might not be the Answer to Everything.
Multi-level thinking is nothing new. It has long been recognized, unconsciously at least, as being essential to science. It goes back in time beyond Newton, Galileo and Archimedes. What's complex at one level can be simple at another. Newton treated the Earth as a perfect sphere, and then as a point mass.
Today we have a new conceptual framework, complexity theory or complex-systems theory, working in tandem with the new Bayesian causality theory -- the powerful mathematics behind the commercial, and now political, use of big-data analytics within the social media (Pearl and Mackenzie 2018, McNamee 2019). Its use within science should help to clarify what's involved in multi-level thinking and to develop it more systematically, more generally, and more consciously. Key ideas include self-organization, self-assembling components or building blocks, and the Bayesian probabilistic `do' operator. The uses of the `do' operator by the social-media technocrats include myriads of behavioural experiments on a vast scale, such as Pokémon Go -- amounting to a `large hadron collider' of experimental psychology -- from which they've built the models of unconscious human behaviour now in use for commercial and political gain, the behaviour-prediction and behaviour-targeting engines. The sheer scale of this innovation is hard to grasp, but if only for democracy's sake it needs to be widely understood.
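The `do' operator can be illustrated in miniature. The toy model below is my own invention, not anything from Pearl and Mackenzie's book: a hidden common cause Z drives both X and Y, while X has no causal effect on Y at all. Conditioning on the observation X = 1 then differs sharply from intervening to set X = 1:

```python
# Toy structural model (all numbers invented for illustration):
#   Z is a hidden common cause; X and Y both depend on Z; X does NOT cause Y.
P_Z1 = 0.5                       # P(Z = 1)
P_X1_given_Z = {1: 0.9, 0: 0.1}  # P(X = 1 | Z)
P_Y1_given_Z = {1: 0.8, 0: 0.2}  # P(Y = 1 | Z)

# Observational: P(Y = 1 | X = 1), by Bayes' rule over the hidden Z.
P_X1 = sum(P_X1_given_Z[z] * (P_Z1 if z else 1 - P_Z1) for z in (0, 1))
P_Z_given_X1 = {z: P_X1_given_Z[z] * (P_Z1 if z else 1 - P_Z1) / P_X1
                for z in (0, 1)}
p_obs = sum(P_Y1_given_Z[z] * P_Z_given_X1[z] for z in (0, 1))

# Interventional: P(Y = 1 | do(X = 1)). Setting X by fiat cuts the Z -> X
# link, so Z keeps its prior distribution and X is irrelevant to Y.
p_do = sum(P_Y1_given_Z[z] * (P_Z1 if z else 1 - P_Z1) for z in (0, 1))

print(p_obs, p_do)  # 0.74 versus 0.5: strong association, zero causal effect
```

Observation alone would suggest that X strongly promotes Y; the intervention reveals no effect whatever. Telling these two quantities apart, at scale, is the business of the behaviour-prediction engines.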
Another key idea is that of emergent properties -- at different levels of description within complex systems and hierarchies of systems, not least the human brain itself. `Emergent property' is a specialist term for something that looks simple at one level even though caused by the interplay of complex, chaotic events at deeper levels. A related idea is that of `order emerging from chaos'. Self-assembling building blocks are also called autonomous components, or `automata' for brevity.
We'll see that the ideas of multi-level thinking, automata, and self-organization are all crucial to making sense of many basic phenomena, such as the way genetic memory works and what instincts are -- instincts, that is, in the everyday sense relating to things we do, and perceive, and feel automatically, ahead of conscious thought.
One example is the way normal vision develops, giving rise to perceptual phenomena such as the walking lights. It's known that the visual system assembles itself from many automata -- automata made of molecular-biological circuits and assemblies of such circuits -- switching genes on and off and subject to inputs from each other and from the external environment, during many stages of unconscious learning at molecular level and upward. Another example is language, as will be shown in chapter 2. And without multi-level thinking there's no chance of avoiding the confusion surrounding ideas such as `selfish gene', `altruism', `consciousness', and `free will'.
Scientific progress has always been about finding a level of description and a viewpoint, or viewpoints, from which something at first sight hopelessly complex can be seen as simple enough to be understandable. The Antarctic ozone hole is a case in point. I myself made a contribution by spotting some simplifying features in the fluid dynamics, in the way the air moves and transports chemicals. And, by the way, so high is our scientific confidence in today's understanding of the ozone hole -- with a multitude of observational and theoretical cross-checks -- that the professional disinformers who tried to discredit that understanding, in a well known information-warfare campaign, are no longer taken seriously. That's despite the enormous complexity of the problem, involving spatial scales from the planetary down to the atomic, and timescales from centuries down to thousand-trillionths of a second -- and despite the disinformers' financial resources and their powerful influence on the newsmedia, of which more later.
We now have practical certainty, and wide acceptance, that man-made chemicals are the main cause of the ozone hole. We now understand in detail why the deepest ozone hole appears in the south, even though the chemicals causing it are emitted mostly in the north. And through the Montreal Protocol we now have internationally-agreed regulations to restrict emissions of those chemicals, despite the disinformers' aim of stopping any such regulation. We have a new symbiosis between regulation and market forces. It's now well documented, by the way, how some of the same disinformers had already honed their skills in the tobacco companies' lung-cancer campaigns (Oreskes and Conway 2010), developing the dark arts of camouflage and deception from a deep understanding of human nature and the way perception works. And history is now repeating itself in the wider arena of climate science, and in the other front lines of information warfare in the social media.
What makes life as a scientist worth living? For me, part of the answer is the joy of being honest. There's a scientific ideal and a scientific ethic that power open science. And they depend crucially on honesty. If you stand up in front of a large conference and say of your favourite theory `I was wrong', you gain respect rather than losing it. I've seen it happen. Your reputation increases. Why? The scientific ideal says that respect for the evidence, for theoretical coherence and self-consistency, for finding mistakes and for improving our collective knowledge is more important than personal ego or financial gain. And if someone else has found evidence that refutes your theory, then the scientific ethic requires you to say so. The ethic says that you must not only be factually honest but must also give due credit to others, by name, whenever their contributions are relevant.
The scientific ideal and ethic are powerful because, even when imperfectly followed, they encourage not only a healthy scepticism but also a healthy mixture of competition and cooperation. Just as in the open-source software community, the ideal and ethic harness the collective IQ, the collective brainpower, of large research communities in a way that can transcend even the power of short-term greed and financial gain. The ozone hole is a case in point. So is the human-genome story (Sulston and Ferry 2002). Our collective brainpower is the best hope of solving the many formidable problems now confronting us.
In the Postlude I'll return to some of those problems, and to the ongoing struggle between open science and the forces working against it -- whether consciously or unconsciously -- with particular reference to climate change. Again, there's no claim to originality here. I merely aim to pick out, from the morass of confusion surrounding the topic, a few simple points clarifying where the uncertainties lie, as well as the near-certainties. These points are far simpler than many people think.
Note regarding citations: This book attempts to lighten up on scholarly citations, beyond some publications of exceptional importance, mostly recent, such as Pearl and Mackenzie (2018). There are two reasons. The first is that my original publications on these matters were extensively end-noted, making clear my many debts to the research literature but sometimes making for heavy reading. The second is the ease with which one can now track down references by websearching with key phrases. For instance, websearching with the exact phrase "lucidity principles" will quickly find my original publications complete with endnotes and personal acknowledgements, either on my own website or in an archive at the British Library.
This book reflects my own journey toward the frontiers of human self-understanding. Of course many others have made such journeys. But in my case the journey began in a slightly unusual way.
Music and the visual and literary arts were always part of my life. Music was pure magic to me as a small child. But the conscious journey began with a puzzle. While reading my students' doctoral thesis drafts, and working as a scientific journal editor, I began to wonder why lucidity, or clarity -- in writing and speaking, as well as in thinking -- is often found difficult to achieve; and I wondered why some of my scientific and mathematical colleagues are such surprisingly bad communicators, even within their own research communities, let alone on issues of public concern. Then I began to wonder what lucidity is, in a functional or operational sense. And then I began to suspect a deep connection with the way music works.
I now like to understand the word `lucidity' in a more general sense than usual. It's not only about what you can find in style manuals and in books on how to write, excellent and useful though many of them are. (Strunk and White 1979 is a little gem.) It's also about deeper connections not only with music but also with mathematics, pattern perception, biological evolution, and science in general. A common thread is the `organic-change principle'. It's familiar, I think, to most artists, at least unconsciously.
The principle says that we're perceptually sensitive to, and have an unconscious interest in, patterns exhibiting `organic change'. These are patterns in which some things change, continuously or by small amounts, while others stay the same. So an organically-changing pattern has invariant elements.
The walking lights is an example. The invariant elements include the number of dots, always twelve dots. Musical harmony is another -- chord progressions if you will. Musical harmony is an interesting case because `small amounts' applies not in one but in two different senses, leading to the idea of `musical hyperspace'. Chord progressions can take us somewhere that's nearby yet far away. That's how some of the magic is done, in many styles of Western music. An octave leap is a large change in one sense, but small in the other, indeed so small that musicians use the same name for the two pitches. The invariant elements in a chord progression can be pitches or chord shapes.
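The octave-equivalence point can be made concrete in a couple of lines of code. The sketch below borrows the MIDI numbering convention (an assumption of mine, not anything in this book), in which middle C is pitch 60 and each semitone adds one:

```python
# Pitch classes: note names repeat every 12 semitones (one octave).
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_name(midi_pitch):
    """Name of a pitch in MIDI numbering; 60 is middle C, 69 is A440."""
    return NOTE_NAMES[midi_pitch % 12]

# An octave leap (60 -> 72) is a large change in one sense, yet so small
# in the other that the name is unchanged:
print(note_name(60), note_name(72), note_name(69))  # C C A
```

The `% 12` is the whole story: distance along the keyboard is one sense of `small', distance around the twelve pitch classes is the other.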
Music consists of organically-changing sound patterns not just in its harmony or chord progressions, but also in its melodic shapes and counterpoint and in the overall form, or architecture, of an entire piece of music. Mathematics involves organic change too. In mathematics there are beautiful results about `invariants' or `conserved quantities', things that stay the same while other things change, often continuously through a vast space of possibilities. The great mathematician Emmy Noether discovered a common origin for many such results, through a profound and original piece of mathematical thinking. It is called Noether's Theorem and is recognized today as a foundation-stone of theoretical physics.
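For readers who enjoy the symbols, here is the simplest textbook instance of Noether's insight, in standard physics notation rather than anything from this book: invariance under time translation implies conservation of energy. If a Lagrangian $L(q,\dot q)$ has no explicit time dependence, then along any solution of the equations of motion the quantity

```latex
E \;=\; \dot q\,\frac{\partial L}{\partial \dot q} \;-\; L ,
\qquad
\frac{dE}{dt} \;=\; -\,\frac{\partial L}{\partial t} \;=\; 0
```

stays invariant while $q$ and $\dot q$ change continuously -- organic change in the mathematical realm. Symmetry under spatial translation yields conservation of momentum in just the same way.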
Our perceptual sensitivity to organic change exists for strong biological reasons. One reason is the survival-value of recognizing the difference between living things and dead or inanimate things. To see a cat stalking a bird, or to see a flower opening, is to see organic change.
So I'd dare to describe our sensitivity to it as instinctive. Many years ago I saw a pet kitten suddenly die of some mysterious but acute disease. I had never seen death before, but I remember feeling instantly sure of what had happened -- ahead of conscious thought. And the ability to see the difference between living and dead has been shown to be well developed in human infants a few months old.
Notice by the way how intimately involved, in all this, are ideas of a very abstract kind. The idea of some things changing while others stay invariant is itself highly abstract, as well as simple. It is abstract in the sense that vast numbers of possibilities are included. There are vast numbers -- combinatorially large numbers -- of organically-changing patterns. Here we're again glimpsing the fact already hinted at, that the unconscious brain can handle many possibilities at once. We have an unconscious power of abstraction. That's almost the same as saying that we have unconscious mathematics. Mathematics is a precise means of handling many possibilities at once, in a self-consistent way. For instance the walking-lights animation shows that we have unconscious Euclidean geometry, the mathematics of angles and distances. The roots of mathematics and logic, and of abstract cognitive symbolism, lie far deeper and are evolutionarily far more ancient than they're usually thought to be. In chapter 5 I'll show that our unconscious mathematics includes the mathematics underlying Noether's theorem.
So I've been interested in lucidity, `lucidity principles', and related matters in a sense that cuts deeper than, and goes far beyond, the niceties and pedantries of style manuals. But before anyone starts thinking that it's all about ivory-tower philosophy and cloud-cuckoo-land, let's remind ourselves of some harsh practical realities -- as Plato himself would have done had he lived today. What I'm talking about is relevant not only to thinking and communication but also, for instance, to the ergonomic design of machinery, of software and user-friendly IT systems (information technology), of user interfaces in general, friendly and unfriendly, and of technological systems of any kind including the emerging artificial-intelligence systems, where the stakes are so incalculably high (e.g. Rees 2014, McNamee 2019).
The organic-change principle -- that we're perceptually sensitive to organically-changing patterns -- shows why good practice in any of these endeavours involves not only variation but also invariant elements, i.e., repeated elements, just as music does. Good control-panel design might use, for instance, repeated shapes for control knobs or buttons. And in writing and speaking one needn't be afraid of repetition, especially if it forms the invariant element within an organically-changing word pattern. `If you are serious, then I'll be serious' is a clearer and stronger sentence than `If you are serious, then I'll be earnest.' Such pointless or gratuitous variation in place of repetition is what Fowler (1983) ironically called `elegant' variation, an `incurable vice' of `the minor novelists and the reporters'. Its opposite -- let's call it lucid repetition, as with the second `serious' -- isn't the same as being repetitious. The pattern as a whole is changing, organically. It works the same way in all the languages I've looked at, including Chinese.
Two other `lucidity principles' are worth noting briefly, while I'm at it. (You can find more on the transferable skills, and on the underlying experimental psychology, by websearching for the exact phrase "lucidity principles".) There's an `explicitness principle' -- the need to be more explicit than you feel necessary -- because, obviously, you're communicating with someone whose head isn't full of what your own head is full of. As the great mathematician J. E. Littlewood once put it, `Two trivialities omitted can add up to an impasse.' Again, this applies to design in general, as well as to any form of writing or speaking that aims at lucidity. And of course there's the more obvious `coherent-ordering principle', the need to build context before new points are introduced. It applies not only to writing and speaking but also to the design of anything intended to take you through some sequential process on, for instance, a website or a ticket-vending machine.
There's another reason for attending to the explicitness principle. Human language is surprisingly weak on logic-checking. For this and other reasons, human language is a conceptual minefield. (This keeps many philosophers in business.)
Beyond everyday misunderstandings we have, for instance, not only the workings of professional camouflage, deception and information warfare, but also the inadvertent communication failures underlying, for instance, the usual IT disasters:
Figure 2a: How the customer explained it. (Courtesy of projectcartoon.com.)
Figure 2b: How the analyst designed it. (Courtesy of projectcartoon.com.)
Figure 2c: How the programmer wrote it. (Courtesy of projectcartoon.com.)
Figure 2d: What the customer really wanted. (Courtesy of projectcartoon.com, q.v. for elaborations.)
The logic-checking weakness shows up in the misnomers and self-contradictory terms encountered not only in everyday dealings but also -- to my continual surprise -- in the technical language used by my scientific and engineering colleagues. You'd think we should know better. You'd laugh if, echoing Spike Milligan, I said that someone has a `hairy bald head'. But consider for example the scientific term `solar constant'. It's a precisely-defined measure of the mean solar power per unit area reaching Earth. Well, the solar constant isn't a constant. It's variable, because the Sun's output is variable -- though fortunately only by about 0.1 per cent over the eleven-year solar cycle.
Another such term -- please forgive me if I get technical for a moment -- is the so-called `slow manifold'. It is an abstract mathematical entity, a complicated geometrical object that's important in my research field of atmospheric and oceanic fluid dynamics. Well, the slow manifold isn't a manifold. In non-technical language, it's like something hairy, while a manifold is like something bald. I'm not kidding. (I've tried hard to persuade my fluid-dynamical colleagues to switch to `slow quasimanifold', but with scant success so far. For practical purposes the thing often behaves as if it were a manifold, even though it isn't. It's `thinly hairy'.)
In air-ticket booking systems there's a `reference number' that isn't a number. In finance there's a term `securitization' that means, among other things, making an investment less secure -- yes, less secure -- by camouflaging what it's based on. And then there's the famous `heteroactive barber'. That's the barber who shaves only those who don't shave themselves. `Heteroactive barber' may sound impressive. Some think it philosophically profound. But it's no more than just another self-contradictory term. Seeing that fact does, however, take a conscious effort. There's no instinctive logic-checking whatever. There are clear biological reasons for this state of things, to which I'll return in chapter 2. I'll leave it to you, dear reader, if need be, to go through the logical steps showing that `heteroactive barber' is indeed a self-contradictory term. (If he doesn't shave himself, then it follows that he does, etc.)
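For readers who'd like the logical steps done mechanically, here is a minimal sketch in Python (my own illustration, nothing standard): we apply the barber's rule to the barber himself and search for a consistent truth value of `the barber shaves himself'. There isn't one.

```python
# The 'heteroactive barber' rule: he shaves x if and only if x does not
# shave himself.  Apply the rule to the barber himself and search for a
# consistent truth value of "the barber shaves himself".
consistent = [shaves_self
              for shaves_self in (True, False)
              if shaves_self == (not shaves_self)]

print(consistent)  # -> []  (no consistent assignment: the term contradicts itself)
```

Neither `True' nor `False' survives the rule, which is exactly what it means to say the term is self-contradictory.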
Being more explicit than you feel necessary improves your chances of negotiating the minefield. It clarifies your own thinking. Your chances are improved still further if you get rid of gratuitous variations and replace them by lucid repetitions, maintaining the sometimes tricky discipline of calling the same thing by the same name, as in good control-panel design using repeated control-knob shapes. And it's even better if you're cautious about choosing which shape, or which name or term, to use. You might even want to define a technical term carefully at its first occurrence, if only because meanings keep changing, even in science. `I'll use the idea of whatsit in the sense of such-and-such, not to be confused with whatsit in the sense of so-and-so.' `I'll denote the so-called solar constant by S, remembering that it's actually variable.' Another example is `the climate sensitivity'. It has multiple meanings, as I'll explain in the Postlude. In his 1959 Reith Lectures, the great biologist Peter Medawar remarked on the `appalling confusion and waste of time' caused by the `innocent belief' that a single word has a single meaning.
A fourth `lucidity principle' -- again applying to good visual and technical design as well as to good writing and speaking -- is of course pruning, the elimination of anything superfluous. On your control panel, or web page, or ticket-vending machine, or in your software code and documentation, it's helpful to omit visual and verbal distractions. In writing and speaking, it's helpful to `omit needless words', as Strunk and White put it. If you're a good observer, you'll have noticed the foregoing lucidity principles in action when you look at the meteoric rise of some businesses. Google was a clear example. Indeed, there's a tendency to regard lucidity principles as trade secrets, or proprietary possessions. I recall some expensive litigation by another fast-rising business, Amazon, claiming proprietary ownership of `omit needless clicks'.
Websites, ticket-vending machines, and other user interfaces that, by contrast, violate lucidity principles -- making them `unfriendly' -- are still remarkably common, together with all those unfriendly technical manuals and financial instruments. One repeatedly encounters gratuitous variation and inexplicitness, combined with verbal and visual distractions and other needless complexity. The pre-Google search engines were typical examples. Their cluttered screens and chaotic, semi-explicit search rules are now, thankfully, a fast-fading memory. With those search engines and with many technical manuals the violations often seem inadvertent, stemming from ignorance. With financial instruments, on the other hand, one might dare to speculate that some of the violations are deliberate, favouring a small élite of individuals clever enough to break the codes and then, in due course, wealthy enough to employ a whole team of codebreakers.
Among the commonplace gratuitous variations I was struck recently by the case of the two reservation codes encountered when booking air tickets. This case has the usual whiff of inadvertence. The two codes look similar, but have distinct purposes that the customer has to decode. So far I've counted nine different but overlapping names for the two reservation codes. On one occasion my booking used a code `2BE8HM', variously called the reference number (even though it isn't a number), the flight reference number, the airline reference, the airline reference locator, the reservation code, and the booking reservation number. Another, similar-looking code `LIEV86', whose purpose was entirely different, was variously called the airline reservation code, the airline booking reference, and the airline confirmation number. The same thing with different names, and different things with the same name. As also found in technical manuals. One encounters further examples when making online purchases. Why are we told to be sure to quote our `account number' when the website called it the `customer reference number'? Could there be a second kind of number I need to know about, or is it yet another gratuitous variation?
In case you think this is getting trivial, let me remind you of Three Mile Island Reactor TMI-2, and the nuclear accident for which it became well known in 1979. The accident was potentially very dangerous as well as incalculably costly, especially when you count the long-term damage to customer confidence. Was that trivial?
You don't need to be a professional psychologist to appreciate the point. Before the nuclear accident, the control panels were like a gratuitously-varied set of traffic lights in which stop is sometimes denoted by red and sometimes by green, and vice versa. Thus, at Three Mile Island, a particular colour on one control panel meant normal functioning while the same colour on another panel meant `malfunction, watch out' (Hunt 1993). Well, the operators got confused.
As I walk around Cambridge and other parts of the UK, I continually encounter the `postmodernist traffic rules' followed by pedestrians here. Postmodernism says that `anything goes'. So you keep left or keep right just as you fancy. All for the sake of interest and variety. How boring, how pedantic, to keep left all the time. Just like those boring traffic lights where red always means stop. To be fair, the UK Highway Code quite reasonably tells us to face oncoming traffic on narrow country lanes except, of course, on right-hand bends, and on unsegregated pedestrian-plus-cycle tracks where the Code does indeed say, implicitly, that anything goes. I always feel a slight sense of relief when I visit the USA, where everyone keeps right most of the time.
There's a quasi-bureaucratic mindset that seems ignorant, or uncaring, about examples like Three Mile Island. It says `User-friendliness is a luxury we can't afford.' (Yes, afford.) `Go away and read the technical manual. Look, it says on page 342 that red means `stop' on one-way streets, `go' on two-way streets, and `caution' on right-hand bends. And of course it's the other way round on Sundays and public holidays, except for Christmas which is obviously an exception. What could be clearer? Just read it carefully, all 450 pages, and do exactly what it says,' etc.
With complicated systems like nuclear power plants, or large IT systems, or space-based observing systems -- such as those created by some of my most brilliant scientific colleagues -- there's a combinatorially large number of ways for the system to go wrong even with good design, and even with communication failures kept to a minimum. I'm always amazed when any of these systems work at all. I'm also amazed at how our governing politicians overlook this point again and again, it seems, when commissioning the large IT systems that they hope will save money. All this, of course, is familiar territory for the risk assessors working in the insurance industry, and in the military and security services.
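The phrase `combinatorially large' is easily made concrete. In this toy sketch (the component counts are arbitrary illustrations of mine, not figures for any real plant), a system of n independent two-state components has 2 to the power n joint states, and the count explodes long before n approaches the complexity of a nuclear power plant or a large IT system:

```python
# A system of n independent two-state components has 2**n joint states,
# only a few of which may be the intended ones.
for n in (10, 50, 100):
    print(n, 2 ** n)
# Even at n = 100 there are about 1.27e30 joint states -- far too many
# to check one by one.
```

That is why exhaustive testing of such systems is hopeless, and why their working at all is so remarkable.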
What then is lucidity, in the sense I'm talking about? Let me try to draw a few threads together. In the words of an earlier essay, which was mostly about writing and speaking, `Lucidity... exploits natural, biologically ancient perceptual sensitivities, such as the sensitivities to organic change and to coherent ordering, which reflect our instinctive, unconscious interest in the living world in which our ancestors survived. Lucidity exploits, for instance, the fact that organically changing patterns contain invariant or repeated elements. Lucid writing and speaking are highly explicit, and where possible use the same word or phrase for the same thing, similar word-patterns for similar or comparable things, and different words, phrases, and word-patterns for different things... Context is built before new points are introduced...'
I also argued that `Lucidity is something that satisfies our unconscious, as well as our conscious, interest in coherence and self-consistency' -- in things that make sense -- and that it's about `making superficial patterns consistent with deeper patterns'. It can be useful to think of our perceptual apparatus as a multi-level pattern recognition system, with many unconscious levels.
To summarize, four `lucidity principles' seem especially useful in practice. They amount to saying that skilful communicators and designers give attention to organic change, to explicitness, to coherent ordering, and to pruning superfluous material. The principles apply not only to writing and speaking but also, for instance, to website and user-interface design and to the safety systems of nuclear power plants, with stakes measured in billions of dollars.
Of course a mastery of lucidity principles can also serve an interest in camouflage and deception, with even higher stakes. Such mastery was conspicuous in the tobacco and ozone-hole disinformation campaigns, and more recently in climate disinformation. It is, and always was, conspicuous on political battlefields and in the dichotomizing speeches of demagogues, and binary-referendum campaigners, as in `You're either with us or against us'. That's another case of making superficial patterns consistent with deeper patterns, including deeper patterns of an unpleasant and dangerous kind, embedded in the more reptilian parts of our brains. Today the dark arts of full-blown camouflage and deception and of making the illogical seem logical -- the so-called `weapons of mass deception' -- have been further developed in a highly professional and well-resourced way. The weapon makers exploit not only their deep knowledge of the way perception works, including the dichotomization instinct, but also the big-data technology of the social media and the postmodernist idea that, regardless of the evidence, anything goes (e.g. Pomerantsev 2015). Today's social media are artificial intelligences that are not yet very intelligent (e.g. McNamee 2019), with their mindless amplification of evidence-blindness and of dichotomies such as with us or against us, like or dislike and, as the social-media jargon has it, friend or unfriend.
Enough of that! What of my other question? What is this subtle and elusive thing we call understanding, or insight? Of course there are many answers, depending on one's purpose and viewpoint. As far as science is concerned, however, let me try to counter some of the popular myths. What I've always found in my own research, and have always tried to suggest to my students, is that developing a good scientific understanding of something -- even something in the inanimate physical world -- requires looking at it, and testing it, from as many different viewpoints as possible as well as maintaining a healthy scepticism. It's sometimes called `diversity of thought' and, because it respects the evidence, is to be sharply distinguished from the postmodernist `anything goes'. It is an important part of the creativity that goes into good science.
For instance, the fluid-dynamical phenomena I've studied are far too complex to be understandable at all from a single viewpoint, such as the viewpoint provided by a particular set of mathematical equations. One needs equations, words, pictures, and feelings all working together, as far as possible, to form a self-consistent whole. And the fluid-dynamical equations themselves take different forms embodying different viewpoints, with technical names such as `variational', `Eulerian', `Lagrangian', and so on. They're mathematically equivalent but, as Richard Feynman used to say, `psychologically very different'. Bringing in words, in a lucid way, is a critically important part of the whole but needs to be related to, and made consistent with, equations, pictures, and feelings.
Such multi-modal thinking and healthy scepticism have been the only ways I've known of escaping from the usual mindsets or unconscious assumptions that tend to entrap us. The history of science shows that escaping from such mindsets has always been a key aspect of progress. And an important aid to cultivating a multi-modal view of any scientific problem is the habit of performing what Albert Einstein famously called `thought-experiments', and mentally viewing those from as many angles as possible.
Einstein certainly talked about feeling things, in one's imagination -- forces, motion, colliding particles, light waves -- and was always doing thought-experiments, `what-if experiments' if you prefer. The same thread runs through the testimonies of Feynman and of other great scientists, such as Henri Poincaré, Peter Medawar, and Jacques Monod. It all goes back to juvenile play, that deadly serious rehearsal for real life -- young children pushing and pulling things (and people!) to see, and feel, how they work.
In my own research community I've often noticed colleagues having futile arguments about `the' cause of some observed phenomenon. `It's driven by such-and-such', says one. `No, it's driven by so-and-so', says another. Sometimes the argument gets quite acrimonious. Often, though, they're at cross-purposes because, perhaps unconsciously, they have two different thought-experiments in mind.
And notice by the way how the verb `to drive' illustrates what I mean by language as a conceptual minefield. The verb `to drive' sounds incisive and clearcut, but is nonetheless dangerously ambiguous. I sometimes think that our word-processors should make it flash red for danger, as soon as it's typed, along with a few other dangerously ambiguous words such as the pronoun `this'.
Quite often, `to drive' is used when a better verb would be `to mediate', as often used in the biological literature to signify an important part of some mechanism. By contrast, `to drive' can mean `to control', as when driving a car. That's like controlling an audio amplifier via its input signal. `To drive' can also mean `to supply the energy needed' via the fuel tank or the amplifier's power supply. Well, there are two obvious, and different, thought-experiments here, on the amplifier let's say. One is to change the input signal. The other is to pull the power plug. A viewpoint that focused on the power plug alone might miss important aspects of the problem!
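The two thought-experiments can be caricatured in a few lines of code (a toy model of my own, with an arbitrary gain of 10; nothing here comes from real amplifier design): the output responds to the input signal only while the power supply permits it, so `drive' in the control sense and `drive' in the energy-supply sense are genuinely different experiments.

```python
def amplifier_output(signal, gain=10.0, supply_on=True):
    """Toy amplifier: the output follows the input signal,
    but only while the power supply is connected."""
    return gain * signal if supply_on else 0.0

# Thought-experiment 1: change the input signal (the 'control' sense of drive)
print(amplifier_output(0.5))   # -> 5.0
print(amplifier_output(1.0))   # -> 10.0

# Thought-experiment 2: pull the power plug (the 'energy supply' sense)
print(amplifier_output(1.0, supply_on=False))   # -> 0.0
```

A viewpoint fixated on the power plug would correctly predict the last line, yet tell you nothing about how the output tracks the signal.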
You may laugh, but there's been a mindset in my community that has, or used to have, precisely such a focus. It said that the way to understand our atmosphere and ocean is through their intricate `energy budgets', disregarding questions of what the system is sensitive to. Yes, energy budgets are interesting and important, but no, they're not the Answer to Everything. Energy budgets focus attention on the power supply and tempt us to ignore the input signal.
The topic of mindsets and cognitive illusions has been illuminated not only through the famous psychological studies of Daniel Kahneman, Amos Tversky and others but also, more recently, through a vast and deeply thoughtful book by the psychiatrist Iain McGilchrist (2009), conveying some fascinating new insights into the evolutionarily ancient roles of the brain's left and right hemispheres. Typically, the left hemisphere has great analytical power but is more prone to mindsets. It seems clear that the sort of scientific understanding I'm talking about -- in-depth understanding if you will -- involves an intricate collaboration between the two hemispheres with each playing to its own very different strengths. If that collaboration is disrupted, for instance by damage to the right hemisphere that paralyses a patient's left arm, extreme forms of mindset can result. There are cases in which the patient vehemently denies that the arm is paralysed, and will make up all sorts of excuses as to why he or she doesn't fancy moving it when asked. This denial-state is called anosognosia. It's a kind of unconscious wilful blindness, if I may use another self-contradictory term. Such phenomena are also discussed in the important book by Ramachandran and Blakeslee (1998).
Back in the 1920s, the great physicist Max Born was immersed in the mind-blowing experience of developing quantum theory. Born once commented that engagement with science and its healthy scepticism can give us an escape route from mindsets. With the more dangerous kinds of zealotry and fundamentalism in mind, he wrote
I believe that ideas such as absolute certitude, absolute exactness, final truth, etc., are figments of the imagination which should not be admissible in any field of science... This loosening of thinking [Lockerung des Denkens] seems to me to be the greatest blessing which modern science has given to us. For the belief in a single truth and in being the possessor thereof is the root cause of all evil in the world...
(quoted in Gustav Born 2002). Further wisdom on these topics is recorded in the classic study of cults by Conway and Siegelman (1978), echoing religious wars across the centuries, as well as today's polarizations and their amplification by the social media. Time will tell, perhaps, how the dangers from the fundamentalist religions compare with those from the fundamentalist atheisms. Among today's fundamentalist atheisms we have not only Science is the Answer to Everything And Religion Must Be Destroyed -- provoking a needless backlash against science, sometimes violent -- but also free-market fundamentalism, in some ways the most dangerous of all because of its vast financial resources. I don't mean Adam Smith's idea that market forces are useful, in symbiosis with suitable regulation, written or unwritten, as Smith made clear (e.g. Tribe 2008, Mazzucato 2018). I don't mean the business entrepreneurship that provides us with valuable goods and services. By free-market fundamentalism I mean a hypercredulous belief, a taking-for-granted, an incoherent mindset that unregulated markets, profit, and personal greed are the Answer to Everything and the Ultimate Moral Good -- regardless of evidence like the 2008 financial crash. Surprisingly, to me at least, free-market fundamentalism takes quasi-Christian as well as atheist forms (e.g. Lakoff 2014, & refs.).
Common to all forms of fundamentalism is that they inhibit, or forbid, the loosening of thinking or pluralism that allows freedom to view things from more than one angle, especially as regards central beliefs held sacred. `How dare you say there's more than one view -- you're just a loser, a dumb snowflake, a wishy-washy moral weakling!' The 2008 financial crash seems to have made only a small dent in the central beliefs of free-market fundamentalism, so far. And what's called `science versus religion' is not, it seems to me, about scientific insight versus religious, or spiritual, insight. Rather, it's about scientific fundamentalism versus religious fundamentalism, which of course are irreconcilable.
Such futile and damaging conflicts cry out for more loosening of thinking. How can such loosening work? As Ramachandran or McGilchrist might say, it's almost as if the right hemisphere nudges the left with a wordless message to the effect that `You might be sure, but I smell a rat: could you, just possibly, be missing something?' It's well known that in 1983 a Soviet officer, Stanislav Petrov, saved us from likely nuclear war. At great personal cost, he disobeyed standing orders when a malfunctioning early-warning system said `nuclear attack imminent'. We had a narrow escape. It was probably thanks to Petrov's right hemisphere. There have been other such escapes.
Let's fast-rewind to a few million years ago. Where did we, our insights, and our mindsets come from? What can be said about human and pre-human evolution? And how on Earth did we acquire our language ability -- that vast conceptual minefield -- so powerful, so versatile, yet so weak on logic-checking? These questions are more than just tantalizing. Clearly they're germane to past and current conflicts, and to future risks including existential risks.
Simplistic evolutionary theory is the first obstacle to understanding -- in popular culture at least, and in many parts of the business world too. It's the surprisingly persistent view, or unconscious assumption, that natural selection works solely through `survival of the fittest', presuming a very simplified measure of fitness. `Fitness' is often presumed to mean nothing but an individual's ability to compete with other individuals so as to produce more offspring. I say surprisingly persistent because this purely competitive view is so plainly wrong.
As Charles Darwin himself recognized, our species and other social species, such as baboons, could not have survived without cooperation between individuals. Without such cooperation, alongside competition, our ground-dwelling ancestors would have been easy meals for the large, swift predators all around them -- gobbled up in no time at all! Cooperation restricted to closely related individuals -- as presumed in kin-selection theory, a variant of selfish-gene theory -- would not have been enough to survive these dangers.
Even bacteria cooperate. That's well known. One way they do it is by sharing packages of genetic information called plasmids or DNA cassettes. A plasmid might for instance contain information on how to survive antibiotic attack. Don't get me wrong. I'm not saying that bacteria `think' like us, or like baboons or other social mammals, or like social insects. And I'm not saying that bacteria never compete. They often do. But it's a hard fact, and now an urgent problem in medicine, that vast numbers of individual bacteria cooperate among themselves to develop resistance to antibiotics, among other things. Even different species cooperate (e.g. Skippington and Ragan 2011), as with the currently-emerging resistance to colistin, an antibiotic of last resort. Yes, selective pressures are at work, but at group level as well as at individual and kin level, in heterogeneous populations living in heterogeneous and ever-changing environments. Today the technology of genetic sequencing is uncovering many more bacterial communities and modes of cooperation, and of competition, about which we knew nothing until very recently because it seems that most bacteria can't be grown in laboratory dishes.
So it's plain that natural selection operates at many levels in the biosphere, and that cooperation is widespread alongside competition. Indeed the word `symbiosis' in its standard meaning denotes a variety of intimate, and well studied, forms of cooperation between entirely different species. The trouble is the sheer complexity of it all -- again a matter of combinatorial largeness. We're far from having comprehensive mathematical models of how it works despite, for instance, spectacular recent breakthroughs at molecular level (e.g. Wagner 2014). Perhaps the persistence of simplistic evolutionary theory comes from a feeling that if one can't describe something then it can't exist. McGilchrist would argue, I think, that that's a typical left-hemisphere mindset.
There are more sophisticated variants of that mindset. For very many years there have been acrimonious disputes among biologists over false dichotomies such as `kin selection versus group selection', as if the one excluded the other (e.g. Segerstråle 2000). Fortunately, the worst of those disputes now seem to be dying out, as the models improve and the evidence accumulates for what's now called multi-level selection. There are many different lines of evidence. Some of them are powerfully argued, for instance, in the review articles by Robin Dunbar (2003), Matt Rossano (2009) and Kevin Laland et al. (2010), in the books by Andreas Wagner (2014), Christopher Wills (1994), and David Sloan Wilson (2015), and in a searching and thoughtful compendium edited by Rose and Rose (2000).
Despite much progress there remain some obstacles to understanding our ancestors' evolution, even in its most basic aspects. In particular, there's a pair of mindsets saying first that the genes'-eye view gives us the only useful angle from which to view evolution -- or, more fundamentally, the replicators'-eye view including regulatory `junk' or noncoding DNA, so called -- and, second, that one must ignore selective pressures at levels higher than that of individuals. Those two mindsets seem built in to selfish-gene theory as usually articulated (e.g. Pinker 1997; Dawkins 2009; Pagel 2012).
The first mindset misses the value of viewing a problem from more than one angle. And the weight of evidence against both mindsets is getting stronger and stronger. Their persistence has always puzzled me, but it seems likely that they originated with mathematical models now seen as grossly oversimplified -- the oldest population-genetics models, with procreation of individuals as the only measure of fitness, the models that originally gave rise to selfish-gene theory. The second mindset, against group-level selection in particular, may have been a reaction to the sloppiness of some old non-mathematical arguments for such selection, for instance ignoring the complex game-theoretic aspects such as multi-agent `reciprocal altruism', and conflating altruism as conscious sentiment `for the good of the group' with altruism as actual behaviour, including its deeply unconscious aspects.
Of course none of this is helped by the `lucidity failures' in the research literature such as failures to be explicit in defining mathematical symbols and -- more seriously -- failures to be explicit about what's assumed in various models. One example is an assumption that multi-timescale processes are unimportant. That assumption, whether conscious or unconscious, now looks like one of the more serious mistakes in the literature. It's one of the problems with the original population-genetics models and therefore with selfish-gene theory, not least as applied to ourselves.
The mistake is perhaps surprising because of the familiarity, in other scientific contexts, as mentioned already, of multi-timescale processes in which strongly interacting mechanisms have vastly disparate timescales. The example of air pressure has already been mentioned. In air under ordinary conditions, each molecule collides with another on a very short timescale, about a ten-billionth of a second. This very fast process strongly reacts back on -- is an essential part of -- slowly-changing air pressures and temperatures. Yet the biological literature has long spoken of `proximate causes' and `ultimate causes', referring respectively to fast mechanisms involved in the development of individual organisms and the much slower, evolutionary-timescale mechanisms of genomic change. The slow mechanisms are not only assumed, but sometimes confidently declared, to be wholly independent of the fast mechanisms, simply because the timescales are so different. Genes are therefore `selfish', the story goes, because they govern, or ultimately cause, everything in biology, with no feedback on their evolution from the fast or `proximate' processes. That such assumptions are not just implausible but also inconsistent with the weight of evidence is increasingly recognized today (e.g. Thierry 2005, Laland et al. 2011, Danchin and Pocheville 2014).
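The point that disparate timescales need not mean independence can be caricatured numerically. In this toy two-timescale system (an illustration of my own, with arbitrary parameters; it is not a model of air or of genomes), the fast variable adjusts about a thousand times more quickly than the slow one evolves, yet it continually feeds back on, and shapes, the slow evolution:

```python
import math

# Toy two-timescale system:
#     dx/dt = -y             (slow variable)
#     dy/dt = (x - y)/eps    (fast variable, relaxation timescale eps << 1)
# The fast variable y relaxes ~1000 times faster than x changes, yet it
# tracks x and feeds back on it, so x ends up decaying like exp(-t).
eps, dt = 1e-3, 1e-5
x, y, t = 1.0, 0.0, 0.0
while t < 1.0:
    x, y = x + dt * (-y), y + dt * (x - y) / eps
    t += dt

print(x, math.exp(-1.0))  # the two numbers are close
```

Declaring the fast process irrelevant `because the timescales are so different' would get this little system qualitatively wrong: remove the fast feedback and x doesn't decay at all.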
Mathematical equations can hide other aspects of the evolution problem. A simple example is the effect of population heterogeneity. It becomes invisible if you average over entire populations, without attending to spatial covariances. Chapter 3 of Wilson (2015) shows how such averaging has impeded understanding. To quote Wills (1994), researchers can sometimes become `prisoners of their mathematical models'.
Wills notes two examples of such imprisonment, going back to the 1970s. The first relates to the co-evolution of genomes and cultures, with a model neglecting multi-timescale processes, and the second to a dispute about adaptive genomic changes -- those that give an organism an immediate advantage -- versus neutral genomic changes, which have no immediate effect. That second dispute has been transcended by many subsequent developments including the breakthrough at molecular level described in Wagner (2014). Wagner and co-workers have shown in detail at molecular level why both kinds of change are practically speaking inevitable, and how neutral changes contribute to the huge genetic diversity that's key to survival in an ever-changing environment. What started as neutral can become adaptive, and does so in many cases. A spinoff from this work is a deeper insight into what we should mean by functionality within so-called junk DNA.
So it's one thing to write down impressive-looking equations but quite another to write, test, calibrate, and fully understand equations that model the complex reality in a useful way, viewing things from more than one angle. Fortunately, models closer to the required complexity (e.g. Lynch 2007, Laland et al. 2010, 2011, & refs., Schonmann et al. 2013, Werfel et al. 2015) have become easier to explore and to calibrate thanks to the power of today's computers, and to the evidence from today's genomic-sequencing technologies.
I sometimes wonder whether mathematical imprisonment, and the resulting legacy of confusion (e.g. Segerstråle 2000), mightn't have involved an exaggerated respect for the mathematical equations themselves, in some cases at least. There's a tendency to think that an argument is made more `rigorous' just by bringing in equations. But even the most beautiful equations can describe a model that's too simple for its intended purposes, or just plain wrong. As the old aphorism says, there's no point in being quantitatively right if you're qualitatively wrong. Or an equation can be beautiful, and useful, and correct as far as it goes, but incomplete. An example is the celebrated Price equation, sometimes called the E = mc² of population genetics. It's useful as a way of making population heterogeneity or spatial covariance more visible. It does not, however, itself represent everything one needs to know. It isn't the Answer to Everything! And when confronted with a phrase such as `mathematical proofs from population genetics', and similarly grandiose claims, one needs to ask what equations were used and what assumptions were made.
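For reference, one standard form of the Price equation is:

```latex
% One standard form of the Price equation.
% \bar{z} = mean trait value, z_i = trait value of individual or group i,
% w_i = fitness of i, \bar{w} = mean fitness, \Delta = change per generation.
\bar{w}\,\Delta\bar{z}
  \;=\; \operatorname{Cov}(w_i, z_i)
  \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right)
```

The covariance term is what makes heterogeneity visible: it captures selection acting between individuals or groups, while the expectation term captures transmission within them. But the equation is an accounting identity, not a dynamical model; by itself it says nothing about the mechanisms that generate the covariance.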
The sheer trickiness of all this reminds me of my own work, alongside that of many colleagues, on jetstreams and the ozone hole. It involved equations that give deep insight into some aspects of the problem precisely because other aspects are kept hidden. (Technically it comes under headings like `balanced flow', `potential-vorticity inversion', and the `slow quasimanifold' mentioned earlier.) Keeping some aspects hidden happens to be useful here for reasons that have been closely assessed and well studied, both at an abstract theoretical level and, for instance, through experience in numerical weather forecasting. Precisely what's hidden (sound waves and something called `inertia-gravity waves') is well understood and attended to, and demonstrably unimportant in many cases. And the backdrop to all of it is the surprising, fascinating, and awkward fact noted in the Prelude -- and emphasized in Wilson (2015) -- that equations can take alternative forms that are mathematically equivalent yet `psychologically very different'.
For the human species and our ancestors' evolution it seems to me that we do, nevertheless, have enough understanding to say something useful despite all the difficulties. The essence of that understanding is cogently summarized in the book by Wills (1994), which in many ways was well ahead of its time. Some important recent developments are reviewed in Laland et al. (2010, 2011), Richerson et al. (2010), and Danchin and Pocheville (2014), within a rapidly expanding research literature.
What's key for our species is that culturally-relevant traits and propensities -- including our unconscious compassion and generosity as well as our less pleasant traits -- and our language ability and our sheer versatility -- must have come from the co-evolution of genome and culture for at least hundreds of millennia and probably much longer, thanks to selective pressures on tightly-knit, ground-dwelling groups facing other groups and large predators in a changing climate. The selective pressures, including social pressures, must have operated at group level as well as individual level because survival must have depended on group solidarity and cooperation. Indeed, the distinguished palaeoanthropologist Phillip Tobias has argued for thousands of millennia of such co-evolution, with a two-way feedback loop between the slow and fast processes, genomic and cultural:
... the brain-culture relationship was not confined to one special moment in time. Long-continuing increase in size and complexity of the brain was paralleled for probably a couple of millions of years [my emphasis] by long-continuing elaboration and `complexification'... of the culture. The feedback relationship between the 2 sets of events is as indubitable as it was prolonged in time...
(Tobias 1971). Remarkably, Tobias' insight, clearly recognizing the multi-timescale aspects, seems to have been forgotten in the more recent literature on genome-culture co-evolution. There has also been a tendency to see the technology of stone tools as the only important aspect of `culture', giving an impression that culture stood still for one or two million years, just because the stone tools didn't change very much.
It's worth stressing again that an absence of beads and bracelets and other cultural archaeological durables is no evidence for an absence, or a standing-still, of culture and language, or proto-language whether gestural, or vocal, or both. It hardly needs saying, but apparently does need saying, again, that culture and language can be mediated purely by sound waves and light waves, leaving no archaeological trace. The gradually-developing ability to create what eventually became music, dance, poetry, rhetoric, and storytelling, held in the memories of gifted individuals and in a group's collective, cultural memory, would have produced selective pressures for further brain development and for generations of individuals still more gifted. Not only the best craftspeople, hunters, fighters, tacticians and social manipulators but also, in due course, the best singers and storytellers -- or more accurately, perhaps, the best and most sophisticated singer-storytellers -- would have had the best mating opportunities. Intimately part of all this would have been the kind of social intelligence we call `theory of mind' -- the ability to guess what others are thinking and, in due course, what others think I'm thinking, and so on. The implied genome-culture feedback loop is a central theme in Wills' important book, which draws on a deep knowledge of palaeoanthropology and experimental population genetics.
We need not argue about the overall timescales of these developments except to say that they must have been far, far longer than the recent tens of millennia we call the Upper Palaeolithic, the time of the beads and bracelets, as well as the beautiful cave paintings and many other such durables. In particular, there had to be plenty of time for the self-assembling building blocks of language and culture to seed themselves within genetic memory -- the genetically-enabled automata for language and culture -- from rudimentary beginnings or proto-language as echoed, perhaps, in the speech and gestural signing of a two-year-old today.
It's precisely on this point that the mindset against group-level selection, and the neglect of multi-timescale processes, conspire to mislead us most severely in the case of human evolution at least.
At some early stage in our ancestors' evolution, perhaps a million years ago or even more, language or proto-language barriers must have become increasingly significant. Little by little, they'd have sharpened the separation of one group from another. The groups, regarded as evolutionary `survival vehicles', would have developed increasingly tight outer boundaries. Such boundaries would have enhanced the efficiency of those vehicles as carriers of replicators into future generations within each group (e.g. Pagel 2012). The replicators would have been cultural as well as genomic. This channelling of both kinds of replicator within groups down the generations must have strengthened the feedback, the multi-timescale dynamical interplay, between cultures and genomes. And it was likely to have intensified the selective pressures exerted at group-versus-group level, for part of the time at least.
The importance or otherwise of group-level genomic selection would no doubt have varied over past millions of years, as the ground-dwelling groups continued to survive predation and to compete for resources whilst becoming more and more socially sophisticated -- with ever-increasing reliance on proto-language whether vocal, or gestural, or both. In the most recent stages of evolution, with runaway brain evolution as Wills called it, and languages approaching today's complexity -- perhaps over the past several hundred millennia -- the art of warfare between large tribes might have made the within-group channelling of genomic information less effective than earlier through, for instance, the enslavement of enemy females, increasing the cross-tribal flow of genomic information. But then again, that's a one-way flow into the strongest, smartest tribe, providing extra genetic diversity and adaptive advantage, including artistic talent, when it spills across internal caste boundaries and slave boundaries. Such asymmetric gene flow is completely ignored in the oldest population-genetics models and in many of their successors.
These group-wise or caste-wise-heterogeneous multi-timescale processes are dauntingly complex, as well as completely invisible to selfish-gene theory. But some of the complexity is now being captured in population-genetics models that are increasingly sophisticated, reflecting several viewpoints of which the gene-centric view is only one (e.g. Schonmann et al. 2013). And new lines of enquiry are sharpening our understanding of the multifarious dynamical mechanisms and feedbacks involved (e.g. Noble 2006, Laland et al. 2011, Danchin and Pocheville 2014). Some of those mechanisms have been discussed for a decade or more under headings such as `evo-devo' (evolutionary developmental biology) and, more recently, the controversial `extended evolutionary synthesis', which includes new mechanisms of `epigenetic heritability' that operate outside the DNA sequences themselves -- all of which says that genetic memory and genetically-enabled automata are even more versatile and flexible than previously thought.
Some researchers today even question the use of the words `genetic' and `genetically' for this purpose, but I think the words remain useful as a pointer to the important contribution from the DNA-mediated information flow, alongside many other forms of information flow into future generations including those via language, culture and `niche construction'. And the idea of genetically-enabled automata seems to me so important -- not least as an antidote to the old genetic-blueprint idea -- that I propose to use it without further apology. It's crucial, though, to remember that the manner in which these automata do or do not assemble themselves is very circumstance-dependent.
That the human language ability depends on genetically-enabled automata has been spectacularly confirmed, in an independent way, by recent events in Nicaragua of which we'll be reminded shortly. Of course the automata in question must be like those involved in developing the visual system, or in any other such biological self-assembly process, in that they're more accurately characterized as hierarchies and networks of automata linked to many genes and regulatory DNA segments and made of molecular-biological circuits and assemblies of such circuits -- far more complex than the ultra-simple `automata' used in mathematical studies of computation and artificial intelligence, and far more complex, and circumstance-dependent, than the hypothetical `genes', so called, of the old population-genetics models; see my endnote on Danchin and Pocheville (2014). Even professional biologists sometimes conflate these purely hypothetical genes with the actual genes found in DNA sequences.
And to say that there's an innate potential for language dependent on genetically-enabled automata is quite different from saying that language is innately `hard-wired' or `blueprinted'. As with the visual system, the building blocks are not the same thing as the assembled product -- assembled of course under the influence of a particular environment, physical and cultural. Recognition of this distinction between building blocks and assembled product might even, I dare hope, get us away from the silly quarrels about `all in the genes' versus `all down to culture'.
(Yes, language is in the genes and the regulatory DNA and culturally constructed, where of course we must understand the construction as being largely unconscious, as great artists, great writers, and great scientists have always recognized -- consciously or unconsciously! And there's no conflict with the many painstaking studies of comparative linguistics, showing the likely pathways and relatively short timescales for the cultural ancestry of today's languages. Particular linguistic patterns, such as Indo-European, are one thing, while the innate potential for language is another.)
But what about those multi-timescale aspects? How on Earth can genome, language and culture co-evolve, and interact dynamically, when their timescales are so very, very different? And above all, how can the latest cultural whim or flash in the pan influence so slow a process as genomic evolution? Isn't the comparison with air pressure too simplistic?
Well, there are many other examples of multi-timescale processes. Many have far greater complexity than air pressure, even if far short of biological complexity. The ozone hole is one such example. One might equally well ask, how can the very fast and very slow processes involved in the ozone hole have any significant interplay? How can the seemingly random turbulence that makes us fasten our seat belts have any role in a stratospheric phenomenon on a spatial scale bigger than Antarctica, involving timescales out to a century or so?
As I was forced to recognize in my own research, there is a significant and systematic interplay between atmospheric turbulence and the ozone hole. It's now well understood. Among other things it involves a sort of fluid-dynamical jigsaw puzzle made up of waves and turbulence. Despite differences of detail, and greater complexity, it's a bit like what happens in the surf zone near an ocean beach. There, tiny, fleeting eddies within the foamy turbulent wavecrests not only modify, but are also shaped by, the wave dynamics in an intimate interplay that, in turn, generates and interacts with mean currents, including rip currents, and with sand and sediment transport over far, far longer timescales.
The ozone hole is even more complex, and involves two very different kinds of turbulence. The first kind is the familiar small-scale, seat-belt-fastening turbulence, on timescales of seconds to minutes. The second is a much slower, larger-scale phenomenon involving a chaotic interplay between jetstreams, cyclones and anticyclones. Several kinds of waves are involved, including jetstream meanders. And interwoven with all that fluid-dynamical complexity we have regions with different chemical compositions, and an interplay between the transport of chemicals, on the one hand, and a large set of fast and slow chemical reactions on the other. The chemistry interacts with solar and terrestrial radiation, from the ultraviolet to the infrared, over a vast range of timescales from thousand-trillionths of a second as photons hit molecules out to days, weeks, months, years, and longer as chemicals are moved around by global-scale mean circulations. The key point about all this, though, is that what looks like a panoply of chaotic, flash-in-the-pan, fleeting and almost random processes on the shorter timescales has systematic mean effects over far, far longer timescales.
In a similar way, then, our latest cultural whims and catch-phrases may seem capricious, fleeting and sometimes almost random -- almost a `cultural turbulence' -- while nevertheless exerting long-term selective pressures that systematically favour the talents of gifted and versatile individuals who can grasp, exploit, build on, and reshape traditions and zeitgeists in what became the arts of communication, storytelling, imagery, politics, technology, music, dance, and comedy, with storytelling the most basic and powerful of these arts, as the saying often attributed to Plato has it: `Those who tell the stories rule society.' The feeling that it's `all down to culture' surely reflects the near-impossibility of imagining the vast overall timespans, out to millions of years, over which the automata or building blocks that mediate language and culture must have evolved under those turbulent selective pressures -- all the way from rudimentary beginnings millions of years ago.
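The way a tiny systematic bias in fast, noisy fluctuations can accumulate into large slow change -- with the slow state feeding back on the fast statistics -- can be caricatured in a few lines of Python. All the numbers here are invented purely for illustration; this is not a model of any real atmospheric or evolutionary system, just a sketch of the two-timescale structure.

```python
import random

random.seed(0)

# A minimal two-timescale caricature: a 'fast' variable fluctuates
# thousands of times per slow step, looking almost random step by step,
# yet its tiny systematic mean -- which itself depends on the slow
# variable (the feedback) -- drives a large slow drift.

slow = 0.0
FAST_STEPS = 10_000              # fast events per slow step
for t in range(100):             # 100 slow steps
    total = 0.0
    for _ in range(FAST_STEPS):
        # Each fast fluctuation is large and noisy (std = 1.0), with a
        # small mean, 0.01*(1 - slow), that feeds back from the slow state.
        total += random.gauss(0.01 * (1.0 - slow), 1.0)
    slow += total / FAST_STEPS   # the slow variable feels only the mean

print(f"slow variable after 100 steps: {slow:.2f}")
```

Each individual fast step is a hundred times bigger than its own mean and looks like pure noise, yet the slow variable drifts steadily toward a value set by the feedback. That's the sense in which flash-in-the-pan fluctuations can have systematic long-term effects.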
The air-pressure, ocean-beach and ozone-hole examples are enough to remind us that multi-timescale co-evolution is possible, with strong interactions across vastly disparate timescales. So for the co-evolution of genome and culture over millions of years there's no need for accelerated rates of genomic evolution, as has sometimes been thought (e.g. Wills 1995, pp. 10-13; Segerstråle 2000, p. 40). And the existence of genetically-enabled automata for language itself has been spectacularly verified by some remarkable recent events in Nicaragua.
Starting in 1979, Nicaragua saw the creation of a new Deaf community and an entirely new sign language, Nicaraguan Sign Language (NSL). Beyond superficial borrowings, NSL is considered by sign-language experts to be entirely distinct from any pre-existing sign language, such as American Sign Language or British Sign Language. It's clear moreover that NSL was created by, or emerged from, a community of schoolchildren with essentially no external linguistic input.
Before 1979, a time of drastic political change, deaf people in Nicaragua had no communities of their own. It was in 1979 that dozens, then hundreds, of deaf children first came into social contact, through a new educational programme that included schools for the deaf. Today, full NSL fluency at native-speaker level -- or rather native-signer level -- is found in just one group of people: those, and only those, who were young children in 1979. That's a simple fact on the ground. It's therefore practically certain that NSL was somehow created by the children, and that NSL was nonexistent before 1979.
Linguists quarrel over how to interpret this situation in part because the detailed evidence, as set out most thoroughly, perhaps, in Kegl et al. (2001), contradicts some well-entrenched ideas on how languages come into being. The feeling that it's `all down to culture' seems to be involved, with a single `human mother tongue' (e.g. Pagel 2012) having been invented as a purely cultural development and a by-product of increasing social intelligence.
If, however, we take the facts on the ground in Nicaragua and put them together with the improved understanding of natural selection already mentioned, including multi-level selection and the genome-culture feedback loop -- the complex, multi-timescale interplay between so-called nature and so-called nurture -- then we're forced to the conclusion that language acquisition and creation do require genetically-enabled automata among other things. Regardless of how we characterize the emergence of NSL, the evidence shows that the youngest children played a crucial role. And a key aspect must have been a child's unconscious urge to impose syntactic function and syntactic regularity on whatever language is being acquired or created. After all, it's common observation that a small child learning English will say things like `I keeped mouses in a box' rather than `I kept mice in a box'. It's the syntactic irregularities that need to be taught by older people, not the syntactic function itself.
This last point was made long ago by Noam Chomsky among others. But the way it fits in with natural selection was unclear at the time. We didn't then have today's insights into multi-level selection, multi-timescale genome-culture feedback, and genetically-enabled automata.
And as for the Nicaraguan evidence, the extensive account in Kegl et al. (2001) is a landmark. It describes careful and systematic studies using video and transcription techniques developed by sign-language experts. Those studies brought to light, for instance, what are called the pidgin and creole stages in the collective creation of NSL by, respectively, the older and the younger children, with full syntactic functionality arising at the creole stage only and coming from children aged 7 or younger. Pinker (1994) gives an excellent popular account. More recent work illuminates how the repertoire of syntactic functions in NSL is being filled out, and increasingly standardized, by successive generations of young children (e.g. Senghas 2010).
And what of the changing climate with which our ancestors had to cope? For most of the past several hundred millennia the climate system underwent huge fluctuations, some of which were very sudden, as will be illustrated shortly, drastically affecting our ancestors' living conditions and food supplies. And it was indeed the past few hundred millennia that saw the most spectacular human brain-size expansion in the fossil record (e.g. Dunbar 2003, Figure 4), corresponding to what Wills called runaway brain evolution, as our ancestors developed the social and cultural skills conducive to group survival -- including ever more elaborate rituals, belief systems, songs, and stories passed from generation to generation.
And what stories they must have been! Great sagas etched into a tribe's collective memory. It can hardly be accidental that the sagas known today tell of battles, of epic journeys, of great floods, and of terrifying deities that are both fickle benefactors and devouring monsters -- just as the surrounding large predators must have appeared to our still-more-remote ancestors as they foraged and scavenged before becoming major predators themselves (e.g. Ehrenreich 1997).
Figure 3 is a palaeoclimatic record giving a coarse-grain overview of climate variability going back 800 millennia. Time runs from right to left, and the upper graph shows temperature changes. Human recorded history occupies only a small sliver at the left-hand edge of Figure 3 extending about as far as the leftmost temperature maximum, a tiny peak to the left of the `H' of `Holocene'. The Holocene is the slightly longer period up to the present, roughly the past ten millennia, during which the climate was relatively warm.
Figure 3: Antarctic ice-core data from Lüthi et al. (2008) showing estimated temperature (upper graph) and measured atmospheric carbon dioxide (lower graph). Time, in millennia, runs from right to left up to the present day. The significance of the lower graph is discussed in the Postlude. The upper graph estimates air temperature changes over Antarctica, indicative of worldwide changes. The temperature changes are estimated from the amount of deuterium (hydrogen-2 isotope) in the ice, which is temperature-sensitive because of fractionation effects as water evaporates, transpires, precipitates, and redistributes itself between oceans, atmosphere, and ice sheets. The shaded bar corresponds to the relatively short time interval covered in Figure 4 below. The `MIS' numbers denote the `marine isotope stages' whose signatures are recognized in many deep-ocean mud cores, and `T' means `termination' or `major deglaciation'. The thin vertical line at around 70 millennia marks the time of the Lake Toba supervolcanic eruption.
The temperature changes in the upper graph of Figure 3 are estimated from a reliable record in Antarctic ice and are indicative of worldwide temperature changes. There are questions of detail and precise magnitudes, but little doubt as to the order of magnitude of the estimated changes. The changes were huge, especially during the past four hundred millennia, with peak-to-peak excursions of the order of ten degrees Celsius or more. A good cross-check is that the associated global mean sea-level excursions were also huge, up and down by well over a hundred metres, as temperatures went up and down and the great land-based ice sheets shrank and expanded. There are two clear and independent lines of evidence on sea levels, further discussed in the Postlude below. Also discussed there is the significance of the lower graph, which shows concentrations of carbon dioxide in the atmosphere. Carbon dioxide as a gas is extremely stable chemically, allowing it to be reliably measured from the air trapped in Antarctic ice. The extremes of cold, of warmth, and of sea levels mark what are called the geologically recent glacial-interglacial cycles.
When we zoom in to much shorter timescales, we see that some climate changes were not only severe but also abrupt, over time intervals comparable to, or even shorter than, an individual human lifetime. We know this thanks to patient and meticulous work on the records in ice cores and oceanic mud cores and in many other palaeoclimatic records (e.g. Alley 2000, 2007). The sheer skill and hard labour of fine-sampling, assaying, and carefully decoding such material to increase the time resolution, and to cross-check the interpretation, is a remarkable story of high scientific endeavour.
Not only were there occasional nuclear-winter-like events from volcanic eruptions, including the Lake Toba supervolcanic eruption around 70 millennia ago (thin vertical line in Figure 3) -- a far more massive eruption than any in recorded history -- but there was large-amplitude internal variability within the climate system itself. Even without volcanoes the system has so-called chaotic dynamics, with scope for rapid changes in, for instance, sea-ice cover and in the meanderings of the great atmospheric jetstreams and their oceanic cousins, such as the Gulf Stream and the Kuroshio and Agulhas currents.
This chaotic variability sometimes produced sudden and drastic climate change over time intervals as small as a few years or even less -- practically instantaneous by geological and palaeoclimatic standards. Such events are called `tipping points' of the chaotic dynamics. Much of the drastic variability now known -- in its finest detail for the last fifty millennia or so -- takes the form of complex and irregular `Dansgaard-Oeschger cycles' involving a large range of timescales from millennia down to tipping-point timescales and strongly affecting much of the northern hemisphere.
Figure 4 expands the time interval marked by the shaded bar near the left-hand edge of Figure 3. Note that time again runs from right to left. The graph is a record from Greenland ice with enough time resolution to show details for some of the Dansgaard-Oeschger cycles, those conventionally numbered from 3 to 10. The cycles have amplitudes much greater in the northern hemisphere than in the southern. The graph estimates air temperature changes over Greenland (see caption). The thin vertical lines mark the times of major warming events, which by convention define the end of one cycle and the start of the next. Those warmings were huge, typically of the order of ten degrees Celsius, in round numbers, as well as very abrupt. Indeed they were far more abrupt than the graph can show. In some cases they took a few years or less (Dokken et al. 2013, & refs.); see also Alley (2000).
Figure 4: Greenland ice-core data from Dokken et al. (2013), for the time interval corresponding to the shaded bar in Figure 3. Time in millennia runs from right to left. The graph shows variations in the amount of the oxygen-18 isotope in the ice, from which temperature changes can be estimated in much the same way as in Figure 3. The abrupt warmings marked by the thin vertical lines are mostly of the order of 10°C or more. The thicker vertical lines show timing checks from layers of tephra or volcanic debris. The shaded areas refer to geomagnetic excursions.
Between the major warming events we see an incessant variability at more modest amplitudes -- more like a few degrees Celsius -- nevertheless more than enough to have affected our ancestors' food supplies and living conditions. It seems that the legendary years of famine and years of plenty in human storytelling had an origin far more ancient than recorded history.
And, to survive, our ancestors must have had strong leaders and willing followers. The stronger and the more willing, the better the chance of surviving hardship, migration, and warfare. Hypercredulity and weak logic-checking must have become central to all this. They must have been strongly selected for, as genome and culture co-evolved and as language became more and more sophisticated.
How do you make leadership work? Do you make a reasoned case? Do you ask your followers to check your logic? Do you check it yourself? Of course not! You're a leader because, with your people starving or faced with a hostile tribe, or both, you've emerged as a charismatic visionary, divinely inspired: `O my people, I have seen the True Path that we must follow. Come with me! Let's make our tribe great again. Beyond those mountains, over that distant horizon, that's where we'll win through and find our Promised Land. It is our destiny to find that Land and overcome all enemies because we, and only we, are the True Believers. Our stories are the only true stories.' How else, in the incessantly-fluctuating climate, I ask again, did our one species -- our single human genome -- spread all around the globe in less than a hundred millennia?
And what of dichotomization? It's far more ancient of course, as well as deeply and powerfully instinctive. Ever since the Cambrian, half a billion years ago, individual lives have teetered on the brink of fight or flight, edible or inedible, male or female, friend or foe. But with language and hypercredulity in place, dichotomization -- deep in the more primitive, reptilian parts of our brains -- can take the new forms we see today. Not just friend or foe and with us or against us but also We are right and they are wrong. It's the Absolute Truth of our tribe's belief system versus the absolute falsehood of theirs -- a dichotomization now hugely, profitably, and perilously amplified by the social media (McNamee 2019).
And in case you're tempted to dismiss all this as a mere `just so story' -- speculation unsupported by evidence -- let me call your attention not only to today's amplified extremisms but also to the wealth of supporting evidence and careful thinking, and mathematical modelling, summarized in the book by D.S. Wilson (2015). In particular, his chapters 3, 6 and 7 make the case not only for multi-level selection but also for the adaptive power of traits such as those I've been calling hypercredulity, dichotomization, and weak logic-checking. In particular, he details their conspicuous role in today's fundamentalisms -- religious and atheist alike -- including the atheist form of free-market fundamentalism whose best-known prophet, its Joan of Arc, was the legendary Ayn Rand. Understanding free-market fundamentalism is important because of its continuing influence on the written and unwritten rules that, for the time being, govern our unstable economies. By contrast with Adam Smith's ideas, individual profit is the supreme goal and the Absolute Moral Imperative. (Short-term profit especially; and `short term' now means small fractions of a second, in computer-mediated trading.)
A strangely different belief system, not an ordinary fundamentalism but similarly centred on weak logic-checking, is the belief that `nothing is true and everything is possible' (Pomerantsev 2015) -- a postmodernism of `alternative facts', an Absolute Truth that nothing is true except the belief that nothing is true. (How profound! How ineffable! How Derridian!) That belief, or at least the idea behind it, has been weaponized by propagandists such as master manipulator Vladislav Surkov, over the past two decades, creating intricate `postmodernist traffic rules' for politics and taking the dark arts of camouflage and deception to new levels altogether.
As Wilson demonstrates from detailed case studies, both religious and atheist, the characteristic dichotomization in fundamentalist belief systems is `Our ideas good, their ideas bad', for everyone without exception. For instance Ayn Rand seemed to claim what Adam Smith did not, that selfishness is absolutely good and altruism absolutely bad, for absolutely everyone -- or, rather, I suspect, for everyone that matters, everyone in my tribe, every true believer who shares our stories and thereby deserves to survive and prosper. It seems that some well-intentioned believers such as Rand's disciple Alan Greenspan were devastated when the 2008 financial crash took them by surprise, shortly after Greenspan's long reign at the US Federal Reserve Bank. By a supreme irony Rand's credo also says, or takes for granted, that `We are rational and they are irrational.' Any logic-checking that supports an alternative viewpoint is `irrational', something to be dismissed out of hand. And more than that, any departure from the sacred dichotomy is a disgrace, a sign of lily-livered moral weakness.
Dichotomization makes us stupid, doesn't it. But of course many other traits must have been selected for, underpinning our species' remarkable social sophistication and tribal organization as discussed in, for instance, Pagel (2012). Recent advances in palaeoarchaeology have added much detail to the story of the past hundred millennia, based on evidence that includes the size and structure of our ancestors' campsites. Some of the evidence now points to inter-group trading as early as 70 millennia ago, perhaps accelerated by extreme climate stress from the Toba super-eruption around then (e.g. Rossano 2009) -- suggesting not only warfare between groups but also wheeling and dealing, all of it favouring high levels of social sophistication and organization, and versatility. Indeed, group identity and the we-versus-they dichotomy can be very flexible in humans, as so brilliantly shown in the behavioural experiments of social psychologist Stephen Reicher and his co-workers. The same flexibility was dramatically illustrated by the recent (23 June 2019) electoral success in Turkey, where political traction came not from the polarizing `identity politics' of recent years but, rather, from a colourful politicians' handbook, the `Book of Radical Love', switching the focus toward pluralistic core values and `caring for each other' -- even for one's political opponents!
Today we must live with the genetic inheritance from all this. In our overcrowded world, awash with powerful military, financial, cyberspatial and disinformational weaponry, the dangers are self-evident. And yet our flexibility can give us great hope, despite what a human-nature cynic might say. Thanks to our improved understanding of natural selection, we now understand much better how genetic memory works. Genetic memory and `human nature' are not nearly as rigid, not nearly as `hard-wired', as many people think. Wilson (2015) points out that this improved understanding suggests new ways of coping. We do, believe it or not, have the potential to go deeper and get smarter!
For instance practical belief systems have been rediscovered that can avert the `tragedy of the commons', the classic devastation of resources that comes from unrestrained selfishness. That tragedy, as ancient as life itself (Werfel et al. 2015), now threatens our entire planet. And the push toward it by further prioritizing selfishness is increasingly recognized as -- well -- insane. Not the true path, after all. Not the Absolute Moral Imperative. There are signs, now, of saner and more stable compromises, or rather symbioses, between regulation and market forces, more like Adam Smith's original idea. The Montreal Protocol on the ozone hole is an inspiring example.
And the idea of genetically-enabled automata or self-assembling building blocks becomes more important than ever, displacing the older, narrower ideas of genetic blueprint, innate hard wiring, selfish genes, and rigid biological determinism. The epigenetic flexibility allowed by the automata is now seen as a significant aspect of biological evolution, not least that of our ancestors.
I really do hope that these advances in our understanding of evolution might free us from the old cliché that the nastier parts of human nature are purely biological, while the nicer parts are purely cultural, or purely religious. Yes, it's clear that our genetic memory can spawn powerful, self-assembling automata for the worst kinds of human behaviour. Yet, as history shows, changing circumstances can mean that those automata don't always have to assemble themselves in the same way. Everyone needs some kind of faith or hope; but personal beliefs don't have to be fundamentalist and exclusionary. And genocide isn't hard-wired. It can be outsmarted and avoided. It has been avoided on some occasions. Group identity can be flexible and multifarious, when not boxed in by the dichotomizing social media. Compassion and generosity can come into play, welling up from unconscious levels and transcending the mere game-playing of reciprocal altruism. There are such things as loneliness, friendship, forgiveness, and unconditional love. They too have their biological automata, deep within our unconscious being -- our unconscious, epigenetic being. They too are outside the scope of selfish-gene theory, but are part of our human nature and its potential to get smarter. Their ubiquity -- their very ordinariness -- is attested to, in a peculiar way, by the very fact that they're seldom considered newsworthy.
Love and redemption are forces strongly felt in some of the great epics, such as the story of Parsifal. And of course even they have their dark side, within the more dangerous fundamentalisms. Insights into all this go back to the great psychologist Carl Gustav Jung and before that, as the great novelist Ursula K. Le Guin reminds us, back to ancient wisdoms such as Taoism -- exploring what Jung called the `collective unconscious' of our species and its dark and light sides, which are so inextricably intertwined:
I would know my shadow and my light; so shall I at last be whole.
In his great oratorio for peace, A Child of Our Time, the composer Michael Tippett set those words to some of the most achingly beautiful music ever written.
Picture a typical domestic scene. `You interrupted me!' `No, you interrupted me!'
Such stalemates can arise from the fact that perceived timings differ from actual timings in the outside world. I once tested this experimentally by secretly tape-recording a dinner-table conversation. At one point I was quite sure that my wife had interrupted me, and she was equally sure it had been the other way round. When I listened afterwards to the tape, I discovered to my chagrin that she was right. She had started to speak a few hundred milliseconds before I did.
Musical training includes learning to cope with the discrepancies between perceived timings and actual timings. For example, musicians often check themselves with a metronome, a small machine that emits precisely regular clicks. The final performance won't necessarily be metronomic, but practising with a metronome helps to remove inadvertent errors in the fine control of rhythm. `It don't mean a thing if it ain't got that swing...'
There are many other examples. I once heard a radio interviewee recalling how he'd suddenly got into a gunfight: `It all went intuh slowww... motion.'
(A scientist who claims to know that eternal life is impossible has failed to notice that perceived timespans at death might stretch to infinity. That, by the way, is a simple example of the limitations of science. What might or might not happen to perceived time at death is a question outside the scope of science, because it's outside the scope of experiment and observation. It's here that ancient religious teachings show more wisdom, I think, when they say that deathbed compassion and reconciliation are important to us. Perhaps I should add that I'm not myself conventionally religious. I'm an agnostic whose closest approach to the numinous -- to things transcendental, to the divine if you will -- has been through music.)
Some properties of perceived time are very counterintuitive indeed. They've caused much conceptual and philosophical confusion, especially in the literature on free will. For instance, the perceived times of outside-world events can precede the arrival of the sensory data defining those events, sometimes by as much as several hundred milliseconds. At first sight this seems crazy, and in conflict with the laws of physics. Those laws include the principle that cause precedes effect. But the causality principle refers to time in the outside world, not to perceived time. The apparent conflict is a perceptual illusion. I'll refer to such phenomena as `acausality illusions'.
The existence of acausality illusions -- of which music provides outstandingly clear examples, as we'll see shortly -- is a built-in consequence of the way perception works. And the way perception works is well illustrated by the `walking lights'.
Consider for a moment what the walking lights tell us. The sensory data are twelve moving dots in a two-dimensional plane. But they're seen by anyone with normal vision as a person walking -- a particular three-dimensional motion exhibiting organic change. (The invariant elements include the number of dots, and the distances, in three-dimensional space, between particular pairs of locations corresponding to particular pairs of dots.) There's no way to make sense of this except to say that the unconscious brain fits to the data an organically-changing internal model that represents the three-dimensional motion, using an unconscious knowledge of Euclidean geometry.
This by the way is what Kahneman (2011) calls a `fast' process, something that happens ahead of conscious thought, and outside our volition. Despite knowing that it's only twelve moving dots, we have no choice but to see a person walking.
Such model-fitting has long been recognized by psychologists as an active process involving unconscious prior probabilities, and therefore top-down as well as bottom-up flows of information (e.g. Gregory 1970, Hoffman 1998, Ramachandran and Blakeslee 1998). For the walking lights the greatest prior probabilities are assigned to a particular class of three-dimensional motions, privileging them over other ways of creating the same two-dimensional dot motion. The active, top-down aspects show up in neurophysiological studies as well (e.g. Gilbert and Li 2013).
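The role of unconscious prior probabilities can be sketched in a toy Bayesian calculation. The numbers below are invented purely for illustration: two candidate models fit the twelve-dot data equally well, so the posterior choice between them is decided entirely by the priors -- which is, in effect, what the unconscious model-fitting does at lightning speed.

```python
# Toy Bayesian model selection, with invented numbers: two candidate
# models that fit the twelve-dot data equally well, so the choice
# between them is decided entirely by the prior probabilities.

def posterior(priors, likelihoods):
    """Bayes' rule: P(model | data) is proportional to
    P(data | model) * P(model), normalized over the candidates."""
    unnorm = {m: priors[m] * likelihoods[m] for m in priors}
    total = sum(unnorm.values())
    return {m: p / total for m, p in unnorm.items()}

priors = {"3-D walker": 0.95,      # privileged by unconscious experience
          "flat 2-D dots": 0.05}   # geometrically possible, rarely seen
likelihoods = {"3-D walker": 1.0,  # both models fit the dot data perfectly
               "flat 2-D dots": 1.0}

post = posterior(priors, likelihoods)
print(post)  # the walker wins purely on its prior
```

The point of the sketch is that equal goodness of fit leaves the priors to do all the work, just as the visual system privileges the three-dimensional walking motion over other interpretations of the same dots.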
The term pattern-seeking is sometimes used to suggest the active nature of the unconscious model-fitting process. For the walking lights the significant pattern is four-dimensional, involving as it does the time dimension as well as all three space dimensions. Without the animation, one tends to see no more than a bunch of dots. So active is our unconscious pattern-seeking that we are prone to what psychologists call pareidolia, seeing patterns in random images.
And what is a `model'? In the sense I'm using the word, it's a partial and approximate representation of reality, or presumed reality. As the famous aphorism says, `All models are wrong, but some are useful'. Models are made in a variety of ways.
The internal model evoked by the walking lights is made by activating some neural circuitry. The objects appearing in video games and virtual-reality simulations are models made of electronic circuitry and computer code. Children's model boats and houses are made of real materials but are, indeed, models as well as real objects -- partial and approximate representations of real boats and houses. Population-genetics models are made of mathematical equations, and computer code usually. So too are models of photons, of black holes, of lightspeed spacetime ripples, and of jetstreams and the ozone hole. Any of these models can be more or less accurate, and more or less detailed. But they're all partial and approximate.
So ordinary perception, in particular, works by model-fitting. Paradoxical and counterintuitive though it may seem, the thing we perceive is -- and can only be -- the unconsciously-fitted internal model. And the model has to be partial and approximate because our neural processing power is finite. The whole thing is counterintuitive because it goes against our visual experience of outside-world reality -- as not just self-evidently external, but also as direct, clearcut, unambiguous, and seemingly exact in many cases. Indeed, that experience is sometimes called `veridical' perception, as if it were perfectly accurate. One often has an impression of sharply-outlined exactness -- for instance with such things as the delicate shape of a bee's wing or a flower petal, the precise geometrical curve of a hanging dewdrop, the sharp edge of the sea on a clear day and the magnificence, the sharply-defined jaggedness, of snowy mountain peaks against a clear blue sky.
(Right now I'm using the word `reality' to mean the outside world. Also, I'm assuming that the outside world exists. I'm making that assumption consciously as well as, of course, unconsciously. Notice by the way that `reality' is another dangerously ambiguous word. It's another source of conceptual and philosophical confusion. To start with, the thing we perceive is often called `the perceived reality', whether it's a mountain peak, a person walking, a charging rhinoceros or a car on a collision course or anything else. Straight away we blur the distinction drawn long ago by Plato, Kant, and other great thinkers -- the distinction between the thing we perceive and the thing-in-itself in the outside world. And is music real? Is mathematics real? Is our sense of `self' real? Is religious experience real? Are love and redemption real? There are different kinds of `reality' belonging to different levels of description, some of them very different for different individuals. To me, music is very real and I have an excellent ear for it. When it comes to conventional religion, I'm nearly tone-deaf.)
The walking lights remind us that the unconscious model-fitting takes place in time as well as in space. Perceived times are -- and can only be -- internal model properties. And they must make allowance for the brain's finite information-processing rates. That's why, in particular, the existence of acausality illusions is to be expected.
In order for the brain to produce a conscious percept from visual or auditory data, many stages and levels of processing are involved -- top-down as well as bottom-up. The overall timespans of such processing are well known from experiments using high-speed electrical and magnetic recording such as electroencephalography and magnetoencephalography, to detect episodes of brain activity. Timespans are typically of the order of hundreds of milliseconds. Yet, just as with visual perception, the perceived times of outside-world events have the same `veridical' character of being clearcut, unambiguous, and seemingly exact, like the time pips on the radio. It's clear at least that perceived times are often far more accurate than hundreds of milliseconds.
That accuracy is a consequence of biological evolution. In hunting and survival situations, eye-hand-body coordination needs to be as accurate as natural selection can make it. Perceived times need not -- and do not -- await completion of the brain activity that mediates their perception. Our ancestors survived. We've inherited their timing abilities. World-class tennis players time their strokes to a few milliseconds or thereabouts. World-class musicians work to similar accuracies, in the fine control of rhythm and in the most precise ensemble playing. It's more than being metronomic; it's being `on the crest of the rhythm'.
You don't need to be a musician or sportsperson to appreciate the point I'm making. If you and I each tap a plate with a spoon or chopstick, we can easily synchronize a regular rhythm with each other, or synchronize with a metronome, to accuracies far, far better than hundreds of milliseconds. Accuracies more like tens of milliseconds can be achieved without much difficulty. So it's plain that perceived times -- internal model properties -- are one thing, while the timings of associated brain-activity events, spread over hundreds of milliseconds, are another thing altogether.
This simple point has been missed again and again in the philosophical and cognitive-sciences literature. In particular, it has caused endless confusion in the debates about consciousness and free will. The interested reader will find further discussion in Part II of Lucidity and Science but, in brief, the confusion seems to stem from an unconscious assumption -- which I hope I've shown to be nonsensical -- an assumption that the perceived `when' of hitting a ball or taking a decision should be synchronous with the `when' of some particular brain-activity event.
As soon as that nonsense is blown away, it becomes clear that acausality illusions should occur. And they do occur. The simplest and clearest examples come from music -- `the art that is made out of time', as Ursula Le Guin once described it in her great novel The Dispossessed. Let's suppose that we refrain from dancing to the music, and that we keep our eyes closed. Then, when we simply listen, the data to which our musical internal models are fitted are the auditory data alone.
I'll focus on Western music. Nearly everyone with normal hearing is familiar, at least unconsciously, with the way Western music works. The unconscious familiarity goes back to infancy or even earlier. Regardless of genre, whether it be commercial jingles, or jazz or folk or pop or classical or whatever -- and, by the way, the classical genre includes much film music, for instance Star Wars -- regardless of genre, the music depends on precisely timed events called harmony changes. That's why children learn guitar chords. That's how the Star Wars music suddenly goes spooky, after the heroic opening.
The musical internal model being fitted to the incoming auditory data keeps track of the times of musical events, including harmony changes. And those times are -- can only be -- perceived times, that is, internal model properties.
Figure 5 shows one of the clearest examples I can find. Playback is available from a link in the figure caption. It's from a well known classical piano piece that's simple, slow, and serene, rather than warlike. There are five harmony changes, the third of which is perceived to occur midway through the example, at the time shown by the arrow. Yet if you stop the playback just after that time, say a quarter of a second after, you don't hear any harmony change. You can't, because that harmony change depends entirely on the next two notes, which come a third and two-thirds of a second after the time of the arrow. So in normal playback the perceived time of the harmony change, at the time of the arrow, precedes by several hundred milliseconds the arrival of the auditory data defining the change.
Figure 5: Opening of the slow movement of the piano sonata K 545 by Wolfgang Amadeus Mozart. Here's an audio clip giving playback at the speed indicated. Here's the same with orchestral accompaniment. (Mozart would have done it more subtly -- with only one flute, I suspect -- but that's not the point here.)
That's a clear example of an acausality illusion. It's essential to the way the music works. Almost like the `veridical' perception of a sharp edge, the harmony change has the subjective force of perceived reality -- the perceived `reality' of what `happens' at the time of the arrow.
When I present this example in a lecture, it's sometimes put to me that the perceived harmony change relies on the listener being familiar with the particular piece of music. Having been written by Mozart, the piece is indeed familiar to many classical music lovers. My reply is to present a variant that's unfamiliar, with a new harmony change. It starts diverging from Mozart's original just after the time of the arrow (Figure 6):
Figure 6: This version is the same as Mozart's until the second note after the arrow. Here's the playback. Here's the same with orchestral accompaniment.
As before, the harmony change depends entirely on the next two notes but, as before, the perceived time of the harmony change -- the new and unfamiliar harmony change -- is at, not after, the time of the arrow. The point is underlined by the way any competent composer or arranger would add an orchestral accompaniment, to either example -- an accompaniment of the usual kind found in classical piano concertos. Listen to the second clip in each figure caption. The accompaniments change harmony at, not after, the time of the arrow.
I discussed those examples in greater detail in Part II of Lucidity and Science, with attention to some subtleties in how the two harmony changes work and with reference to the philosophical literature, including Dennett's `multiple-drafts' theory of consciousness, which is a way of thinking about perceptual model-fitting in the time dimension.
Just how the brain manages its model-fitting processes is still largely unknown, even though the cleverness, complexity and versatility of these processes can be appreciated from a huge range of examples. Interactions between many brain regions are involved and, in many cases, more than one sensory data stream.
An example is the McGurk effect in speech perception. Visual data from lip-reading can cause changes in the perceived sounds of phonemes. For instance the sound `baa' is often perceived as `daa' when watching someone say `gaa'. The phoneme model is being fitted multi-modally -- simultaneously to more than one sensory data stream, in this case visual and auditory. The brain often takes `daa' as the best fit to the slightly-conflicting data.
The Ramachandran-Hirstein `phantom nose illusion' -- which can be demonstrated without special equipment -- produces a striking distortion of one's perceived body image, a nose elongation well beyond Pinocchio's or Cyrano de Bergerac's (Ramachandran and Blakeslee, p. 59). It's produced by a simple manipulation of tactile and proprioceptive data. They're the data feeding into the internal model that mediates the body image, including the proprioceptive data from receptors such as muscle spindles sensing limb positions.
What's this so-called body image? Well, the brain's unconscious internal models must include a self-model -- a partial and approximate representation of one's self, and one's body, in one's surroundings. Plainly one needs a self-model, if only to be well oriented in one's surroundings and to distinguish oneself from others. `Hey -- you're treading on my toe.'
There's been philosophical confusion on this point, too. Such a self-model must be possessed by any animal. Without it, neither a leopard nor its prey would have a chance of surviving. Nor would a bird, or a bee, or a fish. Any animal needs to be well-oriented in its surroundings, and to be able to distinguish itself from others. Yet the biological, philosophical, and cognitive-science literature sometimes conflates `having a self-model', on the one hand, with `being conscious' on the other.
Compounding the confusion is another misconception, the `archaeological fallacy' that symbolic representation came into existence only recently, at the start of the Upper Palaeolithic with its beads, bracelets, flutes, and cave paintings, completely missing the point that leopards and their prey can perceive things and therefore need internal models. So do birds, bees, and fish. Their internal models, like ours, are -- can only be -- unconscious symbolic representations. Patterns of neural activity are symbols. Again, symbolic representation is one thing, and consciousness is another. Symbolic representation is far more ancient -- by hundreds of millions of years -- than is commonly supposed.
The use of echolocation by bats and sea mammals is a variation on the same theme. For bats, too, the perceived reality must be the internal model -- not the echoes themselves, but a symbolic representation of the bat's surroundings. It must work in much the same way as our vision except that the bat provides its own illumination, with refinements such as motion detection by Doppler shifting. To start answering the famous question `what is it like to be a bat' we could do worse than imagine seeing in the dark with a stroboscopic floodlight, whose strobe frequency can be increased at will.
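For a feel of the geometry involved, here's a minimal sketch of the range computation that an echolocating brain must, in effect, perform: an echo returning after a delay t corresponds to a target at distance (speed of sound × t)/2, the pulse having travelled out and back. The speed of sound in air and the delay value below are assumptions for illustration only.

```python
# Minimal sketch of the range computation implicit in echolocation.
# Assumes sound in air at roughly 20 degrees C; the delay is invented.
SPEED_OF_SOUND = 343.0  # metres per second

def range_from_echo(delay_seconds):
    # The pulse travels out and back, hence the factor of 2.
    return SPEED_OF_SOUND * delay_seconds / 2.0

# An echo returning after 10 milliseconds: a target roughly 1.7 m away.
print(range_from_echo(0.010))
```

Real echolocation is of course vastly more sophisticated -- it must integrate many such echoes, at many frequencies, into the internal model of the surroundings -- but the sketch shows the basic geometry being exploited.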
And what of the brain's two hemispheres? Here I must defer to McGilchrist (2009) and to Ramachandran and Blakeslee (1998), who in their different ways offer a rich depth of understanding coming from neuroscience and neuropsychiatry, far transcending the superficialities of popular culture. For present purposes, McGilchrist's key point is that having two hemispheres is evolutionarily ancient. Even fish have them. The two hemispheres may have originated from the bilaterality of primitive vertebrates but then evolved in different directions. If so, it would be a good example of how a neutral genomic change can later become adaptive.
A good reason to expect such bilateral differentiation, McGilchrist argues, is that survival is helped by having two styles of perception. They might be called holistic on the one hand, and detailed, focused, analytic, and fragmented on the other. The evidence shows that the first, holistic style is a speciality of the right hemisphere, and the second a speciality of the left, or vice versa in a minority of people.
If you're a pigeon who spots some small objects lying on the ground, then you want to focus attention on them because you want to know whether they are, for instance, inedible grains of sand or edible seeds. That's the left hemisphere's job. It has a style of model-fitting, and a repertoire of models, that's suited to a fragmented, dissected view of the environment, picking out a few chosen details while ignoring the vast majority of others. The left hemisphere can't see the wood for the trees. Or, more accurately, it can't even see a single tree but only, at best, leaves, twigs or buds (which, by the way, might be good to eat). One can begin to see why the left hemisphere is more prone to mindsets.
But suppose that you, the pigeon, are busy sorting out seeds from sand grains and that there's a peculiar flicker in your peripheral vision. Suddenly there's a feeling that something is amiss. You glance upward just in time to see a bird of prey descending and you abandon your seeds in a flash! That kind of perception is the right hemisphere's job. The right hemisphere has a very different repertoire of internal models, holistic rather than dissected. They're often fuzzier and vaguer, but with a surer sense of overall spatial relations, such as your body in its surroundings. They're capable of superfast deployment. The fuzziness, ignoring fine detail, makes for speed when coping with the unexpected.
Ramachandran and Blakeslee point out that another of the right hemisphere's jobs is to watch out for inconsistencies between incoming data and internal models, including any model that's currently active in the left hemisphere. When the data contradict the model, the left hemisphere has a tendency to reject the data and cling to the model -- to be trapped in a mindset. `Don't distract me; I'm trying to concentrate!' Brain scans show a small part of the right hemisphere that detects such inconsistencies or discrepancies. If the discrepancy is acute, the right hemisphere bursts in with `Look out, you're making a mistake!' If the right hemisphere's discrepancy detector is damaged, severe mindsets such as anosognosia can result.
McGilchrist points out that the right hemisphere is involved in many subtle and sophisticated games, such as playing with the metaphors that permeate language or, one might even say, that mediate language. So the popular-cultural mindset that language is all in the left hemisphere misses many of the deeper aspects of language.
And what of combinatorial largeness? Perhaps the point is obvious. For instance there's a combinatorially large number of possible visual scenes, and of possible assemblages of internal models to fit them. Even so simple a thing as a chain with 10 different links can be assembled in 3,628,800 different ways, and with 100 different links in approximately 10^158 different ways, 1 followed by 158 zeros. Neither we nor any other organism can afford to deal with all the possibilities. Visual-system processes such as early-stage edge detection (e.g. Hoffman 1998) and the unconscious perceptual grouping studied by the Gestalt psychologists, as with the two groups in dot patterns like •• ••• (e.g. Gregory 1970), give us glimpses of how the vast combinatorial tree of possibilities is pruned by our extraordinary model-fitting apparatus -- the number of possibilities cut down -- at lightning speed and ahead of conscious thought.
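The counts just quoted are factorials -- n distinct links can be ordered in n! different ways -- and are easily checked:

```python
# The counts quoted above are factorials: n distinct links can be
# ordered in n! (n factorial) different ways.
import math

print(math.factorial(10))   # 3628800, the 3,628,800 of the text
# 100! has 158 digits, i.e. it is roughly 10 to the power 158:
print(len(str(math.factorial(100))))
```

Factorial growth is faster than exponential, which is why even modest assemblages defeat any brute-force enumeration and force the brain to prune.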
Perceptual grouping works in time as well as in space, as for instance with the four-note groups starting at the arrows in Figures 5 and 6. This grouping in subjective time was adumbrated long ago in the thinking of the philosopher Henri Bergson, predating the work of the Gestalt psychologists. Such grouping is part of what gives rise to acausality illusions.
And what of science itself? What about all those mathematical and computer-coded models of population genetics and of photons, of molecules, of black holes, of lightspeed spacetime ripples, of jetstreams and the ozone hole, and of the other entities we deal with in science? Could it be that science itself is always about finding useful models that fit data from the outside world, and never about finding Veridical Absolute Truth? Can science be a quest for truth even if the truth is never Absolute?
The next chapter will argue that the answer to both questions is an emphatic yes. One of the key points will be that, even if one were to find a candidate `Theory of Everything', one could never test it at infinite accuracy, in an infinite number of cases, and in all parts of the Universe or Universes. One might achieve superlative scientific confidence, with many accurate cross-checks, within a very wide domain of applicability. The theory might be described by equations of consummate beauty. And that would be wonderful. But in principle there'd be no way to be Absolutely Certain that it's Absolutely Correct, Absolutely Accurate, and Applicable to Everything. That's kind of obvious, isn't it?
So I'd like to replace all those books on philosophy of science by one simple, yet profound and far-reaching, statement. It not only says what science is, in the most fundamental possible way, but it also clarifies the power and limitations of science. It says that science is an extension of ordinary perception, meaning perception of outside-world reality. Like ordinary perception, science fits models to data.
If that sounds glib and superficial to you, dear reader, then all I ask is that you think again about the sheer wonder of so-called ordinary perception. It too has its power and its limitations, and its fathomless subtleties, agonized over by generations of philosophers. Both science and ordinary perception work by fitting models -- symbolic representations -- to data from the outside world. Both science and ordinary perception must assume that the outside world exists, because it can't be proven absolutely. Models, and assemblages and hierarchies of models -- schemas or schemata as they're sometimes called -- are partial and approximate representations, or candidate representations, of outside-world reality. Those representations can be anything from superlatively accurate to completely erroneous.
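For readers who like to see the idea in its barest form, here's a sketch of `fitting a model to data': a straight-line model fitted by ordinary least squares. The data points are invented for illustration. Like all models, the fitted line is partial and approximate -- the residuals never quite vanish.

```python
# A bare-bones example of fitting a model to data: a straight-line
# model y = a*x + b fitted by ordinary least squares.
# The data points are invented for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [0, 1, 2, 3, 4]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]   # roughly y = 2x, with "noise"
a, b = fit_line(xs, ys)
print(a, b)  # close to, but not exactly, 2 and 0: the model is approximate
```

Everything said above about science in general is already visible here in miniature: a chosen class of models, a goodness-of-fit criterion, and a result that is useful without being Absolutely True.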
Notice that the walking-lights animation points to the tip of a vast iceberg, a hierarchy of unconscious internal models starting with the three-dimensional motion itself but extending all the way to the precise manner of walking and the associated psychological and emotional subtleties. The main difference between science and so-called ordinary perception is that, in science, the set of available models is more extensive and the model-fitting process to some extent more conscious, as well as being far slower, and dependent on vastly extended data acquisition, computation, and cross-checking. Making the process more systematic in our big-data era is one of today's grand challenges -- and the mathematical means to do so is now available (e.g. Pearl and Mackenzie 2018) and is being put to use in, for instance, artificial intelligence systems based on machine learning. These systems fit models to experimental data in a logically self-consistent way, the experimenter's actions being represented by the Bayesian probabilistic `do' operator. Such systems learn by `artificial juvenile play', by trying things out. They do so of course within some prescribed universe of discourse, which might be that of social-media profitability but might instead, by contrast, be that of a particular scientific problem, such as understanding the complex circuitry that switches genes on and off.
And yes, all our modes of observation of the outside world are, of course, theory-laden, or prior-probability-laden. That's a necessary aspect of the model-fitting process. But that doesn't mean that `science is mere opinion' as some postmodernists say. Some models fit much better than others. And some are a priori more plausible than others, with more cross-checks to boost their prior probabilities. And some are simpler and more widely applicable than others, for example Newton's and Einstein's theories of gravity. These are both, of course, partial and approximate representations of reality even though superlatively accurate, superlatively simple, and repeatedly cross-checked in countless ways within their very wide domains of applicability -- Einstein's still wider than Newton's because it includes, for instance, the orbital decay and merging of pairs of black holes or neutron stars and the resulting spacetime ripples, or gravitational waves, which were first observed on 14 September 2015 (Abbott et al. 2016) and which provided yet another cross-check on the theory and opened a new window on the Universe. And both theories are not only simple but also mathematically beautiful.
Notice that all this has to do with cross-checking, data quality, goodness of fit, and beauty and economy of modelling, never with Absolute Truth and Absolute Proof, nor even with uniqueness of model choice. Currently Einstein's theory has no serious competitors in its domain of applicability, but in general the choice of model needn't be unique. There might be two or more alternative models that work equally well. They might have comparable simplicity and accuracy and offer complementary, and equally powerful, insights into outside-world reality.
The possibility of non-uniqueness is troublesome for believers in Absolute Truth, and is much agonized over in the philosophy-of-science literature, under headings such as `incommensurability'. However, as I keep saying, even the existence of the outside world can't be proven absolutely. It has to be assumed. Both science and ordinary perception proceed on that assumption. The justification is no more and no less than our experience that the model-fitting process works, again and again -- never perfectly, but often well enough to gain our respect.
If you observe a rhinoceros charging toward you, then it's probably a good idea to jump out of the way even though your observations are, unconsciously, theory-laden and even though there's no absolute proof that the rhinoceros exists. Even a postmodernist might jump out of the way. And the spacetime ripples gain our respect not only for the technical triumph of observing them but also because the merging black holes emit a very specific wave pattern, closely matching the details of what's computed from Einstein's equations when the black holes have particular masses and spins.
So beauty and economy of modelling can be wonderful and inspirational. Yet the same cautions apply. Indeed, Unger and Smolin (2015) argue that the current crisis in physics and cosmology has its roots in a tendency to conflate outside-world reality with mathematical models of it. The mathematical models tend to be viewed as the same thing as the outside-world reality. Jaynes (2003) aptly calls this conflation the `mind projection fallacy'. (The late Edwin T. Jaynes was one of the great thinkers about model-fitting, prior probabilities, and Bayesian analytics, where the mind projection fallacy used to be a major impediment to understanding. Probability distribution functions are model components, not things in the outside world.) The mind projection fallacy seems to be bound up with the hypercredulity instinct. In physics and cosmology, it generates a transcendental vision of Absolute Truth in which the entire Universe is seen as a single mathematical object of supreme beauty, a Theory of Everything -- an Answer to Everything -- residing within that ultimate `reality', the Platonic world of perfect forms. Alleluia!
Because the model-fitting works better in some cases than in others, there are always considerations of just how well it is working, involving a balance of probabilities. We must always consider how many independent cross-checks have been done and to what accuracies. For Einstein's equations, the spacetime ripples from merging black holes provide a new independent cross-check, adding to the half dozen or so earlier kinds of cross-check that include an astonishingly accurate one from the orbital decay of a binary pulsar -- accurate to about 14 significant figures, or one part in a hundred million million. So the detection of spacetime ripples didn't suddenly `prove' Einstein's theory, as journalists had it, but instead just added another cross-check, and a very beautiful one.
If you can both hear and see the charging rhinoceros and if your feet feel the ground shaking in synchrony, then you have some independent cross-checks. You're checking a single internal model, unconsciously of course, against three independent sensory data streams. With so much cross-checking, it's a good idea to accept the perceived reality as a practical certainty. We do it all the time. Think what's involved in riding a bicycle, or in playing tennis, or in pouring a glass of wine. But the perceived reality is still the internal model within your unconscious brain, paradoxical though that may seem. It is still theory-laden, from hundreds of millions of years of evolution. And, again, the outside world is something whose existence must be assumed.
One reason I keep banging on about these issues is the quagmire of philosophical confusion that has long surrounded them (e.g. Smythies 2009). The Vienna Circle thought that there were such things as direct, or absolute, or veridical, observations -- sharply distinct from theories or models. That's what I called the `veridical perception fallacy' in Part II of Lucidity and Science. Others have argued that all mental constructs are illusions. Yet others have argued that the entire outside world is an illusion, subjective experience being the only reality. But none of this helps! Like the obsession with absolute proof and absolute truth, it just gets us into a muddle, often revolving around the ambiguity of the words `real' and `reality'.
Journalists, in particular, often seem hung up on the idea of absolute proof, unconsciously at least. They often press us to say whether something is scientifically `proven' or not. But as Karl Popper emphasized long ago, that's a false dichotomy and an unattainable mirage. I have a dream that professional codes of conduct for scientists will clearly say that, especially in public, we should talk instead about the balance of probabilities and the degree of scientific confidence. Many scientists do that already, but others still talk about The Truth, as if it were absolute (e.g. Segerstråle 2000).
Let me come clean. I admit to having had my own epiphanies, my eurekas and alleluias, from time to time. But as a professional scientist I wouldn't exhibit them in public, at least not as absolute truths. They should be for consenting adults in private -- an emotional resource to power our research efforts -- not something for scientists to air in public. I think most of my colleagues would agree. We don't want to be lumped with all those cranks and zealots who believe, in Max Born's words, `in a single truth and in being the possessor thereof'. And again, even if a candidate Theory of Everything, so called, were to be discovered one day, the most that science could ever say is that it fits a large but finite dataset to within a small but finite experimental error. Unger and Smolin (2015) are exceptionally clear on this point and on its implications for cosmology.
Consider again the walking-lights animation. Instinctively, we feel sure that we're looking at a person walking. `Hey, that's a person walking. What could be more obvious?' Yet the animation might not come from a person walking at all. The prior probabilities, the unconscious choice of model, might be wrong. The twelve moving dots might have been produced in some other way -- such as luminous pixels on a screen! The dots might `really' be moving in a two-dimensional plane, or three-dimensionally in any number of ways. Even our charging rhinoceros might, just might, be a hallucination. As professional scientists we always have to consider the balance of probabilities, trying to get as many cross-checks as possible and trying to reach well-informed judgements about the level of scientific confidence. That's what was done with the ozone-hole work in which I was involved, which eventually defeated the ozone-hole disinformers. That's what was done with the discovery and testing of quantum theory, where nothing is obvious!
There is of course a serious difficulty here, on the level of lucidity principles and communication skills. We do need quick ways to express extremely high confidence, such as confidence in the sun rising tomorrow. We don't want to waste time on such things when confronted with far greater uncertainties. Scientific research is like driving in the fog, straining to see ahead. Sometimes the fog is very thick. So there's a tendency to use terms like `proof' and `proven' as a shorthand to indicate things to which we attribute practical certainty, things that we shouldn't be worrying about when trying to see through the fog. But because of all the philosophical confusion, and because of the hypercredulity instinct, and the dichotomization instinct, I think it preferable in public to avoid terms like `proof' or `proven', or even `settled', and instead try to use a more nuanced range of terms like `practically certain', `indisputable', `hard fact', `well established', `highly probable', and so on, when we feel that strong statements are justifiable in the current state of knowledge. Such terms sound less final and less absolutist, especially when we're explicit about the strength of the evidence and the variety of cross-checks. I try to set a good example in the Postlude on climate.
And I think we should avoid the cliché `fact versus theory'. It's another false dichotomy and it perpetuates the veridical perception fallacy. Even worse, it plays straight into the hands of the professional disinformers, those well-resourced masters of information warfare who work to discredit good science when they think it threatens profits, or political power, or any other vested interest. The `fact versus theory' mindset gives them a ready-made framing tactic, paralleling `good versus bad' (e.g. Lakoff 2014).
I want to return to the fact -- the indisputable practical certainty, I mean -- that what's complex at one level can be simple, or at least understandable, at another. And multiple levels of description are not only basic to science but also, unconsciously, basic to ordinary perception. They're basic to how our brains work. Straight away, our brains' left and right hemispheres give us at least two levels of description, respectively a lower level that dissects fine details, and a more holistic higher level. And neuroscience has revealed a variety of specialized internal models or model components that symbolically represent different aspects of outside-world reality. In the case of vision there are separate model components representing not only fine detail on the one hand, and overall spatial relations on the other but also, for instance, motion and colour (e.g. Sacks 1995, chapter 1; Smythies 2009). For instance damage to a part of the brain dealing with motion can produce visual experiences like successions of snapshots or frozen scenes -- very disabling if you're trying to cross the road.
In science, as recalled in the Prelude, progress has always been about finding a level of description and a viewpoint, or viewpoints, from which something at first sight hopelessly complex becomes simple enough to be understandable. And different levels of description can look incompatible with each other, if only because of emergent properties or emergent phenomena -- phenomena that are recognizable at a particular level of description but unrecognizable amidst the chaos and complexity of lower levels.
The need to consider multiple levels of description is especially conspicuous in the biological sciences, contrary to what the selfish-gene metaphor might suggest. For instance molecular-biological circuits, or regulatory networks, are now well recognized entities. They involve patterns of highly specific interactions between molecules of DNA, of RNA, and of proteins as well as many other large and small molecules (e.g. Noble 2006, Danchin and Pocheville 2014, Wagner 2014). Some protein molecules have long been known to be allosteric enzymes. That is, they behave somewhat like the transistors within electronic circuits (e.g. Monod 1970). Genes are switched on and off by the action of molecular-biological circuits. Causal arrows point downward as well as upward. Such circuits and actions are impossible to recognize from lower levels such as the level of genes alone, still less from the levels of chemical bonds and bond strengths within thermally-agitated molecules, jiggling back and forth on timescales of thousand-billionths of a second, and the still lower levels of atoms, atomic nuclei, electrons, and quarks. And again, there are of course very many higher levels of description, in the hierarchy of models -- level upon level, with causal arrows pointing both downward and upward. There are molecular-biological circuits and assemblies of such circuits, going up to the levels of archaea, bacteria and their communities, of yeasts, of multicellular organisms, of niche construction and ecologies, and of ourselves and our families, our communities, our nations, our globalized plutocracies, and the entire planet -- which Newton treated as a point mass.
None of this would need saying were it not for the persistence, even today, of an extreme-reductionist view -- I think it's partly unconscious -- saying, or assuming, that looking for the lowest possible level and for atomistic `units' such as quarks, or atoms, or genes, or memes, gives us the Answer to Everything and is therefore the only useful angle from which to view a problem. Yes, in many cases it can be enormously useful; but no, it isn't the Answer to Everything! Noble (2006) makes both these points very eloquently. In some scientific problems, including those I've worked on myself, the most useful models aren't at all atomistic. In fluid dynamics we use accurate `continuum-mechanics' models in which highly nonlocal, indeed long-range, interactions are crucial. They're mediated by the pressure field. They're a crucial part of, for instance, how birds, bees and aircraft stay aloft, how a jetstream can circumscribe and contain the ozone hole, and how waves and vortices interact.
McGilchrist tells us that extreme reductionism comes from our left hemispheres. It is indeed a highly dissected view of things. His book can be read as a passionate appeal for more pluralism -- for more of Max Born's `loosening of thinking', for the deeper understanding that can come from looking at things on more than one level and from more than one viewpoint, while respecting the evidence. Such understanding requires a better collaboration between our garrulous and domineering left hemispheres and our quieter, indeed wordless, but also passionate, right hemispheres.
Surely, then, professional codes of conduct for scientists -- to say nothing of lucidity principles as such -- should encourage us to be explicit, in particular, about which level or levels of explanation we're talking about. And when even the level of explanation isn't clear, or when the questions asked are `wicked questions' having no clear meaning at all, still less any clear answer, it would help to be explicit in acknowledging such difficulties. It would help to be more explicit than we feel necessary.
Such an approach might also be helpful when confronted with the confusion about consciousness and free will. I want to stay off this subject -- having already had a go at it in Part II of Lucidity and Science -- except to say that some of the confusion seems to come not only from unawareness of acausality illusions, but also from conflating different levels of description. I like the aphorism that free will is a biologically indispensable illusion, but a socially indispensable reality. There's no conflict between the two statements. They belong to different, incompatible levels of description. And they sharply remind us of the ambiguity, and the context-dependence, of the word `reality'.
I ask again, is music real? Is mathematics real? Is our sense of self real? Is the outside world real? For me, at least, they're all vividly real but in four different senses. And one of life's realities is that pragmatic social functioning depends on accepting our sense of self -- our internal self-model -- as an entity having, or seeing itself as having, free will or volition or agency as it's variously called. It wouldn't do to be able to commit murder and then, like a modern-day Hamlet, to say to the jury `it wasn't me, it was my genes wot dunnit.'
The walking lights show that we have unconscious Euclidean geometry. We also have unconscious calculus.
Calculus is the mathematics of continuous change. Among many other things it deals with objects like those shown in Figure 7. They are made of smooth curves -- pathways whose direction changes continuously, the curves that everyone calls `mathematical':
Figure 7: Some Platonic objects, including the outline of a liquid drop.
Such curves include perfect circles, ellipses, and portions thereof, among countless other examples. A straight line is the special case having zero rate of change of direction. A circle has a constant rate of change of direction, and an ellipse has a rate of change that's itself changing, and so on.
Experience suggests that such `Platonic objects', as I'll call them, are of special interest to the unconscious brain. Natural phenomena exhibiting what look like straight lines or smooth curves -- the edge of the sea on a clear day, the edge of the full moon, the shape of a hanging dewdrop -- tend to excite our sense of something special, and beautiful. So do the great pillars of the Parthenon, and the smooth curves of the Sydney Opera House. We feel their shapes as resonating with something `already there'. Plato felt that the world of such shapes, or forms, and the many other beautiful entities found in mathematics, is in some mysterious sense a world more real than the outside world with its commonplace messiness. He felt his `world of perfect forms' to be something eternal -- something that is already there, and will always be there.
My heart is with Plato here. When the shapes, or forms, look truly perfect, they can excite a sense of great wonder and mystery. So can the mathematical equations describing them. How can such immutable perfection exist at all?
Indeed, so powerful is our unconscious interest in such perfection that we see smooth curves even when they're not actually present in the incoming visual data. For instance we see them in the form of what psychologists call `illusory contours'. Figure 8 is an example. If you stare at the inner edges of the black marks for several seconds, and if you have normal vision, you will begin to see an exquisitely smooth curve joining them:
Figure 8: An illusory contour. To see it, stare at the inner edges of the black marks.
That curve is not present on the screen or on the paper. It is constructed by your visual system. To construct it, the system unconsciously solves a problem in calculus -- in the branch of it called the calculus of variations. The problem is to consider all the possible curves that can be fitted to the inner edges of the black marks, and to pick out the curve that's as smooth as possible, in a sense to be specified. The smoothness is specified using some combination of rates of change of direction, and rates of change of rates of change, and so on, averaged along each curve. So we have not only unconscious Euclidean geometry, but also an unconscious calculus of variations. And that in turn, by the way, gets us closer to some of the deepest parts of theoretical physics as we'll see shortly.
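For readers who enjoy trying things out, the smoothest-curve problem can be sketched numerically. The following toy calculation is purely illustrative -- my own construction, not a model of the visual system's actual machinery -- and uses the sum of squared second differences as a crude stand-in for the averaged rates of change of direction just described:

```python
# Toy version of the smoothest-curve problem: among curves passing
# through a few pinned `edge' points, reduce a discrete roughness
# measure (the sum of squared second differences).

N = 41
fixed = {0: 0.0, 10: 1.0, 20: 0.0, 30: -1.0, 40: 0.0}  # pinned points

# Initial guess: straight-line segments between the pinned points.
y = [0.0] * N
keys = sorted(fixed)
for a, b in zip(keys, keys[1:]):
    for i in range(a, b + 1):
        t = (i - a) / (b - a)
        y[i] = (1 - t) * fixed[a] + t * fixed[b]

def roughness(y):
    return sum((y[i-1] - 2*y[i] + y[i+1])**2 for i in range(1, N-1))

r0 = roughness(y)   # roughness of the kinked starting curve

# Gradient descent on the roughness, keeping the pinned points fixed.
for _ in range(5000):
    g = [0.0] * N
    for i in range(1, N - 1):
        d = y[i-1] - 2*y[i] + y[i+1]
        g[i-1] += 2*d
        g[i] -= 4*d
        g[i+1] += 2*d
    for i in range(N):
        if i not in fixed:
            y[i] -= 0.05 * g[i]

# The descended curve still passes through the pinned points exactly,
# but its kinks have been smoothed away: roughness(y) falls well below r0.
```

The pinned points and the particular roughness measure are arbitrary choices; the point is only that `find the smoothest curve consistent with the data' can be posed, and solved, as a well-defined minimization.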
Sportspeople are good at unconscious calculus. The velocity of a tennis ball is the rate of change of its position. When the ball is in flight, the pathway it follows is a smooth curve.
The existence of the Platonic world is no surprise from an evolutionary perspective. It is, indeed, `already there' in the sense of being evolutionarily ancient -- something that comes to us through genetic memory and the automata that it enables -- self-assembling into, among many other things, the special kinds of symbolic representation that correspond to Platonic objects. That's again because of combinatorial largeness. Over vast stretches of time, natural selection has put the unconscious brain under pressure to make its model-fitting processes as simple as the data allow. That requires a repertoire of internal model components that are as simple as possible. Many of these components are Platonic objects, smooth curves or portions of smooth curves or, rather, their internal symbolic representations. Please remember that actual or latent patterns of neural activity are symbols -- and we are now talking about mathematical symbols -- even though we don't yet have the ability to read them directly from the brain's neural networks.
A perfect circle is a Platonic object simply because it's simple. The illusory contour in Figure 8 shows that the brain's model-fitting process assigns the highest prior probabilities to models representing objects with the simplest possible outlines consistent with the data, in this case an object with a smooth outline sitting in front of some smaller black objects. That is part of how the visual system separates an object from its background, an important part of making sense of the visual scene. Making sense of the scene has been crucial to survival for hundreds of millions of years -- crucial to navigation, crucial to finding mates, and crucial to eating and not being eaten. Many of the objects to be distinguished have outlines that are more or less smooth. They range from distant hills down to fruit and leaves, tusks and antlers, and teeth and claws.
`We see smooth curves even when they're not actually present.' Look again at Figure 7. None of the Platonic objects we see are actually present in the figure. Take the circle, or the ellipse as it may appear on some screens. It's actually more complex. With a magnifying glass, one can see staircases of pixels. Zooming in more and more, one begins to see more and more detail, such as irregular or blurry pixel edges. One can imagine zooming in to the atomic, nuclear and subnuclear scales. Long before that, one encounters the finite scales of the retinal cells in our eyes. Model-fitting is partial and approximate. What's complex at one level can be simple at another. And perfectly smooth curves are things belonging not to any part of the incoming sensory data but rather -- I emphasize once more -- to the unconscious brain's repertoire of model components. I think Plato would have found this interesting. (I wonder if he knew about illusory contours -- perhaps a Plato scholar can tell me.)
The calculus of variations is a gateway to some of the deepest parts of theoretical physics. That's because it leads to Noether's theorem. The theorem depends on writing the laws of physics -- our most basic model of the outside world -- in `variational' form. That's a form allowing the calculus of variations to be used. The variational and Newtonian forms are Feynman's own example of things that are mathematically equivalent but `psychologically very different'.
Think of playing tennis on the Moon. The tennis ball feels no air resistance, and moves solely under the Moon's gravity. One way to model such motion, familiar to scientists for over three centuries, is to use Newton's equations giving the moment-to-moment rates of change of quantities like the position of the tennis ball. In our lunar example, solving those equations produces a pathway for the tennis ball in the form of a smooth curve, approximately a portion of a parabola. But the same smooth curve can also be derived as the solution to a variational problem, a problem more like that of Figure 8 because it treats the path of the tennis ball as a single entity. It deals with all parts of the curve simultaneously. That's psychologically very different indeed.
One considers all possible paths beginning and ending at a given pair of points. Instead of finding the smoothest of those paths, however, the problem is to find the path having the smallest value of another property, quite different from any measure of roughness or smoothness. That property is the time-average, along the whole path, of the velocity squared minus twice the gravitational altitude, or gravitational energy per unit mass. In order for the problem to make sense one has to specify a fixed travel time as well as fixed end points. Otherwise the velocity squared could be anything at all. The time-averaged quantity to be minimized is proportional to what physicists call the `action integral' for the problem, or `the action' for brevity.
If one solves this variational problem, minimizing the action, then one gets exactly the same smooth curve as one gets from solving Newton's equations. Indeed, even though psychologically different, the variational problem is mathematically equivalent to Newton's equations. Using the standard methods of the calculus of variations -- the same methods as for Figure 8 -- one can make a single calculation showing that the equivalence holds in all possible cases. Mathematics does indeed handle many possibilities at once. For the reader wanting more detail I'd recommend the marvellous discussion in Feynman et al. (1964).
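The equivalence can even be checked numerically, in a toy version. The sketch below is my own illustration, with arbitrarily chosen values for the lunar gravity, travel time, and grid (vertical motion only, none of them from the text): it discretizes the action and minimizes it by repeatedly nudging each interior altitude downhill, and the minimizer reproduces, at the grid points, the same parabola that Newton's equations give directly:

```python
# Discretized action for vertical free flight under lunar gravity,
# with both endpoints pinned at altitude zero and travel time fixed.
# All numerical values here are illustrative choices.

g = 1.62           # lunar gravity, m/s^2 (approximate)
T = 2.0            # fixed travel time, s
N = 20             # number of time steps
dt = T / N

y = [0.0] * (N + 1)   # initial guess: sit on the ground the whole time

def action(y):
    """Time-discretized version of the integral of v**2/2 - g*y."""
    s = 0.0
    for i in range(N):
        v = (y[i+1] - y[i]) / dt
        s += (0.5 * v * v - g * 0.5 * (y[i] + y[i+1])) * dt
    return s

# Minimize the action: sweep repeatedly, nudging each interior
# altitude a little way downhill along the action's gradient.
for _ in range(20000):
    for i in range(1, N):
        grad = (2*y[i] - y[i-1] - y[i+1]) / dt - g * dt   # dS/dy_i
        y[i] -= 0.02 * grad

# The result matches the Newtonian solution y(t) = (g/2) t (T - t).
parabola = [0.5 * g * (i * dt) * (T - i * dt) for i in range(N + 1)]
```

Notice that the minimization recovers the whole path at once -- the `all parts of the curve simultaneously' viewpoint -- whereas Newton's equations build the path up moment by moment.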
And once one has the problem in variational form, one can apply Noether's theorem. The theorem tells us that, in each case of tennis-ball motion, we have organic change in the abstract sense in which I've defined it. There are invariant quantities, including an invariant called the total energy, that stay the same while the tennis ball changes its position. Invariant quantities become more and more important as we deal with problems that are more and more complex. Invariant total energies show why it's a waste of time to try building a complicated perpetual-motion machine. And invariants are crucial to theoretical physics at its deepest levels, including all of electrodynamics, and all of quantum mechanics and particle physics. The invariants in all these cases become accessible through Noether's theorem, which in turn connects them, in a very general way, to another powerful branch of mathematics called group theory. The only things that need changing from case to case are the formula for the action, and the kind of space in which it is calculated. The mathematical framework stays the same.
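For the lunar example the energy invariance is easy to verify directly. The little check below uses illustrative values of the gravity and travel time (my own choices, not from the text); along the parabolic path, the kinetic plus potential energy per unit mass comes out the same at every instant:

```python
# Total energy per unit mass along the parabolic path y(t) = (g/2) t (T - t)
# for vertical free flight. The values of g and T are illustrative choices.

g, T = 1.62, 2.0

def y(t):
    return 0.5 * g * t * (T - t)          # altitude

def v(t):
    return 0.5 * g * (T - 2.0 * t)        # vertical velocity, dy/dt

def energy(t):
    return 0.5 * v(t)**2 + g * y(t)       # kinetic + potential, per unit mass

# Sampling the flight shows the energy is the same at every instant:
samples = [energy(k * T / 100) for k in range(101)]
# algebraically, energy(t) = g**2 * T**2 / 8 for all t
```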
Also in the unconscious brain's repertoire of model components are the special sets of musical pitches called harmonic series. An example is shown in Figure 9:
Figure 9: A musical harmonic series. You can hear the pitches in this audio clip. In the case shown here the first note, called the `fundamental' or `first harmonic', corresponds to a vibration frequency 65.4Hz (65.4 cycles per second), the second note or harmonic to twice this, 130.8Hz, and the third to three times, 196.2Hz, and so on. The fundamental note and its octave harmonics, the 2nd, 4th, 8th and so on all have the same musical name C or Doh. If you happen to have a tunable electronic keyboard and would like to tune it to agree with the harmonic series, then you need to sharpen the 3rd, 6th and 12th harmonics by 2 cents (2/100 of a semitone) and the 9th by 4 cents -- these differences are barely audible -- but also to flatten the 5th and 10th by 14 cents (easily audible to a good musical ear), the 7th by 31 cents and the 11th by 49 cents, relative to B flat and F sharp. The last two changes are plainly audible to just about anyone. It's worth going through this exercise if only to play the so-called `Tristan chord' (6th + 7th + 9th + 10th), to hear what it sounds like when thus tuned. The differences arise from the fact that the standard tuning, called `equal temperament', divides the octave into twelve exactly equal `semitones', each with the frequency ratio 2^(1/12) = 1.059463...
The defining property is that the pitches correspond to vibration frequencies equal to the lowest frequency, in this case 65.4Hz (65.4 cycles per second), multiplied by a whole number such as 1, 2, 3, etc.
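The tuning differences quoted in the caption to Figure 9 follow from a one-line calculation: express each harmonic as an interval in cents above the fundamental -- 1200 times the base-2 logarithm of its whole number -- and see how far that interval falls from the nearest equal-tempered semitone. A short illustrative check, using only the definitions just given:

```python
import math

# For each harmonic n, the interval above the fundamental is
# 1200 * log2(n) cents; equal temperament allows only multiples of
# 100 cents. The offset below reproduces the figure-caption values.

def cents_offset(n):
    c = 1200.0 * math.log2(n)           # interval above the fundamental
    return c - 100.0 * round(c / 100)   # deviation from equal temperament

offsets = {n: round(cents_offset(n)) for n in range(1, 13)}
# Octave harmonics (1, 2, 4, 8) land exactly on equal temperament;
# the 3rd, 6th and 12th come out +2 cents; the 9th +4; the 5th and
# 10th -14; the 7th -31; and the 11th -49, as in the caption.
```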
A harmonic series is a `Platonic object' in just the same sense as before. How can that be? The answer will emerge shortly, when we consider how hearing works. And it will expose yet more connections between music and mathematics. But first, dear reader, please take a moment to listen to the musical pitches themselves. Do they hint at something special and beautiful? Something that could divert you, and Plato, from commonplace messiness? Echoes of fairy horn calls, perhaps? However they strike you, these sounds are special to the musical brain, again for reasons that are evolutionarily ancient as we'll see.
Also special are combinations of these pitches played together. For instance if the pitches numbered 4, 5 and 6 are played together -- they are called the 4th, 5th, and 6th harmonics -- we hear the familiar sound of what musicians call a `common chord' or `major triad'. If we add to that chord the 1st, 2nd, 3rd, 8th, 10th, 12th, and 16th, then it sounds like a more spacious version of the same chord -- more like the grand, thunderous chord that opens the Star Wars music. If on the other hand we play the 6th, 7th, 9th and 10th together then we get what has famously been called the `Tristan chord', the first chord to be heard in Richard Wagner's opera Tristan und Isolde. (Some people think that Wagner invented this chord, even though it was actually invented -- I'd rather say discovered -- long before that. For instance the chord occurs over twenty times in another famous piece of music, Dido's Lament, written by Henry Purcell about two centuries before Tristan.)
It seems that Claude Debussy was the first great composer to exploit the fact that any subset of pitches from a harmonic series is special to the musical brain. Notice that all the chords just mentioned are harmonic-series subsets whether or not each pitch is played together with its own higher harmonics, because whole numbers multiply together to give whole numbers. Debussy made extraordinary use of these insights, together with the organic-change principle, to open up what he called a new frontier in musical harmony (Platt 1995), taking us far beyond Wagner, and exploited across a vast range of twentieth-century genres including, for instance, the bebop jazz of Charlie Parker.
The organic-change principle for harmony involves small pitch changes, which as mentioned in chapter 1 can be small in two different senses. These can now be stated more clearly. One sense is the obvious one, closeness on the keyboard or guitar fingerboard. The second is closeness within a harmonic series: the lower a pair of neighbouring harmonics lies in the series, the closer, in this sense, its two notes are. Thus, the 1st and 2nd harmonics are closest in this second sense. They are so close that musicians give them the same name, C or Doh in the case of Figure 9, even though they're far apart in the first sense, by a whole octave. The 2nd and 3rd are the next closest in the second sense, then the 3rd and 4th, and so on. That's almost all one needs to know in order to master musical harmony, if one has a good ear -- though admittedly the big harmony-counterpoint textbooks offer many useful examples, as well as showing the importance of the way melodic lines or `voices' move against each other.
The musical brain is good at recognizing subsets from more than one harmonic series at once, even when superposed in complicated combinations. Without this, we wouldn't have the sounds of Star Wars and symphonies and jazz bands and prog rock. In particular, many powerful chords are made by superposing subsets from more than one harmonic series. A point often missed is that the ordinary minor common chord is a polychord in this sense. For instance the 3-note chord called E minor is made up of the 5th and 6th pitches from Figure 9, the harmonic series based on the note C, overlapping with the 4th and 5th from another harmonic series, that based on the note G, 98.1 Hz, an octave below the 3rd harmonic in Figure 9. The first spooky chord in Star Wars is another 3-note chord, similarly made up of the 4th and 5th from one series overlapping with the 4th and 5th from another.
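For readers who like to check such things, here's a minimal Python sketch of the overlap just described. The frequencies follow Figure 9 (C at 65.4 Hz, G at 98.1 Hz); everything else is just whole-number multiplication:

```python
# The E minor triad as two overlapping harmonic-series subsets.
# Frequencies follow Figure 9: C at 65.4 Hz, G at 98.1 Hz.

def harmonic(fundamental_hz, n):
    """Frequency of the nth harmonic: a whole-number multiple of the fundamental."""
    return fundamental_hz * n

c_subset = [round(harmonic(65.4, n), 1) for n in (5, 6)]   # E and G
g_subset = [round(harmonic(98.1, n), 1) for n in (4, 5)]   # G and B

print(c_subset)   # [327.0, 392.4]
print(g_subset)   # [392.4, 490.5]
# The shared pitch, 392.4 Hz (the note G), is where the two series overlap,
# giving the three distinct notes E, G, B of the E minor chord.
```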
If you listened to the audio clip in the caption to Figure 9, you probably noticed that there are slight differences in pitch relative to the pitches on a standard keyboard. The differences are detailed in the figure caption. These differences are easily audible for the 7th and 11th harmonics and also, more subtly, for the 5th and 10th. Such differences give us a valuable artistic resource, contrary to an impression one might get from the fuss about theoretical tuning differences -- Pythagorean commas and so on. Musicians playing non-keyboard instruments develop great skill in slightly varying the pitch from moment to moment as the music unfolds, exploiting the tension between harmonics and keyboard pitches for expressive purposes as with, for instance, `blue notes' in jazz. Blue notes flirt with the pitch of the 7th harmonic. A pianist can't sound a blue note, but a singer or saxophonist can, while the piano plays other notes. The 11th harmonic is sounded -- to magical effect, I think -- by the French horn player in Benjamin Britten's Serenade for tenor, horn and strings. (This performance by horn player Radovan Vlatković respects the composer's instructions. Accurate 11th, 7th, and 14th harmonic pitches are heard in the first minute or so.)
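Those pitch differences are easy to quantify. A cent is a hundredth of an equal-tempered semitone, so the nth harmonic lies 1200 times log2(n) cents above the fundamental, and its deviation from the keyboard is its distance from the nearest multiple of 100 cents. A few lines of Python make the point:

```python
# How far each harmonic sits from the nearest equal-tempered keyboard
# pitch, measured in cents (hundredths of a semitone).
import math

def cents_from_keyboard(n):
    """Deviation of the nth harmonic from the nearest 12-tone
    equal-temperament pitch, in cents."""
    cents_above_fundamental = 1200 * math.log2(n)
    nearest_keyboard_pitch = round(cents_above_fundamental / 100) * 100
    return cents_above_fundamental - nearest_keyboard_pitch

for n in (5, 7, 10, 11):
    print(n, round(cents_from_keyboard(n), 1))
# 5 -13.7    (the more subtle deviation)
# 7 -31.2    (easily audible: the `blue note' region)
# 10 -13.7   (an octave above the 5th, so the same deviation)
# 11 -48.7   (nearly a quarter-tone: easily audible)
```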
But -- I hear you ask -- why are these particular sets and subsets of pitches special to the brain, and what has all this to do with evolution and survival? The answer begins with the defining property of a harmonic series, namely that its frequencies are whole-number multiples of the fundamental frequency or first harmonic. It follows that sound waves made up of any set of pitches taken from a harmonic series, played together in any combination, with any relative strengths, take a very simple form. The waveform precisely repeats itself at a single frequency. That's 65.4 times per second in the case of Figure 9. The Tristan chord, tuned as in Figure 9, produces just such a repeating waveform. A famous theorem of Joseph Fourier tells us that any repeating waveform corresponds to some combination of harmonic-series pitches, as long as you allow an arbitrary number of harmonics. Repeating waveforms, then, are mathematically equivalent to sets or subsets of harmonic-series pitches -- mathematically equivalent, even if psychologically very different.
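This is easy to verify numerically. The following Python sketch builds a waveform from the Tristan-chord subset of Figure 9's harmonics, with relative strengths chosen arbitrarily for illustration, and confirms that the result repeats exactly once per fundamental period:

```python
# Any combination of harmonic-series pitches repeats exactly at the
# fundamental frequency. Here, the Tristan-chord subset (6th, 7th, 9th
# and 10th harmonics of 65.4 Hz), with arbitrarily chosen strengths.
import math

f0 = 65.4                          # fundamental of Figure 9, in Hz
harmonics = [6, 7, 9, 10]          # the Tristan-chord subset
strengths = [1.0, 0.8, 0.6, 0.9]   # arbitrary relative strengths

def waveform(t):
    """Sum of sinusoids at whole-number multiples of f0, at time t seconds."""
    return sum(a * math.sin(2 * math.pi * n * f0 * t)
               for n, a in zip(harmonics, strengths))

period = 1.0 / f0                  # about 15.3 milliseconds
for t in (0.001, 0.005, 0.009):
    # The waveform one period later is the same, to rounding error.
    assert abs(waveform(t) - waveform(t + period)) < 1e-9
```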
Our neural circuitry is good at timing things. It has evolved to give special attention to repeating waveforms because they're important for survival in the natural world. Many animal sounds are produced by vibrating elements in a larynx, or a syrinx in the case of birds. Such vibrations will often repeat themselves, to good accuracy, for many cycles, as the vibrating element oscillates back and forth like the reed of a saxophone or clarinet. So repeating waveforms at audio frequencies matter for survival, because survival depends on being able to pick out individual sound sources from a jungleful of animal sounds. This rather astonishing feat of model-fitting is similar to that of a musician skilled in picking out sounds from individual instruments, when an orchestra is playing. It depends on having a repertoire of model components that include repeating waveforms. That is, exactly repeating waveforms are among the simplest model components needed by the hearing brain to help identify sound sources, just as smooth curves are among the simplest model components needed by the visual brain to help identify objects. That's why repeating waveforms have the status of Platonic objects, or forms, for the hearing brain, just as smooth curves do for the visual brain. Both contribute to making sense of a complex visual scene, or of a complex auditory scene, as the case may be, while being as simple as possible.
In summary, then, for survival's sake the hearing brain has to be able to carry out auditory scene analysis, and therefore has to know about repeating waveforms -- has to include them in its repertoire of unconscious model components available for fitting to the incoming acoustic signals, in all their complexity. And that's mathematically equivalent to saying that the unconscious brain has to know about the harmonic series.
The accuracy with which our neural circuitry can measure the frequency of a repeating waveform reveals itself via musicians' pitch discrimination. Experience shows that the musical ear can judge pitch to accuracies of the order of a few cents, that is, to a few hundredths of what musicians call a semitone, the interval between adjacent pitches on a keyboard or guitar fingerboard. It used to be thought, incidentally, that our pitch discrimination is mediated by the inner ear's basilar membrane. That's wrong because, although the basilar membrane does carry out some frequency filtering, that filtering is far too crude to account for the observed discrimination.
Auditory scene analysis isn't exclusive to humans. So it should be no surprise to find that other creatures can perceive pitch to similar accuracies. The European cuckoo comes to mind. I've heard two versions of its eponymous two-note call in the English countryside. One of them matched the 6th and 5th harmonics with moderate accuracy, and the other the 5th and 4th. The composer Frederick Delius used both versions in his famous piece On Hearing the First Cuckoo in Spring. They are woven with exquisite subtlety into the gentle, lyrical music, from just past two minutes into the piece. Among other pieces of music quoting cuckoo calls the most famous, perhaps, are Beethoven's Pastoral Symphony and Saint-Saëns' Carnival of the Animals. Both use only the second version, the version matching the 5th and 4th harmonics.
In New Zealand, where I grew up, I heard even clearer examples of accurate avian pitch perception -- much to my youthful astonishment. Two of them came from a wonderfully feisty native bird called the tui, also called the parson bird because of its white bib worn against dark plumage (Figure 10 below). Tuis have a vast repertoire of complex and virtuosic calls, but as a schoolboy on summer holidays in the Southern Alps I encountered a particular bird that liked to sing exceptionally simple tunes, using accurate harmonic-series pitches. The bird sounded these pitches with an accuracy well up to the standards of a skilled human musician, and distinctly better than the average cuckoo.
That was in the 1950s, before the days of cheap tape recorders, but I'd like to put on record what it sounded like. I can recall two of the tunes with complete clarity. Imagine a small, accurately tuned xylophone echoing through the trees of a beech forest in the Southern Alps. Here's an audio clip reconstructing one of the tunes. It uses the 4th, 5th and 10th harmonics from Figure 9, though two octaves higher. The second tune, in this audio clip, uses the 5th, 6th, 8th and 10th together with two notes from another harmonic series: its third and fourth notes are the 6th and 5th harmonics of B flat. To a human musician, both tunes are in C major, and the bird always sang in C major, sounding the notes as accurately as any human musician. The rhythms are exceptionally simple and regular. Each tune is terminated by a complicated burst of sound, which I could imitate only crudely in the reconstructions. Such complicated sounds, with no definite, single pitch, are more typical of tui utterances. The bird sang just one tune, or slight variants of it, in the summer of one year, and the other tune in another year, sometime in the 1950s.
Even though tuis are famous for their skills in mimicry, I think we can discount the possibility that this bird learned its tunes by listening to a human musician. The location was miles away from the nearest human habitation, other than our holiday camp; and we had no radio or musical instrument apart from a guitar and a descant recorder. The recorder is a small blockflute with a pitch range overlapping the bird's. When the bird was around, I used to play various tunes on the recorder, in the same key, C major, but never got any response. It was as if the bird felt that my efforts were unworthy of notice. It usually fell silent, then later on started up with its own tune again.
An internet search turns up many examples of tui song, but I have yet to find an example remotely as simple and tuneful as the C major songs I heard. And my impression at the time was that such songs are exceptional among tuis. I did, however, find a more complex tui song that again demonstrates supremely accurate pitch perception. It is interesting in another way because one hears two notes at once, accurately tuned against each other. As is well known, the avian syrinx can be used like a pair of larynxes, to sound two notes at once. This song can be notated fairly accurately in standard musical notation, as shown in Figure 10, where audio clips are provided in the caption:
Figure 10: A more complex tui song, recording courtesy of Les McPherson. Here are two links to the recording, one at actual speed, with one partial and three complete repetitions of the song -- tuis often repeat fragments of songs -- and the other slowed to half speed in order to make the detail more accessible to human hearing.
At the first occurrence of two notes together, they are accurately tuned to the spacing of the 3rd and 4th harmonics (of B natural), the interval that musicians call a perfect fourth, which is notoriously sensitive to mistuning. To my ear, when I listen to the half-speed version, the bird hits the perfect fourth very accurately before sliding up to the next smallest harmonic-series spacing, that of the 4th and 5th harmonics (of G), what is called a major third, again tuned very accurately. At half speed this musical fragment is playable on the violin, complete with the upward slide, as I've occasionally done in lectures.
Another New Zealand bird that's known to sing simple, accurately-pitched tunes is the North Island kokako, a crow-sized near-ground dweller, shown in Figure 11:
Figure 11: North Island kokako. The transcription corresponds only to the start of its song in this audio recording, again courtesy of Les McPherson, which continues in a slightly more complicated way ending with the 8th harmonic of 110 Hz A, to my ear creating a clear key-sense of A major. The first three notes, those shown in the transcription, are close to the 6th, 7th, and 5th harmonics of the same A.
I want to mention one more connection between music and mathematics -- yet another connection that's not mentioned in the standard accounts, confined as they are to games with numbers. (Of course composers have always played with numbers to get ideas, but that's beside the point.)
The point is that there are musical counterparts to illusory contours. Listen to this audio clip from the first movement of Mozart's piano sonata K 545, whose second movement was quoted in Figure 5. After the first eight seconds or so one hears a smooth, flowing passage of fast notes that convey a sense of continuous motion, a kind of musical smooth curve, bending upward and downward in this case. Mozart himself used to remark on this smoothness. In his famous letters he would describe such passages as flowing "like oil" when played well enough. But as with the black segments in Figure 8, there is no smoothness in the actual sounds. The actual sounds are abrupt, percussive sounds, distinct and separate from each other. Of course hearing works differently from vision, and the analogy is imperfect. To give the impression of smoothness in the musical case the notes have to be spaced evenly in time, with adjacent notes similar in loudness. Mozart once admitted that he'd had to practise hard to get the music flowing like oil. When the notes are not spaced evenly, as in this clip, the smoothness disappears -- a bit like the vestiges of an illusory contour that some of us can see joining the outer edges of the black segments in Figure 8.
Coming back to musical pitch perception for a moment, if you're interested in perceived pitch then you may have wondered how it is that violinists, singers and others can use what's called `vibrato' while maintaining a clear and stable sense of pitch. Vibrato can be shaped to serve many expressive purposes and is an important part of performance in many Western and other musical genres. The performer modulates the frequency by variable amounts far greater than the pitch discrimination threshold of a few cents, up and down over a range that's often a hundred cents or even more, and at a variable rate that's typically within the range of 4 to 7 complete cycles per second depending on the expressive purpose. There is no corresponding fluctuation in perceived pitch. That, however, depends on the fluctuation being rapid enough. A recording played back at half speed or less tends to elicit surprise when heard for the first time, the perception then being a gross wobble in the pitch. Here's a 26-second audio clip in which a violin, playing alone, begins a quiet little fugue before a piano joins in. The use of vibrato is rather restrained. Yet when played at half speed the pitch-wobble becomes surprisingly gross and unpleasant, to my ear at least.
It appears that in order to judge pitch the musical brain does not carry out Fourier analysis but, rather, counts repetitions of neural firings over timespans of up to two hundred milliseconds or thereabouts. Any such timespan is long by comparison with a repetition cycle, and so this idea was called the `long-pattern hypothesis' by Boomsliter and Creel (1961) in a classic discussion. It accounts not only for the vibrato phenomenon but also for several other phenomena familiar to musicians, including degrees of tolerance to slightly mistuned chords. Another musically significant aspect is that vibrato can influence the perceived tone quality. For instance quality can be perceived as greatly enriched when the strengths of different harmonics fluctuate out of step with each other, as happens with the sound of violins and other bowed-stringed instruments. The interested reader is referred to Figure 3 of the review by McIntyre and Woodhouse (1978). It seems that the unconscious brain has a special interest in waveforms that repeat themselves while slightly varying their shapes, as well as their periods, continuously over a long-pattern timescale. Probably that's because, by contrast with trills and tremolos, the continuity in a vibrato pattern makes it an organically-changing pattern.
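To make the vibrato numbers concrete, here's a minimal Python sketch of sinusoidal frequency modulation. The depth of 50 cents and the rate of 6 cycles per second are illustrative values within the ranges quoted above:

```python
import math

def vibrato_frequency(f0, depth_cents, rate_hz, t):
    """Instantaneous frequency under sinusoidal vibrato: the pitch swings
    depth_cents either side of f0, rate_hz times per second."""
    return f0 * 2 ** (depth_cents * math.sin(2 * math.pi * rate_hz * t) / 1200)

f0 = 440.0                                    # a violin A
peak = vibrato_frequency(f0, 50, 6, 1 / 24)   # a quarter-cycle in: maximum excursion
print(round(peak, 1))                         # 452.9 -- about 13 Hz sharp, far beyond
                                              # the few-cents discrimination threshold
```

Played back at half speed, every frequency halves, so the modulation rate drops to 3 cycles per second, below the 4-to-7 range; that is consistent with the gross wobble one then hears.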
Let's return for a moment to theoretical-physics fundamentals. Regarding models that are made of mathematical equations, there's an essay that every physicist knows of, I think, by the famous physicist Eugene Wigner, about the `unreasonable effectiveness of mathematics' in representing the real world. But what's unreasonable is not the fact that mathematics comes in. As I keep saying, mathematics is just a means of handling many possibilities at once, in a precise and self-consistent way. What's unreasonable is that very simple mathematics comes in when you build accurate models of sub-atomic Nature. It's not the mathematics that's unreasonable; it's the simplicity.
So I think Wigner should have talked about the `unreasonable simplicity of sub-atomic Nature'. It just happens that at the level of electrons and other sub-atomic particles things look astonishingly simple. That's just the way nature seems to be at that level. And of course it means that the corresponding mathematics is simple too. One of the greatest unanswered questions in physics is whether things stay simple, or not, when we zoom in to the far smaller scale at which quantum mechanics and gravity mesh together.
As is well known, and widely discussed under headings such as `Planck length', `proton charge radius', and `Bohr radius', we are now talking about a scale of the order of a hundred billion billion times smaller than the diameter of a proton, and ten million billion billion times smaller than the diameter of a hydrogen atom -- well beyond the range accessible to observation and experimentation. At that scale, things might for instance be complex and chaotic, like turbulent fluid flow, with order emerging out of chaos only at much larger scales. Such possibilities have been suggested for instance by my colleague Tim Palmer, who has thought deeply about these issues -- and about their relation to the vexed questions at the foundations of quantum mechanics -- alongside his better-known work on the chaotic dynamics of weather and climate.
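Those ratios are easy to check with round values: the Planck length is about 1.6e-35 metres, a proton's diameter about 1.7e-15 metres, and a hydrogen atom's about 1.1e-10 metres. A quick Python sketch:

```python
# Order-of-magnitude check, using round values for the three lengths.
planck_length = 1.6e-35       # metres
proton_diameter = 1.7e-15     # metres
hydrogen_diameter = 1.1e-10   # metres

print(f"{proton_diameter / planck_length:.0e}")    # 1e+20: a hundred billion billion
print(f"{hydrogen_diameter / planck_length:.0e}")  # 7e+24: roughly ten million
                                                   # billion billion
```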
Journalist to scientist during a heat wave, flash flood, or other weather extreme such as Cyclone Idai: `Tell me, Professor So-and-So, is this a one-off extreme -- pure chance -- or is it due to climate change?' Well, dichotomization makes us stupid, doesn't it. The professor needs to say `Hey, this isn't an either-or. It's both of course. Climate change produces a long-term upward trend in the probability of extreme weather events.' This point is, at long, long last, gaining traction as dangerous weather extremes become more and more frequent.
How significant is the upward trend? Here's one way to look at the long-established scientific consensus. Chapter 1 mentioned audio amplifiers and two different questions one might ask about them: firstly what powers them, and secondly what they're sensitive to. Pulling the amplifier's power plug corresponds to switching off the Sun. But in the climate system is there anything corresponding to an amplifier's sensitive input circuitry? Today we have a clear answer yes. And it shows that the upward trend is highly significant, that it's mostly caused by humans, and that weather extremes can be expected to become more intense. They can be expected to become more intense in both directions -- cold extremes as well as hot, wet extremes as well as dry. Some of this comes from the behaviour of meandering jetstreams.
So the climate system is -- with certain qualifications to be discussed below -- a powerful but slowly-responding amplifier with sensitive inputs. Among the climate amplifier's sensitive inputs are small changes in the Earth's tilt and orbit. They have repeatedly triggered large climate changes, with global mean sea levels going up and down by well over 100 metres. Those were the glacial-interglacial cycles encountered in chapter 2, `glacial cycles' for brevity, with overall timespans of about a hundred millennia per cycle. And `large' is a bit of an understatement. As is clear from the sea levels and the corresponding ice-sheet changes, those climate changes were huge by comparison with the much smaller changes projected for the coming century. I'll discuss the sea-level evidence below.
Another sensitive input is the injection of carbon dioxide into the atmosphere. Carbon dioxide, whether injected naturally or artificially, has a central role in the climate amplifier not only as a plant nutrient but also as our atmosphere's most important non-condensing greenhouse gas. Without recognizing that central role it's impossible to make sense of climate behaviour in general, and of the huge magnitudes of the glacial cycles in particular. Those cycles depended not only on the small orbital changes, and on the sensitive dynamics of the great land-based ice sheets, but also on natural injections of carbon dioxide into the atmosphere from the deep oceans. Of course to call such natural injections `inputs' is strictly speaking incorrect, except as a thought-experiment, but along with the ice sheets they're part of the amplifier's sensitive input circuitry as I'll try to make clear.
The physical and chemical properties of so-called greenhouse gases are well established and uncontentious, with very many cross-checks. Greenhouse gases in the atmosphere make the Earth's surface warmer than it would otherwise be. For reasons connected with the properties of heat radiation, almost any gas whose molecules have three or more atoms can act as a greenhouse gas. (More precisely, to interact strongly with heat radiation the gas molecules must have a structure that supports a fluctuating electrostatic `dipole moment', at the frequency of the heat radiation.) Examples include carbon dioxide, water vapour, methane, and nitrous oxide. By contrast, the atmosphere's oxygen and nitrogen molecules have only two atoms and are very nearly transparent to heat radiation.
One reason for the special importance of carbon dioxide is its great chemical stability as a gas. Other carbon-containing, non-condensing greenhouse gases such as methane tend to be converted fairly quickly into carbon dioxide. Fairly quickly means within a decade or two, for methane. And of all the non-condensing greenhouse gases, carbon dioxide has always had the most important long-term heating effect, not only today but also during the glacial cycles. That's clear from ice-core data, to be discussed below, along with the well-established heat-radiation physics.
Water vapour has a central but entirely different role. Unlike carbon dioxide, water vapour can and does condense or freeze, in vast amounts, as well as being copiously supplied by evaporation from the oceans, the rainforests, and elsewhere. This solar-powered supply of water vapour -- sometimes called `weather fuel' because of the thermal energy released on condensing or freezing -- makes it part of the climate amplifier's power-supply or power-output circuitry rather than its input circuitry. Global warming is also global fuelling, because air can hold about 6% more weather fuel for every degree Celsius rise in temperature. The power output includes fluctuating jetstreams, meandering over thousands of kilometres, and cyclonic storms and their precipitation in which the energy released can be huge, dwarfing the energy of thermonuclear bombs. It is huge whether the precipitation takes the form of rain, hail, or snow. Tropical cyclones (hurricanes and typhoons), and other extreme precipitation and flooding events, both tropical and extratropical, remind us what these huge energies mean in reality. Extremes are in both directions because more weather fuel makes the whole system more active and vigorous, and fluctuations larger, including jetstream meanders.
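Taking the 6%-per-degree figure at face value, the compounding is easy to sketch in Python:

```python
# How much more `weather fuel' air can hold after a given warming,
# assuming about 6% more per degree Celsius, compounded.

def fuel_increase_percent(warming_deg_c, percent_per_degree=6.0):
    """Percentage increase in water-vapour capacity after the given warming."""
    return ((1 + percent_per_degree / 100) ** warming_deg_c - 1) * 100

for warming in (1, 2, 3):
    print(warming, round(fuel_increase_percent(warming), 1))
# 1 6.0
# 2 12.4
# 3 19.1
```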
A century or two ago, the artificial injection of carbon dioxide into the atmosphere was only a thought-experiment, of interest to a few scientists such as Joseph Fourier, John Tyndall, and Svante Arrhenius. Tyndall did simple but ingenious laboratory experiments to show how heat radiation interacts with carbon dioxide. For more history and technical detail I strongly recommend the textbook by Pierrehumbert (2010). Today, inadvertently, we're doing such an injection experiment for real. And we now know that the consequences will be very large indeed.
How can I say that? As with the ozone-hole problem, it's a matter of spotting what's simple about a problem at first sight hopelessly complex. But I also want to sound a note of humility. All I'm claiming is that the climate-science community now has enough insight, enough in-depth understanding, and enough cross-checks, to say that the climate system is sensitive to carbon dioxide injections by humans, and that the consequences will be very large. The main hope now is that, because of the slow response of the climate amplifier, there's still time to ward off the worst consequences over the coming decades and centuries.
The sensitivity of the climate system is not, by the way, what's meant by the term `climate sensitivity' encountered in many of the community's technical writings. There are various technical definitions, in all of which atmospheric carbon dioxide values are increased by some given amount but, in all of which, artificial constraints are imposed. The constraints are often left unstated. Imposing the constraints usually corresponds to a thought-experiment in which the more slowly-responding parts of the system -- including the deep oceans, the ice sheets, and large underground reservoirs of methane -- are all held fixed in an artificial and unrealistic way. Adding to the confusion, the state reached under some set of artificial constraints is sometimes called `the equilibrium climate', as if it represented some conceivable reality. And, to make matters even worse, attention is often confined to global-mean temperatures, concealing all the many other aspects of climate change including ocean heat content, and the statistics of jetstream meanders and weather extremes such as droughts and flash flooding.
Many climate scientists try to minimize confusion by spelling out which thought-experiment they have in mind. That's an important example of the explicitness principle in action. And the thought-experiments and the computer model experiments are improving year by year. As I'll try to clarify further, the climate system has many different `sensitivities' depending on the choice of thought-experiment. That's one reason why the amplifier metaphor needs qualification. In technical language, we're dealing with a system that's highly `nonlinear'. The response isn't simply proportional to the input. In audio-amplifier language, there's massive distortion and internal noise. In these respects, and as regards its generally slow response, the climate-system amplifier is very unlike an audio amplifier. We still, however, need some way of talking about climate that recognizes some parts of the system as being more sensitive than others.
And there are still many serious uncertainties on top of all the communication difficulties. But over the past twenty years or so our understanding has become good enough, deep enough, and sufficiently cross-checked to show that the uncertainties are mainly about the precise timings and sequence of events, over the coming decades and centuries. These coming events will include nonlinear step changes or `tipping points', perhaps in succession like dominoes falling as now seems increasingly likely. The details remain highly uncertain. But in my judgement there's no significant uncertainty about the response being very large, sooner or later, and practically speaking irreversible -- with notional recovery timescales exceeding tens of thousands of years (e.g. Archer 2009), practically speaking infinite from a human perspective.
Science is one thing and politics is another. I'm only a humble scientist. My aim here is to get the most robust and reliable aspects of the science stated clearly, simply, accessibly, and dispassionately, along with the implications under various assumptions about the politics and the workings of the human hypercredulity instinct. I'll draw on the wonderfully meticulous work of very many scientific colleagues including the late Nick Shackleton and his predecessors and successors, who have laboured so hard, and so carefully, to tease out information about past climates. Past climates, especially those of the past several hundred millennia, are our main source of information about the workings of the real system, taking full account of its vast complexity all the way down to the details of cyclones, clouds, forest canopies, soil ecology, ocean plankton, and the tiniest of eddies in the ocean and the atmosphere.
Is such an exercise useful at all? The optimist in me says it is. And I hope, dear reader, that you might agree because, after all, we're talking about the Earth's life-support system and the possibilities for some kind of future civilization.
In recent decades there's been a powerful disinformation campaign that's been creating yet more confusion about climate. Superficial viewpoints hold sway. Significant aspects of the problem are ignored or camouflaged. The postmodernist idea of `science as mere opinion' is used when convenient, along with the false dichotomy of fact `versus' theory. For me it's a case of déjà vu, because the earlier ozone-hole disinformation campaign was strikingly similar.
We now know that that similarity was no accident. According to extensive documentation cited in Oreskes and Conway (2010) -- including formerly secret documents now exposed through anti-tobacco litigation -- the current climate-disinformation campaign was seeded, originally, by the same few professional disinformers who masterminded the ozone-hole disinformation campaign and, before that, the tobacco companies' lung-cancer campaigns. The secret documents describe how to manipulate the newsmedia and sow confusion in place of understanding. For climate the confusion has spread into significant parts of the scientific community, including some influential senior scientists, most of whom are not, to my knowledge, among the professional disinformers and their political allies but who have tended to focus too narrowly on the shortcomings of the big climate models, ignoring the many other lines of evidence. And such campaigns and their political fallout are, of course, threats to other branches of science as well, and indeed to the very foundations of good science. The more intense the politicization, the harder it becomes to live up to the scientific ideal and ethic.
One reason why the amplifier metaphor is important despite its limitations is that the climate disinformers ignore it when comparing water vapour with carbon dioxide. They use the copious supply of water vapour from the tropical oceans and elsewhere as a way of suggesting that the relatively small amounts of carbon dioxide are unimportant for climate. That's like focusing on an amplifier's power-output circuitry and ignoring the input circuitry, exactly the `energy budget' mindset mentioned in chapter 1. The disinformers used the same tactic with the ozone hole, saying that the pollutants alleged to cause it were present in such tiny amounts that they couldn't possibly be important. Well, for the ozone hole there's an amplifier mechanism too; it's called catalysis. One molecule of pollutant can destroy many tens of thousands of ozone molecules.
In all humility, I think I can fairly claim to be qualified as a dispassionate observer of the climate-science scene. I would dearly love to believe the disinformers when they say that carbon dioxide is unimportant for climate. And my own professional work has never been funded for climate science as such.
However, my professional work on the ozone hole and the fluid dynamics of the great jetstreams has taken me quite close to research issues in the climate-science community. Those of its members whom I know personally are ordinary, honest scientists, respectful of the scientific ideal and ethic. They include many brilliant thinkers and innovators. Again and again, I have heard members of the community giving careful conference talks on the latest findings. They are well aware of the daunting complexity of the problem, of the imperfections of the big climate models, of the difficulty of weeding out data errors, and of the need to avoid superficial viewpoints, false dichotomies, and exaggerated claims. Those concerns are reflected in the restrained and cautious tone of the vast reports published by the Intergovernmental Panel on Climate Change (IPCC). The reports make heavy reading but contain reliable technical information about the basic physics and chemistry I'm talking about such as, for instance, the magnitude of greenhouse-gas heating as compared with variations in the Sun's output -- the variable solar `constant'.
As it happens, my own professional work has involved me in solar physics as well; and my judgement on that aspect would be that the most recent IPCC assessment of solar variation is substantially correct, namely that solar variation is too small to compete with past and present carbon-dioxide injections. That's based on very recent improvements in our understanding of solar physics, to be mentioned below.
* * *
Let's pause for a moment to draw breath. I want to be more specific on how past climates have informed us about these issues, using the latest advances in our understanding. I'll try to state the leading implications and the reasoning behind them. The focus will be on implications that are extremely clear and extremely robust. They are independent of fine details within the climate system, and independent of the imperfections of the big climate models.
* * *
The first point to note is that human activities are increasing the carbon dioxide in the atmosphere by amounts that will be large.
They will be large in the only relevant sense, that is, large by comparison with the natural range of variation of atmospheric carbon dioxide with the Earth system close to its present state. The natural range is well determined from ice-core data, recording the extremes of the hundred-millennium glacial cycles. That's one of the hardest, clearest, most unequivocal pieces of evidence we have. It comes from the ability of ice to trap air, beginning with compacted snowfall, giving us clean air samples from the past 800 millennia from which carbon dioxide concentrations can be reliably measured.
In round numbers the natural range of variation of atmospheric carbon dioxide is close to 100 ppmv, 100 parts per million by volume. The increase since pre-industrial times now exceeds 120 ppmv. In round numbers we have gone from a glacial 180 ppmv through a pre-industrial 280 ppmv up to today's values, just over 400 ppmv. And on current trends the 400 ppmv will have increased to 800 ppmv or more by the end of this century. An increase from 180 to 800 ppmv is an increase of the order of six times the natural range of variation. Whatever happens, therefore, the climate system will be like a sensitive amplifier subject to a large new input signal, the only question being just how large -- just how many times larger than the natural range.
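The round-number arithmetic above is easily checked. Here is a minimal sketch, using only the figures quoted in the text:

```python
# Round-number check of the carbon-dioxide concentrations quoted in the text.
glacial = 180          # ppmv, glacial minimum
preindustrial = 280    # ppmv
end_century = 800      # ppmv, on current trends, by the end of this century

natural_range = preindustrial - glacial      # the ~100 ppmv natural range
print(natural_range)                         # 100

# Increase from the glacial minimum to a possible 800 ppmv,
# expressed as multiples of the natural range:
print((end_century - glacial) / natural_range)   # 6.2, i.e. about six times
```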
For comparison with, say, 800 ppmv, the natural variation across recent glacial cycles has been roughly from minima around 180-190 ppmv to maxima around 280-290 ppmv but then back again, i.e., in round numbers, over the aforementioned natural range of about 100 ppmv -- repeatedly and consistently back and forth over several hundreds of millennia (recall Figure 3 in chapter 2). The range appears to have been determined largely by deep-ocean storage and leakage rates. Storage of carbon in the land-based biosphere, and input from volcanic eruptions, appear to have played only secondary roles in the glacial cycles, though wetland biogenic methane emissions are probably among the significant amplifier mechanisms or positive feedbacks.
Recent work (e.g. Shakun et al. 2012, Skinner et al. 2014) is clarifying how the natural 100 ppmv carbon-dioxide injections involved in `deglaciations', the huge transitions from the coldest to the warmest extremes, arose mainly by release of carbon dioxide from the oceans through an interplay of ice-sheet and ocean-circulation changes, and through many other events in a complicated sequence triggering positive feedbacks -- the whole sequence having been initiated then reinforced by a small orbital change, as explained below. (The disinformers ignore all these complexities by saying that the Earth somehow warmed. The warming then caused the release of carbon dioxide, they say, with little further effect on temperature.)
The deglaciations show us just how sensitive the climate-system amplifier can be. What I'm calling its input circuitry includes ice-sheet dynamics and what's called the natural `carbon cycle', though a better name would be `carbon sub-system'. Calling it a sub-system would do more justice to the vast complexity already hinted at, dependent on deep-ocean storage (mostly as bicarbonate ions), on chemical and biochemical transformations on land and in the oceans, on complex groundwater, atmospheric and oceanic flows down to the finest scales of turbulence, on sea-ice cover and upper-ocean layering and indeed on biological and ecological adaptation and evolution -- nearly all of which is outside the scope of the big climate models. Much of it is also outside the scope of specialist carbon-cycle models, if only because such models grossly oversimplify the transports of carbon and biological nutrients by fluid flows, within and across the layers of the sunlit upper ocean for instance. But we know that the input circuitry was sensitive during deglaciations without knowing all the details of the circuit diagram. It's the only way to make sense of the records in ice cores, in caves, and in the sediments under lakes and oceans, which tell us many things about the climate system's actual past behaviour (e.g. Alley 2000, 2007).
The records showing the greatest detail are those covering the last deglaciation. Around 18 millennia ago, just after the onset of an initiating orbital change, atmospheric carbon dioxide started to build up from a near-minimum glacial value around 190 ppmv toward the pre-industrial 280 ppmv. Around 11 millennia ago, it was already close to 265 ppmv. That 75 ppmv increase was the main part of what I'm calling a natural injection of carbon dioxide into the atmosphere. It must have come from deep within the oceans since, in the absence of artificial injections by humans, it's only the oceans that have the ability to store the required amounts of carbon, in suitable chemical forms. Indeed, land-based storage worked mostly in the opposite direction as ice retreated and forests spread.
The oceans not only have more than enough storage capacity as such, but also possess mechanisms to store and release extra carbon dioxide, involving limestone-sludge chemistry (e.g. Marchitto et al. 2006). How much carbon dioxide is actually stored or released is determined by a delicate competition between storage rates and leakage rates. For instance one has storage via dead phytoplankton sinking from the sunlit upper ocean into the deepest waters. That storage process, it's now clear, is strongly influenced by details of the ocean circulation, especially near Antarctica, and by the circulation's effects on gas exchange between deep waters and atmosphere and on phytoplankton nutrient supply and uptake, all of which is under scrutiny in current research (e.g. Le Quéré et al. 2007; Burke et al. 2015; Watson et al. 2015, & refs.).
In addition to the ice-core record of atmospheric carbon-dioxide buildup starting 18 millennia ago, we have hard evidence for what happened to sea levels. The sea level rise began in earnest about two millennia afterwards, that is, about 16 millennia ago, and a large fraction of it had taken place within a further 8 millennia. The total sea level rise over the whole deglaciation was by most estimates well over 100 metres, perhaps as much as 140. It required the melting of huge volumes of land-based ice.
Our understanding of how the ice melted is incomplete, but it must inevitably have involved a complex interplay between snow deposition, ice flow and ablation, and ocean-circulation and sea-ice changes releasing carbon dioxide. The main carbon-dioxide injection starting 18 millennia ago must have significantly amplified the whole process. That statement holds independently of climate-model details, being a consequence only of the persistence, the global scale, and the known order of magnitude of the greenhouse heating from carbon dioxide, all of which are indisputable. Still earlier, between 20 and 18 millennia ago, a relatively small amount of orbitally-induced melting of the northern ice sheets seems to have triggered a massive Atlantic-ocean circulation change, reaching all the way to Antarctica and, in this and other ways, to have started the main carbon-dioxide injection. The buildup of greenhouse heating was then able to reinforce a continuing increase in the orbitally-induced melting. That in turn led to the main acceleration in sea level rise, two millennia later. Some of the recent evidence supporting this picture is summarized here.
The small orbital changes are well known and can be calculated very precisely over far greater, multi-million-year timespans, thanks to the remarkable stability of the solar system's planetary motions. The orbital changes include a 2° oscillation in the tilt of the Earth's axis (between about 22° and 24°) and a precession that keeps reorienting the axis relative to the stars, redistributing solar heating in latitude and time while hardly changing its average over the globe and over seasons. Figure 12, taken from Shackleton (2000), shows the way in which the midsummer peak in solar heating at 65°N has varied over the past 400 millennia:
Figure 12: Midsummer diurnally-averaged insolation at 65°N, in W m-2, from Shackleton (2000), using orbital calculations carried out by André Berger and co-workers. They assume constant solar output but take careful account of variations in the Earth's orbital parameters in the manner pioneered by Milutin Milanković. Time in millennia runs from right to left.
The vertical scale on the right is the local, diurnally-averaged midsummer heating rate from incoming solar radiation at 65°N, in watts per square metre. It is these local peaks that are best placed to initiate melting on the northern ice sheets. One gets a peak when closest to the Sun with the North Pole tilted toward the Sun. However, such melting is not in itself enough to produce a full deglaciation. Only one peak in every five or so is associated with anything like a full deglaciation. They are the peaks marked with vertical bars. The timings can be checked from Figure 3 in chapter 2. The marked peaks were accompanied by the biggest carbon-dioxide injections, as measured by atmospheric concentrations reaching 280 ppmv or more. It's noteworthy that, of the two peaks at around 220 and 240 millennia ago, it's the smaller peak around 240 millennia that's associated with the bigger carbon-dioxide and temperature response. The bigger peak around 220 millennia is associated with a somewhat smaller response.
In terms of the amplifier metaphor, therefore, we have an input circuit whose sensitivity varies over time. In particular, the sensitivity to high-latitude solar heating must have been greater at 240 than at 220 millennia ago. That's another thing we can say independently of the climate models.
There are well known reasons to expect such variations in sensitivity. One is that the system became more sensitive when it was fully primed for the next big carbon-dioxide injection. To become fully primed it needed to store enough extra carbon dioxide in the deep oceans. Extra storage was favoured in the coldest conditions, which tended to prevail during the millennia preceding full deglaciations. How this came about is now beginning to be understood, with changes in ocean circulation near Antarctica playing a key role, alongside limestone-sludge chemistry and phytoplankton fertilization from iron in airborne dust (e.g. Watson et al. 2015, & refs.). Also important was a different priming mechanism, the slow buildup and areal expansion of the northern land-based ice sheets. The ice sheets slowly became more vulnerable to melting in two ways, first by expanding equatorward into warmer latitudes, and second by bearing down on the Earth's crust, taking the upper surface of the ice down to warmer altitudes. This ice-sheet-mediated priming mechanism would have made the system more sensitive still.
Specialized model studies (e.g. Abe-Ouchi et al. 2013, & refs.) have long supported the view that both priming mechanisms are important precursors to deglaciation. It appears that both are needed to account for the full magnitudes of deglaciations like the last. It must be cautioned, however, that our ability to model the details of ice flow and snow deposition is still very limited. That's related to some of the uncertainties now facing us about the future. For one thing, there are signs that parts of the Greenland ice sheet are becoming more sensitive today, as well as parts of the Antarctic ice sheet, especially the part known as West Antarctica where increasingly warm seawater is intruding sideways underneath the ice, some of which is grounded below sea level. In the past, there have been huge surges in ice flow called Heinrich events, whose dynamics are not well understood.
Ice-flow modelling is peculiarly difficult because of the need to describe slipping and lubrication at the base of an ice sheet, over areas whose sizes, shapes, and frictional properties are hard to predict, while accounting for the highly complex fracture patterns that might or might not develop in the ice as meltwater chisels downwards and seawater intrudes sideways.
As regards the deglaciations and the roles of the abovementioned priming mechanisms -- ice-sheet buildup and deep-ocean carbon dioxide storage -- two separate questions must be distinguished. One concerns the magnitudes of deglaciations. The other concerns their timings, every 100 millennia or so over the last few glacial cycles. For instance, why aren't they just timed by the strongest peaks in the orbital curve above?
It's hard to assess the timescale for ocean priming because, here, our modelling ability is even more limited, not least regarding the details of sunlit upper-ocean circulation and layering where phytoplankton live (see for instance Marchitto et al. 2006, and my notes thereto). We need differences between storage rates and leakage rates; and neither is modelled, nor observationally constrained, with anything like sufficient accuracy. However, Abe-Ouchi et al. make a strong case that the timings of deglaciations, as distinct from their magnitudes, must be largely determined not by ocean storage but by ice-sheet buildup. That conclusion depends not on a small difference between ill-determined quantities but, rather, on a single gross order of magnitude, namely the extreme slowness of ice-sheet buildup by snow accumulation, which is crucial to their model results. And ocean priming is unlikely to be slow enough to account for the full 100-millennia timespan. But the results also reinforce the view that the two priming mechanisms are both important for explaining the huge magnitudes of deglaciations.
Today, in the year 2017, with atmospheric carbon dioxide overtopping the 400 ppmv mark and far above the pre-industrial 280 ppmv, we have already had a total, natural plus artificial, carbon-dioxide injection more than twice as large as the preceding natural injection, as measured by atmospheric buildup. Even though the system's sensitivity may be less extreme than just before a deglaciation, the climate response would be large even if the buildup were to stop tomorrow. That's despite the way the injected carbon dioxide is repartitioned between the atmosphere, the oceans and the land-based biosphere, and despite what's technically called carbon-dioxide opacity, producing what's called a logarithmic dependence in its greenhouse heating effect (e.g. Pierrehumbert 2010, sec. 4.4.2). Logarithmic dependence means that the magnitude of the heating effect is described by a graph that continues to increase as atmospheric carbon dioxide increases, but progressively less steeply. That's well known and was pointed out long ago by Arrhenius.
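To illustrate what logarithmic dependence means in practice, here is a sketch using a widely quoted simplified fit for carbon-dioxide greenhouse forcing, roughly 5.35 ln(C/C0) watts per square metre relative to a baseline concentration C0. The 5.35 coefficient is my own assumption for illustration, not a figure from this book, but it conveys the Arrhenius point: the heating keeps increasing, only less steeply.

```python
import math

def co2_forcing(c_ppmv, c0_ppmv=280.0):
    """Approximate greenhouse forcing (W/m^2) relative to a baseline
    concentration, using the commonly quoted logarithmic fit
    dF ~ 5.35 * ln(C/C0).  The 5.35 coefficient is assumed here."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

# The curve keeps rising as carbon dioxide rises, but progressively
# less steeply -- the 'logarithmic dependence' mentioned in the text:
print(round(co2_forcing(400), 2))   # ~1.91 W/m^2 for today's 400 ppmv
print(round(co2_forcing(560), 2))   # ~3.71 W/m^2 for a doubling of CO2
print(round(co2_forcing(800), 2))   # ~5.62 W/m^2 for 800 ppmv
```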
A consideration of sea levels puts all this in perspective. A metre of sea level rise is only a tiny fraction of the 100 metres or more by which sea levels rose between 20 millennia ago and today, and the additional 70 metres or so by which they'd rise if all the land-based ice sheets were to melt. It's overwhelmingly improbable that an atmospheric carbon-dioxide buildup twice as large as the natural range, let alone six times as large, or more, as advocated by the climate disinformers and their political allies, would leave sea levels clamped precisely at today's values. There is no known, or conceivable, mechanism for such clamping -- it would be Canute-like to suppose that there is -- and there's great scope for substantial further sea level rise. For instance a metre of global-average sea level rise corresponds to the melting of only 5% of today's Greenland ice plus 1% of today's Antarctic. That's nothing at all by comparison with a deglaciation scenario, but is already very large from a human perspective. And it could easily be several metres or more, over the coming decades and centuries, with drastic geopolitical consequences, especially if we fail to curb greenhouse-gas emissions soon.
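The "5% of Greenland plus 1% of Antarctica" figure can be sketched in a couple of lines. The sea-level-equivalent volumes used below, roughly 7 metres for Greenland and 58 metres for Antarctica, are my own round-number assumptions (consistent with the roughly 70 metres quoted in the text for all land-based ice), not values given in this book:

```python
greenland_sle = 7.0    # metres of sea-level equivalent (assumed round number)
antarctica_sle = 58.0  # metres of sea-level equivalent (assumed round number)

# Melting 5% of Greenland's ice plus 1% of Antarctica's:
rise = 0.05 * greenland_sle + 0.01 * antarctica_sle
print(round(rise, 2))  # 0.93 -- roughly a metre of global-average sea level rise
```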
An integral part of the picture is that artificial carbon-dioxide injections have cumulative and, from a human perspective, essentially permanent and irreversible effects on the entire atmosphere-ocean-land system. Among these are large effects on ocean ecosystems and food chains, the destruction of coral reefs for instance, as they respond to rising temperatures and to the ocean acidification that results from repartitioning. Our own food chains will be affected, more and more drastically. The natural processes that can take the artificially-injected carbon dioxide back out of the system as a whole have timescales far longer even than tens of millennia (e.g. Archer 2009). To be sure, the carbon dioxide could be taken back out artificially, using known technologies -- that's by far the safest form of geoengineering, so-called -- but the expense has made this politically impossible so far.
Cumulativeness means that the effect of our carbon-dioxide injections on the climate system depends mainly on the total amount injected, and hardly at all on the rate of injection.
From a risk-management perspective it would be wise to assume that the climate-system amplifier is already more sensitive than in the pre-industrial past. The risk from ill-known factors increases as the system moves further and further away from its best-known states, those of the past few hundred millennia. There are several reasons to expect increasing sensitivity, among them the ice-sheet sensitivity already mentioned. Another is the loss of sea ice in the Arctic, increasing the area of open ocean exposed to the summer sun. The dark open ocean absorbs solar heat faster than the white sea ice. This is a strong positive feedback, on top of the Arctic's tendency to warm faster than the rest of the planet, one of the most robust predictions from several generations of climate models. A third reason is the existence of what are called methane clathrates, or frozen methane hydrates, large amounts of which are stored underground in high latitudes. They add yet another positive feedback, increasing the sensitivity yet further.
Methane clathrates consist of natural gas trapped in ice instead of in shale. There are large amounts buried in permafrosts, probably dwarfing conventional fossil-fuel and shale-gas reserves, although the precise amounts are uncertain (Valero et al. 2011). As the system moves further beyond pre-industrial conditions, increasing amounts of clathrates will melt and release methane gas. It's well documented that such release is happening today, at a rate that isn't well quantified but is almost certainly increasing (e.g. Shakhova et al. 2014, Andreassen et al. 2017). Permafrost has become another self-contradictory term: it is now very temporary. This is another positive feedback whose ultimate magnitude is highly uncertain but which does increase the probability, already far from negligible, that the Earth system might go all the way into a very hot, very humid state like that of the early Eocene around 56 million years ago. Methane that gets into our atmosphere jolts the system toward hotter states because, in the short term, it's more powerful than methane that's burnt or otherwise oxidized: its greenhouse-warming contribution per molecule is far greater than that of the carbon dioxide to which it's subsequently converted within a decade or so (e.g. Pierrehumbert 2010, sec. 4.5.4).
Going into a new Eocene or `hothouse Earth' would mean first that there'd be no great ice sheets at all, even in Antarctica, second that sea levels would be about 70 metres higher than today -- some hundreds of feet higher -- and third that cyclonic storms would be much more frequent and much more powerful than today -- much more powerful than Cyclone Idai, which recently devastated large areas of Mozambique. A piece of robust and well-established physics, called the Clausius-Clapeyron relation, says that air can hold increasing amounts of weather fuel, in the form of water vapour, as temperatures increase -- around six to seven percent more weather fuel for each degree Celsius. And the geology of the early Eocene shows clear evidence of `storm flood events' and massive soil erosion (e.g. Giusberti et al. 2016). It's therefore no surprise that some land-based mammals found it useful to migrate into the oceans around the time of the early Eocene. That's clear both from the fossil record and from genomics. Within a relatively short time, several million years, some of them had evolved into fully aquatic mammals like today's whales and dolphins. Selective pressures from extreme surface storminess make sense of those extraordinary evolutionary events.
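The "six to seven percent per degree" figure follows directly from the Clausius-Clapeyron relation. Here is a minimal sketch, using standard textbook values for the latent heat of vaporization and the gas constant of water vapour; those two constants are my assumptions, not figures from this book:

```python
import math

L = 2.5e6    # latent heat of vaporization of water, J/kg (approximate)
Rv = 461.5   # specific gas constant for water vapour, J/(kg K)

def sat_vapour_ratio(t1, t2):
    """Ratio of saturation vapour pressures at temperatures t2 and t1 (kelvin),
    from the Clausius-Clapeyron relation with L taken as constant."""
    return math.exp(L / Rv * (1.0 / t1 - 1.0 / t2))

# Fractional increase in 'weather fuel' capacity per degree, near 15 C:
t = 288.15
increase = sat_vapour_ratio(t, t + 1.0) - 1.0
print(round(100 * increase, 1))   # ~6.7 percent per degree Celsius
```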
The early Eocene was hot, humid, and stormy despite the Sun being about half a percent weaker than today. We don't have accurate records of atmospheric carbon dioxide at that time. But extremely high values, perhaps thousands of ppmv, are to be expected from large-scale volcanic activity. Past volcanic activity was sometimes far greater and more extensive than anything within human experience, as with the pre-Eocene lava flows that covered large portions of India, whose remnants form what are called the Deccan Traps, and -- actually overlapping the time of the early Eocene and even more extensive -- the so-called North Atlantic Igneous Province. Sufficiently high carbon dioxide can easily explain the high temperatures and high humidity, despite the weaker Sun.
The weakness of the Eocene Sun counts as something else that we know about with extremely high scientific confidence. The Sun's total power output gets stronger by roughly 1 percent every hundred million years. The solar models describing the power-output increase have become extremely secure -- very tightly cross-checked -- especially now that the so-called neutrino puzzle has been resolved. Even before that puzzle was resolved a few years ago, state-of-the-art solar models were tightly constrained by a formidable array of observational data, including very precise data characterizing the Sun's acoustic vibrations, called helioseismic data. The same solar models are now known to be consistent, also, with the measured fluxes of different kinds of neutrino. That's a direct check on conditions near the centre of the Sun, where the nuclear reactions powering it take place.
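The "half a percent weaker" figure for the early Eocene is simply the stated brightening rate applied over the intervening 56 million years; a trivial consistency check:

```python
brightening_per_100myr = 0.01   # roughly 1 percent per hundred million years
eocene_age_myr = 56             # early Eocene, millions of years ago

deficit = brightening_per_100myr * eocene_age_myr / 100
print(round(100 * deficit, 2))  # 0.56 -- the Eocene Sun was about half a
                                # percent weaker than today's
```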
These solar models, plus recent high-precision observations, plus recent advances in understanding the details of radiation from the Sun's surface and atmosphere, point strongly to another significant conclusion. Variability in the Sun's output on timescales much less than millions of years comes from variability in sunspots and other magnetic phenomena. These phenomena are by-products of the turbulent fluid motion caused by thermal convection in the Sun's outer layers. That variability is now known to have climatic effects distinctly smaller than the effects of carbon dioxide injections to date, and very much smaller than those to come. The climatic effects from solar magnetism include not only the direct response to a slight variability in the Sun's total power output, but also some small and subtle effects from a greater variability in the Sun's ultraviolet radiation, which is absorbed mainly at stratospheric and higher altitudes. The main points are well covered in reviews by Foukal et al. (2006) and Solanki et al. (2013). Controversially, there might be an even more subtle effect from cloud modulation by cosmic-ray shielding. But to propose that any of these effects predominates over greenhouse-gas heating, still more that their timings should coincide with, for instance, the timings of full deglaciations -- the timings of the marked peaks in the orbital curve above -- would be to propose something that's again overwhelmingly improbable.
With the Sun half a percent stronger today, and a new Eocene or hothouse Earth in prospect -- one might call it the Eocene syndrome -- we must also consider what might similarly be called the Venus syndrome. That's the ocean-destroying, life-extinguishing `runaway greenhouse' leading to a state like the observed state of the planet Venus, with its molten-lead surface temperatures. Here we can be a bit more optimistic. Even if the Earth does go into a new Eocene -- perhaps after a few centuries, or a millennium or two -- the Venus syndrome seems unlikely to follow, on today's best estimates. Modelling studies suggest that the Earth can probably avoid tropical runaway-greenhouse conditions. It can do so with the help of the same powerful cyclonic storms, transporting heat and weather fuel more and more copiously away from the tropical oceans into high winter latitudes. So, whatever happens to storm-devastated human societies, and to the biosphere, over the next few centuries, life on Earth will probably survive.
Coming back to our time in the twenty-first century, let's take a closer look at the storminess issue for the near future. Once again, the Clausius-Clapeyron relation is basic: global warming equals global fuelling. Looking beyond tropical cyclones like Idai, also called typhoons and hurricanes, we can also expect more storminess in higher latitudes. There, the Earth's rotation rate, which we can take to be a constant for this purpose, has a very strong influence on the large-scale fluid dynamics. It will tend to preserve the characteristic spatial scales and morphologies of the extratropical jetstreams and cyclones that are familiar today, suggesting in turn that the peak intensities of the jetstreams, cyclones and concentrated rainfall events will increase as they're fed with, on average, more and more weather fuel from the tropics and subtropics -- more and more weather fuel going into similar-sized, similar-shaped regions.
So the transition toward a hotter, more humid climate is likely to show extremes in both directions at first: wet and dry, hot and cold, heatwaves and severe cold outbreaks. Fluctuations involving jetstream meanders and tropical moist convection are all likely to intensify, on average, with increasingly large excursions in both directions. They're all tied together by the fluid dynamics. Again thanks to the Earth's rotation, fluid-dynamical influences operate all the way out to planetary scales. In the technical literature such long-range influences are called `teleconnections'. They form a global-scale jigsaw of mutual influences, a complex web of cause and effect operating over a range of timescales out to decades. They have a role for instance in El Niño and other phenomena involving large-scale, decadal-timescale fluctuations in tropical sea-surface temperature and moist convection, exchanging vast amounts of heat between atmosphere and ocean.
None of these phenomena are adequately represented in the big climate models. Although the models are important as part of our hypothesis-testing toolkit -- and several generations of them have robustly predicted the Arctic maximum in warming trends -- they cannot yet accurately simulate such things as the fine details and morphology of jetstreams, cyclones, tropical moist convection, the precise teleconnections between them, and their peak intensities. So as yet they're inadequate as a way of predicting statistically the timings, sequences, and geographic patterns of events, and the precise magnitudes of extreme events, over the coming decades and centuries. Estimating weather extremes over the next few decades is one of the toughest challenges for climate science.
Today's operational weather-forecasting models are getting better at simulating individual storms and precipitation, including extremes. That's mainly because of their far finer spatial resolution, implying a far greater computational cost per day's simulation.
The computational cost still makes it impossible to run such operational models out to many centuries. However, in a recent landmark study an operational model was run on a UK Meteorological Office supercomputer long enough, for the first time, to support the expectation that climate change and global fuelling will increase the magnitudes and frequencies of extreme summer rainfall events in the UK (Kendon et al. 2014). The results point to even greater extremes than expected from the Clausius-Clapeyron relation alone. There's a positive feedback in which more weather fuel amplifies thundercloud updrafts, enabling them to suck up still more weather fuel, for a short time at least.
Such rainfall extremes are spatially compact and the most difficult of all to simulate. As computer power increases, though, there will be many more such studies -- transcending recent IPCC estimates by more accurately describing the statistics of extreme rainstorms and snowstorms, and droughts and heatwaves, in all seasons and locations. Winter storms are spatially more extensive and are better simulated, but again only by the operational weather-forecasting models and not by the big climate models.
Dear reader, before taking my leave I owe you a bit more explanation of the amplifier metaphor. As should already be clear, it's an imperfect metaphor at best. To portray the climate system as an amplifier we need to recognize not only its highly variable sensitivity but also its many intricately-linked components operating over a huge range of timescales -- some of them out to multi-decadal, multi-century, multi-millennial and even longer. And the climate-system amplifier would be pretty terrible as an audio amplifier if only because it has so much internal noise and variability, on so many timescales, manifesting the `nonlinearity' already mentioned. An audio aficionado would call it a nasty mixture of gross distortions and feedback instabilities -- as when placing a microphone too close to the loudspeakers -- except that the instabilities have many timescales. Among the longer-timescale components there's the deep-ocean storage of carbon dioxide and the land-based processes including the waxing and waning of forests, wetlands, grasslands, and deserts, as well as ice-flow sensitivity and ice-sheet dynamics, operating on timescales all the way out to a mind-blowing 100 millennia.
Some of the system's noisy internal fluctuations are relatively sudden, for instance showing up as the Dansgaard-Oeschger warming events encountered in chapter 2, subjecting tribes of our ancestors to major climatic change well within an individual's lifetime and probably associated with a collapse of upper-ocean layering and sea-ice cover in the Nordic Seas (Dokken et al. 2013). A similar tipping point might or might not occur in the Arctic Ocean in the next few decades, with hard-to-predict consequences for the Greenland ice sheet and the methane clathrates.
All these complexities help the climate disinformers, of course, because from all the many signals within the system one can always cherry-pick some that seem to support practically any view one wants, especially if one replaces insights into the workings of the system, as seen from several viewpoints, by superficial arguments that conflate timing with cause and effect. Natural variability and noise in the data provide many ways to cherry-pick data segments, showing what looks like one or another trend or phase relation and adding to the confusion about different timescales. To gain what I'd call understanding, or insight, one needs to include good thought-experiments in one's conceptual arsenal. Such thought-experiments are involved, for instance, when considering injections of carbon dioxide and methane into the atmosphere, whether natural or artificial or both.
I also need to say more about why we can trust the ice-core records of atmospheric carbon dioxide, and methane as well. Along with today's atmospheric measurements the ice-core records count as hard evidence, in virtue of the simplicity of the chemistry and the meticulous cross-checking that's been done -- for instance by comparing results from different methods to extract the carbon dioxide trapped in ice, by comparing results between different ice cores having different accumulation rates, and by comparing with the direct atmospheric measurements that have been available since 1958. We really do know with practical certainty the past as well as the present atmospheric carbon-dioxide concentrations, with accuracies of the order of a few percent, as far back as about eight hundred millennia even though not nearly as far back as the Eocene. Throughout the past eight hundred millennia, atmospheric carbon dioxide concentrations varied roughly within the range 180 to 290 ppmv, as already noted. More precisely, all values were within that range except for a very few outlier values closer to 170 and 300 ppmv. All values without exception were far below today's 400 ppmv, let alone the 800 ppmv that the climate disinformers would like us to reach by the end of this century.
And why do I trust the geological record of past sea levels, going up and down by 100 metres or more? We know about sea levels from several hard lines of geological evidence, including direct evidence from old shoreline markings and coral deposits. It's difficult to allow accurately for such effects as the deformation of the Earth's crust and mantle by changes in ice and ocean-mass loading, and tectonic effects generally. But the errors from such effects are likely to be of the order of metres, not many tens of metres, over the last deglaciation at least. And, as is well known, an independent cross-check comes from oxygen isotope records (e.g. Shackleton 2000), reflecting in part the fractionation between light and heavy oxygen isotopes when water is evaporated from the oceans and deposited as snow on the great ice sheets. That cross-check is consistent with the geological estimates.
* * *
So -- in summary -- we may be driving in the fog, but the fog is clearing. The disinformers urge us to shut our eyes and step on the gas. The current US president wants us to burn lots of `beautiful clean coal', pushing hard toward a new Eocene. But the disinformation campaign now seems, at last, to be meeting the same fate for climate as it did for the ozone hole and for tobacco and lung cancer.
Earth observation and modelling will continue to improve, helped by the new techniques of Bayesian causality theory and artificial intelligence. The link between global fuelling and weather extremes will become increasingly clear as computer power increases and case studies accumulate. Younger generations of scientists, engineers and business entrepreneurs will see more and more clearly through the real fog of scientific uncertainty, as well as through the artificial fog of disinformation. Scientists will continue to become more skilful as communicators.
Of course climate isn't the only huge challenge ahead. There's the evolution of pandemic viruses and of antibiotic resistance in bacteria. There's the threat of asteroid strikes. There's the enormous potential for good or ill in new nanostructures and materials, and in genetic engineering, information technology, social media, cyberwarfare, teachable artificial intelligence, and automated warfare -- `Petrov's nightmare', one might call it -- all of which demand clear thinking and risk management (e.g. Rees 2014). On teachable artificial intelligence, for instance, clear thinking requires escape from yet another version of the us-versus-them mindset with either us, or them, the machines, ending up `in charge' or `taking control' -- whatever that might mean -- and completely missing the complexity, and plurality, of human-machine interaction and the possibility that it might be cooperative, with each playing to its strengths. Why not have a few more `brain hemispheres', natural and artificial, helping us to solve our problems and to cope with the unexpected?
On risk management, the number of ways for things to go wrong is combinatorially large, some of them with low probability but enormous cost -- unintended consequences that can easily be overlooked. So I come back to my hope that good science -- which in practice means open science with its powerful ideal and ethic, its openness to the unexpected, and its humility -- will continue to survive and prosper despite all the forces ranged against it.
After all, there are plenty of daring and inspirational examples. One of them is open-source software, and another is Peter Piot's work on HIV/AIDS and other viral diseases such as Ebola. Yet another is the human genome story. There, the scientific ideal and ethic prevailed against corporate might (Sulston and Ferry 2003), keeping the genomic data available to open science. When one contemplates not only human weakness but also the vast resources devoted to short-term profit, by fair means or foul, one can't fail to be impressed that good science gets anywhere at all. That it has done so again and again, against the odds, is to me, at least, very remarkable and indeed inspirational.
The ozone-hole story, in which I myself was involved professionally, is another such example. The disinformers tried to discredit everything we did, using the full power of their commercial and political weapons. What we did was seen as heresy -- as with lung cancer -- a threat to share prices and profits. And yet the science, including all the cross-checks between different lines of evidence both observational and theoretical, became strong enough, adding up to enough in-depth understanding, despite the complexity of the problem, to defeat the disinformers in the end. The result was the Montreal Protocol on ozone-depleting chemicals -- a new symbiosis between regulation and market forces. That too was inspirational. And it has bought us a bit more time to deal with climate, because the ozone-depleting chemicals are also potent greenhouse gases. If left unregulated, they would have accelerated climate change still further.
And on climate itself we now seem, at long last, to have reached a similar turning point. The Paris climate agreement of December 2015 and its 2018 follow-up in Katowice prompt a dawning hope that the politics is changing enough to allow another new, and similarly heretical, symbiosis (e.g. Farmer et al. 2019). The disinformers are still very powerful, within the news media, the social media, and within many political circles and constituencies. But free-market fundamentalism and triumphalism were somewhat weakened by the 2008 financial crash. On top of that, the old push to burn all fossil-fuel reserves (e.g. Klein 2014) -- implying a huge input to the climate amplifier -- is increasingly seen as risky even in purely financial terms. It is seen as heading toward another financial crash, which will be all the bigger the longer it's delayed -- what's now called the bursting of the shareholders' carbon bubble. Indeed, some of the fossil-fuel companies have now recognized the need to change their business models and are seriously considering, for instance, carbon capture and storage, economical in the long term (Oxburgh 2016) -- allowing fossil fuels to be burnt without emitting carbon dioxide into the atmosphere -- as well as helping to scale up carbon-neutral `renewable' energy including third-world-friendly distributed energy systems. That's the path to prosperity noted a decade ago by economist Nicholas Stern (2009).
A further sign of hope is the recent publication of a powerful climate-risk assessment (King et al. 2015), drawing on professional expertise not only from science but also from the insurance industry and the military and security services, saying that there's no need for despair or fatalism because `The risks of climate change may be greater than is commonly realized, but so is our capacity to confront them.' And there are signs of a significant corporate response here and there in, for instance, the 2015 CDP Global Climate Change Report (Dickinson et al. 2015). If we're lucky, all this might tip the politics far enough for the Paris agreement to take hold, despite the inevitable surge of disinformation against it.
As regards good science in general, an important factor in the genome story, as well as in the ozone-hole story, was a policy of open access to experimental and observational data. That policy was one of the keys to success. The climate-science community was not always so clear on that point, giving the disinformers further opportunities. However, the lesson now seems to have been learnt.
I don't think, by the way, that everyone contributing to climate disinformation is consciously dishonest. Honest scepticism is crucial to science; and I wouldn't question the sincerity of colleagues I know personally who feel, or used to feel, that the climate-science community got things wrong. Indeed I'd be the last to suggest that that community, or any other scientific community, has never got anything wrong even though my own sceptical judgement is that today's climate-science consensus is mostly right and that, if anything, it underestimates the problems ahead.
It has to be remembered that unconscious assumptions and mindsets are always involved, in everything we do and think about. The anosognosic patient is perfectly sincere in saying that a paralysed left arm isn't paralysed. There's no dishonesty. It's just an unconscious thing, an extreme form of mindset. Of course the professional art of disinformation involves what sales and public-relations people call `positioning' -- the skilful manipulation of other people's unconscious assumptions, related to what cognitive scientists call `framing' (e.g. Kahneman 2011, Lakoff 2014, & refs.).
As used by professional disinformers the framing technique exploits, for instance, the dichotomization instinct -- evoking the mindset that there are just two sides to an argument. The disinformers then insist that their `side' merits equal weight! This and other such techniques illustrate what I called the dark arts of camouflage and deception so thoroughly exploited, now, by the globalized plutocracies and their political allies, drawing on their vast financial resources and their deep knowledge of the way perception works. One of the greatest such deceptions has been the mindset, so widely and skilfully promoted, that carbon-neutral or renewable energy is `impractical' and `uneconomic', despite all the demonstrations to the contrary. It's inspirational, therefore, to see the disinformers looking foolish and facing defeat once again, as innovations in carbon capture and in the smart technology of renewables, including electricity storage, distributed energy systems, peak power management and now, at last, the electrification of personal transport, gain more and more traction in the business world.
In science, in business, and no doubt in politics too, it often takes a younger generation to achieve what Max Born called the `loosening of thinking' needed to expose mindsets and make progress. Science, at any rate, has always progressed in fits and starts, always against the odds, and always involving human weakness alongside a collective struggle with mindsets exposed, usually, through the efforts of a younger generation. The great geneticist J.B.S. Haldane famously distinguished four stages: (1) This is worthless nonsense; (2) This is an interesting, but perverse, point of view; (3) This is true, but quite unimportant; (4) I always said so. The disputes over evolution and natural selection are a case in point.
So here's my farewell message to young scientists, technologists, and entrepreneurs. You have the gifts of intense curiosity and open-mindedness. You have the best chance of dispelling mindsets and making progress. You have enormous computing power at your disposal, and brilliant programming tools, and observational and experimental data far beyond my own youthful dreams of long ago. You have a powerful new tool, the probabilistic `do' operator, for distinguishing correlation from causality in complex systems (Pearl and Mackenzie 2018). You know the value of arguing over the evidence not to score personal or political points, but to reach toward an improved understanding. You'll have seen how new insights from systems biology have opened astonishing new pathways to technological innovation (e.g. Wagner 2014, chapter 7).
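For curious readers, the `do' operator lends itself to a toy calculation. The following sketch is my own illustrative example -- the three-variable model and all its numbers are hypothetical, not taken from Pearl and Mackenzie -- contrasting the observational probability P(Y|X), which is confounded, with the interventional probability P(Y|do(X)) computed by Pearl's back-door adjustment formula:

```python
# Toy causal model (hypothetical numbers, for illustration only):
# a confounder Z influences both X and Y, and X also influences Y.
# All three variables are binary.

P_z = {0: 0.5, 1: 0.5}                     # P(Z=z)
P_x1_given_z = {0: 0.1, 1: 0.9}            # P(X=1 | Z=z)
P_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.3, # P(Y=1 | X=x, Z=z)
                 (1, 0): 0.4, (1, 1): 0.6}

def p_x_given_z(x, z):
    """P(X=x | Z=z) for binary X."""
    return P_x1_given_z[z] if x == 1 else 1.0 - P_x1_given_z[z]

def p_y1_given_x(x):
    """Observational P(Y=1 | X=x): Z is merely conditioned on,
    so the confounding path through Z is left open."""
    num = sum(P_y1_given_xz[(x, z)] * p_x_given_z(x, z) * P_z[z]
              for z in (0, 1))
    den = sum(p_x_given_z(x, z) * P_z[z] for z in (0, 1))
    return num / den

def p_y1_do_x(x):
    """Interventional P(Y=1 | do(X=x)) via back-door adjustment:
    sum over z of P(Y=1 | x, z) * P(z), severing the Z -> X link."""
    return sum(P_y1_given_xz[(x, z)] * P_z[z] for z in (0, 1))

# Observational contrast: 0.58 vs 0.12 (difference 0.46).
# Causal contrast:        0.50 vs 0.20 (difference 0.30).
```

In this made-up example the observational contrast overstates the causal effect, because the confounder Z raises the probability of both X and Y; adjusting over Z removes the spurious part, which is exactly the correlation-versus-causality distinction the `do' operator formalizes.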
Your generation will see the future more and more clearly. Whatever your field of expertise, you know that it's fun to be curious and to find out how things work. It's fun to do thought-experiments and computer experiments. It's fun to develop and test your in-depth understanding, the illumination that can come from looking at a problem from more than one angle. You know that it's worth trying to convey that understanding to a wide audience, if you get the chance. You know that in dealing with complexity you'll need to hone your communication skills in any case, if only to develop cross-disciplinary collaboration, the usual first stage of which is jargon-busting -- as far as possible converting turgid technical in-talk into plain, lucid speaking.
So hang in there. Your collective brainpower will be needed as never before. Science isn't the Answer to Everything, but we're sure as hell going to need it.
The original Lucidity and Science publications, including video and audio demonstrations, can be downloaded via this link.
Abbott, B.P., et al., 2015: Observation of gravitational waves from a binary black hole merger. Physical Review Letters 116, 061102. This was a huge team effort at the cutting edge of high technology, decades in the making, to cope with the tiny amplitude of Einstein's ripples. The `et al.' stands for the names of over a thousand other team members. The first event was observed on 14 September 2015. Another such event, observed on 26 December 2015, and cross-checking Einstein's theory even more stringently, was reported in a second paper Abbott, B.P., et al., 2016, Physical Review Letters 116, 241103. This second paper reports the first observational constraint on the spins of the black holes, with one of the spins almost certainly nonzero.
Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M.E., Okuno, J., Takahashi, K., and Blatter, H., 2013: Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume. Nature 500, 190-194.
Alley, R.B., 2000: Ice-core evidence of abrupt climate changes. Proc. Nat. Acad. Sci. 97, 1331-1334. This brief Perspective is a readable summary, from a respected expert in the field, of the way in which measurements from Greenland ice have demonstrated the astonishingly short timescales of Dansgaard-Oeschger warmings, typically less than a decade and only a year or two in at least some cases, including that of the most recent or `zeroth' such warming about 11.7 millennia ago. The warmings had magnitudes typically, as Dokken et al. (2013) put it, `of 10±5°C in annual average temperature'.
Alley, R.B., 2007: Wally was right: predictive ability of the North Atlantic `conveyor belt' hypothesis for abrupt climate change. Annual Review of Earth and Planetary Sciences 35, 241-272. This paper incorporates a very readable, useful, and informative survey of the relevant palaeoclimatic records and recent thinking about them. Wally Broecker's famous `conveyor belt' is a metaphor for the ocean's global-scale meridional overturning circulation that has greatly helped efforts to understand the variability observed during the glacial cycles. Despite its evident usefulness, the metaphor embodies a fluid-dynamically unrealistic assumption, namely that shutting off North Atlantic deep-water formation also shuts off the global-scale return flow. (If you jam a real conveyor belt somewhere, then the rest of it stops too.) In this respect the metaphor needs refinements such as those argued for in Dokken et al. (2013), recognizing that parts of the `conveyor' can shut down while other parts continue to move, transporting heat and salt at significant rates. As Dokken et al. point out, such refinements are likely to be important for understanding the most abrupt of the observed changes, the Dansgaard-Oeschger warmings (see also Alley 2000), and the Arctic Ocean tipping point that may now be imminent.
Andreassen, K., Hubbard, A., Winsborrow, M., Patton, H., Vadakkepuliyambatta, S., Plaza-Faverola, A., Gudlaugsson, E., Serov, P., Deryabin, A., Mattingsdal, R., Mienert, J., and Bünz, S., 2017: Massive blow-out craters formed by hydrate-controlled methane expulsion from the Arctic seafloor, Science, 356, 948-953. It seems that the clathrates in high latitudes have been melting ever since the later part of the last deglaciation, probably contributing yet another positive feedback, both then and now. Today, the melting rate is accelerating to an extent that hasn't yet been well quantified but is related to ocean warming and to the accelerated melting of the Greenland and West Antarctic ice sheets, progressively unloading the permafrosts beneath. Reduced pressures lower the clathrate melting point.
Archer, D., 2009: The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth's Climate. Princeton University Press, 180 pp.
Bateson, P., and Martin, P., 1999: Design for a Life: How Behaviour Develops. London, Jonathan Cape, Random House, 280 pp.
Blackburn, E. and Epel, E., 2017: The Telomere Effect: A Revolutionary Approach to Living Younger, Healthier, Longer. London, Orion Spring. Elizabeth Blackburn is the molecular biologist who won the Nobel Prize in 2009 for her co-discovery of telomerase. Elissa Epel is a leading health psychologist. Their book explains the powerful influences of environment and lifestyle on health and ageing, via the role of the enzyme telomerase in renewing telomeres -- the end-caps that protect our strands of DNA and increase the number of pre-senescent cell divisions. Today's knowledge of telomere dynamics well illustrates the error in thinking of genes as the `ultimate causation' of everything in biology, which among other things misses the importance of multi-timescale processes, as I'll explain.
Boomsliter, P. C., Creel, W., 1961: The long pattern hypothesis in harmony and hearing. J. Mus. Theory (Yale School of Music), 5(1), 2-31. This wide-ranging and penetrating discussion was well ahead of its time and is supported by the authors' ingenious psychophysical experiments, which clearly demonstrate repetition-counting as distinct from Fourier analysis. On the purely musical issues there is only one slight lapse, in which the authors omit to notice the context dependence of tonal major-minor distinctions. On the other hand the authors clearly recognize, for instance, the relevance of what's now called auditory scene analysis (pp. 13-14).
Born, G., 2002: The wide-ranging family history of Max Born. Notes and Records of the Royal Society (London) 56, 219-262 and Corrigendum 56, 403 (Gustav Born, quoting his father Max, who was awarded the Nobel Prize in physics, belatedly in 1954. The quotation comes from a lecture entitled Symbol and Reality (Symbol und Wirklichkeit), given at a meeting in 1964 of Nobel laureates at Lindau on Lake Constance.)
Burke, A., Stewart, A.L., Adkins, J.F., Ferrari, R., Jansen, M.F., and Thompson, A.F., 2015: The glacial mid-depth radiocarbon bulge and its implications for the overturning circulation. Paleoceanography, 30, 1021-1039.
Conway, F. and Siegelman, J., 1978: Snapping. New York, Lippincott, 254 pp.
Danchin, E. and Pocheville, A., 2014: Inheritance is where physiology meets evolution. Journal of Physiology 592, 2307-2317. This complex but very interesting review is one of two that I've seen -- the other being the review by Laland et al. (2011) -- that goes beyond earlier reviews such as those of Laland et al. (2010) and Richerson et al. (2010) in recognizing the importance of multi-timescale dynamical processes in biological evolution. It seems that such recognition is still a bit unusual, even today, thanks to a widespread assumption that timescale separation implies dynamical decoupling (see also Thierry 2005). In reality there is strong dynamical coupling, the authors show, involving an intricate interplay between different timescales. It's mediated in a rich variety of ways including not only niche construction and genome-culture co-evolution but also, at the physiological level, developmental plasticity along with the non-genomic heritability now called epigenetic heritability. One consequence is the creation of hitherto unrecognized sources of heritable variability, the crucial `raw material' that allows natural selection to function. The review feeds into a wider discussion now running in the evolutionary-biology community. A sense of recent issues, controversies and mindsets can be found in, for instance, the online discussion of a Nature Commentary by Laland, K. et al. (2014): Does evolutionary theory need a rethink? Nature 514, 161-164. (In the Commentary, for `gene' read `replicator' including regulatory DNA. See also the online comments on the `3rd alphabet of life', the glycome, which consists of `all carbohydrate structures that get added to proteins post translationally... orders of magnitude more complex than the proteome or genome... takes proteins and completely alters their behavior... or can fine tune their activity... a massive missing piece of the puzzle...')
Dawkins, R., 2009: The Greatest Show On Earth. London, Bantam Press, 470 pp. I am citing this book for two reasons. First, chapter 8 beautifully illustrates why self-assembling building blocks and emergent properties are such crucial ideas in biology, and why the `genetic blueprint' idea is so misleading. Second, however, as in Pinker (1997), it makes an unsupported assertion -- for instance in a long footnote to chapter 3 (p. 62) -- that natural selection takes place via selective pressures exerted solely at one level, that of the individual organism, and that to suppose otherwise is an outright `fallacy'. The argument is circular in that it relies on the oldest population-genetics models, which confine attention to individual organisms and to whole-population averages by prior assumption. To be sure, the flow of genomic information from parents to offspring is less clearcut at higher levels than at individual-organism level. It is more probabilistic and less deterministic. But many lines of evidence show that higher-level information flows can nevertheless be important, especially when the flows are increasingly channeled within group-level `survival vehicles' created, or rather reinforced, by language barriers (Pagel 2012), gradually accelerating the co-evolution of genome and culture in all its multi-timescale intricacy (e.g. Danchin and Pocheville 2014 and comments thereon). We are indeed talking about the greatest show on Earth. It is even greater, more complex, more wonderful, and indeed more dangerous, than Dawkins suggests.
Dickinson, P. et al. 2015: Carbon Disclosure Project Global Climate Change Report 2015. This report appears to signal a cultural sea-change as increasing numbers of corporate leaders recognize the magnitude of the climate problem and the implied business risks and opportunities. See also, for instance, the Carbon Tracker website.
Dokken, T.M., Nisancioglu, K. H., Li, C., Battisti, D.S., and Kissel, C., 2013: Dansgaard-Oeschger cycles: interactions between ocean and sea ice intrinsic to the Nordic seas. Paleoceanography, 28, 491-502. This is the first fluid-dynamically credible explanation of the extreme rapidity and large magnitude (see also Alley 2000) of the Dansgaard-Oeschger warming events. These events left clear imprints in ice-core and sedimentary records all over the Northern Hemisphere and were so sudden, and so large in magnitude, that a tipping-point mechanism must have been involved. The proposed explanation represents the only such mechanism suggested so far that could be fast enough.
Doolittle, W.F., 2013: Is junk DNA bunk? A critique of ENCODE. Proc. Nat. Acad. Sci., 110, 5294-5300. ENCODE is a large data-analytical project to look for signatures of biological functionality in genomic sequences. The word `functionality' well illustrates human language as a conceptual minefield. For instance the word is often, it seems, read to mean `known functionality having an adaptive advantage', excluding the many neutral variants, redundancies, and multiplicities revealed by studies such as those of Wagner (2014).
Dunbar, R.I.M., 2003: The social brain: mind, language, and society in evolutionary perspective. Annu. Rev. Anthropol. 32, 163-181. This review offers important insights into the selective pressures on our ancestors, drawing on the palaeoarchaeological and palaeoanthropological evidence. Figure 4 shows the growth of brain size over the past 3 million years, including its extraordinary acceleration in the past few hundred millennia.
Ehrenreich, B., 1997: Blood Rites: Origins and History of the Passions Of War. London, Virago and New York, Metropolitan Books, 292 pp. Barbara Ehrenreich's insightful and penetrating discussion contains much wisdom, it seems to me, not only about war but also about the nature of mythical deities and about human sacrifice, ecstatic suicide, and so on -- as in Stravinsky's Rite of Spring, and long pre-dating 9/11 and IS/Daish. (Talk about ignorance being expensive!)
Farmer, J.D. et al., 2019: Sensitive intervention points in the post-carbon transition. Science 364, 132-134. History shows, they point out, that there is hope of reaching sociological tipping points since, despite the continued political pressures to maintain fossil-fuel subsidies and kill renewables (e.g. Klein 2014), not only are countervailing political pressures now building up but, also, `renewable energy sources such as solar photovoltaics (PV) and wind have experienced rapid, persistent cost declines' whereas, despite `far greater investment and subsidies, fossil fuel costs have stayed within an order of magnitude for a century'.
Feynman, R.P., Leighton, R.B., and Sands, M., 1964: Lectures in Physics, chapter 19 of vol. II, Mainly Electromagnetism and Matter. Addison-Wesley.
Fowler, H. W., 1983: A Dictionary of Modern English Usage, 2nd edn., revised by Sir Ernest Gowers. Oxford, University Press, 725 pp.
Foukal, P., Fröhlich, C., Spruit, H., and Wigley, T.M.L., 2006: Variations in solar luminosity and their effect on the Earth's climate. Nature 443, 161-166, © Macmillan. An extremely clear review of some robust and penetrating insights into the relevant solar physics, based on a long pedigree of work going back to 1977. For a sample of the high sophistication that's been reached in constraining solar models, see also Rosenthal, C. S. et al., 1999: Convective contributions to the frequency of solar oscillations, Astronomy and Astrophysics 351, 689-700.
Gelbspan, R., 1997: The Heat is On: The High Stakes Battle over Earth's Threatened Climate. Addison-Wesley, 278 pp. See especially chapter 2.
Gilbert, C.D. and Li, W. 2013: Top-down influences on visual processing. Nature Reviews (Neuroscience) 14, 350-363. This review presents anatomical and neuronal evidence for the active, prior-probability-dependent nature of perceptual model-fitting, e.g. `Top-down influences are conveyed across... descending pathways covering the entire neocortex... The feedforward connections... ascending... For every feedforward connection, there is a reciprocal [descending] feedback connection that carries information about the behavioural context... Even when attending to the same location and receiving an identical stimulus, the tuning of neurons can change according to the perceptual task that is being performed...', etc.
Giusberti, L., Boscolo Galazzo, F., and Thomas, E., 2016: Variability in climate and productivity during the Paleocene-Eocene Thermal Maximum in the western Tethys (Forada section). Climate of the Past 12, 213-240. doi:10.5194/cp-12-213-2016. The early Eocene began around 56 million years ago with the so-called PETM, a huge global-warming episode with accompanying mass extinctions now under intensive study by geologists and paleoclimatologists. The PETM was probably caused by carbon-dioxide injections comparable in size to those from current fossil-fuel burning. The injections almost certainly came from massive volcanism and would have been reinforced, to an extent not yet well quantified, by methane release from submarine clathrates. The western Tethys Ocean was a deep-ocean site at the time and so provides biological and isotopic evidence both from surface and from deep-water organisms, such as foraminifera with their sub-millimetre-sized carbonate shells.
Gregory, R. L., 1970: The Intelligent Eye. London, Weidenfeld and Nicolson, 191 pp. This great classic is still well worth reading. It's replete with beautiful and telling illustrations of how vision works. Included is a rich collection of stereoscopic images viewable with red-green spectacles. The brain's unconscious internal models that mediate visual perception are called `object hypotheses', and the active nature of the processes whereby they're selected is clearly recognized, along with the role of prior probabilities. There's a thorough discussion of the standard visual illusions as well as such basics as the perceptual grouping studied in Gestalt psychology, whose significance for word-patterns I discussed in Part I of Lucidity and Science. In a section on language and language perception, Chomsky's `deep structure' is identified with the repertoire of unconscious internal models used in decoding sentences. The only points needing revision are speculations that the first fully-developed languages arose only in very recent millennia and that they depended on the invention of writing. That's now refuted by the evidence from Nicaraguan Sign Language (e.g. Kegl et al. 1999), showing that there are genetically-enabled automata for language and syntactic function.
Gray, John, 2018: Seven Types of Atheism. Allen Lane. Chapter 2 includes what seems to me a shrewd assessment of Ayn Rand, as well as the transhumanists with their singularity, echoing de Chardin's "Omega Point", the imagined culmination of all evolution in a single Supreme Being -- yet more versions of the Answer to Everything.
Hoffman, D.D., 1998: Visual Intelligence. Norton, 294 pp. Essentially an update on Gregory (1970), with many more illustrations and some powerful theoretical insights into the way visual perception works.
Hunt, M., 1993: The Story of Psychology. Doubleday, Anchor Books, 763 pp. The remarks on the Three Mile Island control panels are on p. 606.
IPCC 2013: Full Report of Working Group 1. Chapter 5 of the full report summarizes the evidence on past sea levels, including those in the penultimate interglacial, misnamed `LIG' (Last InterGlacial).
Jaynes, E. T., 2003: Probability Theory: The Logic of Science. edited by G. Larry Bretthorst. Cambridge, University Press, 727 pp. This great posthumous work blows away the conceptual confusion surrounding probability theory and statistical inference, with a clear focus on the foundations of the subject established by the theorems of Richard Threlkeld Cox. The theory goes back three centuries to James Bernoulli and Pierre-Simon de Laplace, and it underpins today's state of the art in model-fitting and data compression (MacKay 2003). Much of the book digs deep into the technical detail, but there are instructive journeys into history as well, especially in chapter 16. There were many acrimonious disputes. They were uncannily similar to the disputes over biological evolution. Again and again, especially around the middle of the twentieth century, unconscious assumptions impeded progress. They involved dichotomization and what Jaynes calls the mind-projection fallacy, conflating outside-world reality with our conscious and unconscious internal models thereof. There's more about this in my chapter 5 on music, mathematics, and the Platonic.
Kahneman, D., 2011: Thinking, Fast and Slow. London, Penguin, 499 pp. Together with the book by Ramachandran and Blakeslee (1998, q.v.), Kahneman's book provides deep insight into the nature of human perception and understanding and the brain's unconscious internal models that mediate them, especially through experimental demonstrations of how flexible -- how strongly context-dependent -- the prior probabilities can be, as exhibited for instance by the phenomena called `anchoring', `priming', and `framing'.
Kendon, E.J., Roberts, N.M., Fowler, H.J., Roberts, M.J., Chan, S.C., and Senior, C.A., 2014: Heavier summer downpours with climate change revealed by weather forecast resolution model. Nature Climate Change, doi:10.1038/nclimate2258 (advance online publication).
Kegl, J., Senghas, A., Coppola, M., 1999: Creation through contact: sign language emergence and sign language change in Nicaragua. In: Language Creation and Language Change: Creolization, Diachrony, and Development, 179-237, ed. Michel DeGraff. Cambridge, Massachusetts, MIT Press, 573 pp. Included are detailed studies of the children's sign-language constructions, used in describing videos they watched. Also, there are careful and extensive discussions of the controversies amongst linguists.
King, D., Schrag, D., Zhou, D., Qi, Y., Ghosh, A., and co-authors, 2015: Climate Change: A Risk Assessment. Cambridge Centre for Science and Policy, 154 pp. In case of accidents, I have mirrored a copy here under the appropriate Creative Commons licence. Included in this very careful and sober discussion of the risks confronting us is the possibility that methane clathrates, also called methane hydrates or `shale gas in ice', will be added to the fossil-fuel industry's extraction plans (§7, p. 42). The implied carbon-dioxide injection would be very far indeed above IPCC's highest emissions scenario. This would be the `Business As Usual II' scenario of Valero et al. 2011.
Klein, Naomi, 2014: This Changes Everything: Capitalism vs the Climate. Simon & Schuster, Allen Lane, Penguin. Chapter 4 describes the push to burn all fossil-fuel reserves, the old business plan of the fossil-fuel industry. Even though it's a plan for climate disaster and social disaster, the disinformation campaign supporting it remains powerful, at least in the English-speaking world, as seen in the US Congress and in recent UK Government policy reversals. For instance, only months before the Paris agreement the UK Government suddenly withdrew support for solar and onshore wind renewables -- sabotaging long-term investments and business models at a stroke, without warning, and destroying many thousands of renewables jobs while increasing fossil-fuel subsidies. Equally suddenly, it shut down support for the first full-scale UK effort in CCS, carbon capture and storage. However, it seems that the politicians responsible will nevertheless fail to stop the development of newer and smarter forms of CCS (Oxburgh 2016). Nor, it seems, will they stop the scaling-up of smart energy storage, smart grids, distributed energy generation, and the increasingly competitive -- and terrorist-resistant -- carbon-neutral renewable energy sources. All this is gaining momentum in the business world, and now in the business plans of some fossil-fuel companies, whose engineering knowhow will be needed as soon as CCS is taken seriously. A useful summary of the current upheavals in those business plans is the Special Report on Oil in the 26 November 2016 issue of the Economist, headlined `The Burning Question: Climate Change in the Trump Era'.
Lakoff, G., 2014: Don't Think of an Elephant: Know Your Values and Frame the Debate. Vermont, Chelsea Green Publishing, www.chelseagreen.com. Following the classic work of Kahneman and Tversky, Lakoff shows in detail how those who exploit free-market fundamentalism -- including its quasi-Christian version advocated by, for instance, the writer James Dobson -- combine their mastery of lucidity principles with the technique Lakoff calls `framing', in order to perpetuate the unconscious assumptions that underpin their political power. Kahneman (2011) provides a more general, in-depth discussion of framing, and of related concepts such as anchoring and priming. All these concepts are needed in order to understand many cognitive-perceptual phenomena.
Laland, K., Odling-Smee, J., and Myles, S., 2010: How culture shaped the human genome: bringing genetics and the human sciences together. Nature Reviews: Genetics 11, 137-148. This review notes the likely importance, in genome-culture co-evolution, of more than one timescale. It draws on several lines of evidence. The evidence includes data on genomic sequences, showing the range of gene variants (alleles) in different sub-populations. As the authors put it, in the standard mathematical-modelling terminology, `... cultural selection pressures may frequently arise and cease to exist faster than the time required for the fixation of the associated beneficial allele(s). In this case, culture may drive alleles only to intermediate frequency, generating an abundance of partial selective sweeps... adaptations over the past 70,000 years may be primarily the result of partial selective sweeps at many loci' -- that is, locations within the genome. `Partial selective sweeps' are patterns of genomic change responding to selective pressures yet retaining some genetic diversity, hence potential for future adaptability and versatility. The authors confine attention to very recent co-evolution, for which the direct lines of evidence are now strong in some cases -- leaving aside the earlier co-evolution of, for instance, proto-language. There, we can expect multi-timescale coupled dynamics over a far greater range of timescales, for which direct evidence is much harder to obtain, as discussed also in Richerson et al. (2010).
Laland, K., Sterelny, K., Odling-Smee, J., Hoppitt, W., and Uller, T., 2011: Cause and effect in biology revisited: is Mayr's proximate-ultimate dichotomy still useful? Science 334, 1512-1516. The dichotomy, between `proximate causation' around individual organisms and `ultimate causation' on evolutionary timescales, entails a belief that the fast and slow mechanisms are dynamically independent. This review argues that they are not, even though the dichotomy is still taken by many biologists to be unassailable. The review also emphasizes that the interactions between the fast and slow mechanisms are often two-way interactions, or feedbacks, labelling them as `reciprocal causation' and citing many lines of supporting evidence. This recognition of feedbacks is part of what's now called the `extended evolutionary synthesis'. See also my notes on Danchin and Pocheville (2014) and, for instance, Thierry (2005).
Lüthi, D., et al., 2008: High-resolution carbon dioxide concentration record 650,000-800,000 years before present. Nature 453, 379-382. Further detail on the deuterium isotope method is given in the supporting online material for a preceding paper on the temperature record.
Lynch, M., 2007: The frailty of adaptive hypotheses for the origins of organismal complexity. Proc. Nat. Acad. Sci., 104, 8597-8604. A lucid and penetrating overview of what was known in 2007 about non-human evolution mechanisms, as seen by experts in population genetics and in molecular and cell biology and bringing out the important role of neutral, as well as adaptive, genomic changes, now independently confirmed in Wagner (2014).
MacKay, D.J.C., 2003: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 628 pp. On-screen viewing permitted at http://www.inference.phy.cam.ac.uk/mackay/itila/. This book by the late David MacKay is a brilliant, lucid and authoritative analysis of the topics with which it deals, at the most fundamental level. It builds on the foundation provided by Cox's theorems (Jaynes 2003) to clarify (a) the implications for optimizing model-fitting to noisy data, usually discussed under the heading `Bayesian inference', and (b) the implications for optimal data compression. And from the resulting advances and clarifications we can now say that `data compression and data modelling are one and the same' (p. 31).
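MacKay's slogan on p. 31 can be seen in miniature with a sketch of my own, not taken from the book: under a probability model, the ideal code length for a symbol of probability p is -log2 p bits, so the better the model fits the data, the shorter the compressed message.

```python
from collections import Counter
from math import log2

def avg_code_length(text, probs):
    """Average ideal code length in bits per symbol: -log2 p(symbol),
    the length an ideal arithmetic coder would approach."""
    return sum(-log2(probs[ch]) for ch in text) / len(text)

text = "abracadabra"
symbols = sorted(set(text))

# Model 1: uniform over the alphabet -- knows nothing about the source.
uniform = {ch: 1.0 / len(symbols) for ch in symbols}

# Model 2: empirical symbol frequencies -- a better model of the source.
empirical = {ch: n / len(text) for ch, n in Counter(text).items()}

print(avg_code_length(text, uniform))    # log2(5), about 2.32 bits/symbol
print(avg_code_length(text, empirical))  # about 2.04 bits/symbol
```

The better model yields the shorter code; improving the compressor and improving the probability model are one and the same activity.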
Marchitto, T.M., Lynch-Stieglitz, J., and Hemming, S.R., 2006: Deep Pacific CaCO3 compensation and glacial-interglacial atmospheric CO2. Earth and Planetary Science Letters 231, 317-336. This technical paper contains an unusually clear explanation of the probable role of limestone sludge (CaCO3) and seawater chemistry in the way carbon dioxide (CO2) was stored in the oceans during the recent ice ages. The paper gives a useful impression of our current understanding, and of the observational evidence that supports it. The evidence comes from meticulous and laborious measurements of tiny variations in trace chemicals that are important in the oceans' food chains, and in isotope ratios of various elements including oxygen and carbon, laid down in layer after layer of ocean sediments over very many tens of millennia. Another reason for citing the paper, which requires the reader to have some specialist knowledge, is to highlight just how formidable are the obstacles to building accurate models of the carbon sub-system, including the sinking phytoplankton. Such models try to represent oceanic carbon-dioxide storage along with observable carbon isotope ratios, which are affected by the way in which carbon isotopes are taken up by living organisms via processes of great complexity and variability. Not only are we far from modelling oceanic fluid-dynamical transport processes with sufficient accuracy, including turbulent eddies over a vast range of spatial scales, but we are even further from accurately modelling the vast array of biogeochemical processes involved throughout the oceanic and terrestrial biosphere -- including for instance the biological adaptation and evolution of entire ecosystems and the rates at which the oceans receive trace chemicals from rivers and airborne dust. 
The oceanic upper layers where plankton live have yet to be modelled in fine enough detail to represent the recycling of nutrient chemicals simultaneously with the gas exchange rates governing leakage. It's fortunate indeed that we have the hard evidence, from ice cores, for the atmospheric carbon dioxide concentrations that actually resulted from all this complexity.
Mazzucato, M., 2018: The Value of Everything: Making and Taking in the Global Economy. London, Penguin. This well-known and charismatic economist makes the case for what I call a `symbiosis' between regulation and market forces -- much closer to Adam Smith's ideas than popular mythology allows -- and goes into detail about how such a symbiosis might be better promoted.
McGilchrist, I., 2009: The Master and his Emissary: the Divided Brain and the Making of the Western World. Yale University Press, 597 pp. In this wide-ranging and densely argued book, `world' often means the perceived world consisting of the brain's unconscious internal models. The Master is the right hemisphere with its holistic `world', while the Emissary is the left hemisphere with its analysed, dissected and fragmented `world' and its ambassadorial communication skills.
McIntyre, M. E., and Woodhouse, J., 1978: The acoustics of stringed musical instruments. Interdisc. Sci. Rev., 3, 157-173. We carried out our own psychophysical experiments to check the point made in connection with Figure 3, about the intensities of different harmonics fluctuating out of step with each other during vibrato.
McNamee, R., 2019: Zucked: Waking Up to the Facebook Catastrophe. HarperCollins, 288 pp. As a venture capitalist who helped to launch Facebook, who knows how it works, and who still supports it financially, Roger McNamee shows how Facebook has used the power of today's new statistical toolkit (Pearl and Mackenzie 2018) for vast commercial gain, in the process becoming an example of what I call `artificial intelligences that are not yet very intelligent'. As currently set up, these artificial intelligences -- starting innocently with like versus dislike -- are inadvertently fuelling binary referendums and polarizations that threaten unstable mob rule, and thereby threaten to replace democracy and its division of powers by autocracy, just as in the 1930s and on other past occasions. Facebook is thereby risking destruction of the business environment that is the very source of its vast wealth.
Monod, J., 1970: Chance and Necessity. Glasgow, William Collins, 187 pp., beautifully translated from the French by Austryn Wainhouse. This classic by the great molecular biologist Jacques Monod -- one of the sharpest and clearest thinkers that science has ever seen -- highlights the key roles of genome-culture co-evolution and multi-level selection in the genesis of the human species. See the last chapter, chapter 9, The Kingdom and the Darkness. Monod's kingdom is the `transcendent kingdom of ideas, of knowledge, and of creation'. He ends with a challenge to mankind. `The kingdom above or the darkness below: it is for him to choose.'
NAS-RS, 2014 (US National Academy of Sciences and UK Royal Society): Climate Change: Evidence & Causes. A brief, readable, and very careful summary from a high-powered team of climate scientists, supplementing the vast IPCC reports and emphasizing the many cross-checks that have been done.
Noble, D., 2006: The Music of Life: Biology Beyond Genes. Oxford University Press. This short and lucid book by a respected biologist clearly brings out the complexity, versatility, and multi-level aspects of biological systems, and the need to avoid extreme reductionism and single-viewpoint thinking, such as saying that the genome `causes' everything. A helpful first metaphor for the genome, it's argued, is a digital music recording. Yes, reading the digital data in one sense `causes' a musical and possibly emotional experience but, if that's all you say, you miss the countless other things on which the experience depends, not least the brain's unconscious model-fitting processes and its web of associations, all strongly influenced by past experience and present circumstance as well as by the digital data. Reading the data into a playback device mediates or enables the listener's experience, rather than solely causing it. Other metaphors loosen our thinking still further, penetrating across the different levels. A wonderful example is the metaphor of the Chinese (kangxi or kanji) characters of which so many thousands are used in the written languages of east Asia, and whose complexities are so daunting to Western eyes -- rather like the complexities of the genome. However, their modular structure uses only a few hundred sub-characters, many of them over and over again in different permutations for different purposes -- just as in the genome, in genomic exons, and in other components of biological systems.
Oreskes, N. and Conway, E.M., 2010: Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury, 2010.
Oxburgh, R., 2016: Lowest Cost Decarbonisation for the UK: The Critical Role of CCS. Report to the Secretary of State for Business, Energy and Industrial Strategy from the Parliamentary Advisory Group on Carbon Capture and Storage, September 2016. Available from http://www.ccsassociation.org/news-and-events/reports-and-publications/parliamentary-advisory-group-on-ccs-report/
Pagel, M., 2012: Wired for Culture. London and New York, Allen Lane, Penguin, Norton, 416 pp. The author describes an impressive variety of observations on human culture and human behaviour, emphasizing the important role of language barriers as the outer skins or containers of the 'survival vehicles' whereby our ancestors were bound into strongly segregated, inter-competing groups of cooperating individuals. `Vehicle' has its usual meaning in evolutionary theory as a carrier of replicators into future generations. In the book, the replicators carried by our ancestors' survival vehicles are taken to be cultural only, including particular languages and customs. Cultural evolution, with its Lamarckian aspect and timescales far shorter than genomic timescales, gave our ancestors a prodigious versatility and adaptability that continues today. However, it seems obvious that the same survival vehicles must have carried segregated genomic information as well. Such segregation, or channelling, would have intensified the multi-timescale co-evolution of genomes and cultures. The tightening of vehicle containment by language barriers is an efficient way of strengthening population heterogeneity, hence group-level selective pressures, on genomes as well as on cultures. In this respect there's a peculiar inconsistency in the book, namely that the discussion is confined within the framework of selfish-gene theory and assumes that language and language barriers were very late developments, starting with a single `human mother tongue' (p. 299) that arose suddenly and then evolved purely culturally. While recognizing that groups of our ancestors must have competed with one another the author repudiates group-level genomic selection, saying that it is `weak' and by implication unimportant (p. 198). This view comes from the oldest population-genetics models. 
Those were the mathematical models that led to selfish-gene theory in the 1960s and 1970s along with its game-theoretic spinoffs, such as reciprocal-altruism theory. The crucial effects of language barriers, population heterogeneity, multi-level selection, and multi-timescale processes are all excluded from those models by prior assumption. Another weakness, in an otherwise impressive body of arguments, is too ready an acceptance of the `archaeological fallacy' that symbolic representation came into being only recently, at the start of the Upper Palaeolithic with all its archaeological durables including cave paintings. The fallacy seems to stem from ignoring the unconscious symbolic representations, the brain's internal models, that mediate perception -- as well as ignoring the more conscious cultural modalities that depend solely on sound waves and light waves, modalities that leave no archaeological trace. Likely examples would include gestures, vocalizations, and dance routines that deliberately mimic different animals. Already, that's not just symbolic play but conscious symbolic play. Cave paintings aren't needed!
Pearl, J. and Mackenzie, D., 2018: The Book of Why: The New Science of Cause and Effect. London, Penguin. This lucid, powerful, and very readable book goes further than Jaynes (2003) and MacKay (2003) in describing recently-developed forms of probability theory that make explicit the arrows of causality, clearly distinguishing correlation from causation. This is a crucial part of model-building and model-fitting, and is applicable to big data and arbitrarily complex systems. Its power lies not only behind today's cutting-edge open science but also, as further discussed in McNamee (2019), behind today's commercial giants and the weaponization of the social media.
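A toy simulation (my own illustration, not taken from the book) shows the core distinction: a hidden common cause Z can make X and Y strongly correlated even though neither causes the other, while an intervention do(X), which cuts the arrow from Z to X, makes the correlation vanish.

```python
import random

random.seed(42)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

n = 10_000
# Confounder Z drives both X and Y; there is no arrow between X and Y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]
print(correlation(x, y))     # strong correlation, roughly 0.8

# Intervention do(X): X is set by external randomization, severing
# the arrow Z -> X. Y's mechanism is untouched; the correlation vanishes.
x_do = [random.gauss(0, 1) for _ in range(n)]
print(correlation(x_do, y))  # roughly 0
```

Observational data alone cannot distinguish confounding from causation; the arrows of the causal diagram, together with the notion of intervention, can.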
Pierrehumbert, R.T., 2010: Principles of Planetary Climate. Cambridge University Press, 652 pp.
Pinker, S., 1994: The Language Instinct. London, Allen Lane, 494 pp. The Nicaraguan case is briefly described in chapter 2, as far as it had progressed by the early 1990s.
Pinker, S., 1997: How the Mind Works. London, Allen Lane, 660 pp. Regarding mathematical models of natural selection, notice the telltale phrase `a strategy that works on average' (my italics) near the end of the section `I and Thou' in chapter 6, page 398 in my copy. The phrase `on average' seems to be thrown in almost as an afterthought. To restrict attention to what works `on average' is to restrict attention to the oldest mathematical models of population genetics in which all environmental and population heterogeneities, hence all higher-level selection mechanisms, have been obliterated by averaging over an entire population. Such models are also mentioned about nine pages into the section `Life's Designer' in chapter 3 -- page 163 in my copy -- in the phrase `mathematical proofs from population genetics' (my italics). Not even the Price equation, perhaps the first attempt to allow for heterogeneity, in the mid-1970s, is mentioned. A recent debate on these issues is available online here. In that debate it's noticeable how the dichotomization instinct kicks in again and again -- the unconscious assumption that one viewpoint excludes another -- despite sterling efforts to counter it in, for instance, a thoughtful contribution from David C. Queller. Earlier such debates, and disputes, over several decades, are thoroughly documented in the book by Segerstråle (2000) and further discussed in Wills (1994) and in Rose and Rose (2000). Dichotomization is conspicuous throughout.
Pinker, S., 2018: Enlightenment Now: The Case for Science, Reason, Humanism and Progress. Allen Lane.
Platt, P., 1995: Debussy and the harmonic series. In: Essays in honour of David Evatt Tunley, ed. Frank Callaway, pp. 35-59. Perth, Callaway International Resource Centre for Music Education, School of Music, University of Western Australia. ISBN 086422409 5.
Pomerantsev, P., 2015: Nothing is True And Everything is Possible -- Adventures in Modern Russia. Faber & Faber. Peter Pomerantsev is a television producer who worked for nearly a decade in Moscow with Russian programme-makers. He discusses the remarkable cleverness of the programme-makers and their government supervisors, in exploiting postmodernist thinking and other cultural undercurrents to create an impression of democratic pluralism in Russia today -- part of the confused `virtual reality' described also by Arkady Ostrovsky in his 2015 book The Invention of Russia and created using techniques that include `weaponized relativism'.
Le Quéré, C., et al., 2007: Saturation of the Southern Ocean CO2 sink due to recent climate change. Science 316, 1735-1738. This work, based on careful observation, reveals yet another positive feedback that's increasing climate sensitivity to carbon dioxide emissions.
Ramachandran, V.S. and Blakeslee, S., 1998: Phantoms in the Brain. London, Fourth Estate. The phantoms are the brain's unconscious internal models that mediate perception and understanding. This book and Kahneman's are the most detailed and penetrating discussions I've seen of the nature and workings of those models. Many astonishing experiments are described, showing how flexible -- how strongly context-dependent -- the prior probabilities can be. The two books powerfully complement each other as well as complementing those of Gregory, Hoffman, McGilchrist, and Sacks. The experiments include some from neurological research and clinical neurology, and some that can be repeated by anyone, with no special equipment. There are many examples of multi-modal perception including Ramachandran's famous phantom limb experiments, and the Ramachandran-Hirstein `phantom nose illusion' described on page 59. Chapter 7 on anosognosia includes the brain-scan experiments of Ray Dolan and Chris Frith, revealing the location of the right hemisphere's discrepancy detector.
Rees, M., 2014: Can we prevent the end of the world? This seven-minute TED talk, by Astronomer Royal Martin Rees, makes the key points very succinctly. The talk is available here, along with a transcript. Two recently-established focal points for exploring future risk are the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute.
Richerson, P.J., Boyd, R., and Henrich, J., 2010: Gene-culture coevolution in the age of genomics. Proc. Nat. Acad. Sci. 107, 8985-8992. This review takes up the scientific story as it has developed after Wills (1994), usefully complementing the review by Laland et al. (2010). The discussion comes close to recognizing two-way, multi-timescale dynamical coupling but doesn't quite break free of asking whether culture is `the leading rather than the lagging variable' in the co-evolutionary system (my italics, to emphasize the false dichotomy).
Rose, H. and Rose, S. (eds), 2000: Alas, Poor Darwin: Arguments against Evolutionary Psychology. London, Jonathan Cape, 292 pp. This compendium offers a variety of perspectives on the oversimplified genetic determinism or `Darwinian fundamentalism' of recent decades, as distinct from Charles Darwin's own more pluralistic view recognizing that natural selection -- centrally important though it is to biological evolution, among other mechanisms -- cannot be the Answer to Everything in a scientific problem of such massive complexity, let alone in human and social problems. See especially chapters 4-6 and 9-12 (and more recently Danchin and Pocheville 2014) for examples of developmental plasticity, or epigenetic flexibility. Chapters 10 and 11 give instructive examples of observed animal behaviour. One is the promiscuous bisexuality of female bonobo chimpanzees and its role in their naturally-occurring societies, demolishing the fundamentalist tenet that `natural' sex is for procreation only. Chapter 12 addresses some of the human social problems compounded by the shifting conceptual minefield we call human language. A deeply thoughtful commentary, chapter 12 touches on many salient issues including the postmodernist backlash against scientific fundamentalism.
Rossano, M.J., 2009: The African Interregnum: the "where," "when," and "why" of the evolution of religion. In: Voland, E., Schiefenhövel, W. (eds), The Biological Evolution of Religious Mind and Behaviour, pp. 127-141. Heidelberg, Springer-Verlag, The Frontiers Collection, doi:10.1007/978-3-642-00128-4_9, ISBN 978-3-642-00127-7. The `African Interregnum' refers to the time between the failure of our ancestors' first migration out of Africa, something like 80-90 millennia ago, and the second such migration around 60 millennia ago. Rossano's brief but penetrating survey argues that the emergence of belief systems having a `supernatural layer' boosted the size, sophistication, adaptability, and hence competitiveness of human groups. As regards the Toba eruption around 70 millennia ago, the extent to which it caused a human genetic bottleneck is controversial but not the severity of the disturbance to the climate system, like a multi-year nuclear winter. The resulting resource depletion must have severely stress-tested our ancestors' adaptability -- giving large, tightly-knit and socially sophisticated groups an important advantage. In Rossano's words, they were `collectively more fit and this made all the difference.'
Sacks, O., 1995: An Anthropologist on Mars. New York, Alfred Knopf, 340 pp., Chapter 4, To See and Not See. The two most thoroughly studied subjects -- `Virgil', studied by Sacks, and `S.B.', studied by Richard Gregory and Jean Wallace -- were both 50 years old when the opaque elements were surgically removed to allow light into their eyes. The vision they achieved was very far from normal. An important update is in the 2016 book In the Bonesetter's Waiting Room by Aarathi Prasad, in which chapter 7 mentions recent evidence from Project Prakash, led by Pawan Sinha, providing case studies of much younger individuals blind from birth. There is much variation from individual to individual but it seems that teenagers, for instance, can often learn to see better after surgery, or adjust better to whatever visual functionality they achieve, than did the two 50-year-olds.
Schonmann, R. H., Vicente, R., and Caticha, N., 2013: Altruism can proliferate through population viscosity despite high random gene flow. Public Library of Science, PLoS One, 8, e72043, doi:10.1371/journal.pone.0072043. Improvements in model sophistication, together with a willingness to view a problem from more than one angle, show that group-selective pressures can be far more effective than the older population-genetics models suggest.
Segerstråle, U., 2000: Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond. Oxford University Press, 493 pp. This important book gives insight into the disputes about natural selection over past decades. It's striking how dichotomization, and the answer-to-everything mindset, kept muddying those disputes even amongst serious and respected scientists -- often under misplaced pressure for `parsimony of explanation', forgetting Einstein's famous warning not to push Occam's Razor too far. Again and again the disputants seemed to be saying that `we are right and they are wrong' and that there is one and only one `truth', to be viewed in one and only one way. Again and again, understanding was impeded by a failure to recognize complexity, multidirectional causality, different levels of description, and multi-timescale dynamics. The confusion was sometimes worsened by failures to disentangle science from politics.
Senghas, A., 2010: The Emergence of Two Functions for Spatial Devices in Nicaraguan Sign Language. Human Development (Karger) 53, 287-302. This later study uses video techniques as in Kegl et al (1999) to trace the development, by successive generations of young children, of syntactic devices in signing space.
Shakhova, N., Semiletov, I., Leifer, I., Sergienko, V., Salyuk, A., Kosmach, D., Chernykh, D., Stubbs, C., Nicolsky, D., Tumskoy, V., and Gustafsson, O., 2014: Ebullition and storm-induced methane release from the East Siberian Arctic Shelf. Nature Geosci., 7, 64-70, doi:10.1038/ngeo2007. This is hard observational evidence.
Skinner, L.C., Waelbroeck, C., Scrivner, A.C., and Fallon, S.J., 2014: Radiocarbon evidence for alternating northern and southern sources of ventilation of the deep Atlantic carbon pool during the last deglaciation. Proc. Nat. Acad. Sci. Early Edition (online), www.pnas.org/cgi/doi/10.1073/pnas.1400668111
Shackleton, N.J., 2000: The 100,000-year ice-age cycle identified and found to lag temperature, carbon dioxide, and orbital eccentricity. Science 289, 1897-1902.
Shakun, J.D., Clark, P.U., He, F., Marcott, S.A., Mix, A.C., Liu, Z., Otto-Bliesner, B., Schmittner, A., and Bard, E, 2012: Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484, 49-55.
Skippington, E., and Ragan, M.A., 2011: Lateral genetic transfer and the construction of genetic exchange communities. FEMS Microbiol Rev. 35, 707-735. This review article shows among other things how `antibiotic resistance and other adaptive traits can spread rapidly, particularly by conjugative plasmids'. `Conjugative' means that a plasmid is passed directly from one bacterium to another via a tiny tube called a pilus. The two bacteria can belong to different species. The introduction opens with the sentence `It has long been known that phenotypic features can be transmitted between unrelated strains of bacteria.'
Smythies, J. 2009: Philosophy, perception, and neuroscience. Perception 38, 638-651. On neuronal detail this discussion should be compared with that in Gilbert and Li (2013). For present purposes the discussion is of interest in two respects, the first being that it documents parts of what I called the `quagmire of philosophical confusion', about the way perception works and about conflating different levels of description. The discussion begins by noting, among other things, the persistence of the fallacy that perception is what it seems to be subjectively, namely veridical in the sense of being `direct', i.e., independent of any model-fitting process, a simple mapping between appearance and reality. This is still taken as self-evident, it seems, by some professional philosophers despite the evidence from experimental psychology, as summarized for instance in Gregory (1970), in Hoffman (1998), and in Ramachandran and Blakeslee (1998). Then a peculiar compromise is advocated, in which perception is partly direct, and partly works by model-fitting, so that `what we actually see is always a mixture of reality and virtual reality' [sic; p. 641]. (Such a mixture is claimed also to characterize some of the early video-compression technologies used in television engineering -- as distinct from the most advanced such technologies, which work entirely by model-fitting, e.g. MacKay 2003.) The second respect, perhaps of greater interest here, lies in a summary of some old clinical evidence, from the 1930s, that gave early insights into the brain's different model components. Patients described their experiences of vision returning after brain injury, implying that different model components recovered at different rates and were detached from one another at first. On pp. 641-642 we read about recovery from a particular injury to the occipital lobe: `The first thing to return is the perception of movement. 
On looking at a scene the patient sees no objects, but only pure movement... Then luminance is experienced but... formless... a uniform white... Later... colors appear that float about unattached to objects (which are not yet visible as such). Then parts of objects appear -- such as the handle of a teacup -- that gradually coalesce to form fully constituted... objects, into which the... colors then enter.'
Solanki, S.K., Krivova, N.A., and Haigh, J.D., 2013: Solar Irradiance Variability and Climate. Annual Review of Astronomy and Astrophysics 51, 311-351. This review summarizes and clearly explains the recent major advances in our understanding of radiation from the Sun's surface, showing in particular that its magnetically-induced variation cannot compete with the carbon-dioxide injections I'm talking about. To be sure, that conclusion depends on the long-term persistence of the Sun's magnetic activity cycle, whose detailed dynamics is not well understood. (A complete shutdown of the magnetic activity would make the Sun significantly dimmer during the shutdown, out to times of the order of a hundred millennia.) However, the evidence for persistence of the magnetic activity cycle is now extremely strong (see the review's Figure 9). It comes from a long line of research on cosmogenic isotope deposits showing a clear footprint of persistent solar magnetic activity throughout the past 10 millennia or so, waxing and waning over a range of timescales out to millennial timescales. The timing of these changes, coming from the Sun's internal dynamics, can have no connection with the timing of the Earth's orbital changes that trigger terrestrial deglaciations.
Stern, N., 2009: A Blueprint for a Safer Planet: How to Manage Climate Change and Create a New Era of Progress and Prosperity, London, Bodley Head, 246 pp. See also Oxburgh (2016).
Strunk, W., and White, E.B., 1979: The Elements of Style, 3rd edn. New York, Macmillan, 92 pp.
Sulston, J.E., and Ferry, G., 2003: The Common Thread: Science, Politics, Ethics and the Human Genome, Corgi edn. London, Random House (Transworld Publishers), 348 pp.; also Washington, DC, Joseph Henry Press. See also People patenting. This important book records how the scientific ideal and ethic prevailed against corporate might -- powerful business interests aiming to use the genomic data for short-term profit.
Tobias, P.V., 1971: The Brain in Hominid Evolution. New York, Columbia University Press, 170 pp. See also Monod (1970), chapter 9.
Thierry, B., 2005: Integrating proximate and ultimate causation: just one more go! Current Science 89, 1180-1183. A thoughtful commentary on the history of biological thinking, in particular tracing the tendency to neglect multi-timescale processes, with fast and slow mechanisms referred to as `proximate causes' and `ultimate causes', assumed independent solely because `they belong to different time scales' (p. 1182a) -- respectively individual-organism and genomic timescales. See also Laland et al. (2011) and Danchin and Pocheville (2014).
Trask, L., Tobias, P.V., Wynn, T., Davidson, I., Noble, W., and Mellars, P., 1998: The origins of speech. Cambridge Archaeological J., 8, 69-94. A short compendium of discussions by linguists, palaeoanthropologists, archaeologists and others interested. It usefully exposes the levels of argument within controversies over the origins of language. See also Dunbar (2003).
Tribe, K., 2008: `Das Adam Smith Problem' and the origins of modern Smith scholarship. History of European Ideas 34, 514-525, doi:10.1016/j.histeuroideas.2008.02.001. This paper provides a forensic overview of Adam Smith's writings and of the many subsequent misunderstandings of them that accumulated in the German, French, and English academic literature of the following centuries -- albeit clarified as improved editions, translations, and commentaries became available. Smith dared to view the problems of ethics, economics, politics and human nature from more than one angle, and saw his two great works The Theory of Moral Sentiments (1759) and An Inquiry into the Nature and Causes of the Wealth of Nations (1776) as complementing each other. Yes, market forces are useful, but only in symbiosis with written and unwritten regulation.
Unger, R. M., and Smolin, L., 2015: The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. Cambridge, Cambridge University Press, 543 pp. A profound and wide-ranging discussion of how progress might be made in fundamental physics and cosmology. The authors -- two highly respected thinkers in their fields, philosophy and physics -- make a strong case that the current logjam has to do with our tendency to conflate the outside world with our mathematical models thereof, what Jaynes (2003) calls the `mind-projection fallacy'. Unger and Smolin point out that `part of the task is to distinguish what science has actually found out about the world from the metaphysical commitments for which the findings of science are often mistaken.'
Valero, A., Agudelo, A., and Valero, A., 2011: The crepuscular planet. A model for the exhausted atmosphere and hydrosphere. Energy, 36, 3745-3753. A careful discussion including up-to-date estimates of proven and estimated fossil-fuel reserves.
Valloppillil, V., and co-authors, 1998: The Halloween Documents: Halloween I, with commentary by Eric S. Raymond. On the Internet and mirrored here. This leaked document from the Microsoft Corporation recorded Microsoft's secret recognition that software far more reliable than its own was being produced by the open-source community, a major example being Linux. Halloween I states, for instance, that the open-source community's ability `to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing.' Linux, it goes on to say, is an operating system in which `robustness is present at every level' making it `great, long term, for overall stability'. I well remember the non-robustness and instability, and user-unfriendliness, of Microsoft's own secret-source software during its near-monopoly in the 1990s. Recent improvements may well owe something to the competition from the open-source community.
van der Post, L., 1972: A Story Like the Wind. London, Penguin. Laurens van der Post celebrates the influence he felt from his childhood contact with some of Africa's `immense wealth of unwritten literature', including the magical stories of the San or Kalahari-Desert Bushmen, stories that come `like the wind... from a far-off place.' See also 1961, The Heart of the Hunter (Penguin), page 28, on how a Bushman told what had happened to his small group: "They came from a plain... as they put it in their tongue, `far, far, far away'... It was lovely how the `far' came out of their mouths. At each `far' a musician's instinct made the voices themselves more elongated with distance, the pitch higher with remoteness, until the last `far' of the series vanished on a needle-point of sound into the silence beyond the reach of the human scale. They left... because the rains just would not come..."
Vaughan, Mark (ed.), 2006: Summerhill and A. S. Neill, with contributions by Mark Vaughan, Tim Brighouse, A. S. Neill, Zoë Neill Readhead and Ian Stronach. Maidenhead, New York, Open University Press/McGraw-Hill, 166 pp.
Wagner, A., 2014: Arrival of the Fittest: Solving Evolution's Greatest Puzzle. London, Oneworld. There is a combinatorially large number of viable metabolisms, that is, possible sets of enzymes, and hence of chemical reactions, that can perform some biological function such as manufacturing cellular building blocks from a fuel like sunlight, or glucose, or hydrogen sulphide -- or, by a supreme irony, even from the antibiotics now serving as fuel for some bacteria. Andreas Wagner and co-workers have shown in recent years that within the unimaginably vast space of possible metabolisms, which has around 5000 dimensions, the viable metabolisms, astonishingly, form a joined-up `genotype network' of closely adjacent metabolisms. This adjacency means that single-gene, hence single-enzyme, additions or deletions can produce combinatorially large sets of new viable metabolisms, including metabolisms that are adaptively neutral or spandrel-like but advantageous in new environments, as seen in the classic experiments of C. H. Waddington on environmentally-stressed fruit flies (e.g. Wills 1994, p. 241). Such neutral changes can survive and spread within a population because, being harmless, they are not deleted by natural selection. Moreover, they promote massive functional duplication or redundancy within metabolisms, creating a tendency toward robustness, and graceful degradation, of functionality. And the same properties of adjacency, robustness, evolvability and adaptability are found within the similarly vast spaces of, for instance, possible protein molecules and possible DNA-RNA-protein circuits and other molecular-biological circuits. Such discoveries may help to resolve controversies about functionality within so-called junk DNA (e.g. Doolittle 2013).
These breakthroughs, in what is now called `systems biology', add to insights like those reviewed in Lynch (2007) and may also lead to new ways of designing, or rather discovering, robust electronic circuits and computer codes. Further such insights come from recent studies of artificial self-assembling structures in, for instance, crowds of `swarm-bots'. For a general advocacy of systems-biological thinking as an antidote to extreme reductionism, see Noble (2006).
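For readers who like to experiment, the idea of a joined-up genotype network can be illustrated with a toy computation. This is emphatically not Wagner's actual model: the bit-string representation, the 10-dimensional space, and the made-up `viability' rule are all my own illustrative assumptions, chosen only to show what it means for the viable genotypes to form a single connected network under single-gene changes.

```python
# Toy illustration of a `genotype network' (hypothetical, not Wagner's model).
# A metabolism is idealized as a bit string: bit i records whether enzyme i
# is present. The viability rule below is invented purely for illustration.
from itertools import product

N_ENZYMES = 10  # a real metabolic space has thousands of dimensions

def viable(genotype):
    # Hypothetical rule: viable iff at least 3 of the 5 `core' enzymes are present.
    return sum(genotype[:5]) >= 3

# Enumerate the whole (tiny) genotype space and collect the viable genotypes.
viable_set = {g for g in product((0, 1), repeat=N_ENZYMES) if viable(g)}

def neighbours(g):
    # All genotypes one enzyme addition or deletion away (Hamming distance 1).
    for i in range(len(g)):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

# Graph search: can every viable genotype be reached from any other by
# single-enzyme steps that never leave the viable set?
start = next(iter(viable_set))
seen, frontier = {start}, [start]
while frontier:
    g = frontier.pop()
    for h in neighbours(g):
        if h in viable_set and h not in seen:
            seen.add(h)
            frontier.append(h)

# The viable set is one connected network iff these two numbers are equal.
print(len(viable_set), len(seen))  # prints: 512 512
```

Here the viable genotypes do form a single connected network, so evolution could in principle wander anywhere within it by neutral single-gene steps, without ever passing through a non-viable intermediate. Wagner's surprise is that something like this connectedness holds in the real, vastly higher-dimensional metabolic space.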
Watson, A.J., Vallis, G.K., and Nikurashin, M., 2015: Southern Ocean buoyancy forcing of ocean ventilation and glacial atmospheric CO2. Nature Geosci, 8, 861-864.
Werfel, J., Ingber, D. E., and Bar-Yam, Y., 2015: Programed death is favored by natural selection in spatial systems. Phys. Rev. Lett. 114, 238103. This detailed modelling study illustrates yet again how various `altruistic' traits are often selected for, in models that include population heterogeneity and group-level selection. The paper focuses on the ultimate unconscious altruism, mortality -- the finite lifespans of most organisms. Finite lifespan is robustly selected for, across a wide range of model assumptions, simply because excessive lifespan is a form of selfishness leading to local resource depletion. The tragedy of the commons, in other words, is as ancient as life itself. The authors leave unsaid the implications for our own species.
Wills, C., 1994: The Runaway Brain: The Evolution of Human Uniqueness. London, HarperCollins, 358 pp. This powerful synthesis builds on an intimate working knowledge of palaeoanthropology and population genetics. It offers many far-reaching insights, not only into the science itself but also into its history. The introduction and chapter 8 give interesting examples of how progress was blocked, or impeded, from time to time, by researchers becoming `prisoners of their mathematical models'. Chapter 8 concerns a classic dispute in which such mathematical imprisonment caught the protagonists in a false dichotomy, about adaptively neutral versus adaptively advantageous genomic changes, seen as mutually exclusive. The role of each kind of change -- both in fact being important for evolution -- has been illuminated by the breakthrough at molecular level described in Wagner (2014). See also the recent review by Lynch (2007). In my notes on Pinker (1997), the allusion to mathematical `proof' from population genetics also points to a case of mathematical imprisonment. The word `proof' can be a danger signal especially when used by non-mathematicians. And another case, from Wills' introduction (pp. 10-12), concerns a famous but grossly oversimplified view of genome-culture co-evolution, published in 1981, which for one thing failed to recognize that the co-evolution could be a multi-timescale process.
Wilson, D. S., 2015: Does Altruism Exist?: Culture, Genes, and the Welfare of Others. Yale University Press. See also Wilson's recent short article on multi-level selection and the scientific history of the idea. Of course the word `altruism' is another dangerously ambiguous word and source of confusion -- as Wilson points out -- deflecting attention from what matters most, which is actual behaviour. The explanatory power of models allowing multi-level selection in heterogeneous populations is further illustrated in, for instance, the recent work of Werfel et al. 2015.
Yunus, M., 1998: Banker to the Poor. London, Aurum Press, 313 pp. This is the story of the founding of the Grameen Bank of Bangladesh, which pioneered microlending and the emancipation of women against all expectation.
Copyright © Michael Edgeworth McIntyre 2013.
Last updated 20 July 2019, and (from 23 June 2014 onward) incorporating a sharper understanding of the last deglaciation and of the abrupt `Dansgaard-Oeschger warmings', thanks to generous advice from several colleagues including Dr Luke Skinner.