
Lucidity and Science: The Deepest Connections

(e-book under construction)


The following is a NEW, completely rewritten draft prelude or extended preface plus four draft chapters, a Postlude on climate, and other bits and pieces yet to be finished. (Anyone interested should keep an eye on the `last updated' line near the end.) With encouragement from a publisher I'm reviving this project and keeping the current draft in the public domain, as I enter my twilight years, because the problems under consideration seem to me more urgent than ever. Comments welcome!

Not least among the problems are our communication difficulties and the forces working against good science, of which ambitious young scientists would do well to be aware. What's amazing, though -- and to me inspirational -- is that good science continues to make progress despite all these problems. Contributing to good science while understanding both its power and its limitations, and its deep connections to other great human endeavours, seems to me one of the most worthwhile things anyone can do.

The old draft prelude now seems to me to have made heavy weather of some things -- but I'm keeping it here for the time being, in case anyone feels interested.

The plan is to build on -- but also to streamline -- ideas from the three Lucidity and Science articles published in Interdisciplinary Science Reviews 22, 199-216 and 285-303 (1997) and 23, 29-70 (1998) together with my keynote lecture to the 4th Kobe Symposium on Human Development published in Bull. Faculty Human Devel. (Kobe University, Japan), 7(3), 1-52 (2000). Related issues in probability and statistics are discussed here. Corrected and updated copies of the Interdisciplinary Science Reviews articles can be downloaded via this index. See also the CORRIGENDUM, a slightly corrupted version of which was published in the December 1998 issue of Interdisciplinary Science Reviews.


Lucidity and Science: The Deepest Connections

Michael Edgeworth McIntyre
Centre for Atmospheric Science at the
Department of Applied Mathematics and Theoretical Physics
Cambridge University
http://www.atm.damtp.cam.ac.uk/people/mem/

Key phrases: communication skills, transferable skills, lucidity principles, fitting models to data, ordinary perception as model-fitting, science as model-fitting, science wars and goodness-of-fit, music (cross-cultural), musical harmony, combinatorial largeness, mathematics, quantum mechanics, perception and cognition, time and consciousness, acausality illusions, human evolution, public understanding of science, science policy, audit culture, Cultural Revolution, fundamentalism, hypercredulity, dichotomization, nature-nurture myth, eugenics myth ("good genes" versus "bad genes"), gene-for-this-or-that myth, analogy with computer software, Halloween Documents and genetic engineering

Prelude: the unconscious brain

Consider, if you will, the following questions.

Good answers are important to our hopes of a civilized future. A quest to find them will soon encounter a cultural and intellectual minefield, as witness, for example, the way in which ideas like `instinct' have taken such a battering for so long (e.g. Bateson and Martin 1999). But I think I can put us within reach of some good answers by recalling, first, a few points about how our human and pre-human ancestors must have evolved according to today's best evidence -- differently from what popular culture, and many popular books on evolution, would have it -- and, second, a few points about how we perceive and understand the world, especially points that you can easily check for yourself, with no special equipment.

Our mental apparatus for what we call perception and understanding -- cognition, if you prefer -- must have been shaped by the way our ancestors survived over many millions of years. And the way they survived, though clearly a very `instinctive' business indeed, was not, the best evidence says, `all in the genes'. Nor was it `all down to culture'. The way perception works will be a major theme in this book, from chapter 3 onward, including insights from the way music works.

It seems to me that the most crucial point missed in the sometimes narrow-minded debates about `nature versus nurture', `instinct versus learning', `selfishness versus altruism' and so on is that most of what's involved in perception and cognition, and in our general functioning, takes place well beyond the reach of conscious thought. Some people find this hard to accept. Perhaps they feel offended, in a personal way, to be told that the slightest aspect of their existence might, just possibly, not be under full and rigorous conscious control. But it's easy to show that plenty of things in our brains take place involuntarily, that is, entirely beyond the reach of conscious thought and conscious control. Kahneman (2011) gives many examples. My own favourite example is a very simple one, Gunnar Johansson's `walking lights' animation. Twelve moving dots in a two-dimensional plane are unconsciously assumed to represent a particular three-dimensional motion. When the dots are moving, everyone with normal vision sees a person walking:

[Animation: Gunnar Johansson's `walking lights' demo, courtesy of James Maas]

Figure 1: Gunnar Johansson's `walking lights' animation. The printed version of this book will show a barcode to display the animation on a smartphone, and will provide it as a page-flick movie. It can also be found by websearching for "Gunnar Johansson's walking lights". The walking-lights demonstration is a well studied classic in experimental psychology and is one of the most robust perceptual phenomena known.

And again, anyone who has driven cars or flown aircraft will probably remember experiences in which accidents were narrowly avoided, ahead of conscious thought. The typical experience is often described as witnessing oneself taking, for instance, evasive action when faced with a head-on collision. It is all over by the time conscious thinking has begun. This has happened to me. I think such experiences are quite common.

Many years ago, the anthropologist-philosopher Gregory Bateson put the essential point rather well, in classic evolutionary cost-benefit terms:

No organism can afford to be conscious of matters with which it could deal at unconscious levels.

Gregory Bateson's point applies to us as well as to other living organisms. Why? There's a mathematical reason, combinatorial largeness. Every living organism has to deal all the time with a combinatorial tree, a combinatorially large number, of present and future possibilities. Being conscious of all those possibilities would be almost infinitely costly.

Combinatorially large means exponentially large, like compound interest over millennia, or the number of ways to shuffle a pack of cards. Each branching of possibilities multiplies, rather than adds to, the number of possibilities. Such numbers are unimaginably large. No-one can feel their magnitudes intuitively. For instance the number of ways to shuffle a pack of 53 cards is just over 4 x 10^69, or four thousand million trillion trillion trillion trillion trillion.
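If you want to check that figure for yourself, here is a minimal sketch in Python -- my own illustration, not part of the original articles -- using the fact that the number of orderings of a 53-card pack is 53 factorial:

    import math

    # Number of distinct orderings of a 53-card pack (53 factorial)
    orderings = math.factorial(53)

    print(orderings)            # the exact 70-digit integer
    print(f"{orderings:.2e}")   # about 4.27e+69, i.e. just over 4 x 10^69

Each extra card multiplies the count by yet another factor, which is exactly the sense in which branching possibilities multiply rather than add.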

The `instinctive' avoidance of head-on collision in a car -- the action taken ahead of conscious thought -- was not, of course, something that came exclusively from genetic memory. Learning was involved as well. The same goes for the percept of the `walking lights'. But much of that learning was itself unconscious, stretching back to the (instinctive) infantile groping that discovers the world and helps the visual system to develop. At a biologically fundamental level, nurture is intimately part of nature. That intimacy stretches even further back, to the genome within the embryo `discovering', and then interacting with, its maternal environment both within and outside the embryo (Noble 2006, chapter 4).

Normal vision, by the way, is known not to develop in people who start life with a congenital cataract or opaque cornea. That fact has surprised many who've supposed that surgical removal of the opaque element in later life would `make the blind to see'. A discussion of typical case histories can be found in Sacks (1995).

As must be obvious by now, my approach to the foregoing questions will be that of a scientist. Scientific thinking is my profession. Although many branches of science interest me, my professional work has mainly been applied-mathematical research to understand the highly complex fluid dynamics of the Earth's atmosphere -- phenomena such as the great jetstreams and the air motion that shapes the Antarctic ozone hole. There are associated phenomena sometimes called the `world's largest breaking waves'. (Imagine a sideways breaker the mere tip of which is as big as the United States.) That in turn has helped us to understand the fluid dynamics and magnetic fields of the Sun's interior, in an unexpected way. But long ago I almost became a musician. Or rather, it would be more accurate to say that in my youth I was, in fact, a part-time professional musician and could have made it into a full-time career. So I've had artistic preoccupations too, and artistic aspirations.

It's obvious, isn't it, that science, mathematics, and the arts are all of them bound up with the way perception works. And common to all these human activities, including science -- whatever popular culture may say to the contrary -- is the urge to create, the joy of creation, the thrill of lateral thinking, and sheer wonder at the whole phenomenon of life itself and at the astonishing Universe we live in.

One of the greatest of those wonders is our own adaptability. Who knows, it might even get us through today's troubles. That's despite being genetically similar to our hunter-gatherer ancestors, tribes of people driven again and again to migration and warfare in increasingly clever ways by, among other things, huge and rapid climate fluctuations -- the legendary years of famine and years of plenty.  (Why else did our species -- a single, genetically-compatible species with its single human genome -- spread so quickly around the globe, within the past hundred millennia?)  In chapter 2 I'll point to recent evidence for the sheer rapidity of those climate fluctuations, and to some major advances in our understanding of biological evolution and natural selection, and human nature. And here, by the way, as in most of this book, I lay no claim to originality. For instance the evidence on past climates comes from the painstaking work of many other scientists, including great scientists such as the late Nick Shackleton whom I had the privilege of knowing personally.

Our ancestors must have had not only language and lateral thinking -- and music, dance, poetry and storytelling -- but also, no doubt, the mechanisms of greed, power games, scapegoating, genocide, ecstatic suicide and the rest. To survive, they must have had love and altruism too -- consciously or unconsciously. They probably had ecstatic mysticism. The precise timescales and evolutionary pathways for these things are uncertain. But the timescales for at least some of them must have been more like thousands, than hundreds, of millennia. That's because they must have depended on the coevolution of genome and culture, a multi-timescale process (e.g. Monod 1970, Tobias 1971). Multi-timescale processes are commonplace in many scientific problems. They're characterized by strong two-way interactions between slow and fast mechanisms -- in this case genomic evolution and cultural evolution.

The opposing ideas that long and short timescales can't interact, that cultural evolution is something completely separate, and that language suddenly started around a hundred millennia ago, or even more recently -- ideas that are still repeated from time to time (e.g. Trask et al 1998, Pagel 2012) -- never made sense to me. At a biologically fundamental level, they make no more sense than does the underlying false dichotomy, nature `versus' nurture. I'll return to these points in chapter 2. It's sometimes forgotten that language and culture can be mediated purely by sound waves and light waves and held in individuals' memories: the epic saga phenomenon, if you will, as in the Iliad or in a multitude of other oral traditions, including the `immense wealth' (van der Post 1972) of the unwritten literature of Africa. That's a very convenient, an eminently portable, form of culture for a tribe on the move. And sound waves and light waves are such ephemeral things. They have the annoying property of leaving no archaeological trace. But absence of evidence isn't evidence of absence.

And now, in a mere flash of evolutionary time, a mere few millennia, we've shown our adaptability in ways that seem to me more astonishing than ever. We no longer panic at the sight of a comet. Demons in the air have shrunk to a small minority of alien abductors. We don't burn witches and heretics, at least not literally. We condemn human sacrifice as barbaric, a new development in societal norms (e.g. Ehrenreich 1997). The Pope dares to apologize for past misdeeds. Genocide was somehow avoided in South Africa. We even dare, sometimes, to tolerate individual propensities and lifestyles if they don't harm others. We dare to talk about astonishing new things called personal freedom, social justice, women's rights, and human rights, and sometimes even take them seriously despite the confusion they cause from time to time -- perhaps, as the philosopher John Gray reminded us recently, through hypercredulity, through perceiving Human Rights as the Answer to Everything. And, most astonishing of all, since 1945 we've even had the good sense so far -- and very much against the odds -- to avoid the use of nuclear weapons.

We have space travel, space-based observing instruments, super-accurate clocks, and the super-accurate global positioning that cross-checks Einstein's gravitational theory -- yet again -- and now a further and completely different cross-check of consummate beauty, detection of the lightspeed spacetime ripples predicted by the theory. We have the Internet, making Nazi-style censorship much harder and bringing us new degrees of freedom and profligacy of information and disinformation, leading who knows where. It gives us new opportunities to exercise critical judgement and to build computational systems of unprecedented power -- all of it dependent on recent, and also amazing, achievements in robustness and reliability growing out of the open-source software movement, `the collective IQ of thousands of individuals' (Valloppillil et al. 1998). We can read genetic codes and are even beginning, just beginning, to understand them. On large and small scales we're carrying out extraordinary new social experiments with labels like `free-market democracy', `free-market autocracy', `children's democracy' (Vaughan 2006), `microlending' and `emancipation of women', conducive to population control (Yunus 1998), and now `social media', so called, and `citizen science' on the Internet. Some of this was unthinkable or technically impossible until recently. There's a downside as well as an upside, of course -- a downside that's horribly exaggerated by the newsmedia -- but in reality there's everything to play for.

One still hears it said that we live in an age devoid of faith, in the technically advanced societies at least. Well, I'm one of those who have a personal faith, a belief, a conviction, a passion, that the urge and the curiosity to increase our own self-understanding can continue to evolve us toward societies that won't be utopian Answers to Everything but could be more hopeful, more spiritually healthy, and generally more civilized than today. That's at least a possibility, despite the current reversionary trends.

When the Mahatma Gandhi visited England in 1931 he is said to have been asked by a journalist, `Mr Gandhi, what do you think of modern civilization?' The Mahatma is said to have replied, `That would be a good idea.' The optimist in me hopes you agree. Part of such a civilization would be not only a clearer recognition of the power and limitations of science -- including its power, and its limitations, in helping us understand our own nature -- but also a further healing of the estrangement between science and the arts. It might even -- dare I hope for this? -- further reconcile science with the more compassionate, less dogmatic, less violent forms of religion, and other belief systems that are important for the mental and spiritual health of so many people.

I have dared to hint at the `deepest' connections amongst all these things. In a peculiar way, some of the connections can be seen not only as deep but also as simple -- provided that one is willing to think on more than one level, and willing to maintain a certain humility -- a willingness to admit that even one's best idea might not be the Answer to Everything.

Multi-level thinking is nothing new. It has long been recognized, unconsciously at least, as being essential to science. It goes back in time beyond Newton, Galileo and Archimedes. What's complex at one level can be simple at another. Newton treated the Earth as a point mass. Today we have the beginnings of a new conceptual framework, complexity theory or complex-systems theory, that should help to clarify what's involved and to develop it more systematically, more generally, and more consciously. Key ideas include self-organization, self-assembling components or building blocks, and what are called emergent properties -- at different levels of description within complex systems and hierarchies of systems, not least the human brain itself. `Emergent property' is a specialist term for something that looks simple at one level even though caused by the interplay of complex, chaotic events at a deeper level. A related idea is that of `order emerging from chaos'. Self-assembling building blocks are also called autonomous components, or `automata' for brevity.

We'll see that the ideas of multi-level thinking, automata, and self-organization are all crucial to making sense of many phenomena, such as the way genetic memory works and what instincts are -- instincts, that is, in the everyday sense relating to things we do, and perceive, and feel automatically, ahead of conscious thought.

One example is the way normal vision develops, giving rise to perceptual phenomena such as the walking lights. It's known that the visual system assembles itself from many automata -- automata made of molecular-biological circuits and assemblies of such circuits -- subject to inputs from each other and from the external environment, during many stages of unconscious learning at molecular level and upward. Another example is language, to be recalled in chapter 2. And without multi-level thinking there's no chance of avoiding the dangerous confusion surrounding ideas such as `selfish gene', `biological determinism', `heritable intelligence', `altruism', `consciousness', and `free will'.

Scientific progress has always been about finding a level of description and a viewpoint, or viewpoints, from which something at first sight hopelessly complex can be seen as simple enough to be understandable. The Antarctic ozone hole is a case in point. I myself made a contribution by spotting some simplifying features in the fluid dynamics. And, by the way, so high is our scientific confidence in today's understanding of the ozone hole -- with a multitude of observational and theoretical cross-checks -- that the professional disinformers who tried to discredit that understanding, in a well known propaganda campaign, are no longer taken seriously.

That's despite the enormous complexity of the problem, involving spatial scales from the planetary down to the atomic, and timescales from centuries down to thousand-trillionths of a second -- and despite the disinformers' financial resources and their powerful influence on the newsmedia, of which more later. Despite all that, we now have practical certainty, and wide acceptance, that man-made chemicals were the main cause of the ozone hole. And we now have internationally-agreed regulations to restrict emissions of those chemicals, despite the disinformers' aim of stopping any such regulation. We have a new symbiosis between regulation and market forces. It's now well documented, by the way, how the same disinformers had already honed their techniques in the tobacco companies' lung-cancer campaigns (Oreskes and Conway 2010), developing the dark arts of camouflage and deception from a deep understanding of the way perception works.

What makes life as a scientist worth living? For me, part of the answer is the joy of being honest. There's a scientific ideal and a scientific ethic that power open science. And they depend crucially on honesty. If you get up in front of a large conference and say of your favourite theory `I was wrong', you gain respect. Your reputation increases. Why? The scientific ideal says that respect for the evidence, for theoretical coherence and self-consistency, for finding mistakes and for improving our collective knowledge is more important than personal ego or financial gain. And if someone else has found evidence that refutes your theory, then the scientific ethic requires you to say so. The ethic says that you must not only be factually honest but must also give due credit to others, by name, whenever their contributions are relevant.

The scientific ideal and ethic are powerful because, even when imperfectly followed, they encourage not only a healthy scepticism but also a healthy mixture of competition and cooperation. Just as in the open-source software community, the ideal and ethic harness the collective IQ, the collective brainpower, of large research communities in a way that can transcend even the power of short-term greed and financial gain. Again, the ozone hole is a case in point. So is the human-genome story (Sulston and Ferry 2002). Our collective brainpower is the best hope of solving the many formidable problems now confronting us.

In the Postlude I'll return to some of those problems, and to the ongoing struggle between open science and the forces working against it -- whether consciously or unconsciously -- with particular reference to climate change. Again, there's no claim to originality here. I merely aim to pick out, from the morass of confusion surrounding the topic, a few simple points clarifying where the uncertainties lie, as well as the near-certainties.


Note regarding citations: This book attempts to lighten up on scholarly citations, beyond some publications of exceptional importance, mostly recent. There are two reasons. The first is that my original publications on these matters were extensively end-noted, making clear my many debts to the research literature but sometimes making for heavy reading. The second is the ease with which one can now track down references by websearching with key phrases. For instance, websearching with the exact phrase "lucidity principles" will quickly find my original publications complete with endnotes and personal acknowledgements, either on my own website or in an archive at the British Library.


Chapter 1. What is lucidity? What is understanding?

This book reflects my own journey toward the frontiers of human self-understanding. Of course many others have made such journeys. But in my case the journey began in a slightly unusual way.

Music and the visual and literary arts were always part of my life. Music was pure magic to me as a small child. But the conscious journey began with a puzzle. While reading my students' doctoral thesis drafts, and working as a scientific journal editor, I began to wonder why lucidity, or clarity -- in writing and speaking, as well as in thinking -- is often found difficult to achieve. And I wondered why some successful scientists and mathematicians are such surprisingly bad communicators, even within their own research communities, let alone on issues of public concern. Then I began to wonder what lucidity is, in a functional or operational sense.

I now like to understand the term in a wider and deeper sense than usual. It's not only about what you can find in style manuals and in books on how to write, excellent and useful though some of them are. (Strunk and White 1979 is a little gem.) It's also about the deepest connections, as already hinted -- the deepest connections between `lucidity principles' and such things as music, mathematics, pattern perception, and biological evolution. A common thread is the `organic-change principle'. It's familiar, I think, to most artists, at least unconsciously.

The principle says that we're perceptually sensitive to patterns exhibiting `organic change'. Some things change, continuously or by small amounts, while others stay the same. So an organically-changing pattern has invariant elements.

The walking lights is an example. The invariant elements include the number of dots, always twelve dots. Musical harmony is another. Musical harmony is an interesting case because `small amounts' is relevant not in one but in two different senses, leading to the idea of `musical hyperspace'. In musical harmony there are `hyperspace leaps', in the sense of going somewhere that's both nearby and far away. That's how some of the magic is done, in many styles of Western music. An octave leap is a large change in one sense, but small in the other, indeed so small that musicians use the same name for the two pitches. The invariant elements can be pitches or chord shapes.
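To make `nearby and far away' a little more concrete, here is a rough sketch in Python -- my own illustration, using the standard equal-tempered and MIDI conventions rather than anything specific to this book. An octave leap doubles the frequency, a large change, yet leaves the note name, the pitch class, unchanged:

    # An octave leap: a large change in frequency, yet the same note name (pitch class).
    NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

    def pitch_class(midi_note):
        """Note name, ignoring which octave the note sits in."""
        return NOTE_NAMES[midi_note % 12]

    def frequency_hz(midi_note):
        """Equal-tempered frequency, taking A4 (MIDI note 69) as 440 Hz."""
        return 440.0 * 2.0 ** ((midi_note - 69) / 12)

    a4, a5 = 69, 81    # A4 and the A an octave above it
    print(frequency_hz(a4), frequency_hz(a5))    # 440.0 880.0 -- far apart in one sense
    print(pitch_class(a4), pitch_class(a5))      # A A -- the same name, nearby in the other sense

In that second, cyclic sense the twelve pitch classes behave like hours on a clock face, which is one simple way of picturing the `hyperspace'.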

There are many other examples of organic change and its effectiveness in the arts, in communication, and in thinking -- in thinking about practically anything at all. In mathematics there are many beautiful results about `invariants', or `conserved quantities', things that stay the same while other things change, often continuously through a vast space of possibilities.

Our perceptual sensitivity, and cognitive sensitivity, to organic change is there for biological reasons. One reason is the survival-value of recognizing the difference between living things and dead or inanimate things. To see a cat stalking a bird, or to see a flower opening, is to see organic change.

So I'd dare to describe our sensitivity to it as instinctive. Many years ago I saw a pet kitten suddenly die of some mysterious but acute disease. I had never seen death before, but I remember feeling instantly sure of what had happened -- ahead of conscious thought. And the ability to see the difference between living and dead has been shown to be well developed in human infants a few months old.

Notice by the way how intimately involved, in all this, are ideas of a very abstract kind. The idea of some things changing while others stay invariant is itself highly abstract, as well as simple.

In chapter 5 I'll demonstrate that we all have unconscious mathematics, and an unconscious power of abstraction. That's almost the same as saying that the unconscious brain can handle combinatorial largeness, as already hinted. The brain can handle vast numbers of possibilities at once. That is fundamentally what abstraction means, in the sense I'm using the term. It is the sense used in mathematics, `higher mathematics' if you will. Mathematics is a precise way of handling many possibilities at once. The implication is that the roots of mathematics and logic, and of abstract cognitive symbolism generally, lie far deeper and are evolutionarily far more ancient than they're usually thought to be. That in turn opens the way to understanding what the Platonic world is, and our sense of its being `already there'.

So I've been interested in lucidity, `lucidity principles', and related matters in a sense that cuts deeper than, and goes far beyond, the niceties and pedantries of style manuals. But before anyone starts thinking that it's all about ivory-tower philosophy and cloud-cuckoo-land, let's remind ourselves of some harsh practical realities -- as Plato himself would have done had he lived today. What I'm talking about is relevant not only to thinking and communication but also, for instance, to the ergonomic design of machinery, of software and user-friendly IT systems (information technology), of user interfaces in general, and of technological systems of any kind including the emerging artificial-intelligence systems where the stakes are so incalculably high (e.g. Rees 2014).

The organic-change principle -- that we're perceptually sensitive to organically-changing patterns -- shows why good practice in any of these endeavours involves not only variation but also invariant elements, i.e., repeated elements, just as music does. Good control-panel design might use, for instance, repeated shapes for control knobs or buttons. And in writing and speaking one needn't be afraid of repetition, especially if it forms the invariant element within an organically-changing word pattern. `We will be serious if you are serious' is a clearer and stronger sentence than `We will be serious if you are earnest'. Such pointless or gratuitous variation in place of repetition is what H. W. Fowler ironically called `elegant' variation, an `incurable vice' of `the minor novelists and the reporters'. Its opposite -- let's call it lucid repetition, as with the second `serious' -- isn't the same as being repetitious. The pattern as a whole is changing, organically.

Two other `lucidity principles' are worth noting briefly while I'm at it. (You can find more on the transferable skills, and on the underlying experimental psychology, by websearching for the exact phrase "lucidity principles".) There's an `explicitness principle' -- the need to be more explicit than you feel necessary -- because, obviously, you're communicating with someone whose head isn't full of what your own head is full of. As the great mathematician J. E. Littlewood once put it, `Two trivialities omitted can add up to an impasse.' Again, this applies to design in general, as well as to any form of writing or speaking that aims at lucidity. And of course there's the more obvious `coherent-ordering principle', the need to build context before new points are introduced. It applies not only to writing and speaking but also to the design of anything intended to take you through some sequential process on, for instance, a website or a ticket-vending machine.

There's another reason for attending to the explicitness principle. Human language is surprisingly weak on logic-checking. For this and other reasons, human language is a conceptual minefield. (This keeps many philosophers in business.) Beyond everyday misunderstandings we have, for instance, not only the workings of professional camouflage and deception but also the inadvertent communication failures underlying, for instance, the usual IT disasters:

[Tree-swing cartoons 1-4]

Figure 2a: How the customer explained it. (Courtesy of projectcartoon.com.)

Figure 2b: How the analyst designed it. (Courtesy of projectcartoon.com.)

Figure 2c: How the programmer wrote it. (Courtesy of projectcartoon.com.)

Figure 2d: What the customer really wanted. (Courtesy of projectcartoon.com, q.v. for elaborations.)

The logic-checking weakness shows up in the misnomers and self-contradictory terms encountered not only in everyday dealings but also -- to my continual surprise -- in the technical language used by my scientific and engineering colleagues. You'd think we should know better. You'd laugh if, echoing Spike Milligan, I said that someone has a `hairy bald head'. But consider for example the scientific term `solar constant'. It's a precisely-defined measure of the mean solar power per unit area reaching Earth. Well, the solar constant isn't a constant. It's variable, because the Sun's output is variable, though fortunately by not too much.

Another such term is `slow manifold'. The slow manifold is an abstract mathematical entity, a complicated geometrical object that's important in my research field of atmospheric and oceanic fluid dynamics. Well, the slow manifold isn't a manifold. In non-technical language, it's like something hairy, while a manifold is like something bald. I'm not kidding. (I've tried hard to persuade my fluid-dynamical colleagues to switch to `slow quasimanifold', but with scant success so far. For practical purposes the thing often behaves as if it were a manifold, even though it isn't. It's `thinly hairy'.)

In air-ticket booking systems there's a `reference number' that isn't a number. In finance there's a term `securitization' that means, among other things, making an investment less secure -- yes, less secure -- by camouflaging what it's based on. And then there's the famous `heteroactive barber'. That's the barber who shaves only those who don't shave themselves. `Heteroactive barber' may sound impressive. Some think it philosophically profound. But it's no more than just another self-contradictory term. Seeing that fact does, however, take a conscious effort. There's no instinctive logic-checking whatever. There are clear biological reasons for this state of things, to which I'll return in chapter 2. I'll leave it to you, dear reader, if need be, to go through the logical steps showing that `heteroactive barber' is indeed a self-contradictory term. (If he doesn't shave himself, then it follows that he does, etc.)

Being more explicit than you feel necessary improves your chances of negotiating the minefield. It clarifies your own thinking. Your chances are improved still further if you get rid of gratuitous variations and replace them by lucid repetitions, maintaining the sometimes tricky discipline of calling the same thing by the same name, as in good control-panel design using repeated control-knob shapes. And it's even better if you're cautious about choosing which shape, or which name or term, to use. You might even want to define a technical term carefully at its first occurrence, if only because meanings keep changing, even in science. `I'll use the idea of whatsit in the sense of such-and-such, not to be confused with whatsit in the sense of so-and-so.' `I'll denote the so-called solar constant by S, remembering that it's actually variable.' Another example is `the climate sensitivity'. It has multiple meanings, as I'll explain in the Postlude. In his 1959 Reith Lectures, the great biologist Peter Medawar remarks on the `appalling confusion and waste of time' caused by the `innocent belief' that a word should have a single, uniquely-defined `essential' meaning.

A fourth `lucidity principle' -- again applying to good visual and technical design as well as to good writing and speaking -- is of course pruning, the elimination of anything superfluous. On your control panel, or web page, or ticket-vending machine, or in your software code and documentation, it's helpful to omit visual and verbal distractions. In writing and speaking, it's helpful to `omit needless words', as Strunk and White put it. If you're a good observer, you'll have noticed the foregoing lucidity principles in action when you look at the meteoric rise of some businesses. Google was a clear example. Indeed, there's surely a temptation to regard lucidity principles as trade secrets, or proprietary possessions. I recall some expensive litigation by another fast-rising business, Amazon, claiming proprietary ownership of `omit needless clicks'.

Websites, ticket-vending machines, and other user interfaces that, by contrast, violate lucidity principles -- making them `unfriendly' -- are still remarkably common, together with all those unfriendly technical manuals and financial instruments. One repeatedly encounters gratuitous variation and inexplicitness, combined with verbal and visual distractions and other needless complexity. The pre-Google search engines were typical examples. Their cluttered screens and chaotic, semi-explicit search rules are now, thankfully, a fast-fading memory. With those search engines and with many technical manuals the violations often seem inadvertent, stemming from ignorance. With financial instruments, on the other hand, one might dare to speculate that some of the violations are deliberate, favouring a small élite of individuals clever enough to break the codes and then, in due course, wealthy enough to employ a whole team of codebreakers.

Among the commonplace gratuitous variations I was struck recently by the case of the two reservation codes encountered when booking air tickets. This case has the usual whiff of inadvertence. The two codes look similar, but have distinct purposes that the customer has to decode. So far I've counted nine different but overlapping names for the two reservation codes. On one occasion my booking used a code `2BE8HM', variously called the reference number (even though it isn't a number), the flight reference number, the airline reference, the airline reference locator, the reservation code, and the booking reservation number. Another, similar-looking code `LIEV86', whose purpose was entirely different, was variously called the airline reservation code, the airline booking reference, and the airline confirmation number. The same thing with different names, and different things with the same name. As also found in technical manuals. One encounters further examples when making online purchases. Why are we told to be sure to quote our `account number' when the website called it the `customer reference number'? Could there be a second kind of number I need to know about, or is it yet another gratuitous variation?

In case you think this is getting trivial, let me remind you of Three Mile Island Reactor TMI-2, and the nuclear accident for which it became well known in 1979. The accident was potentially very dangerous as well as incalculably costly, especially when you include the long-term damage to customer confidence. Was that trivial?

You don't need to be a professional psychologist to appreciate the point. Before the nuclear accident, the control panels were like a gratuitously-varied set of traffic lights in which stop is sometimes denoted by red and sometimes by green, and vice versa. Thus, at Three Mile Island, a particular colour on one control panel meant normal functioning while the same colour on another panel meant `malfunction, watch out' (Hunt 1993). Well, the operators got confused.

As I walk around Cambridge and other parts of the UK, I continually encounter the `postmodernist traffic rules' followed by pedestrians here. Postmodernism says that `anything goes'. So you keep left or keep right just as you fancy. All for the sake of interest and variety. How boring, how pedantic, to keep left all the time. Just like those boring traffic lights where red always means stop. To be fair, the UK Highway Code quite reasonably tells us to face oncoming traffic on narrow country roads except, of course, on right-hand bends, and on unsegregated cycle tracks where the Code does indeed say, implicitly, that anything goes. I always feel a slight sense of relief when I visit the USA, where everyone keeps right most of the time.

There's a quasi-bureaucratic mindset that seems ignorant, or uncaring, about examples like Three Mile Island. It says `User-friendliness is a luxury we can't afford.' (Yes, afford.) `Go away and read the technical manual. Look, it says on page 342 that red means `stop' on one-way streets, `go' on two-way streets, and `caution' on right-hand bends. And of course it's the other way round on Sundays and public holidays, except for Christmas which is obviously an exception. What could be clearer? Just read it carefully, all 450 pages, and do exactly what it says,' etc.

With complicated systems like nuclear power plants, or large IT systems, or space-based observing systems -- such as those created by some of my most brilliant scientific colleagues -- there's a combinatorially large number of ways for the system to go wrong even with good design, and even with communication failures kept to a minimum. I'm always amazed when any of these systems work at all. I'm also amazed at how our governing politicians overlook this point again and again, it seems, when commissioning the large IT systems that they hope will save money. All this, of course, is familiar territory for the risk assessors working in the insurance industry, and in the military and security services.

What then is lucidity, in the sense I'm talking about? Let me try to draw a few threads together. In the words of an earlier essay, which was mostly about writing and speaking, `Lucidity... exploits natural, biologically ancient perceptual sensitivities, such as the sensitivities to organic change and to coherent ordering, which reflect our instinctive, unconscious interest in the living world in which our ancestors survived. Lucidity exploits, for instance, the fact that organically changing patterns contain invariant or repeated elements. Lucid writing and speaking are highly explicit, and where possible use the same word or phrase for the same thing, similar word-patterns for similar or comparable things, and different words, phrases, and word-patterns for different things... Context is built before new points are introduced...'

I also argued that `Lucidity is something that satisfies our unconscious, as well as our conscious, interest in coherence and self-consistency' -- in things that make sense -- and that it's about `making superficial patterns consistent with deeper patterns'. It can be useful to think of our perceptual apparatus as a multi-level pattern recognition system.

To summarize, four `lucidity principles' seem especially useful in practice. They amount to saying that skilful communicators and designers give attention to organic change, to explicitness, to coherent ordering, and to pruning superfluous material. The principles apply not only to writing and speaking but also, for instance, to website and user-interface design and to the safety systems of nuclear power plants, with stakes measured in billions of dollars.

Of course a mastery of lucidity principles can serve an interest in camouflage and deception. Such mastery was conspicuous in the tobacco and ozone-hole disinformation campaigns. It is, and always was, conspicuous on political battlefields and in the speeches of demagogues, as in `You're either with us or against us'. That's another case of making superficial patterns consistent with deeper patterns, including deeper patterns of an unpleasant and dangerous kind. Today the dark arts of full-blown camouflage and deception and of making the illogical seem logical -- the so-called `weapons of mass deception' -- have been further developed in a highly professional and well-resourced way (e.g. Lakoff 2014, Pomerantsev 2015), drawing on our ever-improving understanding of the way perception works.

Enough of that! What of my other question? What is this subtle and elusive thing we call understanding, or insight? Of course there are many answers, depending on one's purpose and viewpoint. As far as science is concerned, however, let me try to counter some of the popular myths. What I've always found in my own research, and have always tried to suggest to my students, is that developing a good scientific understanding of something -- even something in the inanimate physical world -- requires looking at it, and testing it, from as many different viewpoints as possible as well as maintaining a healthy scepticism. It's sometimes called `diversity of thought'.

For instance, the fluid-dynamical phenomena I've studied are far too complex to be understandable at all from a single viewpoint, such as the viewpoint provided by a particular set of mathematical equations. One needs equations, words, pictures, and feelings all working together, as far as possible, to form a self-consistent whole. And the fluid-dynamical equations themselves take different forms embodying different viewpoints, with technical names such as `variational', `Eulerian', `Lagrangian', and so on. They're mathematically equivalent but, as Richard Feynman used to say, `psychologically very different'.  Bringing in words, in a lucid way, is a critically important part of the whole but needs to be related to, and made consistent with, equations, pictures, and feelings.

Such multi-modal thinking and healthy scepticism have been the only ways I've known of escaping from the usual mindsets or unconscious assumptions that tend to entrap us. The history of science shows that escaping from such mindsets has always been a key aspect of progress. And an important aid to cultivating a multi-modal view of any scientific problem is the habit of performing what Albert Einstein famously called `thought-experiments', and mentally viewing those from as many angles as possible.

Einstein certainly talked about feeling things, in one's imagination -- forces, motion, colliding particles, light waves -- as well as using equations, words, and pictures. And he was always doing thought-experiments, `what-if experiments' if you prefer. The same thread runs through the testimonies of other great scientists such as Henri Poincaré, Peter Medawar, Jacques Monod, and Richard Feynman. It all goes back to juvenile play, that deadly serious rehearsal for real life -- young children pushing and pulling things (and people!) to see, and feel, how they work.

In my own research community I've often noticed colleagues having futile arguments about `the' cause of some observed phenomenon. `It's driven by such-and-such', says one. `No, it's driven by so-and-so', says another. Sometimes the argument gets quite acrimonious. Often, though, they're at cross-purposes because they have two different thought-experiments in mind, perhaps unconsciously.

And notice by the way how the verb `to drive' illustrates what I mean by language as a conceptual minefield. The verb `to drive' sounds incisive and clearcut, but is nonetheless dangerously ambiguous. I sometimes think that our word-processors should make it flash red for danger, as soon as it's typed, along with a few other dangerously ambiguous words such as the pronoun `this'.

Quite often, `to drive' is used when a better verb would be `to mediate', as often used in the biological literature to signify an important part of some mechanism. By contrast, `to drive' can mean `to control', as when driving a car. That's like controlling a sensitive audio amplifier via its input signal. `To drive' can also mean `to supply the energy needed' via the fuel tank or the amplifier's power supply. Well, there are two obvious, and different, thought-experiments here, on the amplifier let's say. One is to change the input signal. The other is to pull the power plug. A viewpoint that focused on the power plug alone might miss important aspects of the problem!

You may laugh, but there's been a mindset in my community that has, or used to have, precisely such a focus. It said that the way to understand our atmosphere and ocean is through their intricate `energy budgets', disregarding questions of what the system is sensitive to. Yes, energy budgets are important, but no, they're not the Answer to Everything.

The topic of mindsets and cognitive illusions has been illuminated not only through the famous psychological studies of Daniel Kahneman, Amos Tversky and others but also, more recently, through a vast and deeply thoughtful book by the psychiatrist Iain McGilchrist (2009), conveying some fascinating new insights into the evolutionarily ancient roles of the brain's left and right hemispheres. Typically, the left hemisphere has great analytical power but is more prone to mindsets. It seems clear that the sort of scientific understanding I'm talking about -- in-depth understanding if you will -- involves an intricate collaboration between the two hemispheres with each playing to its own very different strengths. If that collaboration is disrupted, for instance by damage to the right hemisphere that paralyses a patient's left arm, extreme forms of mindset can result. There are cases in which the patient vehemently denies that the arm is paralysed, and will make up all sorts of excuses as to why he or she doesn't fancy moving it when asked. This denial-state is called anosognosia. It's a kind of unconscious wilful blindness, if I may use another self-contradictory term. Such phenomena are also discussed in the important book by Ramachandran and Blakeslee (1998).

Back in the 1920s, the great physicist Max Born was immersed in the mind-blowing experience of developing quantum theory. Born once commented that engagement with science and its healthy scepticism can give us an escape route from mindsets. With the more dangerous kinds of zealotry and fundamentalism in mind, he wrote

I believe that ideas such as absolute certitude, absolute exactness, final truth, etc., are figments of the imagination which should not be admissible in any field of science... This loosening of thinking [Lockerung des Denkens] seems to me to be the greatest blessing which modern science has given to us. For the belief in a single truth and in being the possessor thereof is the root cause of all evil in the world...

(Gustav Born 2002). Further wisdom on these topics is recorded in the classic study of cults by Conway and Siegelman (1978), predating today's versions including IS and Breivik and many others and, of course, echoing religious wars across the centuries. Time will tell, perhaps, how the dangers from the fundamentalist religions compare with those from the fundamentalist atheisms. Among today's fundamentalist atheisms we have not only Science as the Answer to Everything -- provoking a needless backlash against science, sometimes violent -- but also free-market fundamentalism, in some ways the most dangerous of all because of its vast financial resources. I don't mean Adam Smith's idea that market forces are useful, in symbiosis with suitable regulation, written or unwritten, as Smith made clear (e.g. Tribe 2008). I don't mean the business entrepreneurship that provides us with valuable goods and services. By free-market fundamentalism I mean a hypercredulous belief, a taking-for-granted, an incoherent mindset that unregulated markets, profit, and personal greed are the Answer to Everything and the Ultimate Moral Good -- regardless of evidence like the 2008 financial crash. Surprisingly, to me at least, free-market fundamentalism takes quasi-Christian as well as atheist forms (e.g. Lakoff 2014, & refs.).

Common to all forms of fundamentalism is that they inhibit, or forbid, the `loosening of thinking' or pluralism that allows freedom to view things from more than one angle, especially as regards central beliefs held sacred. The financial crash seems to have made only a small dent in the central beliefs of free-market fundamentalism, so far. And what's called `science versus religion' is not, it seems to me, about scientific insight versus religious insight. Rather, it's about scientific fundamentalism versus religious fundamentalism, which of course are irreconcilable.

Such futile and damaging conflicts cry out for more loosening of thinking. How can such loosening work? As Ramachandran or McGilchrist might say, it's almost as if the right hemisphere nudges the left with a wordless message along the lines of `You might be sure, but I smell a rat: could you, just possibly, be missing something?' In 1983 a Soviet officer, Stanislav Petrov, saved us from likely nuclear war. At great personal risk, he disobeyed standing orders when a malfunctioning computer system said `nuclear attack imminent'. We had a narrow escape. It was probably thanks to Petrov's right hemisphere. There have been other such escapes.


Chapter 2. Mindsets, evolution, and language

Let's fast-rewind to a few million years ago. Where did we, our insights, and our mindsets come from? What can be said about human and pre-human evolution? And how on Earth did we acquire our language ability -- that vast conceptual minefield -- so powerful, so versatile, yet so weak on logic-checking? These questions are more than just tantalizing. Clearly they're germane to past and current conflicts, and to future risks including existential risks.

Simplistic evolutionary theory is the first obstacle to understanding -- in popular culture at least, and in parts of the business world. It's the surprisingly persistent view, or unconscious assumption, that natural selection works solely on the level of sex and individual combat. I say surprisingly persistent because it's so plainly wrong. Our species and other social species, from bees to baboons, could not have survived without cooperation between individuals. Without such cooperation, alongside competition, our ground-dwelling ancestors would have been easy meals for the large, swift predators all around them -- gobbled up in no time at all!

Even bacteria cooperate. That's well known. One way they do it is by sharing packages of genetic information called plasmids or DNA cassettes. A plasmid might for instance contain information on how to survive antibiotic attack. Don't get me wrong. I'm not saying that bacteria `think' like us, or like baboons or like bees. And I'm not saying that bacteria never compete. But it's a hard fact, and now an urgent problem in medicine, that vast numbers of individual bacteria cooperate among themselves to develop resistance to antibiotics, among other things. Even different species cooperate (e.g. Skippington and Ragan 2011), as with the currently-emerging resistance to colistin, an antibiotic of last resort. Yes, selective pressures are at work, but at group level as well as at individual and kin level, in heterogeneous populations living in heterogeneous and ever-changing environments. Today the technology of genetic sequencing is uncovering many more bacterial communities and modes of cooperation, and of competition, about which we knew nothing until very recently because it seems that most bacteria can't be grown in laboratory dishes.

So it's plain that natural selection operates at many levels in the biosphere, and that cooperation is widespread alongside competition. Indeed the word `symbiosis' in its standard meaning denotes a variety of intimate, and well studied, forms of cooperation between entirely different species. The trouble is the sheer complexity of it all -- again a matter of combinatorial largeness. We're far from having comprehensive mathematical models of how it works despite, for instance, spectacular recent breakthroughs at molecular level (e.g. Wagner 2014). Perhaps the persistence of simplistic evolutionary theory comes from a feeling that if one can't describe something then it can't exist. McGilchrist would argue, I think, that that's a typical left-hemisphere mindset.

There are more sophisticated variants of that mindset. For very many years there have been acrimonious disputes among biologists over false dichotomies such as `kin selection versus group selection', as if the one excluded the other (e.g. Segerstråle 2000). Fortunately, the worst of those disputes now seem to be dying out, as the models improve and the evidence accumulates for what's now called multi-level selection. There are many different lines of evidence. Some of them are powerfully argued, for instance, in the review articles by Robin Dunbar (2003), Matt Rossano (2009) and Kevin Laland et al. (2010), in the books by Andreas Wagner (2014), Christopher Wills (1994), and David Sloan Wilson (2015), and in a searching and thoughtful compendium edited by Rose and Rose (2000).

Despite much progress there remain some obstacles to understanding our ancestors' evolution. In particular, there's a pair of mindsets saying first that the genes'-eye view gives us the only useful angle from which to view the problem -- or, more fundamentally, the replicators'-eye view including regulatory `junk' or noncoding DNA, so called -- and, second, that one must ignore selective pressures at levels higher than that of individuals (e.g. Pinker 1997; Dawkins 2009).

The first mindset misses the value of viewing a problem from more than one angle. And the weight of evidence against the second is getting stronger and stronger. The persistence of the two mindsets has always puzzled me, but it seems possible that they originated with mathematical models now seen as grossly oversimplified -- the oldest population-genetics models, the models that originally gave rise to `selfish-gene theory'. (Remarkably, the selfish-gene metaphor remains useful for many purposes, but we'll soon encounter its limitations.) The second mindset, against group-level selection in particular, may have been a reaction to the sloppiness of some old non-mathematical arguments for such selection, for instance ignoring the complex game-theoretic aspects such as multi-agent `reciprocal altruism', and conflating altruism as conscious sentiment `for the good of the group' with altruism as actual behaviour, including its deeply unconscious aspects.

Of course none of this is helped by the `lucidity failures' in the research literature such as failures to be explicit in defining mathematical symbols and -- more seriously -- failures to be explicit about what's assumed in various models. One example is an assumption that multi-timescale processes are unimportant. That assumption, whether conscious or unconscious, now looks like one of the more serious mistakes in the older literature. It's one of the problems with the oldest population-genetics models and therefore with selfish-gene theory, not least as applied to ourselves.

The mistake is perhaps surprising because of the familiarity, in other scientific contexts, of multi-timescale processes in which strongly interacting dynamical mechanisms have vastly disparate timescales. Such processes have been recognized for well over a century as crucial to many phenomena. One of them is gas pressure. (Fast molecular collisions mediate slow thermodynamic changes -- a strong dynamical interaction across arbitrarily disparate timescales.) But the biological literature often speaks of `proximate causes' and `ultimate causes', referring respectively to fast mechanisms affecting individual organisms and much slower, evolutionary-timescale mechanisms. The slow mechanisms are not only assumed but also, quite often, confidently declared to be wholly independent of the fast mechanisms, simply because the timescales are so different. That such assumptions are inconsistent with the weight of evidence is increasingly recognized today (e.g. Thierry 2005, Laland et al. 2011, Danchin and Pocheville 2014).
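
For readers who like to see the principle stripped to its bare bones, here's a minimal numerical sketch in Python -- purely illustrative, not a model of anything biological or atmospheric. A fast variable whose time-average is zero nevertheless exerts a systematic mean effect on a slow variable, through the average of its square, much as fast molecular collisions produce a steady gas pressure.

    # Toy two-timescale system, a loose analogue of the gas-pressure example.
    # A fast variable y fluctuates on timescale 1/omega; a slow variable X
    # relaxes on timescale 1.  Although y itself averages to zero, its fast
    # fluctuations exert a systematic mean effect on X, through the mean of
    # y**2.  All numbers are illustrative only.

    import math

    omega = 200.0      # fast frequency, so the fast timescale 1/omega is tiny
    dt = 1e-4          # time step small enough to resolve the fast motion
    T = 10.0           # integrate over many slow timescales

    X, t = 0.0, 0.0
    while t < T:
        y = math.sin(omega * t)     # fast fluctuation, zero time-average
        X += dt * (-X + y * y)      # slow variable driven by the fast one
        t += dt

    print(f"final X = {X:.3f}   (close to the time-average of y**2, i.e. 0.5)")

The only point of the sketch is that vastly disparate timescales are no barrier to a strong, systematic interaction.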

Mathematical equations can hide other aspects of the evolution problem. A simple example is the effect of population heterogeneity. It becomes invisible if you average over entire populations, without attending to spatial covariances. Chapter 3 of Wilson (2015) shows how such averaging has impeded understanding. To quote Wills (1994), researchers can sometimes become `prisoners of their mathematical models'.
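
To see concretely the kind of thing that averaging hides, here's a minimal toy calculation in Python -- a two-group version of the effect, with all numbers invented purely for illustration and not taken from any published model. The fraction of cooperators falls within each group, yet rises in the population as a whole, because the cooperator-rich group is more productive; average over the whole population without attending to the group structure and you miss what's driving the change.

    # Two groups of cooperators (C) and defectors (D).  Within each group the
    # defectors do better, but cooperator-rich groups are more productive.
    # All numbers are invented for illustration.

    groups = [
        {"C": 90, "D": 10},   # cooperator-rich group
        {"C": 10, "D": 90},   # defector-rich group
    ]

    cost, benefit = 0.1, 2.0  # cost paid by cooperators; benefit shared group-wide

    def next_generation(group):
        n = group["C"] + group["D"]
        p = group["C"] / n                    # fraction of cooperators here
        w_C = 1.0 + benefit * p - cost        # cooperators pay the cost
        w_D = 1.0 + benefit * p               # defectors free-ride
        return {"C": group["C"] * w_C, "D": group["D"] * w_D}

    offspring = [next_generation(g) for g in groups]

    for before, after in zip(groups, offspring):
        p0 = before["C"] / (before["C"] + before["D"])
        p1 = after["C"] / (after["C"] + after["D"])
        print(f"within-group frequency of cooperators: {p0:.3f} -> {p1:.3f}")  # falls in both

    p_before = sum(g["C"] for g in groups) / sum(g["C"] + g["D"] for g in groups)
    p_after = sum(g["C"] for g in offspring) / sum(g["C"] + g["D"] for g in offspring)
    print(f"whole-population frequency of cooperators: {p_before:.3f} -> {p_after:.3f}")  # rises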

Wills notes two examples of such imprisonment, one relating to the coevolution of genome and culture and the other to an old dispute, in the 1970s, about adaptive genomic changes -- those that give an organism an immediate advantage -- versus neutral genomic changes, which have no immediate effect. That dispute has been transcended by many subsequent developments including the breakthrough described in Wagner (2014); see also the review by Lynch (2007). Wagner and co-workers have shown in detail at molecular level why both kinds of change are practically speaking inevitable, and how neutral changes contribute to the huge genetic diversity that's key to survival in an ever-changing environment. What started as neutral can become adaptive, and does so in many cases. A spinoff from this work is a deeper insight into what we should mean by functionality within so-called junk DNA.

So it's one thing to write down impressive-looking equations but quite another to write, test, calibrate, and fully understand equations that model the complex reality in a useful way, viewing things from more than one angle. Fortunately, models closer to the required complexity (e.g. Lynch 2007, Laland et al. 2010, 2011, & refs., Schonmann et al. 2013, Werfel et al. 2015) have become easier to explore and to calibrate thanks to the power of today's computers, and to the evidence from today's genomic-sequencing technologies.

I sometimes wonder whether mathematical imprisonment, and the resulting legacy of confusion (e.g. Segerstråle 2000), mightn't have involved an exaggerated respect for the mathematical equations themselves, in some cases at least. There's a tendency to think that an argument is made more `rigorous' just by bringing in equations. But even the most beautiful equations can describe a model that's too simple for its intended purposes, or just plain wrong. As the old aphorism says, there's no point in being quantitatively right if you're qualitatively wrong. Or an equation can be beautiful, and useful, and correct as far as it goes, but incomplete. An example is the celebrated Price equation, sometimes called the E = mc² of population genetics. It's useful as a way of making population heterogeneity or spatial covariance more visible. It does not, however, itself represent everything one needs to know. It isn't the Answer to Everything! And when confronted with a phrase such as `mathematical proofs from population genetics', and similarly grandiose claims, one needs to ask what assumptions were made.
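
For readers who'd like to see what the Price equation actually says, here's a minimal numerical check in Python, with toy numbers chosen arbitrarily. In one standard form the equation says that mean fitness times the change in the population-mean trait value equals a covariance term -- the part that makes heterogeneity visible -- plus a transmission term.

    # Price equation on toy numbers (all values illustrative).  For types i
    # with frequencies q[i], trait values z[i], fitnesses w[i] and offspring
    # trait values zp[i], the equation reads
    #
    #     wbar * (zbar_after - zbar) = Cov(w, z) + E(w * (zp - z))

    q  = [0.5, 0.3, 0.2]     # frequencies of three types (they sum to 1)
    z  = [0.9, 0.5, 0.1]     # trait values
    w  = [2.0, 1.5, 1.0]     # fitnesses
    zp = [0.85, 0.45, 0.1]   # offspring trait values (imperfect transmission)

    wbar = sum(qi * wi for qi, wi in zip(q, w))
    zbar = sum(qi * zi for qi, zi in zip(q, z))
    zbar_after = sum(qi * wi * zpi for qi, wi, zpi in zip(q, w, zp)) / wbar

    lhs = wbar * (zbar_after - zbar)
    cov = sum(qi * wi * zi for qi, wi, zi in zip(q, w, z)) - wbar * zbar
    trans = sum(qi * wi * (zpi - zi) for qi, wi, zpi, zi in zip(q, w, zp, z))

    print(f"lhs = {lhs:.6f},  Cov + transmission = {cov + trans:.6f}")  # the two sides agree

The equation is an identity -- it holds for any such bookkeeping -- which is part of why it can be useful without being the whole story.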

The sheer trickiness of all this reminds me of my own work, alongside that of many colleagues, on jetstreams and the ozone hole. It involved equations giving deep insight into some aspects of the problem -- though, in our case, precisely because other aspects are kept hidden. (Technically it comes under headings like `balanced flow', `potential-vorticity inversion', and the `slow quasimanifold' mentioned earlier.)  Keeping some aspects hidden happens to be useful here for reasons that have been closely assessed and well studied, both at an abstract theoretical level and, for instance, through experience in numerical weather forecasting. Precisely what's hidden (sound waves and something called `inertia-gravity waves') is well understood and attended to, and demonstrably unimportant in many cases. And the backdrop to all of it is the surprising, fascinating, and awkward fact noted in the Prelude -- and emphasized in Wilson (2015) -- that equations can take alternative forms that are mathematically equivalent yet `psychologically very different'.

For the human species and our ancestors' evolution it seems to me that we do, nevertheless, have enough understanding to say something useful despite all the difficulties. The essence of that understanding is cogently summarized in the book by Wills (1994), which in many ways was well ahead of its time. Some important recent developments are reviewed in Laland et al. (2010, 2011), Richerson et al. (2010), and Danchin and Pocheville (2014), within a rapidly expanding research literature.

What's key for our species is that culturally-relevant traits and propensities -- including our compassion and generosity as well as our less pleasant traits -- and our sheer versatility -- must have come from the coevolution of genome and culture for at least hundreds of millennia and probably much longer, thanks to selective pressures on tightly-knit, ground-dwelling groups facing other groups and large predators in a changing climate. The selective pressures, including social pressures, must have increasingly favoured survival through group solidarity and cooperation. Indeed, the distinguished palaeoanthropologist Phillip Tobias has argued for thousands of millennia of such coevolution, with a two-way feedback loop between the slow and fast processes, genomic and cultural:

... the brain-culture relationship was not confined to one special moment in time. Long-continuing increase in size and complexity of the brain was paralleled for probably a couple of millions of years [my emphasis] by long-continuing elaboration and `complexification'... of the culture. The feedback relationship between the 2 sets of events is as indubitable as it was prolonged in time...

(Tobias 1971). Remarkably, Tobias' insight, clearly recognizing the multi-timescale aspects, seems to have been forgotten in the more recent literature on genome-culture coevolution. There has also been a tendency to see the technology of stone tools as the only important aspect of `culture', giving an impression that culture stood still for one or two million years, just because the stone tools didn't change very much.

It's worth stressing again that an absence of beads and bracelets and other `cultural' archaeological durables is no evidence for an absence, or a standing-still, of culture and language, or proto-language whether gestural or vocal. It hardly needs saying, but apparently does need saying, again, that culture and language can be mediated purely by sound waves and light waves, leaving no archaeological trace. The gradually-developing ability to create what eventually became music, dance, poetry, rhetoric, and storytelling, held in the memories of gifted individuals and in a group's collective, cultural memory, would have produced selective pressures for further brain development and for generations of individuals still more gifted. Not only the best craftspeople, hunters, fighters, tacticians and social manipulators but also, in due course, the best singers and storytellers -- or more accurately, perhaps, the best and most sophisticated singer-storytellers -- would have had the best mating opportunities. Intimately part of all this would have been the kind of social intelligence we call `theory of mind' -- the ability to guess what others are thinking and, in due course, what others think I'm thinking, and so on. The implied genome-culture feedback loop is a central theme in Wills' important book. From a deep knowledge of palaeoanthropology and experimental population genetics Wills suggests, following Tobias, that

at the same time as our ancestors' brains were growing larger, their posture was becoming more upright... and vocal signals were graduating into speech...

(Wills 1994, p. 8). We need not argue about the overall timescale except to say that it was far, far longer than the recent tens of millennia we call the Upper Palaeolithic, the time of the beads and bracelets, as well as the beautiful cave paintings and many other such durables. In particular, there was plenty of time for the self-assembling building blocks of language and culture to seed themselves within genetic memory -- the genetically-enabled automata for language and culture -- from rudimentary beginnings or proto-language as echoed, perhaps, in the speech and gestural signing of a two-year-old today.

It's precisely on this point that the mindset against group-level selection, and above all the neglect of multi-timescale feedback processes, conspire to mislead us most severely in the case of human evolution at least.

At some early stage in our ancestors' evolution, perhaps a million years ago or even more, language or proto-language barriers must have become increasingly significant. Little by little, they'd have sharpened the separation of one group from another. The groups, regarded as evolutionary `survival vehicles', would have developed increasingly tight outer boundaries. Such boundaries would have enhanced the efficiency of those vehicles as carriers of replicators into future generations within each group (e.g. Pagel 2012). The replicators would have been cultural as well as genomic. This channelling of both kinds of replicator within groups down the generations must have strengthened the feedback, the multi-timescale dynamical interplay, between cultures and genomes. And it was likely to have intensified the selective pressures exerted at group-versus-group level, for part of the time at least.

The importance or otherwise of group-level genomic selection would no doubt have varied over past millions of years, as the ground-dwelling groups continued to survive large predators and to compete for resources whilst becoming more and more socially sophisticated -- with ever-increasing reliance on proto-language whether vocal, or gestural, or both. In the most recent stages of evolution, with runaway brain evolution as Wills called it, and languages approaching today's complexity -- perhaps over the past several hundred millennia -- the art of warfare between large tribes might have made the within-group channelling of genomic information less effective than earlier through, for instance, the enslavement of enemy females, increasing cross-tribal gene flow. But then again, that's one-way flow into the prevailing tribe, providing extra genetic diversity and adaptive advantage if it penetrates across internal caste boundaries and slave boundaries.

Such group-wise or caste-wise-heterogeneous multi-timescale processes are dauntingly complex, and invisible to selfish-gene theory. But some of these complex processes are beginning to be captured in models that are far more sophisticated, reflecting several viewpoints of which the gene-centric view is only one (e.g. Schonmann et al. 2013). And new lines of enquiry are sharpening our understanding of the multifarious dynamical mechanisms and feedbacks involved (e.g. Noble 2006, Laland et al. 2011, Danchin and Pocheville 2014). Some of those mechanisms have been discussed for a decade or more under headings such as `evo-devo' (evolutionary developmental biology) and, more recently, the controversial `extended evolutionary synthesis', which includes new mechanisms of `epigenetic heritability' that operate outside the DNA sequences themselves -- all of which says that genetic memory and genetically-enabled automata are even more versatile and flexible than previously thought.

Some researchers today even question the use of the words `genetic' and `genetically' for this purpose, but I think the words remain useful as a pointer to the important contribution from the DNA-mediated information flow, alongside many other forms of information flow into future generations including those via language, culture and `niche construction'. And the idea of genetically-enabled automata seems to me so important -- not least as an antidote to the old genetic-blueprint idea -- that I propose to use it without further apology. It's crucial, though, to remember that the manner in which these automata do or do not assemble themselves is very circumstance-dependent.

That the human language ability involves genetically-enabled automata has been spectacularly confirmed, in an independent way, by recent events in Nicaragua of which we'll be reminded shortly. Of course the automata in question must be like those involved in developing the visual system, or in any other such biological self-assembly process, in that they're more accurately characterized as hierarchies and networks of automata linked to many genes and regulatory DNA segments and made of molecular-biological circuits and assemblies of such circuits -- far more complex than the ultra-simple automata used in mathematical studies of computation and artificial intelligence, and far more complex, and circumstance-dependent, than the hypothetical `genes', so called, of the old population-genetics models; see my notes on Danchin and Pocheville (2014).

And to say that there's an innate potential for language mediated by genetically-enabled automata is quite different from saying that language is innately `hard-wired' or `blueprinted'. As with the visual system, the building blocks are not the same thing as the assembled product -- assembled of course under the influence of a particular environment, physical and cultural. Recognition of this distinction between building blocks and assembled product might even, I dare hope, get us away from the silly quarrels about `all in the genes' versus `all down to culture'.

(Yes, language is in the genes and regulatory DNA and culturally constructed, where of course we must understand the construction as being largely unconscious, as great artists, great writers, and great scientists have always recognized -- consciously or unconsciously!  And there's no conflict with the many painstaking studies of comparative linguistics, showing the likely pathways and relatively short timescales for the cultural ancestry of today's languages. Particular linguistic patterns, such as Indo-European, are one thing, while the innate potential for language is another.)

But what about those multi-timescale aspects? How on Earth can genome, language and culture co-evolve, and interact dynamically, when their timescales are so very, very different? And above all, how can the latest cultural whim or flash in the pan influence so slow a process as genomic evolution? Isn't the comparison with gas pressure too simplistic?

Well, there are many other examples. Many have far greater complexity than gas pressure, even if well short of biological complexity. The ozone hole is one such example. One might equally well ask, how can the very fast and very slow processes involved in the ozone hole have any significant interplay? How can the seemingly random turbulence that makes us fasten our seat belts have any role in a stratospheric phenomenon on a spatial scale bigger than Antarctica, involving timescales out to a century or so?

Well -- as I was forced to recognize in my own research -- there is a significant and systematic interplay between atmospheric turbulence and the ozone hole. It's now well understood. Among other things it involves a sort of fluid-dynamical jigsaw puzzle made up of waves and turbulence. Despite differences of detail, and greater complexity, it's a bit like what happens in the surf zone near an ocean beach. There, tiny, fleeting eddies within the foamy turbulent wavecrests not only modify, but are also shaped by, the wave dynamics in an intimate interplay that, in turn, generates and interacts with mean currents, including rip currents, and with sand and sediment transport over far, far longer timescales.

The ozone hole is even more complex, and involves two very different kinds of turbulence. The first kind is the familiar small-scale, seat-belt-fastening turbulence, on timescales of seconds to minutes. The second is a much slower, larger-scale phenomenon involving a chaotic interplay between jetstreams, cyclones and anticyclones. Several kinds of waves are involved, including jetstream meanders. And interwoven with all that fluid-dynamical complexity we have regions with different chemical compositions, and an interplay between the transport of chemicals, on the one hand, and a large set of fast and slow chemical reactions on the other. The chemistry interacts with solar and terrestrial radiation, over a vast range of timescales from thousand-trillionths of a second as photons hit molecules out to days, weeks, months, years, and longer as chemicals are moved around by global-scale mean circulations. The key point about all this, though, is that what looks like a panoply of chaotic, flash-in-the-pan, fleeting and almost random processes on the shorter timescales has systematic mean effects over far, far longer timescales.

In a similar way, then, our latest cultural whims and catch-phrases may seem capricious, fleeting and sometimes almost random -- almost a `cultural turbulence' -- while nevertheless exerting long-term selective pressures that systematically favour the talents of gifted and versatile individuals who can grasp, exploit, build on, and reshape traditions and zeitgeists in what became the arts of communication, storytelling, imagery, politics, technology, music, dance, and comedy. The feeling that it's `all down to culture' surely reflects the near-impossibility of imagining the vast overall timespans, out to millions of years, over which the automata or building blocks that mediate language and culture must have evolved under those turbulent selective pressures -- all the way from rudimentary beginnings millions of years ago.

The gas-pressure, ocean-beach and ozone-hole examples are enough to remind us that multi-timescale coevolution is possible, with strong interactions across vastly disparate timescales. So for the coevolution of genome and culture over millions of years there's no need for accelerated rates of genomic evolution, as has sometimes been thought (e.g. Wills 1995, pp. 10-13; Segerstråle 2000, p. 40). And the existence of genetically-enabled automata for language itself has been spectacularly verified by some remarkable events in Nicaragua.

From beginnings in the late 1970s, Nicaragua saw the creation of a new Deaf community and an entirely new sign language, Nicaraguan Sign Language (NSL). Beyond superficial borrowings, NSL is considered by sign-language experts to be entirely distinct from any pre-existing sign language, such as American Sign Language or British Sign Language. It's clear moreover that NSL was created by, or emerged from, a community of schoolchildren with hardly any external linguistic input.

Deaf people had no communities in Nicaragua before the late 1970s, a time of drastic political change. It was then that dozens, then hundreds, of deaf children first came into social contact, through an expanding new educational programme that included schools for the deaf. Today, full NSL fluency at native-speaker level, or rather native-signer level, is found in just one group of people. They are those, and only those, who were young children during or after the late 1970s. That's a simple fact on the ground. It's therefore practically certain that NSL was somehow created by the children, and that NSL was nonexistent before the late 1970s.

Linguists quarrel over how to interpret this situation in part because the detailed evidence, as set out most thoroughly, perhaps, in Kegl et al. (2001), contradicts some well-entrenched ideas on how languages come into being. The feeling that it's `all down to culture' seems to be involved, with a single `human mother tongue' (e.g. Pagel 2012) having been invented as a purely cultural development and a by-product of increasing social intelligence.

If, however, we take the facts on the ground in Nicaragua and put them together with the improved understanding of natural selection already mentioned, including multi-level selection and the genome-culture feedback loop -- the complex, multi-timescale interplay between so-called nature and so-called nurture -- then we're forced to the conclusion that language acquisition and creation do require genetically-enabled automata among other things. Regardless of how we characterize the emergence of NSL, the evidence shows that the youngest children played a crucial role. And a key aspect must have been a child's unconscious urge to impose syntactic function and syntactic regularity on whatever language is being acquired or created. After all, it's common observation that a small child learning English will say things like `I keeped mouses in a box' rather than `I kept mice in a box'. It's the syntactic irregularities that need to be taught by older people, not the syntactic function itself.

This last point was made long ago by Noam Chomsky among others. But the way it fits in with natural selection was unclear at the time. We didn't then have today's insights into multi-level selection, multi-timescale genome-culture feedback, and genetically-enabled automata.

And as for the Nicaraguan evidence, the extensive account in Kegl et al. (2001) is a landmark. It describes careful and systematic studies using video and transcription techniques developed by sign-language experts. Those studies brought to light, for instance, what are called the pidgin and creole stages in the collective creation of NSL by, respectively, the older and the younger children, with full syntactic functionality arising at the creole stage only. Pinker (1994) gives an excellent popular account. More recent work illuminates how the repertoire of syntactic functions in NSL is being filled out, and increasingly standardized, by successive generations of young children (e.g. Senghas 2010).

And what of the changing climate with which our ancestors had to cope? During the past several hundred millennia -- culminating in Wills' runaway brain evolution -- the climate was far more variable than we're used to today. Figure 3 is a palaeoclimatic record giving a coarse-grain overview of climate variability going back 800 millennia. Time runs from right to left, and the upper graph shows temperature changes. Human recorded history occupies only a small sliver at the left-hand edge of Figure 3 extending about as far as the leftmost temperature maximum, a tiny peak to the left of the `H' of `Holocene'. The Holocene is the slightly longer period up to the present during which the climate was relatively warm.


Overview of the past 800 millennia, from Lüthi et al

Figure 3: Antarctic ice-core data from Lüthi et al. (2008) showing estimated temperature (upper graph) and measured atmospheric carbon dioxide (lower graph). Time, in millennia, runs from right to left up to the present day. The significance of the lower graph is discussed in the Postlude. The upper graph estimates air temperature changes over Antarctica, indicative of worldwide changes. The temperature changes are estimated from the amount of deuterium (hydrogen-2 isotope) in the ice, which is temperature-sensitive because of fractionation effects as water evaporates, transpires, precipitates, and redistributes itself between oceans, atmosphere, and ice sheets. The shaded bar corresponds to the relatively short time interval covered in Figure 4 below. The `MIS' numbers denote the `marine isotope stages' whose signatures are recognized in many deep-ocean mud cores, and `T' means `termination' or `major deglaciation'. The thin vertical line at around 70 millennia marks the time of the Lake Toba supervolcanic eruption.


The temperature changes in the upper graph of Figure 3 are estimated from a reliable record in Antarctic ice and are indicative of worldwide temperature changes. There are questions of detail and precise magnitudes, but little doubt as to the order of magnitude of the estimated changes. The changes were huge, especially during the past four hundred millennia, with peak-to-peak excursions of the order of ten degrees Celsius or more. A good cross-check is that the associated global mean sea-level excursions were also huge, up and down by well over a hundred metres, as temperatures went up and down and the great land-based ice sheets shrank and expanded. There are two clear and independent lines of evidence on sea levels, further discussed in the Postlude below. Also discussed there is the significance of the lower graph, which shows concentrations of carbon dioxide in the atmosphere. Carbon dioxide as a gas is extremely stable chemically, allowing it to be reliably measured from the air trapped in Antarctic ice. The extremes of cold, of warmth, and of sea levels mark what are called the geologically recent glacial-interglacial cycles.

When we zoom in to much shorter timescales, we see that some climate changes were not only severe but also abrupt, over time intervals comparable to, or even shorter than, an individual human lifetime. We know this thanks to patient and meticulous work on the records in ice cores and oceanic mud cores and in many other palaeoclimatic records (e.g. Alley 2000, 2007). The sheer skill and hard labour of fine-sampling, assaying, and carefully decoding such material to increase the time resolution, and to cross-check the interpretation, is a remarkable story of high scientific endeavour.

Not only were there occasional nuclear-winter-like events from volcanic eruptions, including the Lake Toba supervolcanic eruption around 70 millennia ago (thin vertical line in Figure 3) -- a far more massive eruption than any in recorded history -- but there was large-amplitude internal variability within the climate system itself. Even without volcanoes the system has so-called chaotic dynamics, with scope for rapid changes in, for instance, sea-ice cover and in the meanderings of the great atmospheric jetstreams and their oceanic cousins, such as the Gulf Stream and the Kuroshio and Agulhas currents.

This chaotic variability sometimes produced sudden and drastic climate change over time intervals as small as a few years or even less -- practically instantaneous by geological and palaeoclimatic standards. Such events are called `tipping points' of the chaotic dynamics. Much of the drastic variability now known -- in its finest detail for the last fifty millennia or so -- takes the form of complex and irregular `Dansgaard-Oeschger cycles' involving a large range of timescales from millennia down to tipping-point timescales and strongly affecting much of the northern hemisphere.

Figure 4 expands the time interval marked by the shaded bar near the left-hand edge of Figure 3. Note again that time runs from right to left. The graph is a record from Greenland ice with enough time resolution to show details for some of the Dansgaard-Oeschger cycles, those conventionally numbered from 3 to 10. The cycles have amplitudes much greater in the northern hemisphere than in the southern. The graph estimates air temperature changes over Greenland (see caption). The thin vertical lines mark the times of major warming events, which by convention define the end of one cycle and the start of the next. Those warmings were huge, typically of the order of ten degrees Celsius, in round numbers, as well as very abrupt. Indeed they were far more abrupt than the graph can show. They took a few years or less, in some cases (Dokken et al. 2013, & refs.); see also Alley (2000).


Dansgaard-Oeschger events 3-10, from Dokken et al          

Figure 4: Greenland ice-core data from Dokken et al. (2013), for the time interval corresponding to the shaded bar in Figure 3. Time in millennia runs from right to left. The graph shows variations in the amount of the oxygen-18 isotope in the ice, from which temperature changes can be estimated in the same way as in Figure 3. The abrupt warmings marked by the thin vertical lines are mostly of the order of 10°C or more. The thicker vertical lines show timing checks from layers of tephra or volcanic debris. The shaded areas refer to geomagnetic excursions.


Between the major warming events we see an incessant variability at more modest amplitudes -- more like a few degrees Celsius -- nevertheless more than enough to have affected our ancestors' food supplies and living conditions. It seems that the legendary years of famine and years of plenty in human storytelling had an origin far more ancient than recorded history. The incessant climatological boom-and-bust, which seems to have been typical of the past few hundred millennia at least, must have produced intense and prolonged selective pressures on our ancestors favouring social skills, adaptability, versatility, group solidarity, and the ability to migrate as a group and to fight competing groups.

And it was indeed the past few hundred millennia that saw the most spectacular human brain-size expansion in the fossil record (e.g. Dunbar 2003, Figure 4). And migration out of Africa, much of it around 70 millennia ago, it seems -- followed by countless hours by the fireside in the harsh winters -- must have furthered not only the crafting of durables, from cave paintings to beads to flutes to powerful weapons, but also the social and cultural skills associated with ever more elaborate rituals, belief systems, songs, and stories passed from generation to generation.

And what stories they must have been!  Great sagas etched into a tribe's collective memory. It can hardly be accidental that the sagas known today tell of battles, of epic journeys, of great floods, and of terrifying deities that are both fickle benefactors and devouring monsters -- just as the surrounding large predators must have appeared to our still-more-remote ancestors, as they scavenged for meat before becoming hunters themselves (e.g. Ehrenreich 1997).

And, to survive, our ancestors must have had strong leaders and willing followers. The stronger and the more willing, the better the chance of surviving hardship, migration, and warfare. Hypercredulity and weak logic-checking must have become central to all this. They must have been strongly selected for as language became more and more sophisticated.

How do you make leadership work? Do you make a reasoned case? Do you ask your followers to check your logic? Do you check it yourself? Of course not! You're a leader because, with your people starving or faced with a hostile tribe, or both, you've emerged as a charismatic visionary driven by deeply-held convictions. You're inspired. The gods have spoken to you. You know best. You have the Answer to Everything, and you say so. Your people believe you. `O my people, I have seen the True Path that we must follow. Come with me! Beyond those mountains, over that distant horizon, that's where we'll win through and find our Promised Land. It is our destiny to find that Land and overcome all enemies because we, and only we, are the True Believers.'  How else, in the incessantly-fluctuating climate, I ask again, did our one species -- our single human genome -- spread all around the globe within the past hundred millennia?

And what of dichotomization? It's far more ancient of course, as well as deeply instinctive. Ever since the Cambrian, half a billion years ago, individual lives have teetered on the brink of fight or flight, edible or inedible, male or female, friend or foe. But with language and hypercredulity in place, dichotomization can take the new forms we see today. Not just friend or foe and with us or against us but also We are right and they are wrong.  It's the absolute truth of our tribe's belief system versus the absolute falsehood of theirs.

And in case you're tempted to dismiss all this as a mere `just so story' -- speculation unsupported by evidence -- let me call your attention to the wealth of supporting evidence and careful thinking, and modest but careful mathematical modelling, summarized in the book by D.S. Wilson (2015). In particular, his chapters 3, 6 and 7 make a strong case not only for multi-level selection but also for the adaptive power of traits such as those I've been calling hypercredulity and dichotomization. In particular, he details their conspicuous role in today's fundamentalisms -- religious and atheist alike -- including the atheist form of free-market fundamentalism whose best-known prophet was its Joan of Arc, the legendary Ayn Rand. Understanding free-market fundamentalism is important because of its profound influence on the rules governing our unstable economies. Individual profit is the supreme goal and the supreme moral imperative. (Short-term profit especially; and `short term' now means fractions of a second, in computer-mediated trading.)

As Wilson demonstrates from detailed case studies, both religious and atheist, the characteristic dichotomization is `Our ideas good, their ideas bad, for everyone without exception'.  Yet again, it's the answer-to-everything mindset, inhibiting deeper understanding. For instance Rand seemed to claim what Adam Smith did not, that selfishness is absolutely good and altruism absolutely bad, for absolutely everyone -- or, rather, I suspect, for everyone that matters, everyone in my tribe, every true believer who thereby deserves to survive and prosper. It seems that some well-intentioned believers such as Rand's disciple Alan Greenspan were devastated when the 2008 financial crash took them by surprise, shortly after Greenspan's long reign at the US Federal Reserve. By a supreme irony Rand's credo also says, or takes for granted, that `We are rational and they are irrational.' In other words any loosening of thinking, any alternative viewpoint, any hint of pluralism is `irrational' and is to be dismissed out of hand.

Of course many other traits must have been selected for, underpinning our species' extraordinary social sophistication and tribal organization as discussed in, for instance, Pagel (2012). Recent advances in palaeoarchaeology have added much detail to the story of the past hundred millennia, based on evidence that includes the size and structure of our ancestors' campsites. Some of the evidence now points to inter-group trading as early as 70 millennia ago, perhaps accelerated by climate stress from the Toba super-eruption around then (e.g. Rossano 2009) -- suggesting not only warfare between groups but also wheeling and dealing, all of it demanding high levels of social sophistication and organization, and versatility.

Today we must live with the genetic inheritance from all this. In our overcrowded world, awash with powerful military, cyberspatial, and financial weaponry, the dangers are self-evident. And yet there's hope, despite what a human-nature cynic might say. Thanks to our improved understanding of natural selection, we now understand much better how genetic memory works. Genetic memory and `human nature' are not nearly as rigid, not nearly as `hard-wired', as many people think. Wilson (2015) points out that this improved understanding suggests new ways of coping.

For instance practical belief systems have been rediscovered that can avert the `tragedy of the commons', the classic devastation of resources that comes from unrestrained selfishness. That tragedy, as ancient as life itself (Werfel et al. 2015), now threatens our entire planet. And the push toward it by further prioritizing individual profit, and further deregulating the world of finance, is increasingly recognized as -- well -- insane. Not the true path, after all. Not the supreme moral imperative. There are signs, now, of saner and more stable compromises, or rather symbioses, between regulation and market forces, more like Adam Smith's original idea (e.g. Tribe 2008). The ozone hole has given us an inspiring example.

And the idea of genetically-enabled automata or self-assembling building blocks becomes more important than ever, displacing the older, narrower ideas of genetic blueprint, innate hard wiring, selfish genes, and rigid biological determinism. The epigenetic flexibility allowed by the automata is now seen as a significant aspect of biological evolution, not least that of our ancestors.

Yes, it's clear that our genetic memory can spawn powerful automata for hypercredulity, hypercompetition, greed, genocide and the rest. Yet, as history shows, cultural change can mean that those automata don't always have to assemble themselves in the same way. Everyone needs some kind of faith or hope; but personal beliefs don't have to be fundamentalist. And genocide isn't hard-wired. It can be outsmarted and avoided. It has been avoided on some occasions. Compassion and generosity can come into play, transcending reciprocal altruism. There are such things as love and redemption, and the sacramental. They too have their automata, deep within our unconscious being -- our epigenetic being. They too are part of our human nature and its potential.

As the philosopher Roger Scruton has written, `love, treated as a summons to sacrifice, becomes a sacred and redeeming force'. It's a force strongly felt in some of the great epics, such as the story of Parsifal. And of course it has its dark side, within the more dangerous fundamentalisms. But I think Scruton has a serious point, difficult though it may be to articulate within today's secular cultures. So does the composer Michael Tippett when he paraphrases one of Carl Gustav Jung's great insights: `I would know my shadow and my light; so shall I at last be whole.'  Tippett set those words to some of the most achingly beautiful music ever written.


Chapter 3: Acausality illusions, and the way perception works

Picture a typical domestic scene. `You interrupted me!'  `No, you interrupted me!'

Such stalemates can arise from the fact that perceived timings differ from actual timings in the outside world. I once tested this experimentally by secretly tape-recording a dinner-table conversation. At one point I was quite sure that my wife had interrupted me, and she was equally sure it had been the other way round. When I listened afterwards to the tape, I discovered to my chagrin that she was right. She had started to speak just before I did.

Musical training includes learning to cope with the discrepancies between perceived timings and actual timings. For example, musicians often check themselves with a metronome, a small machine that emits precisely regular clicks. The final performance won't necessarily be metronomic, but practising with a metronome helps to remove inadvertent errors in the fine control of rhythm. `It don't mean a thing if it ain't got that swing...'

There are many other examples. I once heard a radio interviewee recalling how he'd suddenly got into a gunfight:  `It all went intuh slowww... motion.'

(A scientist who claims to know that eternal life is impossible has failed to notice that perceived timespans at death might stretch to infinity. That, by the way, is a simple example of the limitations of science. What might or might not happen to perceived time at death is a question outside the scope of science, because it's outside the scope of experiment and observation. It's here that ancient religious teachings show more wisdom, when they say that deathbed compassion and reconciliation are important to us. Perhaps I should add that I'm not myself conventionally religious. I'm an agnostic whose closest approach to the numinous -- to things transcendental, to the divine if you will -- has been through music.)

Some properties of perceived time are very counterintuitive indeed. They've caused much conceptual and philosophical confusion. For instance, the perceived times of outside-world events can precede the arrival of the sensory data defining those events, sometimes by as much as several hundred milliseconds. At first sight this seems crazy, and in conflict with the laws of physics. Those laws include the principle that cause precedes effect. But the causality principle refers to time in the outside world, not to perceived time. The apparent conflict is a perceptual illusion. I'll refer to such phenomena as `acausality illusions'.

The existence of acausality illusions -- of which music provides outstandingly clear examples, as we'll see shortly -- is a built-in consequence of the way perception works. And the way perception works is well illustrated by the `walking lights'.

Consider for a moment what the walking lights tell us. The sensory data are twelve moving dots in a two-dimensional plane. But they're seen by anyone with normal vision as a person walking -- a particular three-dimensional motion exhibiting organic change.  (The invariant elements include the number of dots, and the distances, in three-dimensional space, between particular pairs of locations corresponding to particular pairs of dots.)  There's no way to make sense of this except to say that the unconscious brain fits to the data an organically-changing internal model that represents the three-dimensional motion, using an unconscious knowledge of Euclidean geometry.

This by the way is what Kahneman (2011) calls a `fast' process, something that happens ahead of conscious thought, and outside our volition. Despite knowing that it's only twelve moving dots, we have no choice but to see a person walking.

Such model-fitting has long been recognized by psychologists as an active process involving unconscious prior probabilities, and therefore top-down as well as bottom-up processes (e.g. Gregory 1970, Hoffman 1998, Ramachandran and Blakeslee 1998). For the walking lights the greatest prior probabilities are assigned to a particular class of three-dimensional motions, privileging them over other ways of creating the same two-dimensional dot motion. The active, top-down aspects show up in neurophysiological studies as well (e.g. Gilbert and Li 2013).
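
The role of those prior probabilities can be illustrated with a deliberately crude Bayesian toy in Python -- not a model of the visual system, just the bare arithmetic of privileging one interpretation over another, with all numbers invented. Two candidate internal models explain the same dot motion equally well, but one is assigned a far higher prior, and the posterior simply follows the prior.

    # Two candidate interpretations of the same two-dimensional dot motion,
    # fitting the data equally well (equal likelihoods).  The unconscious
    # prior does the privileging.  Numbers are invented for illustration.

    prior = {"walking person in 3-D": 0.99, "coincidental 2-D dot motion": 0.01}
    likelihood = {"walking person in 3-D": 0.9, "coincidental 2-D dot motion": 0.9}

    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

    for h, p in posterior.items():
        print(f"{h}: posterior probability {p:.3f}")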

The term pattern-seeking is sometimes used to suggest the active nature of the unconscious model-fitting process. For the walking lights the significant pattern is four-dimensional, involving as it does the time dimension as well as all three space dimensions. Without the animation, one tends to see no more than a bunch of dots. So active is our unconscious pattern-seeking that we are prone to what psychologists call pareidolia, seeing patterns in random images.

And what is a `model'? In the sense I'm using the word, it's a partial and approximate representation of reality, or presumed reality. As the famous aphorism says, `All models are wrong, but some are useful'. Models are made in a variety of ways.

The internal model evoked by the walking lights is made by activating some neural circuitry. The objects appearing in video games and virtual-reality simulations are models made of electronic circuitry and computer code. Children's model boats and houses are made of real materials but are, indeed, models as well as real objects -- partial and approximate representations of real boats and houses. Population-genetics models are made of mathematical equations, and computer code usually. So too are models of photons, of black holes, of lightspeed spacetime ripples, and of jetstreams and the ozone hole. Any of these models can be more or less accurate, and more or less detailed. But they're all partial and approximate.

So ordinary perception, in particular, works by model-fitting. Paradoxical and counterintuitive though it may seem, the thing we perceive  is  -- and can only be -- the unconsciously-fitted internal model. And the model has to be partial and approximate because our neural processing power is finite. The whole thing is counterintuitive because it goes against our visual experience of outside-world reality -- as not just self-evidently external, but also as direct, clearcut, unambiguous, and seemingly exact in many cases. Indeed, that experience is sometimes called `veridical' perception, as if it were perfectly accurate. One often has an impression of sharply-outlined exactness -- for instance with such things as the delicate shape of a bee's wing or flower petal, the precise geometrical curve of a hanging dewdrop, the sharp edge of the sea on a clear day and the magnificence, the sharply-defined jaggedness, of snowy mountain peaks against a clear blue sky.

(Right now I'm using the word `reality' to mean the outside world. Also, I'm assuming that the outside world exists. I'm making that assumption consciously as well as, of course, unconsciously. Notice by the way that `reality' is another dangerously ambiguous word. It's another source of conceptual and philosophical confusion. To start with, the thing we perceive is often called `the perceived reality', whether it's a mountain peak, a person walking, a charging rhinoceros or a car on a collision course or anything else. Straight away we blur the distinction drawn long ago by Kant, Plato and other great thinkers -- the distinction between the thing we perceive and the thing-in-itself in the outside world. And is music real? Is mathematics real? Is our sense of `self' real? Is religious experience real? Are love and redemption real? I'll return to those issues in chapters 5 and 7.)

The walking lights remind us that the unconscious model-fitting takes place in time as well as in space. Perceived times are -- and can only be -- internal model properties. And they must make allowance for the brain's finite information-processing rates. That's why, in particular, the existence of acausality illusions is to be expected, as I'll now explain.

In order for the brain to produce a conscious percept from visual or auditory data, many stages and levels of processing are involved -- top-down as well as bottom-up. The overall timespans of such processing are well known from experiments using high-speed electrical and magnetic recording such as electroencephalography and magnetoencephalography, to detect episodes of brain activity. Timespans are typically of the order of hundreds of milliseconds. Yet, just as with visual perception, the perceived times of outside-world events have the same `veridical' character of being clearcut, unambiguous, and seemingly exact, like the time pips on the radio. It's clear at least that perceived times are often far more accurate than hundreds of milliseconds.

That accuracy is a consequence of biological evolution. In hunting and survival situations, eye-hand-body coordination needs to be as accurate as natural selection can make it. Perceived times need not -- and do not -- await completion of the brain activity that mediates their perception. Our ancestors survived. We've inherited their timing abilities. World-class tennis players time their strokes to a few milliseconds or thereabouts. World-class musicians work to similar accuracies, in the fine control of rhythm and in the most precise ensemble playing. It's more than being metronomic; it's being `on the crest of the rhythm'.

You don't need to be a musician or sportsperson to appreciate the point I'm making. If you and I each tap a plate with a spoon or chopstick, we can easily synchronize a regular rhythm with each other, or synchronize with a metronome, to accuracies far, far better than hundreds of milliseconds. Accuracies more like tens of milliseconds can be achieved without much difficulty. So it's plain that perceived times -- internal model properties -- are one thing, while the timings of associated brain-activity events, spread over hundreds of milliseconds, are another thing altogether.

This simple point has been missed again and again in the philosophical and cognitive-sciences literature. In particular, it has caused endless confusion in the debates about `free will' and `consciousness'. I'll return to those points in chapter 7, where I'll describe the experiments of Grey Walter and Benjamin Libet. In brief, the confusion seems to stem from an unconscious assumption -- which I hope I've shown to be nonsensical -- an assumption that the perceived `when' of hitting a ball or taking a decision should be synchronous with the `when' of some particular brain-activity event.

As soon as that nonsense is blown away, it becomes clear that acausality illusions should occur. And they do occur. The simplest and clearest examples come from music, `the art that is made out of time' as Ursula le Guin once put it. Let's suppose that we refrain from dancing to the music, and that we keep our eyes closed. Then, when we simply listen, the data to which our musical internal models are fitted are the auditory data alone.

I'll focus on Western music. Nearly everyone with normal hearing is familiar, at least unconsciously, with the way Western music works. The unconscious familiarity goes back to infancy or even earlier. Regardless of genre, whether it be commercial jingles, or jazz or folk or pop or classical or whatever -- and, by the way, the classical genre includes most film music, for instance Star Wars -- the music depends among other things on precisely timed events called harmony changes. That's why children learn guitar chords. That's how the Star Wars music suddenly goes spooky, after the heroic opening.

The musical internal model being fitted to the incoming auditory data keeps track of the times of musical events, including harmony changes. And those times are -- can only be -- perceived times, that is, internal model properties.

Figure 5 shows one of the clearest examples I can find. Playback is available from a link in the figure caption. It's from a well known classical piano piece that's simple, slow, and serene, rather than warlike. There are five harmony changes, the third of which is perceived to occur midway through the example, at the time shown by the arrow. Yet if you stop the playback just after that time, say a quarter of a second after, you don't hear any harmony change. You can't, because that harmony change depends entirely on the next two notes, which come a third and two-thirds of a second after the time of the arrow. So in normal playback the perceived time of the harmony change, at the time of the arrow, precedes by several hundred milliseconds the arrival of the auditory data defining the change.


From Mozart's piano sonata K 545

Figure 5: Opening of the slow movement of the piano sonata K 545 by Wolfgang Amadeus Mozart. Here's an audio clip giving playback at the speed indicated. Here's the same with orchestral accompaniment. (Mozart would have done it more subtly -- with only one flute, I suspect -- but that's not the point here.)


That's a clear example of an acausality illusion. It's essential to the way the music works. Almost like the `veridical' perception of a sharp edge, the harmony change has the subjective force of perceived reality -- the perceived `reality' of what `happens' at the time of the arrow.

When I present this example in a lecture, it's sometimes put to me that the perceived harmony change relies on the listener being familiar with the particular piece of music. Having been written by Mozart, the piece is indeed familiar to many classical music lovers. My reply is to present a variant that's unfamiliar, with a new harmony change. It starts diverging from Mozart's original just after the time of the arrow (Figure 6):


Variant on Mozart's piano sonata K 545

Figure 6: This version is the same as Mozart's until the second note after the arrow. Here's the playback. Here's the same with orchestral accompaniment.


As before, the harmony change depends entirely on the next two notes but, as before, the perceived time of the harmony change -- the new and unfamiliar harmony change -- is at, not after, the time of the arrow. The point is underlined by the way any competent composer or arranger would add an orchestral accompaniment, to either example -- an accompaniment of the usual kind found in classical piano concertos. Listen to the second clip in each figure caption. The accompaniments change harmony at, not after, the time of the arrow.

I discussed those examples in greater detail in Part II of Lucidity and Science, with attention to some subtleties in how the two harmony changes work and with reference to the philosophical literature, including Dennett's `multiple-drafts' theory of consciousness, which is a way of thinking about perceptual model-fitting in the time dimension.

Just how the brain manages its model-fitting processes is still largely unknown, even though the cleverness, complexity and versatility of these processes can be appreciated from a huge range of examples. Interactions between many brain regions are involved and, in many cases, more than one sensory data stream.

An example is the McGurk effect in speech perception. Visual data from lip-reading can cause changes in the perceived sounds of phonemes. For instance the sound `baa' is often perceived as `daa' when watching someone say `gaa'. The phoneme model is being fitted multi-modally -- simultaneously to more than one sensory data stream, in this case visual and auditory. The brain often takes `daa' as the best fit to the slightly-conflicting data.

The Ramachandran-Hirstein `phantom nose illusion' -- which can be demonstrated without special equipment -- produces a striking distortion of one's perceived body image, a nose elongation well beyond Pinocchio's or Cyrano de Bergerac's (Ramachandran and Blakeslee 1998, p. 59). It's produced by a simple manipulation of tactile and proprioceptive data. They're the data feeding into the internal model that mediates the body image, including the proprioceptive data from receptors such as muscle spindles sensing limb positions.

What's this so-called body image? Well, the brain's unconscious internal models must include a self-model -- a partial and approximate representation of one's self, and one's body, in one's surroundings. Plainly one needs a self-model, if only to be well oriented in one's surroundings and to distinguish oneself from others. `Hey -- you're treading on my toe.'

There's been philosophical confusion on this point, too. Such a self-model must be possessed by any animal. Without it, neither a leopard nor its prey would have a chance of surviving. Nor would a bird, or a bee, or a fish. Any animal needs to be well-oriented in its surroundings, and to be able to distinguish itself from others. Yet the biological, philosophical, and cognitive-science literature sometimes conflates `having a self-model', on the one hand, with `being conscious' on the other.

Compounding the confusion is another misconception, the `archaeological fallacy' that  symbolic representation  came into being only recently, at the start of the Upper Palaeolithic with its beads, bracelets, flutes, and cave paintings, completely missing the point that leopards and their prey can perceive things and therefore need internal models. So do birds, bees, and fish. Their internal models, like ours, are -- can only be -- unconscious symbolic representations. Patterns of neural activity are symbols. Again, symbolic representation is one thing, and consciousness is another. Symbolic representation is far more ancient -- by hundreds of millions of years -- than is commonly supposed.

And what about the brain's two hemispheres? Here I must defer to McGilchrist (2009) and to Ramachandran and Blakeslee (1998), who in their different ways offer a rich depth of understanding coming from neuroscience and neuropsychiatry, far transcending the superficialities of popular culture. For present purposes, the key point is that having two hemispheres is evolutionarily ancient. Even fish have them. The two hemispheres may have originated from the bilaterality of primitive vertebrates but then evolved in different directions. If so, it would be a good example of how a neutral genomic development can later become adaptive.

A good reason to expect such bilateral differentiation, McGilchrist argues, is that survival is helped by having two styles of perception. They might be called holistic on the one hand, and detailed, focused, analytic, and fragmented on the other. The evidence shows that the first, holistic style is a speciality of the right hemisphere, and the second a speciality of the left, or vice versa in a minority of people.

If you're a pigeon who spots some small objects lying on the ground, then you want to focus attention on them because you want to know whether they are, for instance, grains of sand or seeds that are good to eat. That's the left hemisphere's job. It has a style of model-fitting, and a repertoire of models, that's suited to a fragmented, dissected view of the environment, picking out a few chosen details while ignoring the vast majority of others. The left hemisphere can't see the wood for the trees. Or, more accurately, it can't even see a single tree but only, at best, leaves, twigs or buds (which, by the way, might be good to eat). One can begin to see why the left hemisphere is more prone to mindsets.

But suppose that you, the pigeon, are busy sorting out seeds from sand grains and that there's a peculiar flicker in your peripheral vision. Suddenly there's a feeling that something is amiss. You glance upward just in time to see a bird of prey descending and you abandon your seeds in a flash! That kind of perception is the right hemisphere's job. The right hemisphere has a very different repertoire of internal models, holistic rather than dissected. They're often fuzzier and vaguer, but with a surer sense of overall spatial relations, such as your body in its surroundings. They're capable of superfast deployment. The fuzziness, ignoring fine detail, makes for speed when coping with the unexpected.

Ramachandran and Blakeslee point out that another of the right hemisphere's jobs is to be good at detecting gross inconsistencies between incoming data and the left hemisphere's currently-active internal model. When the data contradict the model, the left hemisphere has a tendency to reject the data and cling to the model -- to be trapped in a mindset. `Don't distract me; I'm trying to concentrate!' Brain scans show a small part of the right hemisphere that detects such inconsistencies or discrepancies. If the discrepancy is acute, the right hemisphere bursts in with `Look out, you're making a mistake!' If the right hemisphere's discrepancy detector is damaged, severe mindsets such as anosognosia can result.

McGilchrist points out that our right hemispheres (in most people) are involved in many subtle and sophisticated games, such as playing with the metaphors that permeate language or, one might even say, that mediate language.

And what about combinatorial largeness? Perhaps the point is obvious. For instance there's a combinatorially large number of possible visual scenes, and of possible assemblies of internal models to fit them. Even so simple a thing as a chain with 10 different links can be assembled in 3,628,800 different ways and, with 100 different links, approximately 10^158 different ways. Neither we nor any other organism can afford to be conscious of all the possibilities. Phenomena such as early-stage edge detection (e.g. Hofmann 1998) and the unconscious perceptual grouping studied by the Gestalt psychologists (e.g. Gregory 1970) give us glimpses of how the vast combinatorial tree of possibilities is pruned by our extraordinary model-fitting apparatus, ahead of conscious thought.
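For readers who like to check such numbers, here is a minimal sketch of the arithmetic in the Python language (my own illustration, not part of the argument): the number of orderings of n distinct links is n factorial, written n!.

    import math

    # Orderings of a chain with n distinct links: n! (n factorial).
    print(math.factorial(10))                      # 3628800, i.e. 3,628,800
    print(round(math.log10(math.factorial(100))))  # 158: so 100! is roughly 10^158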

And what about science itself? What about all those mathematical and computer-coded models of population genetics and of photons, of molecules, of black holes, of lightspeed spacetime ripples, of jetstreams and the ozone hole, and of the other entities we deal with in science? Could it be that science itself is always about finding useful models that fit data from the outside world, and never about finding Veridical Absolute Truth? Can science be a quest for truth even if the truth is never Absolute?

The next chapter will argue that the answer to both questions is clearly yes. One of the key points will be that, even if one were to find a candidate `Theory of Everything', one could never test it at infinite accuracy, in an infinite number of cases, and in all parts of the Universe or Universes. One might achieve superlative scientific confidence, with many accurate cross-checks, within a very wide domain of applicability. And that would be wonderful. But in principle there'd be no way to be Absolutely Certain that it's Absolutely Correct, Absolutely Accurate, and Applicable to Everything. That's kind of obvious, isn't it?


Chapter 4: What is science?

So I'd like to replace all those books on the philosophy of science by one simple, yet profound and far-reaching, statement. It not only says what science is, in the most fundamental possible way, but it also clarifies the power and limitations of science. It says that science is an extension of ordinary perception, meaning perception of outside-world reality. Like ordinary perception, science fits models to data.

If that sounds glib and superficial to you, dear reader, then all I ask is that you think again about the sheer wonder of so-called ordinary perception. It too has its power and its limitations, and its fathomless subtleties, agonized over by generations of philosophers. Both science and ordinary perception work by fitting models -- symbolic representations -- to data from the outside world. Both science and ordinary perception must assume that the outside world exists, because its existence can't be proven absolutely. Models, and collections and hierarchies of models -- schemas or schemata as they're sometimes called -- are partial and approximate representations, or potential representations, of outside-world reality. Those representations can be anything from superlatively accurate to completely erroneous.

Notice that the walking-lights animation points to the tip of a vast iceberg, a hierarchy of unconscious internal models starting with the three-dimensional motion itself, but extending all the way to the precise manner of walking and the associated psychological and emotional subtleties. The main difference between science and so-called ordinary perception is that, in science, the set of available models is different and the model-fitting process to some extent more conscious, as well as being far slower, and dependent on vastly extended data acquisition, computation, and cross-checking.

And yes, all our modes of observation of the outside world are theory-laden, or prior-probability-laden. That's a necessary aspect of the model-fitting process. But that doesn't mean that `science is mere opinion' as some postmodernists say. Some models fit much better than others. And some are a priori more plausible than others, with more cross-checks to boost their prior probabilities. And some are simpler and more widely applicable than others, for example Newton's and Einstein's theories of gravity. These are both, of course, partial and approximate representations of reality even though superlatively accurate, and repeatedly cross-checked, in countless ways, within their very wide domains of applicability -- Einstein's still wider than Newton's because it includes, for instance, the orbital decay and merging of a pair of black holes and the resulting spacetime ripples, or gravitational waves, which were first observed on 14 September 2015 (Abbott et al. 2016) and which provided yet another cross-check on Einstein's theory and opened a new window on the Universe. And both theories are simple and mathematically beautiful.
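To make `prior-probability-laden' a little more concrete -- this is the standard probabilistic formulation, nothing special to this book -- one can write Bayes' theorem schematically as

    P(\text{model} \mid \text{data}) \;\propto\; P(\text{data} \mid \text{model}) \, P(\text{model}) ,

where the first factor on the right measures goodness of fit (how well the model accounts for the data) and the second is the prior probability of the model -- the thing that independent cross-checks push up or down.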

Notice that all this has to do with cross-checking, data quality, goodness of fit, and beauty and economy of modelling, never with Absolute Truth and Absolute Proof, nor even with uniqueness of model choice. Currently Einstein's theory has no serious competitors in its domain of applicability, but in general the choice of model needn't be unique. There might be two or more alternative models that work equally well. They might have comparable simplicity and accuracy and offer complementary, and equally powerful, insights into outside-world reality.

The possibility of non-uniqueness is troublesome for believers in Absolute Truth, and is much agonized over in the philosophy-of-science literature. However, as I keep saying, even the existence of the outside world can't be proven absolutely. It has to be assumed. Both science and ordinary perception proceed on that assumption. The justification is no more and no less than our experience that the model-fitting process works, again and again -- never perfectly, but often well enough to gain our respect.

If you observe a rhinoceros charging toward you, then it's probably a good idea to jump out of the way even though your observations are, unconsciously, theory-laden and even though there's no absolute proof that the rhinoceros exists. And the spacetime ripples gain our respect not only for the technical triumph of observing them but also because the merging black holes emit a very specific wave pattern, closely matching the details of what's computed from Einstein's equations when the black holes have particular masses and spins.

So beauty and economy of modelling can be wonderful and inspirational, especially at the most basic levels of physics. Yet the same cautions apply. Indeed, Unger and Smolin (2015) argue that the current crisis in physics and cosmology has its roots in a tendency to conflate outside-world reality with mathematical models of it. The mathematical models tend to be viewed as the same thing as the outside-world reality. Jaynes (2003) calls this conflation the `mind projection fallacy'. (The late Edwin T. Jaynes was one of the great thinkers about model-fitting, prior probabilities, and statistical inference, where the fallacy has long been an impediment to understanding. Probability distribution functions are model components, not things in the outside world.) The mind projection fallacy seems to be bound up with the hypercredulity instinct. In physics and cosmology, it generates a transcendental vision of Absolute Truth in which the entire Universe is seen as a single mathematical object of supreme beauty, a Theory of Everything -- an Answer to Everything -- residing within that Ultimate Reality, the Platonic world of perfect forms. Alleluia!

Because the model-fitting works better in some cases than in others, there are always considerations of just how well it is working, involving a balance of probabilities. We must consider how many independent cross-checks have been done and to what accuracies. For Einstein's equations, the spacetime ripples from merging black holes provide a new set of independent cross-checks, adding to the half a dozen or so earlier kinds of cross-check that include an astonishingly accurate one from the orbital decay of a binary pulsar -- accurate to about 14 significant figures, or one part in a hundred million million.

If you can both hear and see the charging rhinoceros and if your feet feel the ground shaking in synchrony, then you have some independent cross-checks. You're checking a single internal model, unconsciously of course, against three independent sensory data streams. Even a postmodernist might jump out of the way. With so much cross-checking, it's a good idea to accept the perceived reality as a practical certainty. We do it all the time. Think what's involved in riding a bicycle, or in playing tennis, or in pouring a glass of wine. But the perceived reality is still the internal model within your unconscious brain, paradoxical though that may seem. And, again, the outside world is something whose existence must be assumed.

One reason I keep on about these issues is the quagmire of philosophical confusion that has long surrounded them (e.g. Smythies 2009). The Vienna Circle thought that there were such things as direct, or absolute, or veridical, observations -- sharply distinct from theories or models. That's what I called the `veridical perception fallacy' in Part II of Lucidity and Science. Others have argued that all mental constructs are illusions. Yet others have argued that the entire outside world is an illusion, subjective experience being the only reality. But none of this helps! Like the obsession with absolute proof and absolute truth, it just gets us into a muddle, often revolving around the ambiguity of the words `real' and `reality'.

Journalists, too, often seem hung up on the idea of absolute proof, unconsciously at least. They often press us to say whether something is scientifically `proven' or not. But as Karl Popper emphasized long ago, that's a false dichotomy and an unattainable mirage. I have a dream that professional codes of conduct for scientists will clearly say that, especially in public, we should talk instead about the balance of probabilities and the degree of scientific confidence. Many scientists do that already, but others still talk about the truth, as if it were absolute (cf. Segerstråle 2000).

Let me come clean. I admit to having had my own epiphanies, my eurekas and alleluias, from time to time. But as a scientist I wouldn't exhibit them in public, at least not as absolute truths. They should be for consenting adults in private -- an emotional resource to power our research efforts -- not something for professional scientists to air in public. I think most of my colleagues would agree. We don't want to be lumped with all those cranks and zealots who believe, in Max Born's words, `in a single truth and in being the possessor thereof'. And again, even if a candidate Theory of Everything, so called, were to be discovered one day, the most that science could ever say is that it fits a large but finite dataset to within a small but finite experimental error. Unger and Smolin (2015) are exceptionally clear on this point and on its implications for cosmology.

Consider again the walking-lights animation. Instinctively, we feel sure that we're looking at a person walking. `Hey, that's a person walking. What could be more obvious?' Yet the animation might not come from a person walking at all. The prior probabilities, the unconscious choice of model, might be wrong. The twelve moving dots might have been produced in some other way -- such as luminous pixels on a screen! The dots might `really' be moving in a two-dimensional plane, or three-dimensionally in any number of ways. Even our charging rhinoceros might, just might, be a hallucination. As professional scientists we always have to consider the balance of probabilities, trying to get as many cross-checks as possible and trying to reach well-informed judgements about the level of scientific confidence. That's what was done with the ozone-hole work in which I was involved, which eventually defeated the ozone-hole disinformers. That's what was done with the discovery and testing of quantum theory, where nothing is obvious.

There is of course a serious difficulty here. We do need quick ways to express extremely high confidence, such as the sun rising tomorrow. We don't want to waste time on such things when confronted with far greater uncertainties. Scientific research is like driving in the fog. Sometimes the fog is very thick. So, especially between consenting adults in private, we do tend to use terms like `proof' and `proven' as a shorthand to indicate things to which we attribute practical certainty, things that we don't have to worry about when trying to see through the fog. But because of all the philosophical confusion, and because of the hypercredulity instinct, and the dichotomization instinct, I think it preferable in public to avoid terms like `proof' or `proven', or even `settled', and instead to use terms like `highly probable', `practically certain', `well established', `hard fact', `indisputable', and so on, when we feel that strong statements are justifiable in the current state of knowledge. Such terms sound less final and less absolutist, especially when we're explicit about the strength of the evidence and cross-checks. I try to set a good example in the Postlude on climate.

And I think we should avoid the cliché fact `versus' theory. It's a false dichotomy and it perpetuates the veridical perception fallacy. Even worse, it plays straight into the hands of the professional disinformers, the well-resourced masters of camouflage and deception who work to discredit good science when they think it threatens profits, or political power, or any other vested interest. The `fact versus theory' mindset gives them a ready-made framing tactic, paralleling `good versus bad' (e.g. Lakoff 2014).

I want to return to the fact -- the indisputable practical certainty -- that what's complex at one level can be simple, or at least understandable, at another. And multiple levels of description are not only basic to science but also, unconsciously, basic to ordinary perception. They're basic to how our brains work. Straight away, our brains' left and right hemispheres give us at least two levels of description, respectively a lower level that dissects fine details, and a more holistic higher level. And neuroscience has revealed a variety of specialized internal models or model components that symbolically represent different aspects of outside-world reality. In the case of vision there are separate model components representing not only fine detail on the one hand, and overall spatial relations on the other but also, for instance, motion and colour (e.g. Sacks 1995, chapter 1; Smythies 2009). For instance damage to a part of the brain dealing with motion can produce visual experiences like successions of snapshots or frozen scenes -- very disabling if you're trying to cross the road.

In science, as recalled in the Prelude, progress has always been about finding a level of description and a viewpoint, or viewpoints, from which something at first sight hopelessly complex becomes simple enough to be understandable. And different levels of description can look incompatible with each other, if only because of emergent properties or emergent phenomena -- phenomena that are recognizable at a particular level of description but unrecognizable amidst the chaos and complexity of lower levels.

The need to consider multiple levels of description is especially conspicuous in the biological sciences. For instance molecular-biological circuits, or regulatory networks, are now well recognized entities. They involve patterns of highly specific interactions between molecules of DNA, of RNA, and of proteins as well as many other large and small molecules (e.g. Noble 2006, Danchin and Pocheville 2014, Wagner 2014). Some protein molecules have long been known to be allosteric enzymes. That is, they behave somewhat like the transistors within electronic circuits (e.g. Monod 1970). Genes are switched on and off. But such `circuits' are impossible to recognize from lower levels such as that of chemical bonds and bond strengths within thermally-agitated molecules, and still less from the levels of atoms, or of atomic nuclei, or of electrons, or of quarks. And again, there are of course very many higher levels of description, above the level of molecular-biological circuits and assemblies of such circuits, going up to the levels of bacteria and bacterial communities, of yeasts, of multicellular organisms, of ecologies, and of ourselves and our families, our communities, our nations, our globalized plutocracies, and the entire planet, which Newton treated as a point mass.

None of this would need saying were it not for the ubiquity, even today, of an extreme-reductionist view -- I think it's partly unconscious -- saying, or assuming, that looking for the lowest possible level and for `units' such as quarks, or atoms, or genes, or memes, gives us the Answer to Everything and is therefore the only useful angle from which to view a problem. Yes, in some cases it can be enormously useful; but no, it isn't the Answer to Everything! Noble (2006) makes both these points very eloquently. In some scientific problems, including those I've worked on myself, the most useful models aren't at all atomistic. In fluid dynamics we use accurate `continuum-mechanics' models in which highly nonlocal, indeed long-range, interactions are crucial. They're mediated by the pressure field. They're a crucial part of, for instance, how birds, bees and aircraft stay aloft, and how a jetstream can circumscribe and contain the ozone hole.

McGilchrist tells us that extreme reductionism comes from our left hemispheres. It is indeed a highly dissected view of things. His book, along with others such as Carroll (2016), can be read as a passionate appeal for more pluralism -- for more of Max Born's `loosening of thinking', for the deeper understanding that can come from looking at things on more than one level and from more than one viewpoint, and from a better collaboration between our garrulous and domineering left hemispheres and our quieter, indeed wordless, but also passionate, right hemispheres.

So I'd argue that professional codes of conduct for scientists -- to say nothing of lucidity principles -- should encourage us to be explicit, in particular, about which level or levels of explanation we're talking about. And when even the level of explanation isn't clear, or when questions asked are `wicked questions' having no clear meaning, still less any clear answer, we should be explicit in acknowledging such difficulties. We should be more explicit than we feel necessary. In chapter 7 I'll argue that much of the confusion about `consciousness' and `free will' comes from just such difficulties and in particular that some of it comes from conflating, or not even recognizing, different levels of description.

I like the aphorism that free will is a biologically indispensable illusion, but a socially indispensable reality. There's no conflict between the two statements. They belong to different levels of description -- vastly different, incompatible levels. And they illustrate the profound ambiguity of the much-bandied words `illusion' and `reality'.


Chapter 5: Music, mathematics, and the Platonic

This chapter is mainly about our unconscious mathematics, and our unconscious power of abstraction. I want to take us beyond the Euclidean geometry that's called on by the walking lights, and the arithmetic that enables us to count, add, and multiply using arrays of pebbles or other small objects. I'll show that we also have an unconscious `higher mathematics' including what's called `calculus', the mathematics of continuous change. Hints of it are present in the organic-change principle, in the connections between music and mathematics, and in the Platonic world of perfect forms.

The Platonic world includes the curves that everyone calls `mathematical', such as perfect circles, ellipses, straight lines, and other smooth curves (Figure 7). Experience suggests that such `Platonic objects', as I'll call them, are of special interest to the unconscious brain. Natural phenomena exhibiting what look like straight lines or smooth curves -- the edge of the sea on a clear day, the edge of the full moon, the shape of a hanging dewdrop -- tend to excite our interest and our sense of beauty. So do the great pillars of the Parthenon, and the smooth curves of the Sydney Opera House. We feel their shapes as resonating with something `already there' and, as Plato saw it, in some ultimate sense more `real' than the commonplace messiness of the outside world.


circle, ellipse, crescent, liquid drop

Figure 7: Some Platonic objects.


My heart is with Plato here. When the shapes look truly perfect, they can excite a sense of great wonder, even mystery. How can such perfection exist at all?

Indeed, so powerful is our unconscious interest in such perfection that we see smooth curves even when they're not actually present in the incoming visual data. For instance we see them in the form of what psychologists call `illusory contours'.  Figure 8 shows an example -- a smooth curve constructed by our visual system at the inner edges of the black marks:


circle, ellipse, crescent, liquid drop

Figure 8: An illusory contour.


The construction of such curves requires a branch of mathematics that we call the calculus of variations. It is a way of considering all the possible curves that can be fitted to the inner edges of the black marks, and of picking out a curve that's as smooth as possible, in a sense to be specified. So we have not only unconscious Euclidean geometry, but also an unconscious calculus of variations -- part of what is sometimes called `higher mathematics'.
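To give a flavour of what `as smooth as possible' can mean -- this is only one standard formalization, my own illustration rather than a claim about the visual system's actual criterion -- one can ask for the curve that minimizes the integrated squared curvature,

    E[\mathbf{r}] \;=\; \int_0^L \kappa(s)^2 \, ds ,

subject to the curve meeting the inner edges of the black marks, where s is arc length, L the curve's total length, and \kappa(s) its curvature at each point. Choosing a single curve out of the whole family of admissible curves, by minimizing such a quantity, is exactly the kind of problem the calculus of variations handles.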

The Platonic world is, indeed, `already there' in the sense of being evolutionarily ancient -- something that comes to us through genetic memory and the automata that it enables -- self-assembling into, among many other things, the special kinds of symbolic representation that manifest themselves as Platonic objects. That's because of combinatorial largeness. Over vast stretches of time, natural selection has put the unconscious brain under pressure to make its model-fitting processes as simple as the data allow. That requires a repertoire of model components that are as simple as the data allow. Some of these components are recognizable as Platonic objects, or rather their symbolic representations. (Please remember that actual or latent patterns of neural activity are symbols, even though we don't yet have the `codecs' for reading them directly.) A perfect circle is a Platonic object simply because it's simple.

`We see smooth curves even when they're not actually present.' Look again at Figure 7. None of the Platonic objects we see are actually present in the figure. Take the circle, or the ellipse as it may appear on some screens. It's actually more complex. With a magnifying glass, one can see staircases of pixels. Zooming in more and more, one begins to see more and more detail, such as irregular or blurry pixel edges. One can imagine zooming in to the atomic, nuclear and subnuclear scales. Model-fitting is partial and approximate. What's complex at one level can be simple at another.

Much the same argument, by the way, can be made for the musical `harmonic series'. This too is a `Platonic object', symbolically speaking. It's of special interest to the unconscious brain for fundamentally the same reasons. It's one of the model components needed for auditory model-fitting and especially for what's called `auditory scene analysis' or `sound location'. That's the perceptual process enabling us to identify different sound sources in what might be called `a jungleful of animal sounds'.  Identifying a sound source has survival value. In chapter 6 I'll go deeper into what this means for how music works.

...

...

[Musical counterparts of illusory contours -- Mozart's `flowing oil', revealing a musical `calculus of variations'. Other deep mathematics-music connections, deeper than arrays of pebbles! Pitch perception and unconscious Fourier transformation, neural circuits good at timing, Boomsliter and Creel's `long-pattern hypothesis' -- postpone all this to chapter 6?]

...

...

[Regarding models that are made of mathematical equations, there's an essay that every physicist knows of, I think, by Eugene Wigner, who talked about the `unreasonable effectiveness of mathematics' in describing the real world. But what's surprising is not the fact that mathematics comes in. Mathematics is just a way of handling many possibilities at once, in a self-consistent way. What's surprising is that very simple mathematics comes in when you build accurate models of sub-atomic Nature. It's not the mathematics that's unreasonable; it's the simplicity. So I think Wigner should have talked about the `unreasonable simplicity of sub-atomic Nature'. It just happens, and I don't think anyone knows why, that at the level of photons and electrons, say, things look very simple. That's just the way Nature seems to be at that level. And of course that means that the mathematics is simple too.]

...

...

[The veridical-perception and mind-projection fallacies again -- conflating the outside world and our internal models thereof -- built into the word `reality' and going back to Plato. Indeed, famously, Plato felt his imagined world of perfect forms to be more real than the messy outside world. Platonism versus constructivism in mathematics -- yet another false dichotomy!]

...

...

[Near end of this chapter, or better chapter 7, after noting again the ambiguity of the word `real':

The experience of revelation, of epiphany, of direct and vivid perception of some ultimate Truth or Reality can be so powerful that it has spawned many mystical, religious and philosophical movements, and indeed scientific paradigms. Such direct, or seemingly direct, experience makes it hard to keep clear the distinction between the outside world and our unconscious models thereof, and the associated unconscious assumptions, such as the assumption that perceived times are the same as outside-world times.

The so-called hard problems of consciousness and free will are not only hard in this sense, but even harder in that they require a detailed understanding of our self-models and how they work. That's hard indeed -- hard enough, not to say impossible, even after recognizing not only the distinction between models and outside-world reality but also the complexity, the multi-level aspects, and the unconscious aspects, of the brain's model-fitting processes. But I find it helpful at least to recognize that consciousness, free will, and qualia are -- can only be -- properties of the self-model. They're not absolutes, any more than perceived time is an absolute. And they belong to a level of description that's very different from -- very distant from -- the levels of quarks, of atoms, of genomes, of neurons, of organisms, and of brains.

The unconscious assumption that perceived times are the same as outside-world times is enough to get parts of the free-will literature tied in knots. Observations of the brain activity that precedes conscious decisions (e.g. experiments of Grey Walter and Benjamin Libet) have been thought to reveal a mysterious `Orwellian' process, a `backward referral in time', as if `time' had an unambiguous meaning.

Some of the epiphany-based thought-systems claim to be free of presuppositions. They claim to be free of prior assumptions of any kind. Examples include `phenomenology' (meaning the philosophy of Edmund Husserl) and the old frequentist school of probability theory and statistical inference, whose mantra was `Let the data speak for themselves'. That's like telling someone to look at the walking lights and `see for yourself'. Claiming to be free of prior assumptions can only imply, of course, that the prior assumptions are all unconscious. As always, it's the danger from `what you see is all there is' (Kahneman 2011).

Frequentist statistical inference is now increasingly recognized as a small, even though valuable, part of a much wider probabilistic framework based on the theorems of Richard Threlkeld Cox. Within that framework can be found what is called Bayesian inference, a far more versatile system of statistical inference whose great advantage is that it requires more of our unconscious assumptions to be made conscious, while forcing us to do so in a self-consistent way. Such a wider framework is crucial, for instance, in scientific problems for which large numbers of repeated experiments are impractical, as when studying our Universe as a whole, or when studying the Earth's life-support system.

The wider framework recognizes the mathematical objects it deals with, such as so-called probability distribution functions, as model components rather than outside-world properties. The frequentist school, by contrast, considered such objects to be outside-world properties -- Jaynes' `mind-projection fallacy', or in cosmology the `Mathematical Universe Hypothesis', saying that the Universe is not merely modelled mathematically but consists of mathematics.]

...

...


Chapter 6: A journey into musical hyperspace

There was a young lady of Deane
Whose ear was not music'lly keen;
She said, 'It is odd
That I cannot tell God
Save the Weasel from Pop Goes The Queen.'

This chapter is mainly for musicians and music-lovers, including those interested in the fuss about the `Tristan chord' wrongly attributed to Richard Wagner. Wagner was a remarkable musical genius, but the chord itself is unremarkable, having long had common currency among composers. It occurs more than twenty times in Purcell's `Dido's Lament'. I'll show _why_ it's common currency, and more about how harmony works, going beyond the textbooks and `rules' of harmony and remembering that it isn't about chords so much as about counterpoint and chords-in-context, about organic change and the pull of voices against each other. Nor is it about the false dichotomy `tonal versus atonal' and knowing what key you are in. All this comes from widening the perspective to include basic insights from evolutionary biology and perception psychology.

Before going further we need to unlearn something from the musical-acoustics textbooks, namely the myth that the inner ear's basilar membrane is the frequency filter giving us pitch perception. As with simplistic evolutionary theory, the basilar-membrane myth is strangely persistent. Yet, if pitch perception did work that way, then there'd be no such thing as musical harmony, as I'll show. All those kids learning guitar chords would be wasting their time.

Consider the sheer accuracy of normal pitch perception. Any musician, or music-lover who finds out-of-tune singing painful -- or indeed anyone outside a small minority whose ears are `not music'lly keen' -- is familiar with that accuracy. In terms of acoustic frequencies, it's often in the ballpark of a fifth of a percent or slightly better -- in musical language a few cents or hundredths of a semitone. That's far more accurate than the frequency resolution of the basilar membrane, whose workings and fluid dynamics have been well studied.
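For anyone who wants that conversion spelt out (my own worked example): the musical interval, in cents, between two frequencies f_1 and f_2 is 1200 \log_2(f_2/f_1), so a frequency error of a fifth of a percent corresponds to

    1200 \log_2(1.002) \;\approx\; 3.5 \ \text{cents} ,

a few hundredths of a semitone, since a semitone is 100 cents.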

It's clear, moreover, from listening to certain kinds of birdsong that other creatures have extremely accurate pitch perception. I'll present some audio clips to show this.

The accuracy of pitch perception is a by-product of something else that's important for survival, the Platonic object mentioned in chapter 5. It's the special sequence of pitches that musicians call the harmonic series, though, biologically speaking, it's a set of internal model components in the form of simple temporal patterns -- simple because they repeat periodically at audio frequencies, and temporal because our neural circuitry is good at timing things. The close relationship between all such patterns and the harmonic series follows from a famous mathematical theorem of Joseph Fourier.
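For completeness, here is the theorem in its simplest form, stated in my own notation (standard textbook material): any reasonably well-behaved pattern p(t) that repeats with period T, i.e. with repetition frequency f_0 = 1/T, can be written as a superposition of sinusoids whose frequencies are whole-number multiples of f_0 -- precisely the harmonic series:

    p(t) \;=\; a_0 + \sum_{n=1}^{\infty} \bigl( a_n \cos(2\pi n f_0 t) + b_n \sin(2\pi n f_0 t) \bigr) .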

The ear-brain system is unconsciously interested in patterns in the incoming auditory data that can be fitted, approximately, to these periodic internal patterns. That's because such patterns are idealized versions of many natural sound sources, not only birdsong but also vocalizations of the kind we and many other creatures make. What's important for survival is the ability to identify and distinguish many such sound sources. The ear-brain system's ability to do this involves the accurate neural timing ability that also gives us, and other creatures, accurate pitch perception as a by-product.

...

...

[Carrying on with some of the material from my old `hyperspace' notes, with audio clips added...

...

...


Chapter 7: What are these things called free will and consciousness?

Scene in a courtroom: `Yes, I did kill my wife. But it wasn't my fault. It was my selfish genes wot done it.' [Shades of Hamlet!]

Here, recalling the end of chapter 4, we see the need for multi-level thinking at its starkest, to say nothing of the need for a touch of humility -- daring to admit that extremist reductionism isn't the Answer to Everything -- that knowing about quarks, or even about hydrogen bonds, doesn't help us to understand our perceptual self-models and the way they work in human society.

...

[Recognition of acausality illusions as a good starting point, blowing away some of the confusion surrounding the Grey Walter and Benjamin Libet experiments. Again the self-model or self-avatar in its surroundings, as in Lucidity and Science, Part II. Also notes on V. S. Ramachandran's wonderful Reith Lectures, and Michael Graziano's recent exploration of the nature of the self-model, and its relation to theories of awareness and consciousness. Pick up on `unconscious wilful blindness', chapter 1. Need to deal with the common misconception that conflates our `consciousness' and self-awareness on the one hand, with having a self-model on the other (as already pointed out in chapter 4). A cat stalking a bird must have a self-model, to orient it in its surroundings whether consciously or unconsciously. The bird must have a self-model too, complete with `avionics'. Having a self-model is necessarily part of any living perceptual system -- even our immune system with its recognition of `self' versus `non-self' that sometimes goes so horribly wrong...]

...

...


Postlude: the amplifier metaphor for climate

Chapter 1 mentioned audio amplifiers and two different questions one might ask about them: firstly what powers them, and secondly what they're sensitive to. Is there any relevance to the Earth's climate system? Pulling the amplifier's power plug corresponds to switching off the Sun. But is there anything corresponding to a sensitive input signal? Today we have a clear and very simple answer, showing why climate change is a serious concern, and why climate-denial politics will fail.

Yes, the climate system is -- with certain qualifications to be discussed below -- a powerful but slowly-responding amplifier with sensitive inputs. Among the climate system's sensitive inputs are small changes in the Earth's tilt and orbit. They have repeatedly triggered large climate changes, with global mean sea levels going up and down by well over 100 metres. Those were the glacial-interglacial cycles encountered in chapter 2, `glacial cycles' for brevity, with overall timespans of about a hundred millennia per cycle. And `large' is a bit of an understatement. As is clear from the sea levels and the corresponding ice-sheet changes, those climate changes were huge by comparison with the much smaller changes projected for the coming century. I'll discuss the sea-level evidence below.

Another sensitive input is the injection of carbon dioxide into the atmosphere. Carbon dioxide, whether injected naturally or artificially, has a central role in the climate system not only as a plant nutrient but also as our atmosphere's most important non-condensing greenhouse gas. Without recognizing that central role it's impossible to make sense of climate behaviour in general, and of the huge magnitudes of the glacial cycles in particular. Those cycles depended not only on the small orbital changes, and on the dynamics of the great land-based ice sheets, but also on natural injections of carbon dioxide into the atmosphere from the deep oceans. Of course to call such natural injections `inputs' is strictly speaking incorrect, except as a thought-experiment, but along with the ice sheets they're part of the amplifier's sensitive `input circuitry' as I'll try to make clear.

The physical and chemical properties of so-called greenhouse gases are well established and uncontentious, with very many cross-checks. Greenhouse gases in the atmosphere make the Earth's surface warmer than it would otherwise be. For reasons connected with the properties of heat radiation, almost any gas whose molecules have three or more atoms can act as a greenhouse gas. (More precisely, to interact strongly with heat radiation the gas molecules must have a structure that supports a fluctuating electrostatic `dipole moment', at the frequency of the heat radiation.) Examples include carbon dioxide, water vapour, methane, and nitrous oxide. By contrast, the atmosphere's oxygen and nitrogen molecules have only two atoms and are very nearly transparent to heat radiation.

One reason for the special importance of carbon dioxide is its great chemical stability as a gas. Other carbon-containing, non-condensing greenhouse gases such as methane tend to be converted fairly quickly into carbon dioxide. Fairly quickly means within a decade or two, for methane. And of all the non-condensing greenhouse gases, carbon dioxide has always had the most important long-term heating effect, not only today but also during the glacial cycles. That's clear from ice-core data, to be discussed below, along with the well-established heat-radiation physics.

Water vapour has a central but entirely different role in the climate system. Unlike carbon dioxide, water vapour can and does condense or freeze, in vast amounts, as well as being copiously supplied by evaporation from the oceans, the rainforests, and elsewhere. This solar-powered supply of water vapour -- sometimes called `weather fuel' because of the thermal energy released on condensing or freezing -- makes it part of the climate system's power-supply or power-output circuitry rather than its input circuitry. The power output includes short-term fluctuations in greenhouse heating together with cyclonic storms, weather fronts, and their precipitation, in which the energy released can be huge, dwarfing the energy of many thermonuclear bombs. It is huge whether the precipitation takes the form of rain, hail, or snow. Tropical cyclones (hurricanes and typhoons), and other extreme precipitation and flooding events, both tropical and extratropical, remind us what these huge energies mean in reality.

A century or two ago, the artificial injection of carbon dioxide into the atmosphere was only a thought-experiment, of interest to a few scientists such as Joseph Fourier, John Tyndall, and Svante Arrhenius. Tyndall did simple but ingenious laboratory experiments to show how heat radiation interacts with carbon dioxide. For more history and technical detail I strongly recommend the textbook by Pierrehumbert (2010). Today, inadvertently, we're doing such an injection experiment for real. And we now know that the consequences will be very large indeed.

How can I say that? As with the ozone-hole problem, it's a matter of spotting what's simple about a problem at first sight hopelessly complex. But I also want to sound a note of humility. All I'm claiming is that the climate-science community now has enough insight, enough in-depth understanding, and enough cross-checks, to say that the climate system is sensitive to carbon dioxide injections and that the consequences will be very large.

Such sensitivity is not, by the way, what's meant by the term `climate sensitivity' encountered in many of the community's technical writings. There are various technical definitions, in all of which atmospheric carbon dioxide values are increased by some given amount but, in all of which, artificial constraints are imposed. The constraints are often left unstated. Imposing the constraints usually corresponds to a thought-experiment in which the more slowly-responding parts of the system -- including the deep oceans, the ice sheets, and large underground reservoirs of methane -- are all held fixed in an artificial and unrealistic way. Adding to the confusion, the state reached under some set of artificial constraints is sometimes called `the equilibrium climate', as if it represented some conceivable reality. And, to make matters even worse, attention is often confined to global-mean temperatures, concealing all the many other aspects of climate change including ocean heat content, and the statistics of weather extremes such as flash flooding.

Many climate scientists try to minimize confusion by spelling out which thought-experiment they have in mind. That's an important example of the explicitness principle in action. And the thought-experiments and the computer model experiments are improving year by year. As I'll try to clarify further, the climate system has many different `sensitivities', depending on the circumstances, and on the aspects considered. That's one reason why the amplifier metaphor needs qualification. In technical language, we're dealing with a system that's highly `nonlinear'. In that respect, and as regards its generally slow response, the climate-system amplifier is very unlike an audio amplifier. We still, however, need some way of talking about climate that recognizes some parts of the system as being more sensitive than others.

There are still many serious uncertainties, as well as communication difficulties. But our understanding is now good enough, deep enough, and sufficiently cross-checked, to show that the uncertainties are mainly about the precise timings and sequence of events over the coming decades and centuries, including nonlinear step changes or `tipping points'. Those details are highly uncertain, but in my judgement there's no significant uncertainty about the response being very large, sooner or later, and practically speaking irreversible.

Science is one thing and politics is another. I'm only a scientist. My aim here is to get the most robust and reliable aspects of the science stated clearly, simply, accessibly, and dispassionately, along with the implications under various assumptions about the politics and the workings of the human hypercredulity instinct. I'll draw on the wonderfully meticulous work of very many scientific colleagues including the late Nick Shackleton and his predecessors and successors, who have laboured so hard, and so carefully, to tease out information about past climates. Past climates, especially those of the past several hundred millennia, are our main source of information about the workings of the real system, taking full account of its vast complexity all the way down to the details of such things as clouds, forest canopies, soil ecology, plankton, and the tiniest of ocean eddies.

Is such an exercise useful at all? The optimist in me says it is. And I hope, dear reader, that you might agree because, after all, we're talking about the Earth's life-support system and the possibilities for some kind of future civilization.

In recent decades there's been a powerful disinformation campaign against such understanding, adding yet more confusion. Superficial viewpoints hold sway. Significant aspects of the problem are ignored or camouflaged. The postmodernist idea of `science as mere opinion' is used when convenient. Risk management is postponed or kept secret. For me it's a case of déjà vu, because the earlier ozone-hole disinformation campaign was strikingly similar.

We now know that that similarity was no accident. According to extensive documentation cited in Oreskes and Conway (2010) -- including formerly secret documents now exposed through anti-tobacco litigation -- the current climate-disinformation campaign was seeded, originally, by the same few professional disinformers who masterminded the ozone-hole campaign and, before that, the tobacco companies' lung-cancer campaigns. The secret documents describe how to manipulate the newsmedia and sow confusion in place of understanding. For climate the confusion has spread into significant parts of the scientific community, including influential senior scientists most of whom are not, to my knowledge, among the professional disinformers and their political allies but who have tended to focus too narrowly on the shortcomings of the big climate models, ignoring many other lines of evidence. And such campaigns and their political fallout are, of course, threats to other branches of science as well, and indeed to the very foundations of good science. The more intense the politicization, the harder it becomes to live up to the scientific ideal and ethic.

One reason why the amplifier metaphor is important despite its limitations is that the climate disinformers ignore it, especially when comparing water vapour with carbon dioxide. They use the copious supply of water vapour from the oceans and elsewhere as a way of suggesting that the relatively small amounts of carbon dioxide are unimportant for climate. That's like focusing on an amplifier's power-output circuitry and ignoring the input circuitry, exactly the `energy budget' mindset mentioned in chapter 1.

In all humility, I think I can fairly claim to be qualified as a dispassionate observer of the climate-science scene. I would dearly love to believe the disinformers when they say that carbon dioxide is unimportant for climate. And my own professional work has never been funded for climate science as such.

However, my professional work on the ozone hole and the fluid dynamics of the great jetstreams has taken me quite close to research issues in the climate-science community. Those of its members whom I know personally are ordinary, honest scientists, respectful of the scientific ideal and ethic. They include many brilliant thinkers and innovators. Again and again, I have heard members of the community giving careful conference talks on the latest findings. They are well aware of the daunting complexity of the problem, of the imperfections of the big climate models, of the difficulty of weeding out data errors, and of the need to avoid superficial viewpoints, false dichotomies, and exaggerated claims. Those concerns are reflected in the restrained and cautious tone of the vast reports published by the Intergovernmental Panel on Climate Change (IPCC). The reports make heavy reading but contain reliable technical information about the basic physics and chemistry I'm talking about such as, for instance, the magnitude of greenhouse-gas heating as compared with variation in the Sun's output.

As it happens, my own professional work has involved me in solar physics as well; and I'd argue that the most recent IPCC assessment of solar variation is substantially correct, namely that solar variation is too small to compete with past and present carbon-dioxide injections. That's based on very recent improvements in our understanding of solar physics, to be mentioned below.

*   *   *

Let's pause for a moment to draw breath. I want to be more specific on how past climates have informed us about these issues, using the latest advances in our understanding. I'll try to state the leading implications and the reasoning behind them. The focus will be on implications that are extremely clear and extremely robust. They are independent of fine details within the climate system, and independent of the imperfections of the big climate models.

*   *   *

The first point to note is that human activities will increase the carbon dioxide in the atmosphere by an amount that will be large.

It will be large in the only relevant sense, that is, large by comparison with its natural range of variation when the Earth system is close to its present state, with the Antarctic ice cap still intact. The natural range is well determined from ice-core data, recording the extremes of the hundred-millennium glacial cycles. That's one of the hardest, clearest pieces of evidence we have. It comes from the ability of ice to trap air, beginning with compacted snowfall, giving us clean air samples from the past 800 millennia from which carbon dioxide concentrations can be reliably measured.

In round numbers the natural range of variation of atmospheric carbon dioxide is close to 100 ppmv, 100 parts per million by volume. The increase since pre-industrial times has so far been by a further 120 ppmv or so. In round numbers we have gone from a glacial 180 ppmv through a pre-industrial 280 ppmv up to today's values, around 400 ppmv. And on current trends the 400 ppmv will have increased to 800 ppmv or more by the end of this century. An increase from 180 to 800 ppmv is an increase of the order of six times the natural range of variation across glacial cycles. Whatever happens, therefore, the climate system will be like a sensitive amplifier subject to a large new input signal, the only question being just how large -- just how many times larger than the natural range.
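Spelling out that last bit of arithmetic, using only the round numbers just quoted:

    (800 - 180)\ \text{ppmv} \;=\; 620\ \text{ppmv} \;\approx\; 6 \times (100\ \text{ppmv}) ,

that is, roughly six times the natural 100 ppmv range.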

For comparison with, say, 800 ppmv, the natural variation across recent glacial cycles has been roughly from minima around 180-190 ppmv to maxima around 280-290 ppmv but then back again, i.e., in round numbers, over the aforementioned natural range of about 100 ppmv -- repeatedly and consistently back and forth over several hundreds of millennia (recall Figure 3 in chapter 2). The range appears to have been determined largely by deep-ocean storage and leakage rates. Storage of carbon in the land-based biosphere, and input from volcanic eruptions, appear to have played only secondary roles in the glacial cycles, though wetland biogenic methane emissions are probably among the significant amplifier mechanisms or positive feedbacks.

Recent work (e.g. Shakun et al. 2012, Skinner et al. 2014) has begun to clarify how the natural 100 ppmv carbon-dioxide injections involved in `deglaciations', the huge transitions from the coldest to the warmest extremes, arose mainly by release of carbon dioxide from the oceans through an interplay of ice-sheet and ocean-circulation changes, and many other events in a complicated sequence triggering positive feedbacks -- the whole sequence having been initiated then reinforced by a small orbital change, as explained below. (The disinformers ignore all these complexities by saying that the Earth somehow warmed. The warming then caused the release of carbon dioxide, they say, with little further effect on temperature.)

The deglaciations show us just how sensitive the climate-system amplifier can be. What I'm calling its input circuitry includes ice-sheet dynamics and what's called the natural `carbon cycle', though a better name would be `carbon sub-system'. Calling it a sub-system would do more justice to the vast complexity already hinted at, dependent on deep-ocean storage (mostly as bicarbonate ions), on chemical and biochemical transformations on land and in the oceans, on complex groundwater, atmospheric and oceanic flows down to the finest scales of turbulence, on sea-ice cover and upper-ocean layering and indeed on biological and ecological adaptation and evolution -- nearly all of which is well outside the scope of the big climate models. Much of it is also outside the scope of specialist carbon-cycle models, if only because they grossly oversimplify the transports of carbon and biological nutrients by fluid flows, within the sunlit upper ocean for instance. But we know that the input circuitry was sensitive during deglaciations without knowing all the details of the circuit diagram. It's the only way to make sense of the records in ice cores, in caves, and in the sediments under lakes and oceans that indicate the climate system's actual past behaviour (e.g. Alley 2000, 2007).

The records showing the greatest detail are those covering the last deglaciation. Around 18 millennia ago, just after the onset of an initiating orbital change, atmospheric carbon dioxide started to build up from a near-minimum glacial value around 190 ppmv toward the pre-industrial 280 ppmv. Around 11 millennia ago, it was already close to 265 ppmv. That 75 ppmv increase was the main part of what I'm calling a natural injection of carbon dioxide into the atmosphere. It must have come from deep within the oceans since, in the absence of artificial injections by humans, it's only the oceans that have the ability to store the required amounts of carbon, in suitable chemical forms. Indeed, land-based storage worked mostly in the opposite direction as ice retreated and forests spread.

The oceans not only have more than enough storage capacity, as such, but also mechanisms to store and release extra carbon dioxide, involving limestone-sludge chemistry (e.g. Marchitto et al. 2006). How much carbon dioxide is actually stored or released is determined by a delicate competition between storage rates and leakage rates. For instance one has storage via dead phytoplankton sinking from the sunlit upper ocean into the deepest waters. That storage process is strongly influenced, it's now clear, by details of the ocean circulation near Antarctica and the effects on gas exchange between deep waters and atmosphere and on phytoplankton nutrient supply and uptake, all of which is under scrutiny in current research (e.g. Burke et al. 2015; Watson et al. 2015, & refs.).

In addition to the ice-core record of atmospheric carbon-dioxide buildup starting 18 millennia ago, we have hard evidence for what happened to sea levels. The sea level rise began in earnest about two millennia afterwards, that is, about 16 millennia ago, and a large fraction of it had taken place within a further 8 millennia. The total sea level rise over the whole deglaciation was by most estimates well over 100 metres, perhaps as much as 140. It required the melting of huge volumes of land-based ice.
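
To get a feel for the volumes of ice involved, here is a minimal back-of-envelope sketch in Python. The ocean area and the densities are rounded, commonly quoted values, and complications such as shoreline migration and the Earth's crustal response are ignored.

    # Back-of-envelope estimate of the land ice needed for about 120 m of sea
    # level rise (the mid-range of the estimates quoted above), using rounded
    # values and ignoring shoreline migration, crustal rebound, and the like.

    OCEAN_AREA_M2 = 3.6e14    # roughly 3.6e8 square km of global ocean
    RHO_ICE       = 917.0     # kg per cubic metre, glacial ice
    RHO_SEAWATER  = 1025.0    # kg per cubic metre

    sea_level_rise_m = 120.0

    water_volume_km3 = OCEAN_AREA_M2 * sea_level_rise_m / 1.0e9
    ice_volume_km3   = water_volume_km3 * RHO_SEAWATER / RHO_ICE

    print(f"added seawater: about {water_volume_km3/1.0e6:.0f} million cubic km")
    print(f"melted land ice: about {ice_volume_km3/1.0e6:.0f} million cubic km")
    # i.e. several tens of millions of cubic kilometres of ice, more than the
    # combined volume of today's Greenland and Antarctic ice sheets.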

Our understanding of how the ice melted is incomplete, but it must inevitably have involved a complex interplay between snow deposition, ice flow and ablation, and ocean-circulation and sea-ice changes releasing carbon dioxide. The main carbon-dioxide injection starting 18 millennia ago must have significantly amplified the whole process. That statement holds independently of climate-model details, being a consequence only of the persistence, the global scale, and the known order of magnitude of the greenhouse heating from carbon dioxide, all of which are indisputable. Still earlier, between 20 and 18 millennia ago, a relatively small amount of orbitally-induced melting of the northern ice sheets seems to have triggered a massive Atlantic-ocean circulation change, reaching all the way to Antarctica and, in this and other ways, to have started the main carbon-dioxide injection. The buildup of greenhouse heating was then able to reinforce a continuing increase in the orbitally-induced melting. That in turn led to the main acceleration in sea level rise, two millennia later. Some of the recent evidence supporting this picture is summarized here.

The small orbital changes are well known and can be calculated very precisely over far greater, multi-million-year timespans, thanks to the remarkable stability of the solar system's planetary motions. The orbital changes include a 2° oscillation in the tilt of the Earth's axis (between about 22° and 24°) and a precession that keeps reorienting the axis relative to the stars, redistributing solar heating in latitude and time while hardly changing its average over the globe and over seasons. Figure 9, taken from Shackleton (2000), shows the way in which the midsummer peak in solar heating at 65°N has varied over the past 400 millennia:


[Image: from Fig. 1 of Shackleton (2000)]

Figure 9: Midsummer diurnally-averaged insolation at 65°N, in W m^-2, from Shackleton (2000), using orbital calculations carried out by André Berger and co-workers. They assume constant solar output but take careful account of variations in the Earth's orbital parameters in the manner pioneered by Milutin Milanković. Time in millennia runs from right to left.


The vertical scale on the right is the local, diurnally-averaged midsummer heating rate from incoming solar radiation at 65°N, in watts per square metre. It is these local peaks that are best placed to initiate melting on the northern ice sheets. One gets a peak when closest to the Sun with the North Pole tilted toward the Sun. However, such melting is not in itself enough to produce a full deglaciation. Only one peak in every five or so is associated with anything like a full deglaciation. They are the peaks marked with vertical bars. The timings can be checked from Figure 3 of chapter 2. The marked peaks were accompanied by the biggest carbon-dioxide injections, as measured by atmospheric concentrations reaching 280 ppmv or more. It's noteworthy that, of the two peaks at around 220 and 240 millennia ago, it's the smaller peak around 240 millennia ago that's associated with the bigger carbon-dioxide and temperature response. The bigger peak around 220 millennia ago is associated with a somewhat smaller response.

In terms of the amplifier metaphor, therefore, we have an input circuit whose sensitivity varies over time. In particular, the sensitivity to high-latitude solar heating must have been greater at 240 than at 220 millennia ago. That's another thing we can say independently of the climate models.

There are well known reasons to expect such variations in sensitivity. One is that the system became more sensitive when it was fully primed for the next big carbon-dioxide injection. To become fully primed it needed to store enough extra carbon dioxide in the deep oceans. Extra storage was favoured in the coldest conditions, which tended to prevail during the millennia preceding full deglaciations. How this came about is now beginning to be understood, with changes in ocean circulation near Antarctica playing a key role, alongside limestone-sludge chemistry and phytoplankton fertilization from iron in airborne dust (e.g. Watson et al. 2015, & refs.). Also important was a different priming mechanism, the slow buildup and areal expansion of the northern land-based ice sheets. The ice sheets slowly became more vulnerable to melting in two ways, first by expanding equatorward into warmer latitudes, and second by bearing down on the Earth's crust, taking the upper surface of the ice down to warmer altitudes. This priming mechanism would have made the system more sensitive still.

Specialized model studies (e.g. Abe-Ouchi et al. 2013, & refs.) have long supported the view that both priming mechanisms are important precursors to deglaciation. It appears that both are needed to account for the full magnitudes of deglaciations like the last. It must be cautioned, however, that our ability to model the details of ice flow and snow deposition is still extremely limited. I'll return to that point because it's related to some of the uncertainties now facing us about the future. For one thing, there are signs that parts of the Greenland ice sheet are becoming more sensitive today, as well as parts of the Antarctic ice sheet, especially the part known as West Antarctica where increasingly warm seawater is starting to intrude sideways underneath the ice, some of which is grounded below sea level.

As regards the deglaciations and the roles of the abovementioned priming mechanisms -- ice-sheet dynamics and deep-ocean carbon dioxide storage -- two separate questions must be distinguished. One concerns the magnitudes of deglaciations. The other concerns their timings, every 100 millennia or so. For instance, why aren't they just timed by the strongest peaks in the orbital curve above?

It's hard to assess the timescale for ocean priming because, here, our modelling ability is even more limited, not least regarding the details of upper-ocean circulation and stratification where phytoplankton live (see for instance Marchitto et al. 2006, and my notes thereto). We need differences between storage rates and leakage rates, and neither is modelled, nor observationally constrained, with anything like sufficient accuracy. However, Abe-Ouchi et al. make a strong case that the timings of deglaciations, as distinct from their magnitudes, must be largely determined by the ice sheets. That conclusion depends not on a small difference between ill-determined quantities but, rather, on a single gross order of magnitude, namely the extreme slowness of ice-sheet buildup by snow accumulation, which is key to their model results. And ocean priming seems unlikely to be slow enough to account for the full 100-millennia timespan. But the results also reinforce the view that the two priming mechanisms are both important for explaining the huge magnitudes of deglaciations.
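
The point just made about differences between storage rates and leakage rates deserves a small numerical illustration. Here is a minimal sketch in Python, using purely made-up numbers rather than real carbon-budget estimates; the same arithmetic applies to any budget computed as one large, uncertain rate minus another, such as the ice-sheet mass budgets discussed further below.

    import random

    # Toy illustration, with purely hypothetical numbers, of why a budget
    # computed as the difference between two large, ill-determined rates is
    # so poorly known. Think of 'storage' and 'leakage' of deep-ocean carbon,
    # each uncertain by plus or minus ten percent.

    random.seed(1)

    STORAGE_RATE = 1.00    # arbitrary units; hypothetical values only
    LEAKAGE_RATE = 0.97
    UNCERTAINTY  = 0.10    # plus or minus 10 percent on each rate

    def sample(rate):
        return rate * (1.0 + random.uniform(-UNCERTAINTY, UNCERTAINTY))

    net = [sample(STORAGE_RATE) - sample(LEAKAGE_RATE) for _ in range(100000)]
    fraction_positive = sum(1 for x in net if x > 0) / len(net)

    print(f"fraction of samples with net storage positive: {fraction_positive:.2f}")
    # Prints about 0.64: each rate is 'known' to ten percent, yet we cannot
    # even say with confidence whether carbon is accumulating or leaking away.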

Today, in the year 2017, with atmospheric carbon dioxide just over the 400 ppmv mark and far above the pre-industrial 280 ppmv, we have already had a total, natural plus artificial, carbon-dioxide injection more than twice as large as the preceding natural injection, as measured by atmospheric buildup. Even though the system's sensitivity may be less extreme than just before a deglaciation, the climate response would be large even if the buildup were to stop tomorrow.

A consideration of sea levels puts this in perspective. A metre of sea level rise is only a tiny fraction of the 100 metres or more by which sea levels rose between 20 millennia ago and today. It's overwhelmingly improbable that an atmospheric carbon-dioxide buildup twice as large as the natural range, let alone six times as large, or more, as advocated by the climate disinformers and their political allies, would leave sea levels clamped precisely at today's values. There is no known, or conceivable, mechanism for such clamping -- it would be Canute-like to suppose that there is -- and there's great scope for substantial further sea level rise. For instance, a metre of global-average sea level rise corresponds to only 5% of today's Greenland ice plus 1% of today's Antarctic ice. That's nothing at all by comparison with a deglaciation scenario, but it's already very large indeed from a human and geopolitical perspective. And it could easily be several metres or more, over the coming decades and centuries.
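
In case you'd like to check that five-percent-plus-one-percent figure, here is a minimal sketch using commonly quoted sea-level equivalents for the two ice sheets, roughly 7 metres of global sea level locked up in Greenland and roughly 58 metres in Antarctica; the precise values differ slightly between published estimates.

    # Rough cross-check of the '5% of Greenland plus 1% of Antarctica' figure,
    # using commonly quoted sea-level equivalents (approximate values only).

    GREENLAND_SLE_M  = 7.4    # metres of global sea level locked up in Greenland ice
    ANTARCTICA_SLE_M = 58.0   # metres locked up in Antarctic ice

    rise = 0.05 * GREENLAND_SLE_M + 0.01 * ANTARCTICA_SLE_M
    print(f"5% of Greenland + 1% of Antarctica: about {rise:.2f} m of sea level rise")
    # prints about 0.95 m, i.e. roughly a metre, as stated in the text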

To be sure, the response to the artificially-injected carbon dioxide will be moderated by a repartitioning of that carbon dioxide between atmosphere, ocean and land-based biosphere and by what's technically called carbon-dioxide opacity, producing what's called a logarithmic dependence in the greenhouse heating effect (e.g. Pierrehumbert 2010, sec. 4.4.2). Logarithmic dependence means that the magnitude of the heating effect is described by a graph that continues to increase as atmospheric carbon dioxide increases, but progressively less steeply. That's well known and was pointed out long ago by Arrhenius. But those moderating factors are not enough to stop the present and future carbon-dioxide injections from taking us far beyond the pre-industrial state. So I come back to my main point. Several metres of eventual sea level rise above today's levels is not just highly probable, in my judgement -- if the climate disinformers have their way -- but is also an ultra-conservative, minimal estimate.
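
As an aside on the logarithmic dependence, here is a minimal sketch of what it means in numbers. It uses a widely quoted approximate fit for the extra greenhouse heating from carbon dioxide; the coefficient 5.35 is an empirical fit from the research literature, quoted here only to illustrate the shape of the curve.

    import math

    # Illustration of the logarithmic dependence: a widely quoted approximate
    # fit gives the extra greenhouse heating, relative to a reference
    # concentration c0, as roughly 5.35 * ln(c/c0) watts per square metre.

    def co2_heating(c_ppmv, c0_ppmv=280.0):
        # approximate extra heating, in W/m^2, relative to pre-industrial 280 ppmv
        return 5.35 * math.log(c_ppmv / c0_ppmv)

    for c in (280, 400, 560, 800):
        print(f"{c:4d} ppmv: about {co2_heating(c):4.1f} W/m^2 of extra heating")

    # Each successive doubling adds roughly the same increment, about 3.7 W/m^2:
    # the heating keeps increasing, but progressively less steeply per added ppmv.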

An integral part of all this is that artificial carbon-dioxide injections have cumulative and, from a human and geopolitical perspective, essentially permanent and irreversible effects on the entire atmosphere-ocean-land system. Among these are large effects on ocean ecosystems and food chains, the destruction of coral reefs for instance, as they respond to rising temperatures and to the ocean acidification that results from repartitioning. Our own food chains will be affected. The natural processes that can take the artificially-injected carbon dioxide back out of the system as a whole have timescales far longer even than tens of millennia (e.g. Archer 2009). To be sure, the carbon dioxide could be taken back out artificially, using known technologies -- that's by far the safest form of geoengineering, so-called -- but the expense makes such a thing politically impossible at present.

Cumulativeness means that the effect of our carbon-dioxide injections on the climate system depends mainly on the total amount injected, and hardly at all on the rate of injection.

From a risk-management perspective it would be wise to assume that the climate-system amplifier is already more sensitive than in the pre-industrial past. Here we move away from ultra-conservative, minimal estimates as the system moves further and further away from its best-known states, those of the past few hundred millennia. There are several reasons to expect increasing sensitivity, among them the ice-sheet sensitivity already mentioned. Another is the loss of sea ice in the Arctic, increasing the area of open ocean exposed to the summer sun. The dark open ocean absorbs solar heat faster than the white sea ice. This is a strong positive feedback, accelerating the melting; it's called an albedo feedback.  A third reason is the existence of what are called methane clathrates, or frozen methane hydrates, large amounts of which are stored underground in high latitudes.

Methane clathrates consist of methane gas trapped in ice instead of in shale. There are large amounts buried in permafrosts, probably dwarfing conventional fossil-fuel and shale-gas reserves although the precise amounts are uncertain (Valero et al. 2011). As the system moves further beyond pre-industrial conditions, increasing amounts of clathrates will melt and release methane gas. It's well documented that such release is happening today, at a rate that isn't well quantified but is almost certainly increasing (e.g. Shakhova et al. 2014, Andreassen et al. 2017). Permafrost has become another self-contradictory term. This is another positive feedback whose ultimate magnitude is highly uncertain but which does increase the probability, already far from negligible, that the Earth system might go all the way into a very hot, very humid state like that of the early Eocene around 56 million years ago. Methane that gets into our atmosphere jolts the system toward hotter states because in the short term it's more powerful than methane that's burnt or otherwise oxidized. Its greenhouse-warming contribution per molecule is far greater than that of the carbon dioxide to which it's subsequently converted within a decade or so (e.g. Pierrehumbert 2010, sec. 4.5.4).

Going into a new Eocene would mean first that there would be no great ice sheets at all, even in Antarctica, second that sea levels would be many tens of metres higher than today, i.e., a few hundred feet higher, and third that cyclonic storms would probably be more powerful than today, perhaps far more powerful. A piece of robust and well-established physics, called the Clausius-Clapeyron relation, says that air can hold increasing amounts of weather fuel, in the form of water vapour, as temperatures increase -- around six to seven percent more weather fuel for each degree Celsius. The effects are seen in today's flash flooding. The geology of the early Eocene shows clear evidence of `storm flood events' and massive soil erosion (e.g. Giusberti et al. 2016). It's therefore no surprise that, as now cross-checked from genetic studies, some land-based mammals found it useful to migrate into the oceans around the time of the early Eocene. Within a relatively short time some of them had evolved into fully aquatic mammals like today's whales and dolphins, as might be expected from selective pressures due to extreme surface storminess.
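
The six-to-seven-percent figure can be checked with a short calculation. Here is a minimal sketch using one of the standard empirical fits for the saturation vapour pressure of water (Bolton's formula); the exact choice of fit doesn't matter for the point being made.

    import math

    # Clausius-Clapeyron illustration: how much 'weather fuel' (water vapour)
    # saturated air can hold, versus temperature. Uses Bolton's empirical fit
    # for saturation vapour pressure, e_s = 6.112 * exp(17.67*T/(T+243.5)) hPa,
    # with T in degrees Celsius.

    def saturation_vapour_pressure_hPa(t_celsius):
        return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

    for t in (0, 10, 20, 30):
        e_now = saturation_vapour_pressure_hPa(t)
        e_warmer = saturation_vapour_pressure_hPa(t + 1.0)
        increase = 100.0 * (e_warmer / e_now - 1.0)
        print(f"{t:2d} degC: about {increase:.1f}% more water vapour per extra degree")

    # Prints values of roughly six to seven percent per degree Celsius at
    # typical surface temperatures, a little more in cold air and a little
    # less in very warm air, consistent with the figure quoted in the text.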

The early Eocene was hot and humid despite the Sun being about half a percent weaker than today. We do not have accurate records of atmospheric carbon dioxide at that time. But extremely high values, perhaps thousands of ppmv, are to be expected from large-scale volcanic activity. Past volcanic activity was sometimes far greater and more extensive than anything within human experience, as with the pre-Eocene lava flows that covered large portions of India, whose remnants form the Deccan Traps, and -- actually overlapping the time of the early Eocene and even more extensive -- the so-called North Atlantic Igneous Province. Sufficiently high carbon dioxide can easily explain the high temperatures and high humidity, despite the weaker Sun.

The weakness of the Eocene Sun counts as something else that we know about with extremely high scientific confidence. The rate at which the Sun's total power output gets stronger is roughly 1 percent per hundred million years. The solar models describing that power-output increase have become extremely secure -- very tightly cross-checked -- now that the so-called neutrino puzzle has been resolved. Even before that puzzle was resolved a few years ago, state-of-the-art solar models were tightly constrained by a formidable array of observational data, including very precise data characterizing the Sun's acoustic vibrations, called helioseismic data. The same solar models are now known to be consistent, also, with the measured fluxes of different kinds of neutrino. That's a direct check on conditions near the centre of the Sun.
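
The arithmetic linking that brightening rate to the weaker Eocene Sun is simple enough to check; here is a one-line sketch using just the rough numbers quoted above.

    # Rough consistency check on the 'half a percent weaker' Eocene Sun,
    # using the brightening rate quoted above, roughly 1% per 100 million years.
    brightening_per_million_years = 0.01 / 100    # fractional increase per million years
    eocene_age_in_million_years = 56
    dimming = brightening_per_million_years * eocene_age_in_million_years
    print(f"Eocene Sun weaker by about {100 * dimming:.2f}%")    # prints about 0.56%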

These solar models, plus recent high-precision observations, plus recent advances in understanding the details of radiation from the Sun's surface and atmosphere, point strongly to another significant conclusion. Variability in the Sun's output on timescales less than millions of years comes from variability in sunspots and other magnetic phenomena. These phenomena are by-products of the fluid motion caused by thermal convection in the Sun's outer layers. That variability is now known to have climatic effects distinctly smaller than the effects of carbon dioxide injections to date, and very much smaller than those to come. The climatic effects from solar magnetism include not only the direct response to a slight variability in the Sun's total power output, but also some small and subtle effects from a greater variability in the Sun's ultraviolet radiation, which is absorbed mainly at stratospheric and higher altitudes. The main points are well covered in reviews by Foukal et al. (2006) and Solanki et al. (2013). Controversially, there might be an even more subtle effect from cloud modulation by cosmic-ray shielding. But to propose that any of these effects predominate over greenhouse-gas heating and even more that their timings should coincide with, for instance, the timings of full deglaciations -- the timings of the marked peaks in the orbital curve above -- would be to propose something that's again overwhelmingly improbable.

With the Sun half a percent stronger today, and a new Eocene in prospect -- one might call it the Eocene syndrome -- we must also consider what might similarly be called the Venus syndrome. That's the ocean-destroying, life-extinguishing `runaway greenhouse' leading to a state like the observed state of the planet Venus, with its molten-lead surface temperatures. Here we can be cautiously optimistic. Even if the Earth does go into a new Eocene -- perhaps after a few centuries, or a millennium or two -- the Venus syndrome seems unlikely to follow, on today's best estimates. Modelling studies suggest that the Earth can probably avoid tropical runaway-greenhouse conditions with the help of the same powerful cyclonic storms, transporting heat and weather fuel more and more copiously away from an expanding area of tropical ocean into high winter latitudes.

Coming back to our time in the twenty-first century, let's take a closer look at the storminess issue for the near future. Once again, the Clausius-Clapeyron relation is basic. And in today's conditions, and in those of coming decades, the large-scale fluid dynamics will strongly influence the characteristic spatial scales and morphologies of extratropical jetstreams and cyclones in the manner that's familiar today, suggesting in turn that the peak intensities of the jetstreams, cyclones and concentrated rainfall events will increase as they're fed with, on average, more and more weather fuel from the tropics and subtropics -- more and more weather fuel going into similar-sized regions.

So the transition toward a hotter, more humid climate is likely to show extremes in both directions at first: wet and dry, hot and cold, heatwaves and severe winters. Fluctuations involving jetstream meanders, cyclones, and tropical moist convection are all likely to intensify, on average, with increasingly large excursions in both directions. They're all tied together by the fluid dynamics. Thanks to the Earth's rotation, fluid-dynamical influences operate all the way out to planetary scales. In the technical literature such long-range influences are called `teleconnections'. They form a global-scale jigsaw of influences, a complex web of cause and effect, operating over a range of timescales out to decades. They have a role for instance in El Niño and other phenomena involving large-scale fluctuations in tropical sea-surface temperatures on decadal timescales.

None of these phenomena are adequately represented in the big climate models. Although the models are important as part of our hypothesis-testing toolkit -- when used appropriately -- they cannot yet accurately simulate such things as the fine details and morphology of jetstreams, cyclones, tropical moist convection, the teleconnections between them, and their peak intensities. So as yet they're inadequate as a way of predicting statistically the timings, sequences, and geographic patterns of events, and the precise magnitudes of extreme events, over the coming decades and centuries. Estimating weather extremes in the near future is one of the toughest challenges for climate science.

For instance, what will be the frequencies, intensities, and locations of droughts and floods? It's highly probable that extreme rainstorms and hurricanes will get worse, but highly uncertain where they'll strike next, and how often. Will there be tipping points further down the line when, for instance, the release of methane from clathrates accelerates further? Will sea levels be up by a fraction of a metre, or by two metres, by the end of this century? For risk-management purposes I'd say that anything up to two metres is a reasonable educated guess at the large range of uncertainty. For reasons to be touched on shortly, there's even a slight chance that sea levels might go down for a while, bucking the long-term trend. A recent, authoritative and very careful summary (NAS-RS 2014) aptly warns us that `the climate system may be full of surprises'.

Among the many reasons why there are such large uncertainties about, for instance, future sea level changes, let me mention just two of the simplest reasons -- leaving aside clathrates and albedo feedback and many other concerns. First, the big climate models cannot simulate ice flow accurately enough. Second, as already indicated, they cannot simulate weather systems, storm locations and intensities, and the precipitation of rain and snow accurately enough. So, for instance, they cannot even predict whether the Greenland and Antarctic ice sheets will grow or decay at first. For that, and for the associated contributions to sea level change, yet again you need the difference between two ill-determined quantities. In this case they are snow accumulation rates and ice-sheet loss rates. The big climate models, by the way, are far worse at simulating individual storms and precipitation than today's operational weather-forecasting models. The latter owe their much better though still imperfect performance to a far finer spatial resolution, implying a far greater computational cost per day's simulation.

The computational cost still makes it impossible to run such operational models out to many centuries. In a recent landmark study, however, an operational model was run on a UK Meteorological Office supercomputer long enough, for the first time, to support the expectation that climate change will increase the magnitudes and frequencies of extreme summer rainfall events in the UK (Kendon et al. 2014). The results point to even greater extremes than expected from the Clausius-Clapeyron relation alone. There's a positive feedback in which more weather fuel amplifies thundercloud updrafts, enabling them to suck up still more weather fuel, for a short time at least.

Such rainfall extremes are spatially compact and the most difficult of all to simulate. As computer power increases, though, there will be many more such studies -- transcending recent IPCC estimates by more accurately describing the statistics of extreme rainstorms and snowstorms, and droughts and heatwaves, in all seasons and locations. Winter storms are spatially more extensive and are better simulated, but again only by the operational weather-forecasting models and not by the big climate models.

It's mainly for the two reasons already noted -- inaccuracy in ice-flow and storm simulation by the big climate models -- that for risk-management purposes I would still stick to a larger range of uncertainty about storminess extremes and about, for instance, end-of-century sea levels, beyond IPCC's current estimates, which stretch only to just under a metre. Ice-flow modelling is peculiarly difficult because of the need to describe slipping and lubrication at the base of an ice sheet, over areas whose sizes, shapes, and frictional properties are hard to predict, while accounting for the highly complex fracture patterns that might or might not develop as meltwater chisels downwards or seawater intrudes sideways.

Dear reader, before taking my leave I owe you a bit more explanation of the amplifier metaphor. As should already be clear, it's an imperfect metaphor at best. To portray the climate system as an amplifier we need to recognize not only its highly variable sensitivity but also its many intricately-linked components operating over a huge range of timescales -- some of them out to multi-decadal, multi-century, multi-millennial and even longer. And the climate-system amplifier would be pretty terrible as an audio amplifier, if only because it has so much internal noise and variability, on so many timescales, manifesting the `nonlinearity' already mentioned. An audio aficionado would call it a nasty mixture of gross distortions and feedback instabilities -- as when placing a microphone too close to the loudspeakers -- except that the instabilities have many timescales. Among the longer-timescale components there are land-based processes including the waxing and waning of forests, wetlands, grasslands, and deserts, as well as the ice-flow sensitivity and deep-ocean storage of carbon dioxide already mentioned -- these last two, with their massive but highly intermittent consequences, operating on timescales all the way out to 100 millennia.

Some of the system's noisy internal fluctuations are relatively sudden, for instance showing up as the Dansgaard-Oeschger warming events encountered in chapter 2, subjecting tribes of our ancestors to major climatic change well within an individual's lifetime and probably associated with a collapse of upper-ocean stratification and sea-ice cover in the Nordic Seas. A similar tipping point might or might not occur in the Arctic Ocean in the not-too-distant future, with hard-to-predict consequences for the Greenland ice sheet and the clathrates.

All these complexities help the climate disinformers, of course, because from all the many signals within the system one can always pick out some that seem to support practically any view one wants, especially if one replaces insights into the workings of the system, as seen from several viewpoints, by superficial arguments that conflate timing with cause and effect. Natural variability and noise in the data provide many ways to cherry-pick data segments, showing what looks like one or another trend or phase relation and adding to the confusion about different timescales. To gain what I'd call understanding, or insight, one needs to include good thought-experiments in one's conceptual arsenal. Such thought-experiments are involved, for instance, when considering injections of carbon dioxide and methane into the atmosphere, whether natural or artificial or both.

I also need to say more about why we can trust the ice-core records of atmospheric carbon dioxide, and methane as well. Along with today's atmospheric measurements the ice-core records count as hard evidence, by virtue of the simplicity of the chemistry and the meticulous cross-checking that's been done -- for instance by comparing results from different methods of extracting the carbon dioxide trapped in ice, by comparing results between different ice cores having different accumulation rates, and by comparing with the direct atmospheric measurements that have been available since 1958. We really do know with practical certainty the past as well as the present atmospheric carbon-dioxide concentrations, with accuracies of the order of a few percent, as far back as about eight hundred millennia, even though not nearly as far back as the Eocene. Throughout the past eight hundred millennia, atmospheric carbon dioxide concentrations varied roughly within the range 180 to 290 ppmv, as already noted. More precisely, all values were within that range except for a very few outlier values closer to 170 and 300 ppmv. All values without exception were far below today's 400 ppmv, let alone the 800 ppmv that might be reached by the end of this century.

And why do I trust the geological record of past sea levels, going up and down by 100 metres or more? We know about sea levels from several hard lines of geological evidence, including direct evidence from old shoreline markings and coral deposits. It's difficult to allow accurately for such effects as the deformation of the Earth's crust and mantle by changes in ice and ocean-mass loading, and tectonic effects generally. But the errors from such effects are likely to be of the order of metres, not many tens of metres, over the last deglaciation at least. And, as is well known, an independent cross-check comes from oxygen isotope records (e.g. Shackleton 2000), reflecting in part the fractionation between light and heavy oxygen isotopes when water is evaporated from the oceans and deposited as snow on the great ice sheets. That cross-check is consistent with the geological estimates.

*   *   *

So -- in summary -- we may be driving in the fog, but the fog is clearing. The disinformers urge us to shut our eyes and step on the gas. The current US president wants us to burn lots of `beautiful clean coal', pushing hard toward a new Eocene. But I dare to hope that their campaign will soon meet the same fate as the disinformation campaigns over the ozone hole, and over tobacco and lung cancer, eventually met.

Earth observation and modelling will continue to improve. Younger generations of scientists, engineers and business entrepreneurs will see more and more clearly through the real fog of scientific uncertainty, as well as through the artificial fog of disinformation. The connection between fossil-fuel burning and weather extremes will become increasingly clear as computer power increases and case studies accumulate. Scientists will continue to become more skilful as communicators.

Of course climate isn't the only huge challenge ahead. There's the evolution of pandemic viruses and of antibiotic resistance in bacteria. There's the threat of asteroid strikes. There's the enormous potential for good or ill in new nanostructures and materials, and in genetic engineering, information technology, social media, cyberwarfare, AI (teachable artificial intelligence), and automated military warfare (`Petrov's nightmare', one might call it), all of which demand clear thinking and risk management (e.g. Rees 2014). On AI, for instance, clear thinking requires escape from yet another false dichotomy, the mindset that it's us versus them, with either us, or them, the machines, ending up `in charge' or `taking control' -- whatever that might mean -- and completely missing the complexity, and plurality, of human-machine interaction and the possibility that it might be cooperative, with each playing to its strengths. Why not have a few more `brain hemispheres', natural and artificial, helping us to solve our problems and to cope with the unexpected?

On risk management, the number of ways to go wrong is combinatorially large, and some of them have low probability but also the potential for cataclysmic consequences. So I come back to my hope that good science -- which in practice means open science with its powerful ideal and ethic, its openness to the unexpected, and its humility -- will continue to survive and prosper despite all the forces ranged against it, commercial, political, and bureaucratic on top of hypercredulous narrow-mindedness, dichotomization, and plain old human weakness.

After all, there are plenty of daring and inspirational examples. One of them is open-source software, and another is Peter Piot's work on HIV/AIDS and other viral diseases. Yet another is the human genome story. There, the scientific ideal and ethic prevailed against corporate might (Sulston and Ferry 2003), keeping the genomic data available to open science. When one contemplates not only human weakness but also the vast resources devoted to short-term profit, by fair means or foul, one can't fail to be impressed that good science gets anywhere at all. That it has done so again and again, against the odds, is to me, at least, very remarkable and indeed inspirational.

The ozone-hole story, in which I myself was involved professionally, is another such example. The disinformers tried to discredit everything we did, using the full power of their commercial and political weapons. What we did was seen as heresy -- as with lung cancer -- a threat to share prices and profits. And yet the science, including all the cross-checks between different lines of evidence both observational and theoretical, became strong enough, adding up to enough in-depth understanding, despite the complexity of the problem, to defeat the disinformers in the end. The result was the Montreal Protocol on ozone-depleting chemicals, a new symbiosis between regulation and market forces. That too was inspirational. And it has bought us a bit more time to deal with climate, because the ozone-depleting chemicals happen to be potent greenhouse gases. If left unregulated, they would have accelerated climate change still further.

And on climate itself we now seem to have reached a similar turning point. The Paris climate agreement of December 2015 prompts a dawning hope that the politics is changing enough to allow another new, and similarly heretical, symbiosis. The disinformers are still very powerful, within the newsmedia and within many political circles and constituencies. But free-market fundamentalism and triumphalism were somewhat weakened by the 2008 financial crash. On top of that, the old push to burn all fossil-fuel reserves (e.g. Klein 2014) -- implying a huge input to the climate amplifier -- is increasingly seen as risky even in purely financial terms. It is seen as heading toward another financial crash, which will be all the bigger the longer it's delayed -- what's now called the bursting of the shareholders' carbon bubble. Indeed, some of the fossil-fuel companies have now recognized the need to change their business models and are seriously considering, for instance, carbon capture and storage -- allowing fossil fuels to be burnt without emitting carbon dioxide into the atmosphere -- as well as helping to scale up carbon-neutral, `renewable' energy including third-world-friendly distributed energy systems. That's the path to prosperity noted by economist Nicholas Stern (2009); see also Oxburgh (2016).

A further sign of hope is the recent publication of an outstandingly cogent climate-risk assessment (King et al. 2015), drawing on professional expertise not only from science but also from the insurance industry and the military and security services, saying that there's no need for despair or fatalism because `The risks of climate change may be greater than is commonly realized, but so is our capacity to confront them.' And there are signs of a significant corporate response in, for instance, the 2015 CDP Global Climate Change Report (Dickinson et al. 2015). If we're lucky, all this might tip the politics enough for the Paris agreement to take hold, despite the inevitable surge of disinformation against it.

As regards good science in general, an important factor in the genome story, as well as in the ozone-hole story, was a policy of open access to experimental data. That policy was one of the keys to success. The climate-science community was not always so clear on that point, giving the disinformers further opportunities. However, the lesson now seems to have been learnt.

I don't think, by the way, that everyone contributing to climate disinformation is consciously dishonest. Honest scepticism is crucial to science; and I wouldn't question the sincerity of colleagues I know personally who feel, or used to feel, that the climate-science community got things wrong. Indeed I'd be the last to suggest that that community, or any other scientific community, has never got anything wrong even though my own sceptical judgement is that today's climate-science consensus is mostly right and that, if anything, it underestimates the problems ahead.

We have to remember that unconscious assumptions and mindsets are always involved, in everything we do and think about. The anosognosic patient is perfectly sincere in saying that a paralysed left arm isn't paralysed. There's no dishonesty. It's just an unconscious thing, an extreme form of mindset. Of course the professional art of disinformation involves what sales and public-relations people call `positioning' -- the skilful manipulation of other people's unconscious assumptions. It's related to what cognitive scientists call `framing' (e.g. Kahneman 2011, Lakoff 2014, & refs.).

As used by professional disinformers the framing technique exploits, for instance, the dichotomization instinct -- evoking the mindset that there are just two sides to an argument. The disinformers then insist that their `side' merits equal weight!  This and other such techniques illustrate what I called the dark arts of camouflage and deception so thoroughly exploited, now, by the globalized plutocracies and their political allies, drawing on their vast financial resources and their deep knowledge of the way perception works.

One of the greatest such deceptions has been the mindset, so widely and skilfully promoted, that carbon-neutral or renewable energy is `impractical' and `uneconomic', despite all the demonstrations to the contrary. It's inspirational, therefore, to see the disinformers looking foolish and facing defeat once again, as innovations in carbon capture and in the smart technology of renewables, including electricity storage, distributed energy systems, and peak power management, gain more and more traction in the business world.

In science, in business, and no doubt in politics too, it often takes a younger generation to achieve what Max Born called the `loosening of thinking' needed to expose mindsets and make progress. Science, at any rate, has always progressed in fits and starts, always against the odds, and always involving human weakness alongside a collective struggle with mindsets exposed, usually, through the efforts of a younger generation. The great geneticist J.B.S. Haldane famously distinguished four stages: (1) This is worthless nonsense; (2) This is an interesting, but perverse, point of view; (3) This is true, but quite unimportant; (4) I always said so. The disputes over evolution and natural selection are a case in point.

So here's my farewell message to young scientists, technologists, and entrepreneurs. You have the gifts of intense curiosity and open-mindedness. You have the best chance of dispelling mindsets and making progress. You have enormous computing power at your disposal, and brilliant programming tools, and observational and experimental data far beyond my own youthful dreams of long ago. You know the value of arguing over the evidence, not to score personal or political points but to reach toward an improved understanding. You'll have seen how new insights from systems biology have opened astonishing new pathways to technological innovation (e.g. Wagner 2014, chapter 7).

Your generation will see the future more and more clearly. Whatever your field of expertise, you know that it's fun to be curious and to find out how things work. It's fun to do thought-experiments and computer experiments. It's fun to develop and test your in-depth understanding, the illumination that can come from looking at a problem from more than one angle. You know that it's worth trying to convey that understanding to a wide audience, if you get the chance. You know that you're dealing with complexity, and that you'll need to hone your communication skills in any case, if only to develop cross-disciplinary collaboration, the usual first stage of which is jargon-busting -- as far as possible converting turgid technical in-talk into plain, lucid speaking.

So hang in there. Your collective brainpower will be needed as never before. Science and technology don't give us the Answer to Everything, but we're sure as hell going to need them.


The original Lucidity and Science publications, including video and audio demonstrations, can be downloaded via this link.


Acknowledgements: Many kind friends and colleagues have helped me with advice, information, and comments. I am especially grateful for pointers to the recent developments in biology and palaeoclimate on which my account depends so heavily. In addition to those mentioned in the acknowledgements sections of the original Lucidity and Science papers, I'd like to thank especially Pat Bateson, James Jackson, Kevin Laland, James Maas, Nick McCave, Steve Merrick, Gos Micklem, Antony Pay, Mark Salter, Emily Shuckburgh, Luke Skinner, Marilyn Strathern, and John Sulston.


REFERENCES

Abbott, B.P., et al., 2015: Observation of gravitational waves from a binary black hole merger. Physical Review Letters 116, 061102. This was a huge team effort at the cutting edge of high technology, decades in the making, to cope with the tiny amplitude of Einstein's ripples. The `et al.' stands for the names of over a thousand other team members. The first event was observed on 14 September 2015. Another such event, observed on 26 December 2015, and cross-checking Einstein's theory even more stringently, was reported in a second paper Abbott, B.P., et al., 2016, Physical Review Letters 116, 241103. This second paper reports the first observational constraint on the spins of the black holes, with one of the spins almost certainly nonzero.

Abe-Ouchi, A., Saito, F., Kawamura, K., Raymo, M.E., Okuno, J., Takahashi, K., and Blatter, H., 2013: Insolation-driven 100,000-year glacial cycles and hysteresis of ice-sheet volume. Nature 500, 190-194.

Alley, R.B., 2000: Ice-core evidence of abrupt climate changes. Proc. Nat. Acad. Sci. 97, 1331-1334. This brief Perspective is a readable summary, from a respected expert in the field, of the way in which measurements from Greenland ice have demonstrated the astonishingly short timescales of Dansgaard-Oeschger warmings, typically less than a decade and only a year or two in at least some cases, including that of the most recent or `zeroth' such warming about 11.7 millennia ago. The warmings had magnitudes typically, as Dokken et al. (2013) put it, `of 10±5°C in annual average temperature'.

Alley, R.B., 2007: Wally was right: predictive ability of the North Atlantic `conveyor belt' hypothesis for abrupt climate change. Annual Review of Earth and Planetary Sciences 35, 241-272. This paper incorporates a very readable, useful, and informative survey of the relevant palaeoclimatic records and recent thinking about them. Wally Broecker's famous `conveyor belt' is a metaphor for the ocean's global-scale meridional overturning circulation that has greatly helped efforts to understand the variability observed during the glacial cycles. Despite its evident usefulness, the metaphor embodies a fluid-dynamically unrealistic assumption, namely that shutting off North Atlantic deep-water formation also shuts off the global-scale return flow. (If you jam a real conveyor belt somewhere, then the rest of it stops too.) In this respect the metaphor needs refinements such as those argued for in Dokken et al. (2013), recognizing that parts of the `conveyor' can shut down while other parts continue to move, transporting heat and salt at significant rates. As Dokken et al. point out, such refinements are likely to be important for understanding the most abrupt of the observed changes, the Dansgaard-Oeschger warmings (see also Alley 2000), and the Arctic Ocean tipping point that may now be imminent.

Andreassen, K., Hubbard, A., Winsborrow, M., Patton, H., Vadakkepuliyambatta, S., Plaza-Faverola, A., Gudlaugsson, E., Serov, P., Deryabin, A., Mattingsdal, R., Mienert, J., and Bünz, S., 2017: Massive blow-out craters formed by hydrate-controlled methane expulsion from the Arctic seafloor, Science, 356, 948-953. It seems that the clathrates in high latitudes have been melting ever since the later part of the last deglaciation, probably contributing yet another positive feedback, both then and now. Today, the melting rate is accelerating to an extent that hasn't yet been well quantified but is related to ocean warming and to the accelerated melting of the Greenland and West Antarctic ice sheets, progressively unloading the permafrosts beneath. Reduced pressures lower the clathrate melting point.

Archer, D., 2009: The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth's Climate. Princeton University Press, 180 pp.

Bateson, P., and Martin, P., 1999: Design for a Life: How Behaviour Develops. London, Jonathan Cape, Random House, 280 pp.

Born, G., 2002: The wide-ranging family history of Max Born. Notes and Records of the Royal Society (London) 56, 219-262 and Corrigendum 56, 403 (Gustav Born, quoting his father Max, who was awarded the Nobel Prize in physics, belatedly in 1954. The quotation comes from a lecture entitled Symbol and Reality (Symbol und Wirklichkeit), given at a meeting in 1964 of Nobel laureates at Lindau on Lake Constance.)

Burke, A., Stewart, A.L., Adkins, J.F., Ferrari, R., Jansen, M.F., and Thompson, A.F., 2015: The glacial mid-depth radiocarbon bulge and its implications for the overturning circulation. Paleoceanography, 30, 1021-1039.

Carroll, Sean, 2016: The Big Picture: On the Origins of Life, Meaning and the Universe Itself. London, Oneworld, 480pp. Sean Carroll is a physicist with unusually wide interests. This book includes some of the best discussions of science and multi-level thinking that I've seen, clearly showing how extreme reductionism limits our understanding.

Conway, F. and Siegelman, J., 1978: Snapping. New York, Lippincott, 254 pp.

Danchin, E. and Pocheville, A., 2014: Inheritance is where physiology meets evolution. Journal of Physiology 592, 2307-2317. This complex but very interesting review is one of two that I've seen -- the other being the review by Laland et al. (2011) -- that goes beyond earlier reviews such as those of Laland et al. (2010) and Richerson et al. (2010) in recognizing the importance of multi-timescale dynamical processes in biological evolution. It seems that such recognition is still a bit unusual, even today, thanks to a widespread assumption that timescale separation implies dynamical decoupling (see also Thierry 2005). In reality there is strong dynamical coupling, the authors show, involving an intricate interplay between different timescales. It's mediated in a rich variety of ways including not only niche construction and genome-culture coevolution but also, at the physiological level, developmental plasticity along with the non-genomic heritability now called epigenetic heritability. One consequence is the creation of hitherto unrecognized sources of heritable variability, the crucial `raw material' that allows natural selection to function. The review feeds into a wider discussion now running in the evolutionary-biology community. A sense of recent issues, controversies and mindsets can be found in, for instance, the online discussion of a Nature Commentary by Laland, K. et al. (2014): Does evolutionary theory need a rethink? Nature 514, 161-164. (In the Commentary, for `gene' read `replicator' including regulatory DNA. See also the online comments on the `3rd alphabet of life', the glycome, which consists of `all carbohydrate structures that get added to proteins post translationally... orders of magnitude more complex than the proteome or genome... takes proteins and completely alters their behavior... or can fine tune their activity... a massive missing piece of the puzzle...')

Dawkins, R., 2009: The Greatest Show On Earth. London, Bantam Press, 470 pp. I am citing this book for two reasons. First, chapter 8 beautifully illustrates why self-assembling building blocks and emergent properties are such crucial ideas in biology, and why the `genetic blueprint' idea is so misleading. Second, however, as in Pinker (1997), it makes an unsupported assertion -- for instance in a long footnote to chapter 3 (p. 62) -- that natural selection takes place via selective pressures exerted solely at one level, that of the individual organism, and that to suppose otherwise is an outright `fallacy'. The argument is circular in that it relies on the oldest population-genetics models, which confine attention to individual organisms and to whole-population averages by prior assumption. To be sure, the flow of genomic information from parents to offspring is less clearcut at higher levels than at individual-organism level. It is more probabilistic and less deterministic. But many lines of evidence show that higher-level information flows can nevertheless be important, especially when the flows are increasingly channeled within group-level `survival vehicles' created, or rather reinforced, by language barriers (Pagel 2012), gradually accelerating the coevolution of genome and culture in all its multi-timescale intricacy (e.g. Danchin and Pocheville 2014 and comments thereon). We are indeed talking about the greatest show on Earth. It is even greater, more complex, more wonderful, and indeed more dangerous, than Dawkins suggests.

Dickinson, P. et al. 2015: Carbon Disclosure Project Global Climate Change Report 2015. This report appears to signal a cultural sea-change as increasing numbers of corporate leaders recognize the magnitude of the climate problem and the implied business risks and opportunities. See also, for instance, the Carbon Tracker website.

Dokken, T.M., Nisancioglu, K. H., Li, C., Battisti, D.S., and Kissel, C., 2013: Dansgaard-Oeschger cycles: interactions between ocean and sea ice intrinsic to the Nordic seas. Paleoceanography, 28, 491-502. This is the first fluid-dynamically credible explanation of the extreme rapidity and large magnitude (see also Alley 2000) of the Dansgaard-Oeschger warming events. These events left clear imprints in ice-core and sedimentary records all over the Northern Hemisphere and were so sudden, and so large in magnitude, that a tipping-point mechanism must have been involved. The proposed explanation represents the only such mechanism suggested so far that could be fast enough.

Doolittle, W.F., 2013: Is junk DNA bunk? A critique of ENCODE. Proc. Nat. Acad. Sci., 110, 5294-5300. ENCODE is a large data-analytical project to look for signatures of biological functionality in genomic sequences. The word `functionality' well illustrates human language as a conceptual minefield. For instance the word is often, it seems, read to mean `known functionality having an adaptive advantage', excluding the many neutral variants, redundancies, and multiplicities revealed by studies such as those of Wagner (2014).

Dunbar, R.I.M., 2003: The social brain: mind, language, and society in evolutionary perspective. Annu. Rev. Anthropol. 32, 163-181. This review offers important insights into the selective pressures on our ancestors, drawing on the palaeoarchaeological and palaeoanthropological evidence. Figure 4 shows the growth of brain size over the past 3 million years, including its extraordinary acceleration in the past few hundred millennia.

Ehrenreich, B., 1997: Blood Rites: Origins and History of the Passions Of War. London, Virago and New York, Metropolitan Books, 292 pp. Barbara Ehrenreich's insightful and penetrating discussion contains much wisdom, it seems to me, not only about war but also about the nature of mythical deities and about human sacrifice, ecstatic suicide, and so on -- as in Stravinsky's Rite of Spring, and long pre-dating 9/11 and IS/Daish. (Talk about ignorance being expensive!)

Foukal, P., Fröhlich, C., Spruit, H., and Wigley, T.M.L., 2006: Variations in solar luminosity and their effect on the Earth's climate. Nature 443, 161-166, © Macmillan. An extremely clear review of some robust and penetrating insights into the relevant solar physics, based on a long pedigree of work going back to 1977. For a sample of the high sophistication that's been reached in constraining solar models, see also Rosenthal, C. S. et al., 1999: Convective contributions to the frequency of solar oscillations, Astronomy and Astrophysics 351, 689-700.

Gelbspan, R., 1997: The Heat is On: The High Stakes Battle over Earth's Threatened Climate. Addison-Wesley, 278 pp. See especially chapter 2.

Gilbert, C.D. and Li, W. 2013: Top-down influences on visual processing. Nature Reviews (Neuroscience) 14, 350-363. This review presents anatomical and neuronal evidence for the active, prior-probability-dependent nature of perceptual model-fitting, e.g. `Top-down influences are conveyed across... descending pathways covering the entire neocortex... The feedforward connections... ascending... For every feedforward connection, there is a reciprocal [descending] feedback connection that carries information about the behavioural context... Even when attending to the same location and receiving an identical stimulus, the tuning of neurons can change according to the perceptual task that is being performed...', etc.

Giusberti, L., Boscolo Galazzo, F., and Thomas, E., 2016: Variability in climate and productivity during the Paleocene-Eocene Thermal Maximum in the western Tethys (Forada section). Climate of the Past 12, 213-240. doi:10.5194/cp-12-213-2016. The early Eocene began around 56 million years ago with the so-called PETM, a huge global-warming episode with accompanying mass extinctions now under intensive study by geologists and paleoclimatologists. The PETM was probably caused by carbon-dioxide injections comparable in size to those from current fossil-fuel burning. The injections almost certainly came from massive volcanism and would have been reinforced, to an extent not yet well quantified, by methane release from submarine clathrates. The western Tethys Ocean was a deep-ocean site at the time and so provides biological and isotopic evidence both from surface and from deep-water organisms, such as foraminifera with their sub-millimetre-sized carbonate shells.

Gregory, R. L., 1970: The Intelligent Eye. London, Weidenfeld and Nicolson, 191 pp. This great classic is still well worth reading. It's replete with beautiful and telling illustrations of how vision works. Included is a rich collection of stereoscopic images viewable with red-green spectacles. The brain's unconscious internal models that mediate visual perception are called `object hypotheses', and the active nature of the processes whereby they're selected is clearly recognized, along with the role of prior probabilities. There's a thorough discussion of the standard visual illusions as well as such basics as the perceptual grouping studied in Gestalt psychology, whose significance for word-patterns I discussed in Part I of Lucidity and Science. In a section on language and language perception, Chomsky's `deep structure' is identified with the repertoire of unconscious internal models used in decoding sentences. The only points needing revision are speculations that the first fully-developed languages arose only in very recent millennia and that they depended on the invention of writing. That's now refuted by the evidence from Nicaraguan Sign Language (e.g. Kegl et al. 1999), showing that there are genetically-enabled automata for language and syntactic function.

Hoffman, D.D., 1998: Visual Intelligence. Norton, 294 pp. Essentially an update on Gregory (1970), with many more illustrations and some powerful theoretical insights into the way visual perception works.

Hunt, M., 1993: The Story of Psychology. Doubleday, Anchor Books, 763 pp. The remarks on the Three Mile Island control panels are on p. 606.

IPCC 2013: Full Report of Working Group 1. Chapter 5 of the full report summarizes the evidence on past sea levels, including those in the penultimate interglacial, misnamed `LIG' (Last InterGlacial).

Jaynes, E. T., 2003: Probability Theory: The Logic of Science. edited by G. Larry Bretthorst. Cambridge, University Press, 727 pp. This great posthumous work blows away the conceptual confusion surrounding probability theory and statistical inference, with a clear focus on the foundations of the subject established by the theorems of Richard Threlkeld Cox. The theory goes back three centuries to James Bernoulli and Pierre-Simon de Laplace, and it underpins today's state of the art in model-fitting and data compression (MacKay 2003). Much of the book digs deep into the technical detail, but there are instructive journeys into history as well, especially in chapter 16. There were many acrimonious disputes. They were uncannily similar to the disputes over biological evolution. Again and again, especially around the middle of the twentieth century, unconscious assumptions impeded progress. They involved dichotomization and what Jaynes calls the mind-projection fallacy, conflating outside-world reality with our conscious and unconscious internal models thereof. There's more about this in my chapter 5 on music, mathematics, and the Platonic.

Kahneman, D., 2011: Thinking, Fast and Slow. London, Penguin, 499 pp. Together with the book by Ramachandran and Blakeslee (1998, q.v.), Kahneman's book provides deep insight into the nature of human perception and cognition and the brain's unconscious internal models that mediate them, especially through experimental demonstrations of how flexible -- how strongly context-dependent -- the prior probabilities can be, as exhibited for instance by the phenomena called `anchoring', `priming', and `framing'.

Kendon, E.J., Roberts, N.M., Fowler, H.J., Roberts, M.J., Chan, S.C., and Senior, C.A., 2014: Heavier summer downpours with climate change revealed by weather forecast resolution model. Nature Climate Change, doi:10.1038/nclimate2258 (advance online publication).

Kegl, J., Senghas, A., Coppola, M., 1999: Creation through contact: sign language emergence and sign language change in Nicaragua. In: Language Creation and Language Change: Creolization, Diachrony, and Development, 179-237, ed. Michel DeGraff. Cambridge, Massachusetts, MIT Press, 573 pp. Included are detailed studies of the children's sign-language constructions, used in describing videos they watched. Also, there are careful and extensive discussions of the controversies amongst linguists.

King, D., Schrag, D., Zhou, D., Qi, Y., Ghosh, A., and co-authors, 2015: Climate Change: A Risk Assessment. Cambridge Centre for Science and Policy, 154 pp. In case of accidents, I have mirrored a copy here under the appropriate Creative Commons licence. Included in this very careful and sober discussion of the risks confronting us is the possibility that methane clathrates, also called methane hydrates or `shale gas in ice', will be added to the fossil-fuel industry's extraction plans (§7, p. 42). The implied carbon-dioxide injection -- corresponding to the `Business As Usual II' scenario of Valero et al. (2011) -- would be very far indeed above IPCC's highest emissions scenario.

Klein, Naomi, 2014: This Changes Everything: Capitalism vs the Climate. Simon & Schuster, Allen Lane, Penguin. Chapter 4 describes the push to burn all fossil-fuel reserves, the old business plan of the fossil-fuel industry. Even though it's a plan for climate disaster and social disaster, the disinformation campaign supporting it remains powerful, at least in the English-speaking world, as seen in the US Congress and in recent UK Government policy reversals. For instance, only months before the Paris agreement the UK Government suddenly withdrew support for solar and onshore wind renewables -- sabotaging long-term investments and business models at a stroke, without warning, and destroying many thousands of renewables jobs while increasing fossil-fuel subsidies. Equally suddenly, it shut down support for the first full-scale UK effort in CCS, carbon capture and storage. It seems, however, that the politicians responsible will fail to stop the development of newer and smarter forms of CCS (Oxburgh 2016). Nor, it seems, will they stop the scaling-up of smart energy storage, smart grids, distributed energy generation and the increasingly competitive -- and terrorist-resistant -- carbon-neutral renewable energy sources. All this is now gaining momentum in the business world, and even in the business plans of some fossil-fuel companies, whose engineering knowhow will be needed as soon as CCS is taken seriously. A useful summary of the current upheavals in those business plans is published in a Special Report on Oil in the 26 November 2016 issue of the UK Economist magazine, an issue headlined `The Burning Question: Climate Change in the Trump Era'.

Lakoff, G., 2014: Don't Think of an Elephant: Know Your Values and Frame the Debate. Vermont, Chelsea Green Publishing, www.chelseagreen.com. Following the classic work of Kahneman and Tversky, Lakoff shows in detail how those who exploit free-market fundamentalism -- including its quasi-Christian version advocated by, for instance, the writer James Dobson -- combine their mastery of lucidity principles with the technique Lakoff calls `framing', in order to perpetuate the unconscious assumptions that underpin their political power. Kahneman (2011) provides a more general, in-depth discussion of framing, and of related concepts such as anchoring and priming. All these concepts are needed in order to understand many cognitive-perceptual phenomena.

Laland, K., Odling-Smee, J., and Myles, S., 2010: How culture shaped the human genome: bringing genetics and the human sciences together. Nature Reviews: Genetics 11, 137-148. This review notes the likely importance, in genome-culture coevolution, of more than one timescale. It draws on several lines of evidence. The evidence includes data on genomic sequences, showing the range of gene variants (alleles) in different sub-populations. As the authors put it, in the standard mathematical-modelling terminology, `... cultural selection pressures may frequently arise and cease to exist faster than the time required for the fixation of the associated beneficial allele(s). In this case, culture may drive alleles only to intermediate frequency, generating an abundance of partial selective sweeps... adaptations over the past 70,000 years may be primarily the result of partial selective sweeps at many loci' -- that is, locations within the genome. `Partial selective sweeps' are patterns of genomic change responding to selective pressures yet retaining some genetic diversity, hence potential for future adaptability and versatility. The authors confine attention to very recent coevolution, for which the direct lines of evidence are now strong in some cases -- leaving aside the earlier coevolution of, for instance, proto-language. There, we can expect multi-timescale coupled dynamics over a far greater range of timescales, for which direct evidence is much harder to obtain, as discussed also in Richerson et al. (2010).
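To illustrate, in the most stripped-down way, what a partial selective sweep looks like -- a toy sketch of my own, not taken from the paper -- the following lines simulate a haploid Wright-Fisher population in which a selective advantage s operates for a while and then ceases, typically leaving the favoured allele at an intermediate frequency instead of carrying it to fixation:

  # A minimal haploid Wright-Fisher sketch (illustrative only): selection with
  # advantage s acts for gens_selected generations, then ceases, so the favoured
  # allele is usually left at an intermediate frequency -- a "partial sweep".
  import random

  def partial_sweep(n=1000, p0=0.05, s=0.05, gens_selected=50, gens_neutral=150, seed=1):
      random.seed(seed)
      p = p0
      for g in range(gens_selected + gens_neutral):
          w = 1.0 + (s if g < gens_selected else 0.0)       # selection later switched off
          p_expected = w * p / (w * p + (1.0 - p))          # deterministic selection step
          p = sum(random.random() < p_expected for _ in range(n)) / n   # binomial drift
          if p in (0.0, 1.0):                               # allele lost or fixed
              break
      return p

  print(partial_sweep())   # typically an intermediate frequency, well short of fixation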

Laland, K., Sterelny, K., Odling-Smee, J., Hoppitt, W., and Uller, T., 2011: Cause and effect in biology revisited: is Mayr's proximate-ultimate dichotomy still useful? Science 334, 1512-1516. The dichotomy, between `proximate causation' around individual organisms and `ultimate causation' on evolutionary timescales, entails a belief that the fast and slow mechanisms are dynamically independent. This review argues that they are not, even though the dichotomy is still taken by many biologists to be unassailable. The review also emphasizes that the interactions between the fast and slow mechanisms are often two-way interactions, or feedbacks, labelling them as `reciprocal causation' and citing many lines of supporting evidence. This recognition of feedbacks is part of what's now called the `extended evolutionary synthesis'. See also my notes on Danchin and Pocheville (2014) and, for instance, Thierry (2005).

Lüthi, D., et al., 2008: High-resolution carbon dioxide concentration record 650,000-800,000 years before present. Nature 453, 379-382. Further detail on the deuterium isotope method is given in the supporting online material for a preceding paper on the temperature record.

Lynch, M., 2007: The frailty of adaptive hypotheses for the origins of organismal complexity. Proc. Nat. Acad. Sci., 104, 8597-8604. A lucid and penetrating overview of what was known in 2007 about non-human evolution mechanisms, as seen by experts in population genetics and in molecular and cell biology. It brings out the important role of neutral, as well as adaptive, genomic changes -- a role now independently confirmed in Wagner (2014).

MacKay, D.J.C., 2003: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 628 pp. On-screen viewing permitted at http://www.inference.phy.cam.ac.uk/mackay/itila/. This book by the late David MacKay is a brilliant, lucid and authoritative analysis of the topics with which it deals, at the most fundamental level. It builds on the foundation provided by Cox's theorems (Jaynes 2003) to clarify (a) the implications for optimizing model-fitting to noisy data, usually discussed under the heading `Bayesian inference', and (b) the implications for optimal data compression. And from the resulting advances and clarifications we can now say that `data compression and data modelling are one and the same' (p. 31).
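To make the compression-equals-modelling point concrete, here is a toy numerical illustration of my own, not an example from the book: Shannon's ideal code length for a dataset is minus the base-2 logarithm of the probability that the model assigns to it, so a better-fitting model compresses the same data into fewer bits.

  # Toy illustration: ideal code length = -log2(probability assigned by the model),
  # so better-fitting models give shorter codes.
  import math

  data = [1]*70 + [0]*30                      # a biased "coin": 70 ones in 100 tosses

  def code_length_bits(seq, p_one):
      # Ideal code length, in bits, under an i.i.d. Bernoulli(p_one) model.
      return -sum(math.log2(p_one if x == 1 else 1.0 - p_one) for x in seq)

  print(code_length_bits(data, 0.5))                  # fair-coin model: 100.0 bits
  print(code_length_bits(data, sum(data)/len(data)))  # fitted model: about 88.1 bits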

Marchitto, T.M., Lynch-Stieglitz, J., and Hemming, S.R., 2006: Deep Pacific CaCO3 compensation and glacial-interglacial atmospheric CO2. Earth and Planetary Science Letters 231, 317-336. This technical paper contains an unusually clear explanation of the probable role of limestone sludge (CaCO3) and seawater chemistry in the way carbon dioxide (CO2) was stored in the oceans during the recent ice ages. The paper gives a useful impression of our current understanding, and of the observational evidence that supports it. The evidence comes from meticulous and laborious measurements of tiny variations in trace chemicals that are important in the oceans' food chains, and in isotope ratios of various elements including oxygen and carbon, laid down in layer after layer of ocean sediments over very many tens of millennia. Another reason for citing the paper, which requires the reader to have some specialist knowledge, is to highlight just how formidable are the obstacles to building accurate models of the carbon sub-system, including the sinking phytoplankton. Such models try to represent oceanic carbon-dioxide storage along with observable carbon isotope ratios, which are affected by the way in which carbon isotopes are taken up by living organisms via processes of great complexity and variability. Not only are we far from modelling oceanic fluid-dynamical transport processes with sufficient accuracy, including turbulent eddies over a vast range of spatial scales, but we are even further from accurately modelling the vast array of biogeochemical processes involved throughout the oceanic and terrestrial biosphere -- including for instance the biological adaptation and evolution of entire ecosystems and the rates at which the oceans receive trace chemicals from rivers and airborne dust. The oceanic upper layers where plankton live have yet to be modelled in fine enough detail to represent the recycling of nutrient chemicals simultaneously with the gas exchange rates governing leakage. It's fortunate indeed that we have the hard evidence, from ice cores, for the atmospheric carbon dioxide concentrations that actually resulted from all this complexity.

McGilchrist, I., 2009: The Master and his Emissary: the Divided Brain and the Making of the Western World. Yale University Press, 597 pp. In this wide-ranging and densely argued book, `world' often means the perceived world consisting of the brain's unconscious internal models. The Master is the right hemisphere with its holistic `world', while the Emissary is the left hemisphere with its analysed, dissected and fragmented `world' and its ambassadorial communication skills.

Monod, J., 1970: Chance and Necessity. Glasgow, William Collins, 187 pp., beautifully translated from the French by Austryn Wainhouse. This classic by the great molecular biologist Jacques Monod -- one of the sharpest and clearest thinkers that science has ever seen -- highlights the key roles of genome-culture coevolution and multi-level selection in the genesis of the human species. See the last chapter, chapter 9, The Kingdom and the Darkness. Monod's kingdom is the `transcendent kingdom of ideas, of knowledge, and of creation'.  He ends with a challenge to mankind. `The kingdom above or the darkness below: it is for him to choose.'

NAS-RS, 2014 (US National Academy of Sciences and UK Royal Society): Climate Change: Evidence & Causes. A brief, readable, and very careful summary from a high-powered team of climate scientists, supplementing the vast IPCC reports and emphasizing the many cross-checks that have been done.

Noble, D., 2006: The Music of Life: Biology Beyond Genes. Oxford University Press. This short and lucid book by a respected biologist clearly brings out the complexity, versatility, and multi-level aspects of biological systems, and the need to avoid extreme reductionism and single-viewpoint thinking, such as saying that the genome `causes' everything. A helpful first metaphor for the genome, it's argued, is a digital music recording. Yes, reading the digital data in one sense `causes' a musical and possibly emotional experience but, if that's all you say, you miss the countless other things on which the experience depends, not least the brain's unconscious model-fitting processes and its web of associations, all strongly influenced by past experience and present circumstance as well as by the digital data. Reading the data into a playback device mediates or enables the listener's experience, rather than solely causing it. Other metaphors loosen our thinking still further, penetrating across the different levels. A wonderful example is the metaphor of the Chinese (kangxi or kanji) characters of which so many thousands are used in the written languages of east Asia, and whose complexities are so daunting to Western eyes -- rather like the complexities of the genome. However, their modular structure uses only a few hundred sub-characters, many of them over and over again in different permutations for different purposes -- just as in the genome, in genomic exons, and in other components of biological systems.

Oreskes, N. and Conway, E.M., 2010: Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury.

Oxburgh, R., 2016: Lowest Cost Decarbonisation for the UK: The Critical Role of CCS. Report to the Secretary of State for Business, Energy and Industrial Strategy from the Parliamentary Advisory Group on Carbon Capture and Storage, September 2016. Available from http://www.ccsassociation.org/news-and-events/reports-and-publications/parliamentary-advisory-group-on-ccs-report/

Pagel, M., 2012: Wired for Culture. London and New York, Allen Lane, Penguin, Norton, 416 pp. The author describes an impressive variety of observations on human culture and human behaviour, emphasizing the important role of language barriers as the outer skins or containers of the `survival vehicles' whereby our ancestors were bound into strongly segregated, inter-competing groups of cooperating individuals. `Vehicle' has its usual meaning in evolutionary theory as a carrier of replicators into future generations. In the book, the replicators carried by our ancestors' survival vehicles are taken to be cultural only, including particular languages and customs. Cultural evolution, with its Lamarckian aspect and timescales far shorter than genomic timescales, gave our ancestors a prodigious versatility and adaptability that continues today. However, it seems obvious that the same survival vehicles must have carried segregated genomic information as well. Such segregation, or channelling, would have intensified the multi-timescale coevolution of genomes and cultures. The tightening of vehicle containment by language barriers is an efficient way of strengthening population heterogeneity, hence group-level selective pressures, on genomes as well as on cultures. In this respect there's a peculiar inconsistency in the book, namely that the discussion is confined within the framework of selfish-gene theory and assumes that language and language barriers were very late developments, starting with a single `human mother tongue' (p. 299) that arose suddenly and then evolved purely culturally. While recognizing that groups of our ancestors must have competed with one another, the author repudiates group-level genomic selection, saying that it is `weak' and by implication unimportant (p. 198). This view comes from the oldest population-genetics models. Those were the mathematical models that led to selfish-gene theory in the 1960s and 1970s along with its game-theoretic spinoffs, such as reciprocal-altruism theory. The crucial effects of language barriers, population heterogeneity, multi-level selection, and multi-timescale processes are all excluded from those models by prior assumption. Another weakness, in an otherwise impressive body of arguments, is too ready an acceptance of the `archaeological fallacy' that symbolic representation came into being only recently, at the start of the Upper Palaeolithic with all its archaeological durables including cave paintings. The fallacy seems to stem from ignoring the unconscious symbolic representations, the brain's internal models, that mediate perception -- as well as ignoring the more conscious cultural modalities that depend solely on sound waves and light waves, modalities that leave no archaeological trace. Likely examples would include gestures, vocalizations, and dance routines that deliberately mimic different animals. Already, that's not just symbolic play but conscious symbolic play. Cave paintings aren't needed!

Pierrehumbert, R.T., 2010: Principles of Planetary Climate. Cambridge University Press, 652 pp.

Pinker, S., 1994: The Language Instinct. London, Allen Lane, 494 pp. The Nicaraguan case is briefly described in chapter 2, as far as it had progressed by the early 1990s.

Pinker, S., 1997: How the Mind Works. London, Allen Lane, 660 pp. Regarding mathematical models of natural selection, notice the telltale phrase `a strategy that works on average' (my italics) near the end of the section `I and Thou' in chapter 6, page 398 in my copy. The phrase `on average' seems to be thrown in almost as an afterthought. To restrict attention to what works `on average' is to restrict attention to the oldest mathematical models of population genetics in which all environmental and population heterogeneities, hence all higher-level selection mechanisms, have been obliterated by averaging over an entire population. Such models are also mentioned about nine pages into the section `Life's Designer' in chapter 3 -- page 163 in my copy -- in the phrase `mathematical proofs from population genetics' (my italics). Not even the Price equation, perhaps the first attempt to allow for heterogeneity, in the mid-1970s, is mentioned. A recent debate on these issues is available online here. In that debate it's noticeable how the dichotomization instinct kicks in again and again -- the unconscious assumption that one viewpoint excludes another -- despite sterling efforts to counter it in, for instance, a thoughtful contribution from David C. Queller. Earlier such debates, and disputes, over several decades, are thoroughly documented in the book by Segerstråle (2000) and further discussed in Wills (1994) and in Rose and Rose (2000). Dichotomization is conspicuous throughout.
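For readers unfamiliar with it, the Price equation can be written in its standard form (my gloss, not Pinker's) as

  \[ \bar{w}\,\Delta\bar{z} = \mathrm{Cov}(w_i, z_i) + \mathrm{E}(w_i\,\Delta z_i), \]

where z_i is the trait value and w_i the fitness of the i-th unit. When the units are groups within a heterogeneous population, the covariance term expresses between-group selection while the expectation term carries the within-group changes -- exactly the structure that is obliterated by averaging over the population as a whole.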

Pomerantsev, P., 2015: Nothing is True And Everything is Possible -- Adventures in Modern Russia. Faber & Faber. Peter Pomerantsev is a television producer who worked for nearly a decade in Moscow with Russian programme-makers. He discusses the remarkable cleverness of the programme-makers and their government supervisors, in exploiting postmodernist thinking and other cultural undercurrents to create an impression of democratic pluralism in Russia today -- part of the confused `virtual reality' described also by Arkady Ostrovsky in his 2015 book The Invention of Russia and created using techniques that include `weaponized relativism'.

Ramachandran, V.S. and Blakeslee, S., 1998: Phantoms in the Brain. London, Fourth Estate. The phantoms are the brain's unconscious internal models that mediate perception and cognition. This book and Kahneman's are the most detailed and penetrating discussions I've seen of the nature and workings of those models. Many astonishing experiments are described, showing how flexible -- how strongly context-dependent -- the prior probabilities can be. The two books powerfully complement each other as well as complementing those of Gregory, Hoffman, McGilchrist, and Sacks. The experiments include some from neurological research and clinical neurology, and some that can be repeated by anyone, with no special equipment. There are many examples of multi-modal perception including Ramachandran's famous phantom limb experiments, and the Ramachandran-Hirstein `phantom nose illusion' described on page 59. Chapter 7 on anosognosia includes the brain-scan experiments of Ray Dolan and Chris Frith, revealing the location of the right hemisphere's discrepancy detector.

Rees, M., 2014: Can we prevent the end of the world? This seven-minute TED talk, by Astronomer Royal Martin Rees, makes the key points very succinctly. The talk is available here, along with a transcript. Two recently-established focal points for exploring future risk are the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute.

Richerson, P.J., Boyd, R., and Henrich, J., 2010: Gene-culture coevolution in the age of genomics. Proc. Nat. Acad. Sci. 107, 8985-8992. This review takes up the scientific story as it has developed after Wills (1994), usefully complementing the review by Laland et al. (2010). The discussion comes close to recognizing two-way, multi-timescale dynamical coupling but doesn't quite break free of asking whether culture is `the leading rather than the lagging variable in the coevolutionary system' (my italics, to emphasize the false dichotomy).

Rose, H., and Rose, S. (eds), 2000: Alas, Poor Darwin: Arguments against Evolutionary Psychology. London, Jonathan Cape, 292 pp. This compendium offers a variety of perspectives on the oversimplified genetic determinism or `Darwinian fundamentalism' of recent decades, as distinct from Charles Darwin's own more pluralistic view recognizing that natural selection -- centrally important though it is, among other mechanisms, to biological evolution -- cannot be the Answer to Everything in a scientific problem of such massive complexity, let alone in human and social problems. See especially chapters 4-6 and 9-12 (and more recently Danchin and Pocheville 2014) for examples of developmental plasticity, or epigenetic flexibility. Chapters 10 and 11 give instructive examples of observed animal behaviour. One is the promiscuous bisexuality of female bonobo chimpanzees and its role in their naturally-occurring societies, demolishing the fundamentalist tenet that `natural' sex is for procreation only. Chapter 12, a deeply thoughtful commentary, addresses some of the human social problems compounded by the shifting conceptual minefield we call human language, touching on many salient issues including the postmodernist backlash against scientific fundamentalism.

Rossano, M.J., 2009: The African Interregnum: the "where," "when," and "why" of the evolution of religion. In: Voland, E., Schiefenhövel, W. (eds), The Biological Evolution of Religious Mind and Behaviour, pp. 127-141. Heidelberg, Springer-Verlag, The Frontiers Collection, doi:10.1007/978-3-642-00128-4_9, ISBN 978-3-642-00127-7.   The `African Interregnum' refers to the time between the failure of our ancestors' first migration out of Africa, something like 80-90 millennia ago, and the second such migration around 60 millennia ago. Rossano's brief but penetrating survey argues that the emergence of belief systems having a `supernatural layer' boosted the size, sophistication, adaptability, and hence competitiveness of human groups. As regards the Toba eruption around 70 millennia ago, the extent to which it caused a human genetic bottleneck is controversial but not the severity of the disturbance to the climate system, like a multi-year nuclear winter. The resulting resource depletion must have severely stress-tested our ancestors' adaptability -- giving large, tightly-knit and socially sophisticated groups an important advantage. In Rossano's words, they were `collectively more fit and this made all the difference.'

Sacks, O., 1995: An Anthropologist on Mars. New York, Alfred Knopf, 340 pp., Chapter 4, To See and Not See. The two most thoroughly studied subjects -- `Virgil', by Sacks, and `S.B.', by Richard Gregory and Jean Wallace -- were both 50 years old when the opaque elements were surgically removed to allow light into their eyes. The vision they achieved was very far from normal. An important update is in the 2016 book In the Bonesetter's Waiting Room by Aarathi Prasad, in which chapter 7 mentions recent evidence from Project Prakash, led by Pawan Sinha, providing case studies of much younger individuals blind from birth. There is much variation from individual to individual but it seems that teenagers, for instance, can often learn to see better after surgery, or adjust better to whatever visual functionality they achieve, than did the two 50-year-olds.

Schonmann, R. H., Vicente, R., and Caticha, N., 2013: Altruism can proliferate through population viscosity despite high random gene flow. Public Library of Science, PLoS One, 8, e72043, doi:10.1371/journal.pone.0072043. Improvements in model sophistication, together with a willingness to view a problem from more than one angle, show that group-selective pressures can be far more effective than the older population-genetics models suggest.

Segerstråle, U., 2000: Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond. Oxford University Press, 493 pp. This important book gives insight into the disputes about natural selection over past decades. It's striking how dichotomization, and the answer-to-everything mindset, kept muddying those disputes even amongst serious and respected scientists -- often under misplaced pressure for `parsimony of explanation', forgetting Einstein's famous warning not to push Occam's Razor too far. Again and again the disputants seemed to be saying that `we are right and they are wrong' and that there is one and only one `truth', to be viewed in one and only one way. Again and again, understanding was impeded by a failure to recognize complexity, multidirectional causality, different levels of description, and multi-timescale dynamics. The confusion was often made worse by failures to disentangle science from politics.

Senghas, A., 2010: The Emergence of Two Functions for Spatial Devices in Nicaraguan Sign Language. Human Development (Karger) 53, 287-302. This later study uses video techniques as in Kegl et al. (1999) to trace the development, by successive generations of young children, of syntactic devices in signing space.

Shakhova, N., Semiletov, I., Leifer, I., Sergienko, V., Salyuk, A., Kosmach, D., Chernykh, D., Stubbs, C., Nicolsky, D., Tumskoy, V., and Gustafsson, O., 2014: Ebullition and storm-induced methane release from the East Siberian Arctic Shelf. Nature Geosci., 7, 64-70, doi:10.1038/ngeo2007. This is hard observational evidence.

Skinner, L.C., Waelbroeck, C., Scrivner, A.C., and Fallon, S.J., 2014: Radiocarbon evidence for alternating northern and southern sources of ventilation of the deep Atlantic carbon pool during the last deglaciation. Proc. Nat. Acad. Sci. Early Edition (online), www.pnas.org/cgi/doi/10.1073/pnas.1400668111

Shackleton, N.J., 2000: The 100,000-year ice-age cycle identified and found to lag temperature, carbon dioxide, and orbital eccentricity. Science 289, 1897-1902.

Shakun, J.D., Clark, P.U., He, F., Marcott, S.A., Mix, A.C., Liu, Z., Otto-Bliesner, B., Schmittner, A., and Bard, E., 2012: Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation. Nature 484, 49-55.

Skippington, E., and Ragan, M.A., 2011: Lateral genetic transfer and the construction of genetic exchange communities. FEMS Microbiol Rev. 35, 707-735. This review article shows among other things how `antibiotic resistance and other adaptive traits can spread rapidly, particularly by conjugative plasmids'. `Conjugative' means that a plasmid is passed directly from one bacterium to another via a tiny tube called a pilus. The two bacteria can belong to different species. The introduction opens with the sentence `It has long been known that phenotypic features can be transmitted between unrelated strains of bacteria.'

Smythies, J. 2009: Philosophy, perception, and neuroscience. Perception 38, 638-651. On neuronal detail this discussion should be compared with that in Gilbert and Li (2013). For present purposes the discussion is of interest in two respects, the first being that it documents parts of what I called the `quagmire of philosophical confusion', about the way perception works and about conflating different levels of description. The discussion begins by noting, among other things, the persistence of the fallacy that perception is what it seems to be subjectively, namely veridical in the sense of being `direct', i.e., independent of any model-fitting process, a simple mapping between appearance and reality. This is still taken as self-evident, it seems, by some professional philosophers despite the evidence from experimental psychology, as summarized for instance in Gregory (1970), in Hoffman (1998), and in Ramachandran and Blakeslee (1998). Then a peculiar compromise is advocated, in which perception is partly direct, and partly works by model-fitting, so that `what we actually see is always a mixture of reality and virtual reality' [sic; p. 641]. (Such a mixture is claimed also to characterize some of the early video-compression technologies used in television engineering -- as distinct from the most advanced such technologies, which work entirely by model-fitting, e.g. MacKay 2003.) The second respect, perhaps of greater interest here, lies in a summary of some old clinical evidence, from the 1930s, that gave early insights into the brain's different model components. Patients described their experiences of vision returning after brain injury, implying that different model components recovered at different rates and were detached from one another at first. On pp. 641-642 we read about recovery from a particular injury to the occipital lobe: `The first thing to return is the perception of movement. On looking at a scene the patient sees no objects, but only pure movement... Then luminance is experienced but... formless... a uniform white... Later... colors appear that float about unattached to objects (which are not yet visible as such). Then parts of objects appear -- such as the handle of a teacup -- that gradually coalesce to form fully constituted... objects, into which the... colors then enter.'

Solanki, S.K., Krivova, N.A., and Haigh, J.D., 2013: Solar Irradiance Variability and Climate. Annual Review of Astronomy and Astrophysics 51, 311-351. This review summarizes and clearly explains the recent major advances in our understanding of radiation from the Sun's surface, showing in particular that its magnetically-induced variation cannot compete with the carbon-dioxide injections I'm talking about. To be sure, that conclusion depends on the long-term persistence of the Sun's magnetic activity cycle, whose detailed dynamics is not well understood. (A complete shutdown of the magnetic activity would make the Sun significantly dimmer during the shutdown, out to times of the order of a hundred millennia.) However, the evidence for persistence of the magnetic activity cycle is now extremely strong (see the review's Figure 9). It comes from a long line of research on cosmogenic isotope deposits showing a clear footprint of persistent solar magnetic activity throughout the past 10 millennia or so, waxing and waning over a range of timescales out to millennial. The timing of these changes, coming from the Sun's internal dynamics, can have no connection with the timing of the Earth's orbital changes that trigger terrestrial deglaciations.

Stern, N., 2009: A Blueprint for a Safer Planet: How to Manage Climate Change and Create a New Era of Progress and Prosperity, London, Bodley Head, 246 pp. See also Oxburgh (2016).

Strunk, W., and White, E.B., 1979: The Elements of Style, 3rd edn. New York, Macmillan, 92 pp.

Sulston, J.E., and Ferry, G., 2003: The Common Thread: Science, Politics, Ethics and the Human Genome, Corgi edn. London, Random House (Transworld Publishers), 348 pp, also Washington DC, Joseph Henry Press. See also People patenting. This important book records how the scientific ideal and ethic prevailed against corporate might -- powerful business interests aiming to use the genomic data for short-term profit.

Tobias, P.V., 1971: The Brain in Hominid Evolution. New York, Columbia University Press, 170 pp. See also Monod (1970), chapter 9.

Thierry, B., 2005: Integrating proximate and ultimate causation: just one more go! Current Science 89, 1180-1183. A thoughtful commentary on the history of biological thinking, in particular tracing the tendency to neglect multi-timescale processes, with fast and slow mechanisms referred to as `proximate causes' and `ultimate causes', assumed independent solely because `they belong to different time scales' (p. 1182a) -- respectively individual-organism and genomic timescales. See also Laland et al. (2011) and Danchin and Pocheville (2014).

Trask, L., Tobias, P.V., Wynn, T., Davidson, I., Noble, W., and Mellars, P., 1998: The origins of speech. Cambridge Archaeological J., 8, 69-94. A short compendium of discussions by linguists, palaeoanthropologists, archaeologists, and others. It usefully exposes the levels of argument within controversies over the origins of language. See also Dunbar (2003).

Tribe, K., 2008: `Das Adam Smith Problem' and the origins of modern Smith scholarship. History of European Ideas 34, 514-525, doi:10.1016/j.histeuroideas.2008.02.001. This paper provides a forensic overview of Adam Smith's writings and of the many subsequent misunderstandings of them that accumulated in the German, French, and English academic literature of the following centuries -- albeit clarified as improved editions, translations, and commentaries became available. Smith dared to view the problems of ethics, economics, politics and human nature from more than one angle, and saw his two great works The Theory of Moral Sentiments (1759) and An Inquiry into the Nature and Causes of the Wealth of Nations (1776) as complementing each other.

Unger, R. M., and Smolin, L., 2015: The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy. Cambridge, University Press, 543 pp. A profound and wide-ranging discussion of how progress might be made in fundamental physics and cosmology. The authors -- two highly respected thinkers in their fields, philosophy and physics -- make a strong case that the current logjam has to do with our tendency to conflate the outside world with our mathematical models thereof, what Jaynes (2003) calls the `mind-projection fallacy'. Unger and Smolin point out that `part of the task is to distinguish what science has actually found out about the world from the metaphysical commitments for which the findings of science are often mistaken.'

Valero, A., Agudelo, A., and Valero, A., 2011: The crepuscular planet. A model for the exhausted atmosphere and hydrosphere. Energy, 36, 3745-3753. A careful discussion including up-to-date estimates of proven and estimated fossil-fuel reserves.

Valloppillil, V., and co-authors, 1998: The Halloween Documents: Halloween I, with commentary by Eric S. Raymond. On the Internet and mirrored here. This leaked document from the Microsoft Corporation recorded Microsoft's secret recognition that software far more reliable than its own was being produced by the open-source community, a major example being Linux.  Halloween I states, for instance, that the open-source community's ability `to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing.'  Linux, it goes on to say, is an operating system in which `robustness is present at every level' making it `great, long term, for overall stability'. I well remember the non-robustness and instability, and user-unfriendliness, of Microsoft's own secret-source software during its near-monopoly in the 1990s. Recent improvements may well owe something to the competition from the open-source community.

van der Post, L., 1972: A Story Like the Wind. London, Penguin. Laurens van der Post celebrates the influence he felt from his childhood contact with some of Africa's `immense wealth of unwritten literature', including the magical stories of the San or Kalahari-Desert Bushmen, stories that come `like the wind... from a far-off place.' See also 1961, The Heart of the Hunter (Penguin), page 28, on how a Bushman told what had happened to his small group: "They came from a plain... as they put it in their tongue, `far, far, far away'... It was lovely how the `far' came out of their mouths. At each `far' a musician's instinct made the voices themselves more elongated with distance, the pitch higher with remoteness, until the last `far' of the series vanished on a needle-point of sound into the silence beyond the reach of the human scale. They left... because the rains just would not come..."

Vaughan, Mark (ed.), 2006: Summerhill and A. S. Neill, with contributions by Mark Vaughan, Tim Brighouse, A. S. Neill, Zoë Neill Readhead and Ian Stronach. Maidenhead, New York, Open University Press/McGraw-Hill, 166 pp.

Wagner, A., 2014: Arrival of the Fittest: Solving Evolution's Greatest Puzzle. London, Oneworld. There is a combinatorially large number of viable metabolisms, that is, possible sets of enzymes hence sets of chemical reactions, that can perform some biological function such as manufacturing cellular building blocks from a fuel like sunlight, or glucose, or hydrogen sulphide -- or, by a supreme irony, even from the antibiotics now serving as fuel for some bacteria. Andreas Wagner and co-workers have shown in recent years that within the unimaginably vast space of possible metabolisms, which has around 5000 dimensions, the viable metabolisms, astonishingly, form a joined-up `genotype network' of closely adjacent metabolisms. This adjacency means that single-gene, hence single-enzyme, additions or deletions can produce combinatorially large sets of new viable metabolisms, including metabolisms that are adaptively neutral or spandrel-like but advantageous in new environments, as seen in the classic experiments of C. H. Waddington on environmentally-stressed fruit flies (e.g. Wills 1994, p. 241). Such neutral changes can survive and spread within a population because, being harmless, they are not deleted by natural selection. Moreover, they promote massive functional duplication or redundancy within metabolisms, creating a tendency toward robustness, and graceful degradation, of functionality. And the same properties of adjacency, robustness, evolvability and adaptability are found within the similarly vast spaces of, for instance, possible protein molecules and possible DNA-RNA-protein circuits and other molecular-biological circuits. Such discoveries may help to resolve controversies about functionality within so-called junk DNA (e.g. Doolittle 2013). These breakthroughs, in what is now called `systems biology', add to insights like those reviewed in Lynch (2007) and may also lead to new ways of designing, or rather discovering, robust electronic circuits and computer codes. Further such insights come from recent studies of artificial self-assembling structures in, for instance, crowds of `swarm-bots'. For a general advocacy of systems-biological thinking as an antidote to extreme reductionism, see Noble (2006).
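The idea of a connected genotype network can be conveyed by a toy sketch of my own, vastly simpler than the roughly 5000-dimensional metabolic spaces studied by Wagner and co-workers: treat genotypes as bit strings, call a genotype viable if it contains a required functional motif, and take a random walk that accepts only single-bit mutations preserving viability. Such a walk typically drifts a long way, in Hamming distance, from its starting point, accumulating neutral change without ever losing function.

  # A toy "genotype network" sketch (illustrative only): viability means containing
  # a functional motif; the walk accepts only single-bit mutations that preserve it.
  import random

  L = 60                      # genotype length
  MOTIF = (1, 1, 0, 1)        # stand-in for a conserved functional core

  def viable(g):
      # True if the motif occurs somewhere in genotype g.
      return any(tuple(g[i:i+len(MOTIF)]) == MOTIF for i in range(L - len(MOTIF) + 1))

  def neutral_walk(steps=5000, seed=0):
      random.seed(seed)
      g = [random.randint(0, 1) for _ in range(L)]
      while not viable(g):                      # start from some viable genotype
          g = [random.randint(0, 1) for _ in range(L)]
      start = list(g)
      for _ in range(steps):
          i = random.randrange(L)
          g[i] ^= 1                             # single-bit mutation
          if not viable(g):
              g[i] ^= 1                         # reject mutations that destroy function
      return sum(a != b for a, b in zip(start, g))   # Hamming distance travelled

  print(neutral_walk())   # typically a substantial fraction of L, despite unbroken viability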

Watson, A.J., Vallis, G.K., and Nikurashin, M., 2015: Southern Ocean buoyancy forcing of ocean ventilation and glacial atmospheric CO2. Nature Geosci., 8, 861-864.

Werfel, J., Ingber, D.E., and Bar-Yam, Y., 2015: Programed death is favored by natural selection in spatial systems. Phys. Rev. Lett. 114, 238103. This detailed modelling study illustrates yet again how various `altruistic' traits are often selected for, in models that include population heterogeneity and group-level selection. The paper focuses on the ultimate unconscious altruism, mortality -- the finite lifespans of most organisms. Finite lifespan is robustly selected for, across a wide range of model assumptions, simply because excessive lifespan is a form of selfishness leading to local resource depletion. The tragedy of the commons, in other words, is as ancient as life itself. The authors leave unsaid the implications for our own species.

Wills, C., 1994: The Runaway Brain: The Evolution of Human Uniqueness. London, HarperCollins, 358 pp. This powerful synthesis builds on an intimate working knowledge of palaeoanthropology and population genetics. It offers many far-reaching insights, not only into the science itself but also into its history. The introduction and chapter 8 give interesting examples of how progress was blocked, or impeded, from time to time, by researchers becoming `prisoners of their mathematical models'. Chapter 8 concerns a classic dispute in which such mathematical imprisonment caught the protagonists in a false dichotomy, about adaptively neutral versus adaptively advantageous genomic changes, seen as mutually exclusive. The role of each kind of change -- both in fact being important for evolution -- has been illuminated by the breakthrough described in Wagner (2014). See also the recent review by Lynch (2007). In my notes on Pinker (1997), the allusion to mathematical `proof' from population genetics also points to a case of mathematical imprisonment. The word `proof' can be a danger signal especially when used by non-mathematicians. And another case, from Wills' introduction (pp. 10-12), concerns a famous but grossly-oversimplified view of genome-culture coevolution, published in 1981, which for one thing failed to recognize the timescale separation involved -- the ozone-hole-like interplay of fast and slow processes.

Wilson, D. S., 2015: Does Altruism Exist?: Culture, Genes, and the Welfare of Others. Yale University Press. See also Wilson's recent short article on multi-level selection and the scientific history of the idea. Of course `altruism' is itself a dangerously ambiguous word and source of confusion -- as Wilson points out -- deflecting attention from what matters most, which is actual behaviour. The explanatory power of models allowing multi-level selection in heterogeneous populations is further illustrated in, for instance, the recent work of Werfel et al. (2015).

Yunus, M., 1998: Banker to the Poor. London, Aurum Press, 313 pp. This is the story of the founding of the Grameen Bank of Bangladesh, which pioneered microlending and the emancipation of women against all expectation.




Copyright © Michael Edgeworth McIntyre 2013. Last updated 22 August 2017 and (from 23 June 2014 onward) incorporating a sharper understanding of the last deglaciation and of the abrupt `Dansgaard-Oeschger warmings', thanks to generous advice from several colleagues including Dr Luke Skinner.