Meta is Murder

Free Will & the Fallibility of Science

One of the most significant intellectual errors educated persons make is in underestimating the fallibility of science. The very best scientific theories containing our soundest, most reliable knowledge are certain to be superseded, recategorized from “right” to “wrong”; they are, as physicist David Deutsch says, misconceptions:

I have often thought that the nature of science would be better understood if we called theories “misconceptions” from the outset, instead of only after we have discovered their successors. Thus we could say that Einstein’s Misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of Evolution is an improvement on Darwin’s Misconception, and his on Lamarck’s… Science claims neither infallibility nor finality.

This fact comes as a surprise to many; we tend to think of science —at the point of conclusion, when it becomes knowledge— as being more or less infallible and certainly final. Science, indeed, is the sole area of human investigation whose reports we take seriously to the point of crypto-objectivism. Even people who very much deny the possibility of objective knowledge step onto airplanes and ingest medicines. And most importantly: where science contradicts what we believe or know through cultural or even personal means, we accept science and discard those truths, often wisely.

An obvious example: the philosophical problem of free will. When Newton’s misconceptions were still considered the exemplar of truth par excellence, the very model of knowledge, many philosophers felt obliged to accept a kind of determinism with radical implications. Given the initial state of the universe, it appeared, we should be able to follow all particle trajectories through to the present and account for all phenomena by purely physical means. In other words: the chain of causation from the Big Bang on left no room for your volition:

Determinism in the West is often associated with Newtonian physics, which depicts the physical matter of the universe as operating according to a set of fixed, knowable laws. The “billiard ball” hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace’s demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.

Thus: the movement of the atoms of your body, and the emergent phenomena that such movement entails, can all be physically accounted for as part of a chain of merely physical, causal steps. You do not “decide” things; your “feelings” aren’t governing anything; there is no meaning to your sense of agency or rationality. From this essentially unavoidable philosophical position, we are logically compelled to derive many political, moral, and cultural conclusions. For example: if free will is a phenomenological illusion, we must deprecate phenomenology in our philosophies; it is the closely-clutched delusion of a faulty animal; people, as predictable and materially reducible as commodities, can be reckoned by governments and institutions as though they are numbers. Freedom is a myth; you are the result of a process you didn’t control, and your choices aren’t choices at all but the results of laws we can discover, understand, and base our morality upon.
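The “billiard ball” picture above can be sketched in a few lines of code. This is a toy illustration of my own, not anything from the text: a one-dimensional world of particles moving at constant velocity, where every future state follows from the initial conditions by computation alone.

```python
# Toy illustration of the "billiard ball" hypothesis: with exact initial
# conditions and fixed, known laws, every future state is computable.
# The 1-D constant-velocity model here is an invented sketch.

def step(state, dt=0.01):
    """Advance each (position, velocity) pair one time step under a fixed law."""
    return [(x + v * dt, v) for x, v in state]

def predict(initial_state, steps, dt=0.01):
    """Laplace's demon in miniature: iterate the fixed law forward in time."""
    state = initial_state
    for _ in range(steps):
        state = step(state, dt)
    return state

# Two particles drifting at constant velocity; their state at t = 1.0
# is fully determined by the initial conditions alone.
final = predict([(0.0, 1.0), (5.0, -1.0)], steps=100)
```

Nothing in such a world “decides” anything: given the starting state, the demon need only iterate. The determinist’s claim is that our universe, vastly more complicated, is in principle no different.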

I should note now that (1) many people, even people far from epistemology, accept this idea, conveyed via the diffusion of science and philosophy through politics, art, and culture, that most of who you are is determined apart from your will; and (2) the development of quantum physics has not in itself upended the theory that free will is an illusion, as the sort of indeterminacy we see among particles does not provide sufficient room, as it were, for free will.

Of course, few of us can behave for even a moment as though free will is a myth. If it were, there would be no reason for personal engagement with ourselves, no justification for “trying” or “striving”; one would be, at best, a robot-like automaton incapable of self-control but capable of self-observation. One would account for one’s behaviors not with reasons but with causes; one would be profoundly divested from outcomes which one cannot affect anyway. And one would come to hold that, in its basic conception of time and will, the human consciousness was totally deluded.

As it happens, determinism is a false conception of reality. Physicists like David Deutsch and Ilya Prigogine have, in my opinion, defended free will amply on scientific grounds; and the philosopher Karl Popper described how free will is compatible in principle with a physicalist conception of the universe; he is quoted by both scientists, and Prigogine begins his book The End of Certainty, which proposes that determinism is no longer compatible with science, by alluding to Popper:

Earlier this century in The Open Universe: An Argument for Indeterminism, Karl Popper wrote, “Common sense inclines, on the one hand, to assert that every event is caused by some preceding events, so that every event can be explained or predicted… On the other hand, … common sense attributes to mature and sane human persons… the ability to choose freely between alternative possibilities of acting.” This “dilemma of determinism,” as William James called it, is closely related to the meaning of time. Is the future given, or is it under perpetual construction?

Prigogine goes on to demonstrate that there is, in fact, an “arrow of time,” that time is not symmetrical, and that the future is very much open, very much compatible with the idea of free will. Thus: in our lifetimes we have seen science —or parts of the scientific community, with the rest to follow in tow— reclassify free will from “illusion” to “likely reality”; the question of your own role in your future, of humanity’s role in the future of civilization, has been answered differently just within the past few decades.

No more profound question can be imagined for human endeavor, yet we have an inescapable conclusion: our phenomenologically obvious sense that we choose, decide, change, perpetually construct the future was for centuries contradicted falsely by “true” science. Prigogine’s work and that of his peers —which he calls a “probabilizing revolution” because of its emphasis on understanding unstable systems and the potentialities they entail— introduces concepts that restore the commonsensical conceptions of possibility, futurity, and free will to defensibility.

If one has read the tortured thinking of twentieth-century intellectuals attempting to unify determinism and the plain facts of human experience, one knows how submissive we now are to the claims of science. As Prigogine notes, we were prepared to believe that we, “as imperfect human observers, [were] responsible for the difference between past and future through the approximations we introduce into our description of nature.” Indeed, one has the sense that the more counterintuitive the scientific claim, the more eager we are to deny our own experience in order to demonstrate our rationality.

This is only degrees removed from ordinary orthodoxies. The point is merely that the very best scientific theories remain misconceptions, and that where science contradicts human truths of whatever form, it is rational to at least contemplate the possibility that science has not advanced enough yet to account for them; we must be pragmatic in managing our knowledge, aware of the possibility that some truths we intuit we cannot yet explain, while other intuitions we can now abandon. My personal opinion, as you can imagine, is that we take too little note of the “truths,” so to speak, found in the liberal arts, in culture.

It is vital to consider how something can be both true and not true in order to understand science and its limitations, and even more the limitations of second-order sciences (like the social sciences). Newton’s laws were incredible achievements of rationality, verified by all technologies and analyses for hundreds of years, before their unforeseen exposure as deeply flawed ideas, valid only within a limited domain, which taken as a whole yield incorrect predictions and erroneous metaphorical structures for understanding the universe.

I never tire of quoting Karl Popper’s dictum:

Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.

It is hard but necessary to have this relationship with science, whose theories seem like the only possible answers and whose obsolescence we cannot envision. A rational person in the nineteenth century would have laughed at the suggestion that Newton was in error; he could not have known about the sub-atomic world or the forces and entities at play in the world of general relativity; and he especially could not have imagined how a theory that seemed utterly, universally true and whose predictive and explanatory powers were immense could still be an incomplete understanding, revealed by later progress to be completely mistaken about nearly all of its claims.

Can you imagine such a thing? It will happen to nearly everything you know. Consider what “ignorance” and “knowledge” really are for a human, what you can truly be certain of, how you should judge others given this overwhelming epistemological instability!

Objectivity and Art

As a Popperian, I believe that the distinction between the objective and the subjective (or the relative) has been misunderstood and hyperbolized. Perhaps nothing is objective, but that does not mean that all is subjective. Newton’s proposed laws of motion were, for centuries, “objectively” true; confirmed by all experimental tests, they formed the basis of thousands of discoveries in physics and other fields. These discoveries were themselves experimentally tested, and themselves led to thousands of discoveries in the exponential fashion to which we’ve become accustomed.

But Newton was wrong; his laws were inaccurate. In David Deutsch’s terms, they were very, very good misconceptions, just as Einstein’s better ideas are very, very good misconceptions that will eventually be replaced by even better, more accurate, deeper ideas that explain more with less. This process is progressive: science gets better and better, even though it is purely the creation of “subjective” human conjecture —imagination— tested against reality for utility. We might say that the history of human knowledge is one of conjectures which are never complete or objective but which are ever-improving. To be ever-improving, they must be moving towards something; if they cannot reach it, they approach it as a line does an asymptote. Science asymptotically approaches objective, complete truth, never arriving but getting closer and closer (1). It is not objective —as the work of humans, how could it be?— but neither is it aimless or subjective.

But what about art? We do not tend to think that art is progressive. Indeed, the attitude of the age treats art as a private utterance, as pure subjectivity, or at best as a personal religion of some entertaining use to others. One epistemological consequence of the democratic ethos, unmoored from axiomatic values, is that we struggle with the idea of objectivity in anything, although we incoherently exempt the sciences from our anxious doubt. But this is a temporary phase, a confusion. It is not the case that art is purely subjective, aimless, without teleology or purpose; it is rather the case that art, like science, improves over time because it asymptotically approaches something. It happens to be the same “something” that science hews to: reality.

Consider the following work of art from tens of thousands of years ago:

[image: cave painting of lions, Chauvet Cave]

From Chauvet, this depiction is among the earliest instances of art; it features a range of animals including, most prominently, cave lions. From tens of thousands of years later, in the 19th century, here is the head of a lion painted by Théodore Géricault:

[image: Théodore Géricault, head of a lion]

It’s obvious that this is a better depiction, in part because we can reasonably assume that the intent of these two artists, across so much time, was similar: to capture and convey something essential about the lion. This intent was almost certainly inexplicit for the ancient artist, and may have expressed itself in other ways which recur throughout the history of art. For example, artists have occasionally conceived of their mission in ceremonial, religious, or supernatural terms, imagining that by performing acts in concert with images they might control reality (2). In later centuries, they might consider their art in more subtle religious, political, pedagogical, ideological, or emotional terms. But a sufficiently abstract definition might cover most cases:

Art seeks to virtualize phenomena for human benefit.

By “virtualize,” I mean only that what art offers us it offers on our terms. One can experience tragedy when a loved one dies; one can know the awe and power of the lion when one sees it enter a cave in which one’s family is camped. Art seeks to make these phenomena, and the meanings they provide, available to you apart from the uncontrollable and contingent world, for a variety of reasons. Through art, we are enriched by experiences with less risk of suffering or injury; experiences are made more portable and reproducible, and are freed from temporality; we can begin at least to portray what we imagine, even if we cannot yet build it; and so on. Art, then, supports the same accelerated development of knowledge that consciousness, metaphor and language, and reason support, and all are related. Whereas we once built knowledge accidentally and slowly, when the inexplicit knowledge of environment and utility embodied by genes would lead to those genes’ replication and spread, we now have a range of means for building knowledge rapidly and at little cost. We can, at our discretion, experience alternative modes of being, the lives of others, worlds we’ve never seen; we can be taken deep within ourselves or so far away that we can no longer remember our names.

And from this, we learn. From art, from the virtualization of phenomena far removed from our practical realities, we derive values, politics, and purposes, in addition to whatever assortment of facts and information the art carries with it. Some essential values we seem incapable of arriving at any other way, especially in the absence of axioms or authority: compassion and empathy, for example, depend on the recognition of the humanness of others but are hardly logically compulsory propositions; art is unparalleled at conveying, in experiential and therefore broadly-intelligible terms, the bases of such moral notions, even to the ignorant and resistant. (3) Art is where we find meanings we cannot reason and experiences that we cannot otherwise have; that we recognize the value and utility of these experiences and meanings but cannot yet rationally justify them doesn’t mean that they’re purely subjective. The fact that our ancestors didn’t understand the stars by which they navigated didn’t make those stars subjective either. They were simply little-understood, but their utility was evident to all. The same is true of art and culture, emergent phenomena we dismiss because of weaknesses in our contemporary philosophies. What we cannot reduce we pretend doesn’t exist.

The consequences of purpose

If we say that “art seeks to virtualize phenomena for human benefit,” we can begin to critique art apart from distracting historicisms. This liberates us from, among other traps, referentiality and academic preoccupations. We can attempt to discuss art concretely in terms of its aims:

  • Does the work virtualize phenomena well? Does it use the best forms for the phenomena it pursues? Does it use effective available techniques for their virtualization? Are the relevant parts of the phenomena captured and expressed? Does the work have a purpose, and are its aesthetic choices suitable for that purpose?
  • Is the work novel? If it isn’t, it won’t “work,” for just as sound science that discovers what science already knows is redundant and contributes nothing, repetitive art with cliched expressions, moribund forms, or a derivative purpose is redundant and contributes nothing. Novelty is what permits consciousness to attend to phenomena, and is therefore a foundational value in art.
  • Do humans benefit? The benefit may be to the artist alone, which is perfectly fine but should be understood as an extremely narrow sort of aim, like a scientific discovery that extends the life of a single human. The tension between an artist’s desire to express himself purely and without calculations about reception and the fact that art must benefit humans or be pointless is irreducible and beneficial, itself a metaphor for the paradox of selfhood.
  • Art that is about art is as science about science: useful for practitioners but insufficiently universal in scope. Art that is about artists is as science about scientists: likely to be worthless where it cannot be generalized, and where it can it is hardly about individuals anyway.

An important note: art makes virtualized reality possible both for external sense experiences, like seeing a lion or a landscape, and for internal, phenomenological experiences, like emotional states or even qualia. The virtualization of meaningful human phenomena might involve nothing representational, nothing taken from the world outside us —music often does not. A work of art which captures, provokes, or explores something like sorrow, hope, love, or fear might be highly abstract, impressionistic, unusual, just as our internal life is.

Artists are technologists

I’ve mentioned qualia twice, once implicitly noting that some do not believe they exist and once by noting that art captures them well. Qualia were first described by C.I. Lewis in 1929:

There are recognizable qualitative characters of the given, which may be repeated in different experiences, and are thus a sort of universals; I call these “qualia.” But although such qualia are universals, in the sense of being recognized from one to another experience, they must be distinguished from the properties of objects.

Another way of putting it: when you look at a red sign, the “redness” you see doesn’t exist anywhere. The sign is an almost entirely-empty latticework of vibrating particles. Photons bounce off of some of these and enter your eye at a wavelength, but that wavelength is a mathematical description: it has no color in it, and photons themselves are colorless. Your mind experiences “redness,” but you might also say that it “creates” or “invents” redness when prompted by certain light phenomena which themselves have nothing to do, now or ever, with “redness,” which doesn’t exist. Erwin Schrödinger, the Nobel-prize winning quantum physicist, put it thus:

The sensation of colour cannot be accounted for by the physicist’s objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so.

That one of the founders of modern physics didn’t believe a physical or physiological explanation for qualia would be forthcoming is arresting. But more to the point, while scientists and philosophers try to determine what “redness” or “sorrow” really is, as a quale, artists are virtualizing qualia and catalyzing them in audiences. Indeed, much of the personal quality that art has consists in its relationship to deep, individuated qualia we ourselves hardly comprehend.

For millennia art outstripped the sciences in its ability to understand and recreate qualia, virtualize reality, and provide ennobling, edifying, educational, and entertaining simulations for humans. Indeed, art pushed science, demanding better technologies which required deeper understanding in dozens of fields. The demands of art pushed architecture, and therefore engineering and chemistry and materials sciences; art required new resources for colors and sculptures, shaping societies economically; the musical arts were constrained awfully until technology turned music from vanishing performances into enduring, widely-distributed works.

All of which is to say: artists are natural technologists. Historically, they’ve pursued the newest and best techniques, materials, and forms. When the methodology for achieving perspective became clear, few resisted it on the basis of a calcified iconographic style considered to be “high art,” or if some did they’ve been suitably forgotten. And had new inks, better canvases, or some unimaginable invention given superior means to the impressionists to capture washes of light and mood —like, say, film— they’d have used whatever was available. The purpose of painting isn’t paint, after all; nor is the purpose of writing a book. (4)

The purpose is instead to virtualize phenomena for the benefit of humans. The best techniques for doing so do indeed change; the schools of thought that shape artists wax, wane, wear out; intellectual movements, critical and popular reaction, and technology are all part of the contingency in which we work. But the orientation of art should not be towards the ephemeral (except in exploring ephemerality itself, permanent and vexing) but towards deeper, universal, clarifying aims.

In elementary school, we were taught about Europe’s cathedrals. Centuries of fatality- and error-filled construction and engineering innovation on the edge of recklessness produced spaces intended to virtualize the experience of heavenly light, spiritual elevation, credence in the sacred. A peasant from the fields could enter one and immediately understand; he’d not know Suger’s theories or the tradeoffs involved in the buttresses, but the purpose and effect of the art were somehow not lost on him. The same would likely have been true had he seen Michelangelo’s David or been permitted to hear Mozart or Hildegard of Bingen. With exceptions, of course, art has aspired to universality.

The extraordinary present circumstance in which art is not expected to be intelligible, to have any “benefit” beyond the meaninglessly subjective “enjoyment” of the “consumer,” is an aberration. That art is denied its progressive success at virtualizing greater and greater parts of reality, conveying ever-more phenomena with ever-greater fidelity to ever-more people, is the result of a philosophical disruption and a subsequent error. We found God dead; we asked what had god-like authority and reeled to realize that nothing could. But we’ve accepted that somehow, science exceeds merely moody paradigms. It works. It gives us control over the universe and ourselves, reduces contingency and accident, allows us to be what we think we should be.

Art is part of the same process, and can be evaluated similarly. In allowing us to virtualize and experiment with realities and phenomena, and, gradually, to live in those realities, it is part of the same epistemological and creative process as science. We are simply at an earlier stage, and just as someone might have surveyed the globe in 500 CE and concluded, “There is nothing objective about the so-called sciences; it appears that every culture and every society simply invents its own ideas and none is really any better than the rest,” so we now struggle to understand how aesthetics and morality might someday be understood teleologically, not as expressions of “taste” but as forms of knowledge-generation, experimentation, and even reality-building.

Perhaps we are transitioning from artists-as-depictors and artists-as-catalyzers (5) to artists-as-world-makers. To create something, you must first understand it; to create a world for humans to experience, you must first understand how humans experience the world. Once you can reliably replicate any sense-perception, you must think of how such sense-perceptions are experienced in the mind: as qualia. Then you must think of how to generalize or objectify qualia, or how to catalyze them. This is not a task for science alone, though whether it is not yet or not at all I cannot say. It will involve art, however, particularly in the form it takes when it wants to extend itself into life: design.

Design is art which cannot ignore the outcome it pursues, which uses every technology or tool it can conjure to succeed, and which accepts the judgement of audiences. In this way, one can understand why so much of the vitality of art now resides in the commercial space: there, the artists still care about audiences, still have aims apart from themselves, still seek resonance, utility, universality. My anxieties about art stem mostly from this concern: if purposive, deliberate, universal art becomes the province of commercial design, art’s values will gravitate towards market values. The hope: those values will evolve intelligently through self-correction. But it seems safer to me to have a cultural space which accords art precisely the same sort of respect we pay science so that the arts can pursue their ends purely —ends far deeper than markets, capitalism, any historicism, incidentally— just as science exists apart from technology and its commercialization. But I doubt whether such a space is possible so long as we insist that all art is subjective, no teleology is imaginable, and there is no such thing as progress. Such an insistence is, in my view, both materially incorrect and snobbish, arising more from nostalgia for older forms or aristocratic art-culture than any real analysis of the present. We live in a world in which more people read, listen to music, and experience works of art than ever before. This is both art’s triumph and a prelude to its expanding role. From its earliest efforts to virtualize reality through its portrayal and later attempts to produce specific experiences in audiences, art aspires to the creation of worlds. As it converges with technology —in video games, for example— these worlds will grow to support the range of experiences and meanings humans desire, as art always has.


  1. Much of the confusion about subjective and objective sorts of knowledge comes from this simple fact: that we cannot have authority in knowledge means that nothing can be “final”; nothing is beyond interrogation, nothing is exempt from revision and improvement. That does not mean that all is equivalent, comparable, meaningless, a matter of preference. There are “criteria for reality,” in Deutsch’s terms, and they’re perfectly adequate to the actual epistemological tasks at hand, particularly in the sciences, where academics haven’t managed to confuse everyone’s sense of purpose yet. 

  2. As it happens, using virtualizations of reality to control reality seems likely to play an important role in humanity’s future. 

  3. The invention of new therapeutic diagnoses for the insufficiently empathetic, and their subsequent ineffectual medication, is a likelier course of action for our society. 

  4. The mistaking of a temporary medium —and all media, even those that endure for thousands of years, are temporary— for the purpose of art itself is precisely the sort of confusion that happens when ends vanish and means must suffice. If you cannot believe that art has a purpose deeper than its forms, its forms seem really important. But if you think the purpose of art is to virtualize phenomena for the benefit of humans (or the glorification of God or Marx), it’s not hard to accept that we might read off of screens or never care about painting again. If art matters, the texts on screens will do for us what oral traditions did for the Greeks and tomes did for the Enlightenment. The chapter of visual art obliged by technological-limitation to ignore movement will come to an end, or, if it can still open us to experience, teach us, console us, will continue. 

  5. Perhaps the mayhem of the successive schools of non-representational art can be understood both in terms of internecine disorder during the revaluation of values and as the working-out of experimental methods and techniques for orthogonal approaches to virtualization. Experimental art can, of course, be vitally useful. 

User Interface of the Universe

Quantum physicist and philosopher David Deutsch describes a fantasy of instrumentalism: an extraterrestrial computer like an oracle which can predict the outcome of any experiment:

[I]magine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology ‘oracle’ which can predict the outcome of any possible experiment, but provides no explanations… How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one, or to build another oracle of the same kind — or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And even if it predicted that the spaceship we had designed would explode on take-off, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then would we have any chance of discovering what might cause an explosion on take-off. Prediction —even perfect, universal prediction— is simply no substitute for explanation.

Similarly, in scientific research the oracle would not provide us with any new theory. Not until we already had a theory, and had thought of an experiment that would test it, could we possibly ask the oracle what would happen if the theory were subjected to that test. Thus, the oracle would not be replacing theories at all: it would be replacing experiments. It would spare us the expense of running laboratories and particle accelerators. Instead of building prototype spaceships, and risking the lives of test pilots, we could do all the testing on the ground with pilots sitting in flight simulators whose behavior was controlled by the predictions of the oracle.

The oracle would be very useful in many situations, but its usefulness would always depend on people’s ability to solve scientific problems in just the way they have to now, namely by devising explanatory theories. It would not even replace all experimentation, because its ability to predict the outcome of a particular experiment would in practice depend on how easy it was to describe the experiment accurately enough for the oracle to give a useful answer, compared with doing the experiment in reality. After all, the oracle would have to have some sort of ‘user interface’. Perhaps a description of the experiment would have to be entered into it, in some standard language. In that language, some experiments would be harder to specify than others. In practice, for many experiments the specification would be too complex to be entered. Thus the oracle would have the same general advantages and disadvantages as any other source of experimental data, and it would be useful only in cases where consulting it happened to be more convenient than using other sources. To put that another way: there already is one such oracle out there, namely the physical world. It tells us the result of any possible experiment if we ask it in the right language (i.e. if we do the experiment), though in some cases it is impractical for us to ‘enter a description of the experiment in the required form’ (i.e. to build and operate the apparatus). But it provides no explanations.
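Deutsch’s oracle can be caricatured in a few lines. This sketch is my own invention, not his: the “physics” (free fall from a height) and the experiment encoding are stand-ins, chosen only to show an interface that returns predictions while supplying no explanations.

```python
import math

# Hedged caricature of Deutsch's hypothetical oracle: it predicts the
# outcome of any fully-described experiment but explains nothing.
# The dict encoding and the free-fall "physics" are invented here.

def oracle(experiment: dict) -> float:
    """Return the predicted outcome of an experiment; never the reason for it."""
    g = 9.81  # the oracle's hidden workings, opaque to its users
    return math.sqrt(2 * experiment["height_m"] / g)

# We must already know what to ask; the oracle cannot design the experiment.
fall_time = oracle({"height_m": 20.0})
# The answer is uninterpreted data. Explaining *why* the result behaves
# like sqrt(2h/g) requires a theory the oracle does not supply.
```

The point survives the caricature: callers who lack a theory of falling bodies can query this function forever without learning why its answers come out as they do. Prediction is an interface; explanation is something else.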

The universe is an oracle to which we can submit any properly-phrased question and receive an answer in the form of uninterpreted data. I think that’s a lovely feature of our world. However: it is only the creative, synthetic interpretation of data —the generation of explanations, a form of knowledge constructed so far as we know only by humans— that makes this useful.

Data collection, testing, and experimentation that take place without meaningful explanations constitute a popular sort of ignorance in some fields; they accord with the uninterrogated ascent of the quantitative over the qualitative. But experiments derive from explanatory knowledge, not the other way around: and while an experiment can falsify an explanation, it cannot create one or even confirm one in any final sense.
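That asymmetry can be put in miniature. Here is a toy sketch (with invented numbers, not anyone's actual experiment) in which theories are conjectured prediction functions and data can refute them but never finally confirm them:

```python
# Toy sketch of Popperian falsification: theories are conjectured
# prediction functions; observations can refute, never finally confirm.

def galilean_fall(t):
    """Predicted fall distance (m) after t seconds, with g = 9.8 m/s^2."""
    return 0.5 * 9.8 * t ** 2

def rival_misconception(t):
    """A rival conjecture: distance fallen grows linearly with time."""
    return 10.0 * t

def falsified(theory, observations, tolerance=0.5):
    """True if any observation contradicts the theory's predictions."""
    return any(abs(theory(t) - d) > tolerance for t, d in observations.items())

# 'Oracle' data: measured fall distances at elapsed times (invented here).
observations = {1.0: 4.9, 2.0: 19.6, 3.0: 44.1}

print(falsified(rival_misconception, observations))  # True: refuted
print(falsified(galilean_fall, observations))        # False: survives, for now
```

The asymmetry lives in the code's shape: `falsified` can return `True` decisively, but a `False` only means the conjecture has survived the tests run so far.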

Nice things to consider: our universe is an oracle that will answer any question we put to it; and conjectural creativity is essential for the formation of explanatory knowledge (which catalyzes more questions to pose to the universe, and therefore more explanations to conjure, test, explain…).

“I have often thought that the nature of science would be better understood if we called theories “misconceptions” from the outset, instead of only after we have discovered their successors. Thus we could say that Einstein’s Misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of Evolution is an improvement on Darwin’s Misconception, and his on Lamarck’s… Science claims neither infallibility nor finality.”

David Deutsch, quantum physicist and philosopher, in The Beginning of Infinity. Deutsch is obliged, in the course of arguing his theses about the nature of knowledge, progress, and human purpose, to rebut reductive notions like instrumentalism and our parochial cultural pessimisms. To do so he often leans on Karl Popper, who described scientific knowledge as being conjectural, ever-improving in its isomorphic fidelity to reality yet always tentative in a strict sense.

It is striking what an effect this clever little substitution has: we know, of course, that all scientific theories are later to be subsumed by better, deeper theories with more explanatory and predictive power; we know earlier theories are now in fact considered erroneous or incomplete for this very reason; but referring to “Einstein’s Misconception” reminds us of just how provisional our knowledge is, how far from any conceivable bedrock we remain. As a matter of philosophical principle, our knowledge is asymptotic: it may increase infinitely, draw nearer and nearer to the foundation, but it will never touch it.

(Perhaps this is so due to something elementally important that Deutsch observes in an unrelated discussion: “All scientific measurements use chains of proxies.” So long as language itself, perception —or more precisely, the inventive synthesis of perceptual data and mental interpretation that creates the world we know—, and measurement tools abstract us from the subject of our study, we can draw infinitely closer to it, but we cannot reach it, so to speak).

Our two deepest theories about the universe, Deutsch notes elsewhere, are in conflict: quantum mechanics and the general theory of relativity do not accord with one another and are, therefore, misconceptions, incomplete or incorrect. In this, we are precisely like ancient humankind, and like our forebears we struggle to conceive of our own ignorance; we tend to believe that we know quite a lot, and with impressive accuracy.

So we do. Deutsch demonstrates that, barring extinction, we will continue to refine and improve our knowledge infinitely; but because it can always be improved, it will never be final. Thus we will always live with fallible scientific understanding (and fallible moral theories, fallible aesthetic ideas, fallible philosophical notions, etc.); it is the nature of the relationship between knowledge, mind, and universe.

But it remains odd to say: everything I know is a misconception.

Kateoplis posted a “Moon model by Johann FJ Schmidt at Chicago’s Field Museum, 1898.” One can scarcely imagine a more beautiful representation of knowledge, that strange abstraction which exerts so much control over the irreducible physical cosmos; as David Deutsch noted in his first TED talk:

Now how do we know about an environment that’s so far away, and so different, and so alien, from anything we’re used to? Well, the Earth —our environment, in the form of us— is creating knowledge. Well, what does that mean? Well, look out even further than we’ve just been —I mean from here, with a telescope— and you’ll see things that look like stars. They’re called “quasars.” “Quasars” originally meant quasi-stellar object. Which means things that look a bit like stars. But they’re not stars. And we know what they are. Billions of years ago, and billions of light years away, the material at the center of a galaxy collapsed towards a super-massive black hole. And then intense magnetic fields directed some of the energy of that gravitational collapse. And some of the matter, back out in the form of tremendous jets which illuminated lobes with the brilliance of —I think it’s a trillion suns.

Now, the physics of the human brain could hardly be more unlike the physics of such a jet. We couldn’t survive for an instant in it. Language breaks down when trying to describe what it would be like in one of those jets. It would be a bit like experiencing a supernova explosion, but at point-blank range and for millions of years at a time. And yet, that jet happened in precisely such a way that billions of years later, on the other side of the universe, some bit of chemical scum could accurately describe, and model, and predict, and explain, —above all— what was happening there, in reality. The one physical system, the brain, contains an accurate working model of the other, the quasar. Not just a superficial image of it, though it contains that as well, but an explanatory model, embodying the same mathematical relationships and the same causal structure.

Now that is knowledge. And if that weren’t amazing enough, the faithfulness with which the one structure resembles the other is increasing with time. That is the growth of knowledge. So, the laws of physics have this special property. That physical objects, as unlike each other as they could possibly be, can nevertheless embody the same mathematical and causal structure and to do it more and more so over time.

It is not solely humanity which is capable of this; all life, to some degree, embodies knowledge as a function of selection processes which reward, so to speak, successful adaptive responses to environments. But humans have a vastly greater degree of precision and accuracy in their knowledge than any other creature, in part because our knowledge is so often explicit, rather than being coded into inexplicit, lossy genomic systems; in part because our knowledge is representational in many ways, rather than merely responsive to stimuli; in part because of our capacity for abstraction and generalization; and largely because ours is aided, in innumerable ways, by tools we have constructed to help acquire knowledge.

These tools now themselves contain models precisely as our minds do; inside this room is a model of the moon, just as inside your mind are the models for countless phenomena you will never witness, never touch or feel, and yet whose shape and behavior you can predict with stunning accuracy. We know a great deal through statistical computation, but all such computation is contingent on explanatory models which “embody the same mathematical and causal structure” as this or that element of the natural world.

Man is above all else the maker of models. Real knowledge is not merely predictive but virtualizes; one needn’t go to the moon; one merely keeps a model of it at hand.

Also see E.C. Mendenhall’s notes on the evolution of our model of the moon.

“Look in the mirror, and don’t be tempted to equate transient domination with either intrinsic superiority or prospects for extended survival.”

Stephen Jay Gould, quoted as a creative prompt for me by superheroic Raynor Ganan, who once wrote something that has stayed with me, and among my drafts for further discussion, for more than a year now:

True or false: evolution is a brute force hack.

I think that’s a brilliant question. The tension between it and Gould’s admonishment is both historical and philosophical: historical because Gould, like Sagan and Percy, lived when it was eminently reasonable to ponder the end of man in a nuclear holocaust, when ordinary people -and not solely the religious- felt that armageddon could soon be upon us.

It no longer seems imminent, but it remains immanent: it is not simply Chekhovian to observe that the weapon introduced into history will be used and used again, and of course nuclear arms are just one way among many that our species might be eradicated, all our works returned to thoughtless nature. One might dispassionately or bitterly or eagerly contemplate the destruction of cities and civilizations, but it is sobering to reflect on the fact that absent humanity, there is no knowledge whatever of the universe: the isomorphic comprehension of the cosmos is ours alone, so far as we know, and the end of us is the end of conscious understanding.

Or is it? The tension is philosophical because when Raynor asks whether evolution is a brute force hack, he begs the question, likely deliberately: what system or problem is being hacked? What might be unlocked should a solution be found? And what determines whether a particular ‘hack’ -a genetic variation- is successful? What problem is being solved by evolutionary processes, if any at all?

There is an implied teleology in our commonsensical reduction of evolution: we conflate natural selection with a qualitative superiority beyond mere reproductive fitness and feel that life is pursuing some end, but what? What is the teleological aim of mutation, fitness, life, death? (Note that an evolutionary biologist might consider these questions meaningless, unfalsifiable, or extraneous, and so they might be).

David Deutsch argues that we are comprehension machines: that our DNA is a kind of encoded knowledge of “what works,” and what works is the essence of science. Organisms reflect the structure of the physical world back in their own structure; we are mirrors for the universe and the laws of physics, and our specialness as a species is that we know this and have accelerated our mirroring through scientific methodology and technology. Every creature, to thrive in its world, comes to represent a programmatic understanding of its environment; we do so consciously, and therefore at great speed and with novel accuracy.
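One toy way to hold Raynor’s “brute force hack” and Deutsch’s “encoded knowledge of what works” side by side is the classic mutation-and-selection loop, sketched here in the style of Dawkins’s weasel program (the target string is an invented stand-in for the environment; no claim is made that real evolution has a target):

```python
# Blind variation plus selection: over generations the lineage comes to
# 'mirror' its environment (here a target string it never sees whole;
# selection only reports how well each variant fits).
import random

random.seed(0)  # reproducible run
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """Reproductive fitness: count of positions matching the environment."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Copy with random per-character errors."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(parent) < len(TARGET):
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)  # keep 'what works'
    generations += 1

print(parent, generations)
```

The search is brute in that variation is random; the “knowledge” is nowhere in the variation and entirely in what selection retains.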

Is this an intrinsic superiority? It extends the mirror in which the universe beholds itself far beyond our typical environment; it increases the rate at which comprehension is developed by many, many orders of magnitude; it seems likely to permit the expansion of the species into space, to diversify our biosphere, increase our distributed redundancy, improve our chances of survival in the event of cosmic or human disaster, and thereby further ensure that our genetic and cognitive virtualization of the universe is preserved.

While ours might be a transient domination, we are likelier to endure past "the last ding-dong of doom" than any species before us, and we remain the only known instance of the universe consciously understanding itself. This scarcely combats our many destructive tendencies, but the durability, portability, translatability, and extensibility of humans -computation and comprehension machines who can self-program and refashion the material world- is worth considering when we look in the mirror.

(Related).

For a period of time (10⁻⁴³ seconds, the Planck epoch) after the Big Bang, the universe had a marvelously whole quality: possessed of astonishing uniformity, its constituents had not yet fractured into what we know as reality. Mass and energy, the fundamental forces, electromagnetic radiation, even gravity were all unified. Their manifestation: the Higgs boson, a proposed particle in the standard model that is the theoretical source of mass.

The Higgs boson does not now come into being very commonly in the universe: only where collisions of particles at incredible speeds are occurring could it briefly exist (hence LHC). But all that we are and all we see comes about from its degradation, from its dissolution from a unitary whole into the elements of physical reality. Everything is built from its decay.

It seems hard not to think of this metaphorically, although to do so would surely be distasteful to a scientist who properly understood the processes involved. But aside from my friend B., who explained this to me but is not to blame for any confusion on my part, I don’t know many scientists, so: for what could this be a metaphor?

Wizards in the Gaps

I have to disagree with Simen’s critique of David Deutsch, whose theory of knowledge is based largely on the work of Karl Popper. Simen argues that Deutsch is wrong to criticize “explanationless theories” because explanation is itself a weak form of knowledge; in Simen’s argument, wizards underlie all explanations because one can endlessly question explanations: “We end up with an infinite regress: what caused that? What caused what caused that?”

Simen’s answer: at the root of all sciences are irreducible explanations which, because of their irreducibility, constitute “assumptions” which may as well be wizards for their magical role in the construction of our theories.

This is not the case, either in physics or in other fields (which do in fact materially reduce to physics: a major cause of tension between biologists and physicists in particular, as Schrödinger discussed in his lectures on the structure of life; we do not “kid ourselves” when we assert that no model of knowledge has any legitimacy unless it can in principle scale in accordance with the physical structure of reality and its laws). It is particularly untrue of the fecund territory at the roots of our sciences, where work to more deeply correlate high-level complexity with the fundamental rules of the universe is most exciting.

What is the case: our explanations are momentarily incomplete (we can assume they will remain so, ever-perfectible, asymptotically approaching completion). Simen’s assertion that where there is no knowledge there are wizards is like an anti-epistemological “God of the Gaps” game: if we posit a working model of the subatomic world and a cosmological timeline that explains how that world came into being over the life of the universe, he says, “Well, what caused that?” If we answer, “The nature of the universe at the Big Bang,” he repeats his question. We might answer this game of endless querying in two ways:

  1. The concept of causal relations depends on time, which language construes in a manner not consistent with its actual properties in the universe (and which has itself not always existed, so to speak); you are building sentences to query a universe that may not always follow the rules of sentences. (Indeed, many of Simen’s sentences are metaphysical, not scientific; they would not be sensible to a physicist: “How can we explain which [universe] we ended up with?” begs as many questions as it asks, both from language and physics).
  2. Perhaps something did cause, say, the Big Bang, and we don’t know what yet; that we don’t know yet what caused it does not mean we assume wizards, just as those who developed chemistry without understanding atomic properties weren’t assuming wizards.

Neither gods nor wizards live in the spaces where we don’t yet have explanatory models; nor are these spaces filled by “assumptions.” Instead, they await their explanations, which are not merely descriptive or observational in nature but in fact recreate the functional and formal structures they describe in model form, imaginatively. Once we explain something, we have virtualized it mentally. This is the most important property of knowledge, and one that I believe has special significance for humanity’s future in the cosmos. Popper and Deutsch are, in my view, quite right to argue that explanation is the basis of scientific progress, not the accumulation of predictions drawn from uncomprehended data.

“Perhaps the biggest question of all is whether the process of inquiry that has revealed so much about the universe since the time of Galileo and Kepler is nearing the end of the line. “I worry whether we’ve come to the limits of empirical science,” says Lawrence Krauss of Arizona State University. Specifically, Krauss wonders if it will require knowledge of other universes, such as those posed by Carroll, to understand why our universe is the way it is. If such knowledge is impossible to access, it may spell the end for deepening our understanding any further.”

Petichou linked to an article on some of the preoccupations of contemporary physicists, and I was struck by the paragraph above; Krauss’ is a curious concern.

It is often noted that one of the defining qualities of our universe is its comprehensibility, but it might just as well be said that comprehension is a defining quality of mind. This symmetry between the knowable universe and the knowing mind reflects an important quality of the latter: it does not merely observe, record, and inductively detect intelligible connections.

Rather: it encompasses, interiorizes, virtualizes, and explains holistically. That is to say that the mind is an organ which can contain within itself accurate models of all phenomena in the form of explanations. These models are akin to virtualizations: we can recreate within our minds even what we cannot observe, and we can do so such that those recreations are astonishingly isomorphic to their real counterparts.

This is the metaphorical basis for cognition: we construct metaphorical models (theories, ideas, terms) which retain the logical properties and relations of their subjects so that we are not dependent on accessibility for knowledge. We cannot, for example, see the Big Bang; the perplexing flow of time prevents it. Yet we can model it with incredible acuity, and our virtualizing computational minds allow us to extract from those models conclusions which predict and explain the behavior of the physical universe.

Nothing about the multiverse would be different, regardless of its observational accessibility. I am surprised to read Krauss’ epistemological anxiety, since it would be an event unprecedented in the history of physical reality were we to encounter something fundamentally incomprehensible. I imagine David Deutsch, in particular, would object that such a development would be unlikely given the evolution of mind within physical reality, an evolution which has allowed the former to contain the latter with profound accuracy.

(In this sense, mind, including its externalized components such as computer networks, may be the only element of reality which can in theory contain reality, although Walker Percy claimed that mind cannot, as a semiotic matter, contain itself: hence the success of the sciences and the failures of modern selfhood).

"Phosphenes," from Andrew Coulter Enright.
The inimitable S. Stratodrive informed me that the phenomenon in which one one sees spiralling, luminescing mosaics and masses of ghostly color when one presses one’s hands into one’s eyes is "an entropic phenomenon called a 'pressure phosphene' and it’s a result of stimulating your retinal ganglion cells.” He also shared that it’s sometimes called “prisoner’s cinema” by those in the darkness of jail.
The stimulation of these cells need not be manual: phosphenes can also result from magnetic fields, radiation, drugs, standing too quickly, or other conditions. Amazingly, astronauts report seeing phosphenes, presumably due to the radiation they encounter in space.
This is evidently because the high-energy particle radiation in space, blocked for us by our atmosphere, activates the cells responsible for detecting light; while I initially assumed this meant that, in a sense, we see such radiation (in a beautiful kaleidoscopic way), another author suggests a different explanation:
These ionizing radiation-induced free radicals generate chemiluminescent photons from lipid peroxidation, which are absorbed by the photoreceptor chromophores, modify[ing] the rhodopsin molecules (bleaching) and start[ing] the photo-transduction cascade resulting in the perception of phosphene lights.
I’m sure Jack can comment further, but I would note that (1) I think phosphenes are beautiful and, in their demonstration of the lower-order processes of our perceptions, fascinating; (2) I learned the word “psychoplasticity” while reading about this; and (3) the image above is a composite of photographs of lightstick chemicals poured into a toilet; I was searching for representations of phosphenes, which I’d like to see, and it was the best I found.
Update: be sure to read the King of Joy’s excellent corrections and expansions on this subject, on his fine site or in the comment below. Thanks, Ben!

"Phosphenes," from Andrew Coulter Enright.

The inimitable S. Stratodrive informed me that the phenomenon in which one sees spiralling, luminescing mosaics and masses of ghostly color when one presses one’s hands into one’s eyes is “an entropic phenomenon called a ‘pressure phosphene’ and it’s a result of stimulating your retinal ganglion cells.” He also shared that it’s sometimes called “prisoner’s cinema” by those in the darkness of jail.

The stimulation of these cells need not be manual: phosphenes can also result from magnetic fields, radiation, drugs, standing too quickly, or other conditions. Amazingly, astronauts report seeing phosphenes, presumably due to the radiation they encounter in space.

This is evidently because the high-energy particle radiation in space, blocked for us by our atmosphere, activates the cells responsible for detecting light; while I initially assumed this meant that, in a sense, we see such radiation (in a beautiful kaleidoscopic way), another author suggests a different explanation:

These ionizing radiation-induced free radicals generate chemiluminescent photons from lipid peroxidation, which are absorbed by the photoreceptor chromophores, modify[ing] the rhodopsin molecules (bleaching) and start[ing] the photo-transduction cascade resulting in the perception of phosphene lights.

I’m sure Jack can comment further, but I would note that (1) I think phosphenes are beautiful and, in their demonstration of the lower-order processes of our perceptions, fascinating; (2) I learned the word “psychoplasticity” while reading about this; and (3) the image above is a composite of photographs of lightstick chemicals poured into a toilet; I was searching for representations of phosphenes, which I’d like to see, and it was the best I found.

Update: be sure to read the King of Joy’s excellent corrections and expansions on this subject, on his fine site or in the comment below. Thanks, Ben!

“Oppenheimer, they tell me you are writing poetry. I do not see how a man can work on the frontiers of physics and write poetry at the same time. They are in opposition. In science you want to say something that nobody knew before, in words which everyone can understand. In poetry you are bound to say something that everybody knows already in words that nobody can understand.”
The brilliant physicist Paul Dirac, who seems not to have understood poetry, in a remark to Robert Oppenheimer. Thanks, dad!
One of the photos we took with the telescope. The large feature towards the bottom right is Mare Crisium, the Sea of Crises.

A few years ago, I lost a crucial argument because of this fact:

"The Moon is in synchronous rotation, which means it rotates about its axis in about the same time it takes to orbit the Earth. This results in it keeping nearly the same face turned towards the Earth at all times."

This amazes me. Also, I resent this furtive concealment, even if it’s just lunar modesty, and am trying to determine which side is better so that I can properly attenuate my irritation:

The two hemispheres have distinctly different appearances, with the near side covered in multiple, large maria (Latin for ‘seas’…). The far side has a battered, densely cratered appearance with few maria.

Like everyone, I find whatever is kept away more intriguing and am now cross with the moon for its secrecy.
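The synchronous-rotation fact quoted above reduces to a small piece of geometry, sketched here on a toy circular-orbit model: when the spin period equals the orbital period, the angle between the Moon’s near face and the Earthward direction never changes.

```python
# If rotation period == orbital period, the same face points at Earth.
ORBITAL_PERIOD = 27.32  # days, the sidereal month

def face_offset(rotation_period, t):
    """Angle (degrees) between the Moon's near face and the direction
    to Earth at time t (days), on a circular-orbit toy model."""
    orbit_angle = 360.0 * t / ORBITAL_PERIOD  # where the Moon is on its orbit
    spin_angle = 360.0 * t / rotation_period  # how far the Moon has rotated
    return (spin_angle - orbit_angle) % 360.0

# Synchronous rotation: the offset stays at zero forever.
print([face_offset(ORBITAL_PERIOD, t) for t in (0, 7, 14, 27.32)])  # all 0.0

# Any other spin period would parade every longitude past Earth.
print(round(face_offset(20.0, 10.0), 1))  # nonzero: a different face shows
```

Hence the modesty: the concealment requires no conspiracy, only two matched periods.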

(Note: this is not related to Cameron’s post about The Great Moon Hoax, or, for that matter, this classic UNLV track).

“It would take as many human bodies to make up the sun as there are atoms in each of us. The geometric mean of the mass of a proton and the mass of the sun is 50 kilograms, within a factor of two of the mass of each person here.”

Sir Martin Rees in a TED lecture. He suggests that humans have evolved to this scale, an almost beautiful mean between stars and atomic particles, because we must be large enough to permit massive complexity in structure while small enough to experience minimal gravitational effects.
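Rees’s arithmetic checks out to within his stated factor of two, using standard values for the two masses:

```python
# Geometric mean of the proton's mass and the Sun's mass, per Rees.
import math

M_PROTON = 1.673e-27  # kg
M_SUN = 1.989e30      # kg

human_scale = math.sqrt(M_PROTON * M_SUN)
print(f"{human_scale:.1f} kg")  # about 57.7 kg, near the mass of a person
```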

This idea reminds me of Schrödinger’s amazing explanation of why the fundamental components of human life -particularly DNA- are sized as they are.

It always makes me feel rather happy to think that everything had to be just so for our world, as we know it, to occur. Rees calls this quality of the universe its biophilia and describes it more here.