Meta is Murder


I am an allergic and reactive person, most outraged by the sorts of intellectual atrocities I myself commit. To say this is merely to assert the personal applicability of the now-hoary Hermann Hesse adage:

"If you hate a person, you hate something in him that is part of yourself. What isn’t part of ourselves doesn’t disturb us."

Hesse is a figure whom I regard with suspicion, and again: it seems to me likely that this is due to our mutual habits of appropriation, though whereas he recapitulates Eastern religious ideas in semi-novelistic form for his audience of early 20th-century European exoticists, I recapitulate in semi-essayistic form 20th-century European ideas from Kundera, Gombrowicz, Popper, and others. In this as in all cases, it is the form and not the content that matters.

To describe someone formally, we might say: “She is certain of her rightness, intolerant of those who disagree with her.” But to describe the content is necessarily to stray from the realm of the psychological —which is enduring, for the most part— into the realm of ephemera masquerading as philosophy: “She is for X, fighting against those who believe Y.” You and I have opinions about X and Y; we will judge her according to those opinions, even though in the fullness of time an opinion about X or Y will matter as much as the position of a farmer on the Huguenot question. History does not respect our axes and categories, although we believe as ever that they are of life-and-death import. History looks even less kindly on the sense of certainty which nearly all of us attain about our beliefs.

Art and understanding are concerned with forms; politics and judgment are concerned with content. I think of them algebraically: what can be described in variables has greater range, explanatory power, and reach than the specific arithmetic of some sad concluded homework problem.

Some of my smartest friends love Hesse. When I read him I am often struck by the familiarity of his ideas; I cannot tell whether I learned them through other authors who read him, through ambient culture, or through myself, my own reflections, but I know that they often seem to me to be apt instantiations of ideas nearly folklorish in nature, as is the case with the axiom quoted above. Perhaps it is simply that other moral principles lead to the same conclusion, so that Hesse seems as though he arrives at the end, rather than the middle, of the inquiry.

One such principle is well phrased by Marilynne Robinson in her essay “When I Was a Child,” in her collection When I Was a Child I Read Books:

"It may be mere historical conditioning, but when I see a man or a woman alone, he or she looks mysterious to me, which is only to say that for a moment I see another human being clearly."

The idea that a human seen clearly is a mystery is anathema to a culture of judgment —such as ours— which rests on a simple premise: humans can be understood by means of simple schema that map their beliefs or actions to moral categories. Moreover, because there are usually relatively few of these categories, and few important issues of discernment —our range of political concerns being startlingly narrow, after all— humans can be understood and judged at high speed in large, generalized groups: Democrats, Republicans, women, men, people of color, whites, Muslims, Christians, the rich, the poor, Generation X, millennials, Baby Boomers, and so on.

It should but does not go without saying that none of those terms describes anything with sufficient precision to support the kinds of observations people flatter themselves making. Generalization is rarely sound. No serious analysis, no serious effort to understand, describe, or change anything can contain much generalization, as every aggregation of persons introduces error. One can hardly describe a person in full, let alone a family, a city, a class, a state, a race. Yet we persist in doing so, myself included.

Robinson continues:

"Tightly knit communities in which members look to one another for identity, and to establish meaning and value, are disabled and often dangerous, however polished their veneer. The opposition frequently made between individualism on the one hand and responsibility to society on the other is a false opposition as we all know. Those who look at things from a little distance can never be valued sufficiently. But arguments from utility will never produce true individualism. The cult of the individual is properly aesthetic and religious. The significance of every human destiny is absolute and equal. The transactions of conscience, doubt, acceptance, rebellion are privileged and unknowable…"

There is a kind of specious semi-rationalism involved in what she calls “utility”: the rationalism that is not simply concerned with logical operations and sound evidentiary processes but also with excluding anything it does not circumscribe. That is to say: the totalizing rationalism that denies a human is anything more than her utility, be it political or economic or whatever. Such rationalism seems intellectually sound until one, say, falls in love, or first encounters something that resists knowing, or reads about the early days of the Soviet Union: when putatively “scientifically known historical laws of development” led directly to massacres we can just barely admit were a kind of error, mostly because murder seems unsavory (even if murderously hostile judgment remains as appealing to us as ever).

One of the very best things Nietzsche ever wrote:

"The will to a system is a lack of integrity."

But to systematize is our first reaction to life in a society of scale, and our first experiment as literate or educated or even just “grown-up” persons with powers of apprehension, cogitation, and rhetoric. What would a person be online if he lacked a system in which phenomena could be traced to the constellation of ideas which constituted his firmament? What is life but the daily diagnosis of this or that bit of news as “yet another example of” an overarching system of absolutely correct beliefs? To have a system is proof of one’s seriousness, it seems —our profiles so often little lists of what we “believe,” or what we “are”— and we coalesce around our systems of thought just as our parents did around their political parties, though we of course consider ourselves mere rationalists following the evidence. Not surprisingly, the evidence always leads to the conclusion that many people in the world are horrible, stupid, even evil; and we are smart, wise, and good. It should be amusing, but it is not.

I hate this because I am doing this right now. I detest generalization because when I scan Twitter I generalize about what I see: “people today,” or “our generation,” I think, even though the people of today are as all people always have been, even though they are all just like me. I resent their judgments because I feel reduced by them and feel reality is reduced, so I reduce them with my own judgments: shallow thinkers who lack, I mutter, the integrity not to systematize. And I put fingers to keys to note this system of analysis, lacking all integrity, mocking my very position.

I want to maintain my capacity to view each as a mystery, as a human in full, whose interiority I cannot know. I want not to be full of hatred, so I seek to confess that my hatred is self-hatred: shame at the state of my intellectual reactivity and decay. I worry deeply that our systematizing is inevitable because when we are online we are in public: that these fora mandate performance, and worse, the kind of performance that asserts its naturalness, like the grotesquely beautiful actor who says, "Oh, me? I just roll out of bed in the morning and wear whatever I find lying about" as he smiles a smile so practiced it could calibrate the atomic clock. Every online utterance is an angling for approval; we write in the style of speeches: exhorting an audience, haranguing enemies, lauding the choir. People “remind” no one in particular of the correct ways to think, the correct opinions to hold. When I see us speaking like op-ed columnists, I feel embarrassed: it is like watching a lunatic relative address passers-by using the “royal we,” and, I feel, it is pitifully imitative. Whom are we imitating? Those who live in public: politicians, celebrities, “personalities.”

There is no honesty without privacy, and privacy is not being forbidden so much as rendered irrelevant; privacy is an invented concept, after all, and like all inventions must contend with waves of successive technologies or be made obsolete. The basis of privacy is the idea that judgment should pertain only to public acts —acts involving other persons and society— and not the interior spaces of the self. Society has no right to judge one’s mind; society hasn’t even the right to inquire about one’s mind. The ballot is secret; one cannot be compelled to testify or even talk in our criminal justice system; there can be no penalty for being oneself, however odious we may find given selves or whole (imagined) classes of selves.

This very radical idea has an epistemological basis, not a purely moral one: the self is a mystery. Every self is a mystery. You cannot know what someone really is, what they are capable of, what transformations of belief or character they might undergo, in what their identity consists, what they’ve inherited or appropriated, what they’ll abandon or reconsider; you cannot say when a person is who she is, at what point the “real” person exists or when a person’s journey through selves has stopped. A person is not, we all know, his appearance; but do we all know that she is not her job? Or even her politics? 

But totalizing rationalism is emphatic: either something is known or it is irrelevant. Thus: the mystery of the self is a myth; there is no mystery at all. A self is valid or invalid, useful or not, correct or incorrect, and if someone is sufficiently different from you, if their beliefs are sufficiently opposed to yours, their way of life alien enough, they are to be judged and detested. Everyone is a known quantity; simply look at their Twitter bio and despise.

But this is nonsense. In truth, the only intellectually defensible posture is one of humility: all beliefs are misconceptions; all knowledge is contingent, temporary, erroneous; and no self is knowable, not truly, not to another. We can perhaps sense this in ourselves —although I worry that many of us are too happy to brag about our conformity to this or that scheme or judgment, to use labels that honor us as though we’ve earned ourselves rather than chancing into them— but we forget that this is true of every single other, too. This forgetting is the first step of the so-called othering process: forget that we are bound together in irreducibility, forget that we ought to be humble in all things, and especially in our judgments of one another.

Robinson once more:

"Only lonesomeness allows one to experience this sort of radical singularity, one’s greatest dignity and privilege."

Lonesomeness is what we’re all fleeing at the greatest possible speed, what our media now concern themselves chiefly with eliminating alongside leisure. We thus forget our radical singularity, a personal tragedy, an erasure, a hollowing-out, and likewise the singularity of others, which is a tragedy more social and political in nature, and one which seems to me truly and literally horrifying. Because more than any shared “belief system” or political pose, it is the shared experience of radical singularity that unites us: the shared experience of inimitability and mortality. Anything which countermands our duty to recognize and honor the human in the other is a kind of evil, however just its original intention.

Free Will & the Fallibility of Science

One of the most significant intellectual errors educated persons make is in underestimating the fallibility of science. The very best scientific theories containing our soundest, most reliable knowledge are certain to be superseded, recategorized from “right” to “wrong”; they are, as physicist David Deutsch says, misconceptions:

I have often thought that the nature of science would be better understood if we called theories “misconceptions” from the outset, instead of only after we have discovered their successors. Thus we could say that Einstein’s Misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of Evolution is an improvement on Darwin’s Misconception, and his on Lamarck’s… Science claims neither infallibility nor finality.

This fact comes as a surprise to many; we tend to think of science —at the point of conclusion, when it becomes knowledge— as being more or less infallible and certainly final. Science, indeed, is the sole area of human investigation whose reports we take seriously to the point of crypto-objectivism. Even people who very much deny the possibility of objective knowledge step onto airplanes and ingest medicines. And most importantly: where science contradicts what we believe or know through cultural or even personal means, we accept science and discard those truths, often wisely.

An obvious example: the philosophical problem of free will. When Newton’s misconceptions were still considered the exemplar of truth par excellence, the very model of knowledge, many philosophers felt obliged to accept a kind of determinism with radical implications. Given the initial state of the universe, it appeared, we should be able to follow all particle trajectories through the present, and account for all phenomena through purely physical means. In other words: the chain of causation from the Big Bang on left no room for your volition:

Determinism in the West is often associated with Newtonian physics, which depicts the physical matter of the universe as operating according to a set of fixed, knowable laws. The “billiard ball” hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace’s demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.

Thus: the movement of the atoms of your body, and the emergent phenomena that such movement entails, can all be physically accounted for as part of a chain of merely physical, causal steps. You do not “decide” things; your “feelings” aren’t governing anything; there is no meaning to your sense of agency or rationality. From this essentially unavoidable philosophical position, we are logically compelled to derive many political, moral, and cultural conclusions. For example: if free will is a phenomenological illusion, we must deprecate phenomenology in our philosophies; it is the closely clutched delusion of a faulty animal; people, as predictable and materially reducible as commodities, can be reckoned by governments and institutions as though they are numbers. Freedom is a myth; you are the result of a process you didn’t control, and your choices aren’t choices at all but the results of laws we can discover, understand, and base our morality upon.

I should note now that (1) many people, even people far from epistemology, accept this idea, conveyed via the diffusion of science and philosophy through politics, art, and culture, that most of who you are is determined apart from your will; and (2) the development of quantum physics has not in itself upended the theory that free will is an illusion, as the sort of indeterminacy we see among particles does not provide sufficient room, as it were, for free will.

Of course, few of us can behave for even a moment as though free will is a myth; were it true, there would be no reason for personal engagement with ourselves, no justification for “trying” or “striving”; one would be, at best, a robot-like automaton incapable of self-control but capable of self-observation. One would account for one’s behaviors not with reasons but with causes; one would be profoundly divested from outcomes which one cannot affect anyway. And one would come to hold that, in its basic conception of time and will, the human consciousness was totally deluded.

As it happens, determinism is a false conception of reality. Physicists like David Deutsch and Ilya Prigogine have, in my opinion, defended free will amply on scientific grounds; and the philosopher Karl Popper described how free will is compatible in principle with a physicalist conception of the universe; he is quoted by both scientists, and Prigogine begins his book The End of Certainty, which proposes that determinism is no longer compatible with science, by alluding to Popper:

Earlier this century in The Open Universe: An Argument for Indeterminism, Karl Popper wrote, “Common sense inclines, on the one hand, to assert that every event is caused by some preceding events, so that every event can be explained or predicted… On the other hand, … common sense attributes to mature and sane human persons… the ability to choose freely between alternative possibilities of acting.” This “dilemma of determinism,” as William James called it, is closely related to the meaning of time. Is the future given, or is it under perpetual construction?

Prigogine goes on to demonstrate that there is, in fact, an “arrow of time,” that time is not symmetrical, and that the future is very much open, very much compatible with the idea of free will. Thus: in our lifetimes we have seen science —or parts of the scientific community, with the rest to follow in tow— reclassify free will from “illusion” to “likely reality”; the question of your own role in your future, of humanity’s role in the future of civilization, has been answered differently just within the past few decades.

No more profound question can be imagined for human endeavor, yet we have an inescapable conclusion: our phenomenologically obvious sense that we choose, decide, change, perpetually construct the future was for centuries contradicted falsely by “true” science. Prigogine’s work and that of his peers —which he calls a “probabilizing revolution” because of its emphasis on understanding unstable systems and the potentialities they entail— introduces concepts that restore the commonsensical conceptions of possibility, futurity, and free will to defensibility.

If one has read the tortured thinking of twentieth-century intellectuals attempting to unify determinism and the plain facts of human experience, one knows how submissive we now are to the claims of science. As Prigogine notes, we were prepared to believe that we, “as imperfect human observers, [were] responsible for the difference between past and future through the approximations we introduce into our description of nature.” Indeed, one has the sense that the more counterintuitive the scientific claim, the more eager we are to deny our own experience in order to demonstrate our rationality.

This is only degrees removed from ordinary orthodoxies. The point is merely that the very best scientific theories remain misconceptions, and that where science contradicts human truths of whatever form, it is rational to at least contemplate the possibility that science has not advanced enough yet to account for them; we must be pragmatic in managing our knowledge, aware of the possibility that some truths we intuit we cannot yet explain, while other intuitions we can now abandon. My personal opinion, as you can imagine, is that we take too little note of the “truths,” so to speak, found in the liberal arts, in culture.

It is vital to consider how something can be both true and not in order to understand science and its limitations, and even more the limitations of second-order sciences (like social sciences). Newton’s laws were incredible achievements of rationality, verified by all technologies and analyses for hundreds of years, before they were unexpectedly exposed as deeply flawed ideas, valid only within a limited domain, which taken as a whole yield incorrect predictions and erroneous metaphorical structures for understanding the universe.

I never tire of quoting Karl Popper’s dictum:

Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.

It is hard but necessary to have this relationship with science, whose theories seem like the only possible answers and whose obsolescence we cannot envision. A rational person in the nineteenth century would have laughed at the suggestion that Newton was in error; he could not have known about the sub-atomic world or the forces and entities at play in the world of general relativity; and he especially could not have imagined how a theory that seemed utterly, universally true and whose predictive and explanatory powers were immense could still be an incomplete understanding, revealed by later progress to be completely mistaken about nearly all of its claims.

Can you imagine such a thing? It will happen to nearly everything you know. Consider what “ignorance” and “knowledge” really are for a human, what you can truly be certain of, how you should judge others given this overwhelming epistemological instability!

The Charisma of Leaders

“The leaders always had good consciences, for conscience in them coalesced with will, and those who looked on their face were as much smitten with wonder at their freedom from inner restraint as with awe at the energy of their outward performances.”

In The Varieties of Religious Experience, William James identifies the union of conscience and will in leaders as one of their defining attributes. By conscience he means their values, their morality, their meaning-systems; and by will he means their volition, their drive, their constant, daily intentionality. Thus: their actions are in accord with their ideals. Their desires constantly reflect their beliefs.

For most of us, this is not so: there is a frustrating gap between them, such that we’re not in accord with our own values, no matter how badly we wish to be. Our moral commitments are overwhelmed routinely, and our behavior subverts, distracts, and disappoints us. Perhaps we accept a remunerative job rather than dedicating our lives to what we feel is most important; or we pursue the important, but we get sleepy and head home from the office earlier than we suspect we should; we call in sick when we’re perfectly well; or we come to feel that our calling isn’t so important as we thought. We have doubts and waste time; we crave freedom and idle time, but regret our lack of purpose. We are not as dedicated in friendship as we aspire to be; we grow irritated by what we know is superficial, meaningless; and so on ad nauseam. Because this is one of the defining qualities of human life, examples abound and more are likely unneeded.

James says that for “leaders,” this is not so; and more importantly, because it is not so, we are “as much smitten with wonder at their freedom from inner restraint as with awe at the energy of their outward performances.”

The Steve Jobs Myths

No one who has read about Steve Jobs can escape a certain sense of perplexity concerning him. A figure praised as brilliant, profound, and revolutionary, someone who purportedly saw deeply into the mysteries of creativity and human life, and who was unquestionably responsible for a great deal of innovation, was also prone to facile irrationality, appallingly abusive and callow behavior, the dullest sorts of homilies, and seeming shallowness about his own attributes and habits.

Show a video of or read a passage about the man who absurdly concluded his commencement speech at Stanford with “stay hungry, stay foolish” —a hackneyed Hallmark phrase that might as well be printed on a motivational poster outside of Steve Ballmer’s office— to someone not already indoctrinated, and their reaction will surprise you. His pinched voice droning on with quite typical businessman phrases; his endless references to the most ordinary pop-art, from The Beatles to U2 to John Mayer; his casually superficial understanding of the spirituality he ostensibly sought during various phases of his life; his fruitarian diets and basic scientific ignorance, suggestive of a narcissistic mysticism: these will all fail to impress an ordinary person. As with Apple’s often-cited but never-achieved marketing perfection, the myth obscures the truth. The "Reality Distortion Field" does not seem to work except on people for whom its existence is already a given, or for people who knew him in real life.

People who knew him, notably, often report a total awe at the power of his personality and mind, a power that overwhelmed them, catalyzed some of their greatest creativity and effort, inspired them with its focus and its capacity to find the point, the consequence, the animating vision in any effort. There is no question that Jobs was a rare sort of individual, one whom I credit with dramatically improving human access to creativity-supporting computation (among other feats that matter to me a great deal). But there is reason to wonder: in what did his greatness consist?

(Walter Isaacson’s wasteful biography is hardly helpful here, incidentally. It is a mere recounting of interviews, none well-contextualized or examined satisfactorily. It reads like an endless Time article).

A Unity of Conscience and Will

What Jobs was was indefatigable, convinced of the rightness of his pursuits —whatever they happened to be at any given time— and always in possession of a unified conscience and will. Whether flattering or cajoling a partner, denying his responsibility for his daughter, steering a company or a project, humiliating a subordinate, driving designers and engineers to democratize the “bicycle for the mind” so that computation and software could transform lives around the world, or renovating his house, he was, as they say, “single-minded,” and he never seems to have suffered from distance between his values and his actions. He believed in what he did, and was perfectly content to do whatever it took to achieve his ends. It is hard to imagine Jobs haunted by regrets, ruing this or that interpersonal cruelty; moreover, one can imagine how he might justify not regretting his misdeed, deploying a worn California aphorism like “I believe in not looking back.”

Many are willing to behave this way, of course; any number of startup CEOs take adolescent pride in aping Jobs, driving their employees to long hours, performing a sham mercuriality, pushing themselves far past psychological health in order to show just how dedicated they are. Rare is the CEO for whom this produces better results, however, than he or she would have attained with ordinary management methods.

Perhaps this is because for them, it is an act: a methodology adopted to secure whatever the CEO’s goals are, whether they entail wealth or the esteem of peers or conformity to the heroic paradigm he or she most admires. That is to say: there is for him or her the typical chasm between conscience and will, and as social animals, we register their confusion as we register our own. And what we seek in leaders is confidence, not confusion.

For Jobs, while there were surely elements of performance —as there have been with history’s greatest leaders, tyrants and heroes alike— there was at core an iron unity of purpose and practice. This may have been the source of the charisma for which he is famous —which is emphatically not due to the reasons most typically cited— and it is also, as James notes, related to his “energy of…outward performance…” If you really believe in what you do —and Jobs seemed to believe in whatever he did, as a function of personality— you do not tire until your body is overcome. And Jobs, as is well known, pushed himself and others to exhaustion, to mental fragility, to breakdown.

Morality and Praxis

James does not explain why this kind of unity is so magnetic, so charismatic, but his broader discussion of various types of persons implies that it may have something to do with the perennial problem of human meaning: the confrontation between morality —which tends to be ideal— and praxis, in which innumerable considerations problematize and overwhelm us.

There are two exemplary solutions to this problem in human history, as opposed to the third path most of us take (muddling through, bargaining in internal monologues about what we ought to be while compromising constantly):

  1. "Saints," who decide to live in accordance with religious values no matter the cost; for example, believing that money is both meaningless and corrupting, they vow poverty, and fall from society.
  2. "Leaders," who live in accordance with their own values, or values of some community that is worldly in its intentions, such that they do not drop from society but seek to instantiate their values in it.

In an age in which religious values are, even by the religious, not considered sufficient for a turn from society —an age of “the cross in the ballpark,” as Paul Simon says, of churches that promise “the rich life,” of believers who look in disgust at the instantiation of their religions’ values— the leader emerges as our most prominent solution to the problem of meaning. She is the embodiment of values and an agent of their transformative influence on the world. She has the energy of purpose, the dedication of the saint but remains within the world, and sometimes improves it.

The value or articulation of the ideas, it is appropriate to mention, is less important than we might think; in the case of Jobs, it is not crucial that he had a system of philosophy that charted the place of design in problem-solving, problem-solving in human advancement, human advancement in a moral context. Indeed, we might leave that to others entirely, others who write about such things rather than living each moment driving themselves and others to achieve them.

The toll leaders take is fearsome, but we admire them for using us up: better to be used, after all, than useless. This is why those who worked for Jobs so often cannot even begin to justify how he reduced so-and-so to tears, how he stole this or that bit of credit, how he crushed a former friend whom in his paranoia he suspected of disloyalty, and they scarcely care. What we admire about saints and leaders is not solely the values they exemplify but the totality with which they exemplify them, a totality alien to all of us whose lives are balanced between poles of conformity and dedication, commitment and restlessness.

Jobs himself understood the necessity of unifying conscience and will, but his words are no more capable of transforming us than an athlete’s post-game interview is of giving us physical talent:

Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking. Don’t settle.

For most of us, settling is an inevitability, not because of insufficient exposure to these bland admonishments but because, unlike Jobs, we do not know what great work really is —we lack confidence in any system of values or ideals; we cannot give ourselves wholly over to anything without doubt; we cannot have faith, and utter dedication seems faintly ludicrous— or we cannot decide how much of ourselves or others we are willing to sacrifice. We want love and labor, freedom and meaning, flexibility and commitment. One has the strong sense that Jobs had no issue whatever with the idea of total, monomaniacal devotion to his cause, whatever that cause happened to be at any moment, whatever it demanded at any point of decision, however it was later judged. This is a kind of selfishness, too; it can hurt many people, and one cannot be assured that one is doing the right thing, since one might receive no signal from one’s family or peers that one’s dedication is sound, fruitful, worthwhile; for years of Jobs’ life, he did not. And of course: one might be wrong, and others might be misled, and one might immolate one’s life in error. There is no shortage of historical figures of whom we can say that such was the case.

When I read about Jobs, I am reminded far more of someone like Vince Lombardi than I am of any glamorous startup icon. Whether their monomania was “worth it” is of course a matter of whom you ask, and when. But imitating it is not useful; it is not a question of style or aesthetics or even ethics; monomania isn’t a process but a near-pathology, something that infests the mind, even as it brings it into accord with itself. Jobs seems to suggest that one should search for what infects one with it, and perhaps he was right, for while it is a dubious blessing, it is nevertheless one for which the world must often admit gratitude. As George Bernard Shaw famously said, “The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.” What is more reminiscent of Jobs than the unreasonable demand which despite the protests of all is satisfied, and which thereby improves the world?

That there is a tension between reasonableness and progress seems hard to accept, but it is also precisely the sort of befuddling dilemma that one encounters again and again in reading about Jobs: was it necessary for him to be cruel to be successful? Did he have to savage so-and-so in order to ship such-and-such? If he was such a man of taste, why were his artistic interests so undeveloped? Not only do I have no idea, it is obvious that among those who worked with him there is no sense of certainty either. This seems to me to reflect, in part, a simple fact: Jobs’ values are not common values, and even among those of us who admire him, his indifference to the feelings of others, even those who loved and needed him, is hard to accept. Jobs himself —like many leaders— seems impossible to resolve; there is no chatty, confessional “inner self” to be located in his words or books about him; one has the sense that no one ever got “inside him,” perhaps because “inside” is where the failed self resides, the self that falls short of its conscience, and Jobs simply didn’t have that sort of mind.

James’ formulation at least seems to bring us closer to understanding one component of his formula, however. His charisma —an enormous part of his ability to motivate and drive progress— was not due to any special intelligence, education, talent, or charm as we typically conceive of them, but due to something else: a conscience and a will unified with one another. To see a person for whom life is an instantiation of meaning, whose will reflects only their values, inspires us; it is meaning in action, the former province of religion, and it has a mysterious force over us that, despite our rational objections, turns us into “the faithful.”


In Gravity and Grace, the philosopher Simone Weil discusses love and friendship with a kind of aphoristic precision that asks us to consider every sentence carefully, despite its plain and straightforward intelligibility (that the book is in fact produced from her notebooks could account for this style). In any event, I read this passage and was reminded of the vicissitudes of childhood and adolescent friendship:

It is a fault to wish to be understood before we have made ourselves clear to ourselves. It is to seek pleasures in friendship and pleasures which are not deserved. It is something which corrupts even more than love. You would sell your soul for friendship.

1. Before real friendship comes lucid self-awareness. It is challenging to understand oneself; few of us do reliably, achieving at best momentary glimpses of an unpleasantly cagey little creature whose posturing for sympathy or praise, recriminatory mumbling, and moral evasion irritate us. I don’t know what’s worse: that I am he, or that everyone has within them this same little needing demon.

2. But we do not deserve the consolations of friendship if they are based on misrepresented or misunderstood expressions of selfhood, nor do we if they are based on sullied, secret needs. Such consolations aren’t lastingly consolatory anyway: this sort of friendship is a temptation, a trap: one is corrupted by the codependence of need and performance, the filling of frightening silence by unlistening talkers.

3. Know yourself or know none, know nothing, disappear. This lesson wasn’t taught to me, but high school as I remember it was mostly the exchange of blinded and unarticulated selves for approximations of friendship. I don’t know why we seem to be born lonely, but I was always appalled at the naked need in those boys and girls who wondered at their friendlessness —as I did when I was alone— and whose conclusion was that there was something wrong with everyone else.

4. A professor once told me: it is necessary to be mercilessly ‘objective,’ so to speak, with oneself: do not admit into evidence subjectively sympathetic excuses, do not contextualize one’s own actions with justificatory narratives. Judge acts, deeds, consequences, the pain or happiness you bring to others; don’t give quarter to your weakness by making stories of it.

On the other hand, he advised: be endlessly ‘subjective,’ again so to speak, with others: imagine anything and everything one can to excuse them, explain them, understand and love them; make their self your ‘I’ and refuse to consider them only by their acts, deeds, consequences, or whether they bring happiness or pain to the world. Think of them as your own self: a malformed soul being beaten black and blue every day until death.

5. When I have been lonely, I have thought of myself subjectively and others objectively. This is the only real means to the self-pity which defines loneliness: to think of oneself as the world. When one isn’t one’s whole world, loneliness is very different, though still extant.

Learn to thrust friendship aside, or rather the dream of friendship. To desire friendship is a great fault. Friendship should be a gratuitous joy like those afforded by art or life. We must refuse it so that we may be worthy to receive it; it is of the order of grace… It is one of those things which are added unto us. Every dream of friendship deserves to be shattered. It is not by chance that you have never been loved. To wish to escape from solitude is cowardice. Friendship is not to be sought, not to be dreamed, not to be desired; it is to be exercised (it is a virtue). We must have done with all this impure and turbid border of sentiment.

6. Friendship is something one exercises, like compassion; it is a solitary choice, requiring the approval or affection of no one at all. Every desire which seeks a psychological state as its result should be suspected of superficiality at least, but in the case of those who seek friendship as an antidote to loneliness, it is not merely a vice but a countermanding of what’s sought. One is not a friend, of course, when one’s friends are means to an end: means to escape solitude, tools rather than accomplices.

(To consider: "Friendship should be a gratuitous joy like those afforded by art or life." What sort of joys are those? What does it mean that they’re gratuitous?).

Or rather (for we must not prune too severely with ourselves), everything in friendship which does not pass into real exchanges should pass into considered thoughts. It serves no useful purpose to do without the inspiring virtue of friendship. What should be severely forbidden is to dream of its sentimental joys.

7. Earlier in the same chapter —”Love”—, Weil comes close to describing what exists in opposition to sentimental delusions and escapes:

The mind is not forced to believe in the existence of anything… That is why the only organ of contact with existence is acceptance, love. That is why beauty and reality are identical. That is why joy and the sense of reality are identical.

At the moment, I like Simone Weil significantly more than I understand her.

Objectivity and Art

As a Popperian, I believe that the distinction between the objective and the subjective (or the relative) has been misunderstood and hyperbolized. Perhaps nothing is objective, but that does not mean that all is subjective. Newton’s proposed laws of motion were, for centuries, “objectively” true; confirmed by all experimental tests, they formed the basis of thousands of discoveries in physics and other fields. These discoveries were themselves experimentally tested, and themselves led to thousands of discoveries in the exponential fashion to which we’ve become accustomed.

But Newton was wrong; his laws were inaccurate. In David Deutsch’s terms, they were very, very good misconceptions, just as Einstein’s better ideas are very, very good misconceptions that will eventually be replaced by even better, more accurate, deeper ideas that explain more with less. This process is progressive: science gets better and better, even though it is purely the creation of “subjective” human conjecture —imagination— tested against reality for utility. We might say that the history of human knowledge is one of conjectures which are never complete or objective but which are ever-improving. To be ever-improving, they must be moving towards something; if they cannot reach it, they approach it as a line does an asymptote. Science asymptotically approaches objective, complete truth, never arriving but getting closer and closer.1 It is not objective —as the work of humans, how could it be?— but neither is it aimless or subjective.

But what about art? We do not tend to think that art is progressive. Indeed, the attitude of the age treats art as a private utterance, as pure subjectivity, or at best as a personal religion of some entertaining use to others. One epistemological consequence of the democratic ethos, unmoored from axiomatic values, is that we struggle with the idea of objectivity in anything, although we incoherently exempt the sciences from our anxious doubt. But this is a temporary phase, a confusion. It is not the case that art is purely subjective, aimless, without teleology or purpose; it is rather the case that art, like science, improves over time because it asymptotically approaches something. It happens to be the same “something” that science hews to: reality.

Consider the following work of art from tens of thousands of years ago:


From Chauvet, this depiction is among the earliest instances of art; it features a range of animals including, most prominently, cave lions. From tens of thousands of years later, in the 19th century, here is the head of a lion painted by Théodore Géricault:


It’s obvious that this is a better depiction, in part because we can reasonably assume that the intent of these two artists, across so much time, was similar: to capture and convey something essential about the lion. This intent was almost certainly inexplicit for the ancient artist, and may have expressed itself in other ways which recur throughout the history of art. For example, artists have occasionally conceived of their mission in ceremonial, religious, or supernatural terms, imagining that by performing acts in concert with images they might control reality.2 In later centuries, they might consider their art in more subtle religious, political, pedagogical, ideological, or emotional terms. But a sufficiently abstract definition might cover most cases:

Art seeks to virtualize phenomena for human benefit.

By “virtualize,” I mean only that what art offers us it offers on our terms. One can experience tragedy when a loved one dies; one can know the awe and power of the lion when one sees it enter a cave in which one’s family is camped. Art seeks to make these phenomena, and the meanings they provide, available to you apart from the uncontrollable and contingent world, for a variety of reasons. Through art, we are enriched by experiences with less risk of suffering or injury; experiences are made more portable and reproducible, and are freed from temporality; we can begin at least to portray what we imagine, even if we cannot yet build it; and so on. Art, then, supports the same accelerated development of knowledge that consciousness, metaphor and language, and reason support, and all are related. Whereas we once built knowledge accidentally and slowly, when the inexplicit knowledge of environment and utility embodied by genes would lead to those genes’ replication and spread, we now have a range of means for building knowledge rapidly and at little cost. We can, at our discretion, experience alternative modes of being, the lives of others, worlds we’ve never seen; we can be taken deep within ourselves or so far away that we can no longer remember our names.

And from this, we learn. From art, from the virtualization of phenomena far removed from our practical realities, we derive values, politics, and purposes, in addition to whatever assortment of facts and information the art carries with it. Some essential values we seem incapable of arriving at any other way, especially in the absence of axioms or authority: compassion and empathy, for example, depend on the recognition of the humanness of others but are hardly logically compulsory propositions; art is unparalleled at conveying, in experiential and therefore broadly-intelligible terms, the bases of such moral notions, even to the ignorant and resistant.3 Art is where we find meanings we cannot reason and experiences that we cannot otherwise have; that we recognize the value and utility of these experiences and meanings but cannot yet rationally justify them doesn’t mean that they’re purely subjective. The fact that our ancestors didn’t understand the stars by which they navigated didn’t make those stars subjective either. They were simply little-understood, but their utility was evident to all. The same is true of art and culture, emergent phenomena we dismiss because of weaknesses in our contemporary philosophies. What we cannot reduce we pretend doesn’t exist.

The consequences of purpose

If we say that “art seeks to virtualize phenomena for human benefit,” we can begin to critique art apart from distracting historicisms. This liberates us from, among other traps, referentiality and academic preoccupations. We can attempt to discuss art concretely in terms of its aims:

  • Does the work virtualize phenomena well? Does it use the best forms for the phenomena it pursues? Does it use effective available techniques for their virtualization? Are the relevant parts of the phenomena captured and expressed? Does the work have a purpose, and are its aesthetic choices suitable for that purpose?
  • Is the work novel? If it isn’t, it won’t “work,” for just as sound science that discovers what science already knows is redundant and contributes nothing, repetitive art with cliched expressions, moribund forms, or a derivative purpose is redundant and contributes nothing. Novelty is what permits consciousness to attend to phenomena, and is therefore a foundational value in art.
  • Do humans benefit? The benefit may be to the artist alone, which is perfectly fine but should be understood as an extremely narrow sort of aim, like a scientific discovery that extends the life of a single human. The tension between an artist’s desire to express himself purely and without calculations about reception and the fact that art must benefit humans or be pointless is irreducible and beneficial, itself a metaphor for the paradox of selfhood.
  • Art that is about art is as science about science: useful for practitioners but insufficiently universal in scope. Art that is about artists is as science about scientists: likely to be worthless where it cannot be generalized, and where it can it is hardly about individuals anyway.

An important note: art makes virtualized reality possible both for external sense experiences like seeing a lion or a landscape and internal, phenomenological experiences like emotional states or even qualia. The virtualization of meaningful human phenomena might involve nothing representational —music often does not— nothing taken from the world outside of us. A work of art which captures, provokes, or explores something like sorrow, hope, love, or fear might be highly abstract, impressionistic, unusual, just as our internal life is.

Artists are technologists

I’ve mentioned qualia twice, once implicitly noting that some do not believe they exist and once by noting that art captures them well. Qualia were first described by C.I. Lewis in 1929:

There are recognizable qualitative characters of the given, which may be repeated in different experiences, and are thus a sort of universals; I call these “qualia.” But although such qualia are universals, in the sense of being recognized from one to another experience, they must be distinguished from the properties of objects.

Another way of putting it: when you look at a red sign, the “redness” you see doesn’t exist anywhere. The sign is an almost entirely-empty latticework of vibrating particles. Photons bounce off of some of these and enter your eye at a wavelength, but that wavelength is a mathematical description: it has no color in it, and photons themselves are colorless. Your mind experiences “redness,” but you might also say that it “creates” or “invents” redness when prompted by certain light phenomena which themselves have nothing to do, now or ever, with “redness,” which doesn’t exist. Erwin Schrödinger, the Nobel-prize winning quantum physicist, put it thus:

The sensation of colour cannot be accounted for by the physicist’s objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so.

That one of the founders of modern physics didn’t believe a physical or physiological explanation for qualia would be forthcoming is arresting. But more to the point, while scientists and philosophers try to determine what “redness” or “sorrow” really is, as a quale, artists are virtualizing qualia and catalyzing them in audiences. Indeed, much of the personal quality that art has consists in its relationship to deep, individuated qualia we ourselves hardly comprehend.

For millennia art outstripped the sciences in its ability to understand and recreate qualia, virtualize reality, and provide ennobling, edifying, educational, and entertaining simulations for humans. Indeed, art pushed science, demanding better technologies which required deeper understanding in dozens of fields. The demands of art pushed architecture, and therefore engineering and chemistry and materials sciences; art required new resources for colors and sculptures, shaping societies economically; the musical arts were constrained awfully until technology turned music from vanishing performances into enduring, widely-distributed works.

All of which is to say: artists are natural technologists. Historically, they’ve pursued the newest and best techniques, materials, and forms. When the methodology for achieving perspective became clear, few resisted it on the basis of a calcified iconographic style considered to be “high art,” or if some did they’ve been suitably forgotten. And had new inks, better canvases, or some unimaginable invention given superior means to the impressionists to capture washes of light and mood —like, say, film— they’d have used whatever was available. The purpose of painting isn’t paint, after all; nor is the purpose of writing a book.4

The purpose is instead to virtualize phenomena for the benefit of humans. The best techniques for doing so do indeed change; the schools of thought that shape artists wax, wane, wear out; intellectual movements, critical and popular reaction, and technology are all part of the contingency in which we work. But the orientation of art should not be towards the ephemeral (except in exploring ephemerality itself, permanent and vexing) but towards deeper, universal, clarifying aims.

In elementary school, we were taught about Europe’s cathedrals. Centuries of fatality- and error-filled construction and engineering innovation on the edge of recklessness produced spaces intended to virtualize the experience of heavenly light, spiritual elevation, credence in the sacred. A peasant from the fields could enter one and immediately understand; he’d not know Suger’s theories or the tradeoffs involved in the buttresses, but the purpose and effect of the art were somehow not lost on him. The same would likely have been true had he seen Michelangelo’s David or been permitted to hear Mozart or Hildegard of Bingen. With exceptions, of course, art has aspired to universality.

The extraordinary present circumstance in which art is not expected to be intelligible, to have any “benefit” beyond the meaninglessly subjective “enjoyment” of the “consumer” is an aberration. That art is denied its progressive success at virtualizing greater and greater parts of reality, conveying ever-more phenomena with ever-greater fidelity to ever-more people, is the result of a philosophical disruption and a subsequent error. We found God dead; we asked what had god-like authority and reeled to realize that nothing could. But we’ve accepted that somehow, science exceeds merely moody paradigms. It works. It gives us control over the universe and ourselves, reduces contingency and accident, allows us to be what we think we should be.

Art is part of the same process, and can be evaluated similarly. In allowing us to virtualize and experiment with realities and phenomena, and, gradually, to live in those realities, it is part of the same epistemological and creative process as science. We are simply at an earlier stage, and just as someone might have surveyed the globe in 500 CE and concluded, “There is nothing objective about the so-called sciences; it appears that every culture and every society simply invents its own ideas and none is really any better than the rest,” so we now struggle to understand how aesthetics and morality might someday be understood teleologically, not as expressions of “taste” but as forms of knowledge-generation, experimentation, and even reality-building.

Perhaps we are transitioning from artists-as-depictors and artists-as-catalyzers5 to artists-as-world-makers. To create something, you must first understand it; to create a world for humans to experience, you must first understand how humans experience the world. Once you can reliably replicate any sense-perception, you must think of how such sense-perceptions are experienced in the mind: as qualia. Then you must think of how to generalize or objectify qualia, or how to catalyze them. This is not a task for science alone, though whether it is not yet or not at all I cannot say. It will involve art, however, particularly in the form it takes when it wants to extend itself into life: design.

Design is art which cannot ignore the outcome it pursues, which uses every technology or tool it can conjure to succeed, and which accepts the judgement of audiences. In this way, one can understand why so much of the vitality of art now resides in the commercial space: there, the artists still care about audiences, still have aims apart from themselves, still seek resonance, utility, universality. My anxieties about art stem mostly from this concern: if purposive, deliberate, universal art becomes the province of commercial design, art’s values will gravitate towards market values. The hope: those values will evolve intelligently through self-correction. But it seems safer to me to have a cultural space which accords art precisely the same sort of respect we pay science so that the arts can pursue their ends purely —ends far deeper than markets, capitalism, any historicism, incidentally— just as science exists apart from technology and its commercialization. But I doubt whether such a space is possible so long as we insist that all art is subjective, no teleology is imaginable, and there is no such thing as progress. Such an insistence is, in my view, both materially incorrect and snobbish, arising more from nostalgia for older forms or aristocratic art-culture than any real analysis of the present. We live in a world in which more people read, listen to music, and experience works of art than ever before. This is both art’s triumph and a prelude to its expanding role. From its earliest efforts to virtualize reality through its portrayal and later attempts to produce specific experiences in audiences, art aspires to the creation of worlds. As it converges with technology —in video games, for example— these worlds will grow to support the range of experiences and meanings humans desire, as art always has.


  1. Much of the confusion about subjective and objective sorts of knowledge comes from this simple fact: that we cannot have authority in knowledge means that nothing can be “final”; nothing is beyond interrogation, nothing is exempt from revision and improvement. That does not mean that all is equivalent, comparable, meaningless, a matter of preference. There are “criteria for reality,” in Deutsch’s terms, and they’re perfectly adequate to the actual epistemological tasks at hand, particularly in the sciences, where academics haven’t managed to confuse everyone’s sense of purpose yet. 

  2. As it happens, using virtualizations of reality to control reality seems likely to play an important role in humanity’s future. 

  3. The invention of new therapeutic diagnoses for the insufficiently empathetic, and their subsequent ineffectual medication, is a likelier course of action for our society. 

  4. The mistaking of a temporary medium —and all media, even those that endure for thousands of years, are temporary— for the purpose of art itself is precisely the sort of confusion that happens when ends vanish and means must suffice. If you cannot believe that art has a purpose deeper than its forms, its forms seem really important. But if you think the purpose of art is to virtualize phenomena for the benefit of humans (or the glorification of God or Marx), it’s not hard to accept that we might read off of screens or never care about painting again. If art matters, the texts on screens will do for us what oral traditions did for the Greeks and tomes did for the Enlightenment. The chapter of visual art obliged by technological-limitation to ignore movement will come to an end, or, if it can still open us to experience, teach us, console us, will continue. 

  5. Perhaps the mayhem of the successive schools of non-representational art can be understood both in terms of internecine disorder during the revaluation of values and as the working-out of experimental methods and techniques for orthogonal approaches to virtualization. Experimental art can, of course, be vitally useful. 

User Interface of the Universe

Quantum physicist and philosopher David Deutsch describes a fantasy of instrumentalism: an extraterrestrial computer like an oracle which can predict the outcome of any experiment:

[I]magine that an extraterrestrial scientist has visited the Earth and given us an ultra-high-technology ‘oracle’ which can predict the outcome of any possible experiment, but provides no explanations… How would the oracle be used in practice? In some sense it would contain the knowledge necessary to build, say, an interstellar spaceship. But how exactly would that help us to build one, or to build another oracle of the same kind — or even a better mousetrap? The oracle only predicts the outcomes of experiments. Therefore, in order to use it at all we must first know what experiments to ask it about. If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And even if it predicted that the spaceship we had designed would explode on take-off, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then would we have any chance of discovering what might cause an explosion on take-off. Prediction —even perfect, universal prediction— is simply no substitute for explanation.

Similarly, in scientific research the oracle would not provide us with any new theory. Not until we already had a theory, and had thought of an experiment that would test it, could we possibly ask the oracle what would happen if the theory were subjected to that test. Thus, the oracle would not be replacing theories at all: it would be replacing experiments. It would spare us the expense of running laboratories and particle accelerators. Instead of building prototype spaceships, and risking the lives of test pilots, we could do all the testing on the ground with pilots sitting in flight simulators whose behavior was controlled by the predictions of the oracle.

The oracle would be very useful in many situations, but its usefulness would always depend on people’s ability to solve scientific problems in just the way they have to now, namely by devising explanatory theories. It would not even replace all experimentation, because its ability to predict the outcome of a particular experiment would in practice depend on how easy it was to describe the experiment accurately enough for the oracle to give a useful answer, compared with doing the experiment in reality. After all, the oracle would have to have some sort of ‘user interface’. Perhaps a description of the experiment would have to be entered into it, in some standard language. In that language, some experiments would be harder to specify than others. In practice, for many experiments the specification would be too complex to be entered. Thus the oracle would have the same general advantages and disadvantages as any other source of experimental data, and it would be useful only in cases where consulting it happened to be more convenient than using other sources. To put that another way: there already is one such oracle out there, namely the physical world. It tells us the result of any possible experiment if we ask it in the right language (i.e. if we do the experiment), though in some cases it is impractical for us to ‘enter a description of the experiment in the required form’ (i.e. to build and operate the apparatus). But it provides no explanations.

The universe is an oracle to which we can submit any properly-phrased question and receive an answer in the form of uninterpreted data. I think that’s a lovely feature of our world. However: it is only the creative, synthetic interpretation of data —the generation of explanations, a form of knowledge constructed so far as we know only by humans— that makes this useful.

Data-collection, testing, experimentation that takes place without meaningful explanations is a popular sort of ignorance in some fields; it accords with the uninterrogated ascent of the quantitative over the qualitative. But experiments derive from explanatory knowledge, not the other way around: and while an experiment can falsify an explanation, it cannot create one or even confirm one in any final sense.

Nice things to consider: our universe is an oracle that will answer any question we put to it; and conjectural creativity is essential for the formation of explanatory knowledge (which catalyzes more questions to pose to the universe, and therefore more explanations to conjure, test, explain…).

“"I think." Nietzsche cast doubt on this assertion dictated by a grammatical convention that every verb must have a subject. Actually, said he, "a thought comes when ‘it’ wants to, and not when ‘I’ want it to; so that it is falsifying the fact to say that the subject ‘I’ is necessary to the verb ‘think.’" A thought, comes to the philosopher "from outside, from above or below, like events or thunderbolts heading for him." It comes in a rush. For Nietzsche loves "a bold and exuberant intellectuality that runs presto," and he makes fun of the savants for whom thought seems "a slow, halting activity, something like drudgery, often enough worth the sweat of the hero-savants, but nothing like that light, divine thing that is such close kin to dance and to high-spirited gaiety." Elsewhere Nietzsche writes that the; philosopher "must not, through some false arrangement of deduction and dialectic, falsify the things and the ideas he arrived at by another route…. We should neither conceal nor corrupt the actual way our thoughts come to us. The most profound and inexhaustible books will surely always have something of the aphoristic, abrupt quality of Pascal’s ‘Pensées.’"

We should not “corrupt the actual way our thoughts come to us”: I find this injunction remarkable; and I notice that, beginning with ‘The Dawn,’ all the chapters in all his books are written in a single paragraph: this is so that a thought should be uttered in one single breath; so that it should be caught the way it appeared as it sped toward the philosopher, swift and dancing.”

Milan Kundera in Testaments Betrayed, discussing the meaning of the various prose styles developed by Franz Kafka, Ernest Hemingway, and Friedrich Nietzsche, and how technical details like paragraph structure and the use of semicolons express deeper elements of an author’s thought and purpose.

Good writing is deliberate style as much as resonant content; there should be nothing automatic, nothing inherited, nothing thoughtless. Punctuation and typeface are not incidental; indentation, sentence length, and paragraph rhythm all matter, and all ought to be the purposive stylistic expression of authorial intent.

For whatever reason, many seem to consider such things beyond the boundaries of artistic creativity in prose, as though we are obliged to adopt the happenstance syntax of our languages. We are not, but style is not merely a matter of some radical pose, refusing to use commas or arbitrarily violating grammatical rules in a demonstrative way. Rebellion is a crutch in art.

Good prose style is simpler and harder. We must be ruthless in interrogating everything about our writing: the plain honesty of its intentions, the truth of its substance, the value of the ideas it expresses, the novelty (or at least utility) of its existence, and all its tiniest details, all its small conformities to and violations of the rules of the language, all its periods and ellipses and dashes, all the choices we make about quotation marks and italicization, all the elements few readers consciously notice but all readers register.

“I have often thought that the nature of science would be better understood if we called theories “misconceptions” from the outset, instead of only after we have discovered their successors. Thus we could say that Einstein’s Misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of Evolution is an improvement on Darwin’s Misconception, and his on Lamarck’s… Science claims neither infallibility nor finality.”

David Deutsch, quantum physicist and philosopher, in The Beginning of Infinity. Deutsch is obliged, in the course of arguing his theses about the nature of knowledge, progress, and human purpose, to rebut reductive notions like instrumentalism and our parochial cultural pessimisms. To do so he often leans on Karl Popper, who described scientific knowledge as being conjectural, ever-improving in its isomorphic fidelity to reality yet always tentative in a strict sense.

It is striking what an effect this clever little substitution has: we know, of course, that all scientific theories are later to be subsumed by better, deeper theories with more explanatory and predictive power; we know earlier theories are now in fact considered erroneous or incomplete for this very reason; but referring to "Einstein’s Misconception" reminds us of just how provisional our knowledge is, how far from any conceivable bedrock we remain. As a matter of philosophical principle, our knowledge is asymptotic: it may increase infinitely, draw nearer and nearer to the foundation, but it will never touch it.

(Perhaps this is so due to something elementally important that Deutsch observes in an unrelated discussion: “All scientific measurements use chains of proxies.” So long as language itself, perception —or more precisely, the inventive synthesis of perceptual data and mental interpretation that creates the world we know— and measurement tools abstract us from the subject of our study, we can draw infinitely closer to it, but we cannot reach it, so to speak).

Our two deepest theories about the universe, Deutsch notes elsewhere, are in conflict: quantum mechanics and the general theory of relativity do not accord with one another and are, therefore, misconceptions, incomplete or incorrect. In this, we are precisely like ancient humankind, and like our forebears we struggle to conceive of our own ignorance; we tend to believe that we know quite a lot, and with impressive accuracy.

So we do. Deutsch demonstrates that although we will, barring extinction, continue to refine and improve our knowledge indefinitely, we will never exhaust the room for improvement. Thus we will always live with fallible scientific understanding (and fallible moral theories, fallible aesthetic ideas, fallible philosophical notions, etc.); such is the nature of the relationship between knowledge, mind, and universe.

But it remains odd to say: everything I know is a misconception.

“The key to the creative type is that he is separated out of the common pool of shared meanings. There is something in his life experience that makes him take the world as a problem; as a result he has to make personal sense out of it. This holds true for all creative people to a greater or lesser extent, but it is especially obvious with the artist. Existence becomes a problem that needs an ideal answer; but when you no longer accept the collective solution to the problem of existence, then you must fashion your own. The work of art is, then, the ideal answer of the creative type to the problem of existence as he takes it in —not only the existence of the external world, but especially his own: who he is as a painfully separate person with nothing shared to lean on. He has to answer to the burden of his extreme individuation, his so painful isolation… His creative work is at the same time the expression of his heroism and the justification of it. It is his “private religion,” as [Otto] Rank put it.”

Ernest Becker in The Denial of Death, the thesis of which can perhaps be summed up thus: humanity sublimates its fear of death through the causa sui project: the construction of meanings which are enduring and non-contingent despite our mortality and ludicrous, creaturely contingency. Society, culture, and the illusions on which we depend are the fruit of this “immortality project”:

The fact is that this is what society is and always has been: a symbolic action system, a structure of statuses and roles, customs and rules for behavior, designed to serve as a vehicle for earthly heroism. Each script is somewhat unique, each culture has a different hero system… It doesn’t matter whether the hero-system is frankly magical, religious, and primitive or secular, scientific, and civilized. It is still a mythical hero-system in which people serve in order to earn a feeling of primary value…

Heroic roles might include “breadwinner,” “mother,” “shaman,” “scientist,” “hedonist,” or any other designation which indicates how a person justifies their exertions and sufferings, pleasures and triumphs. Even to claim total purposelessness is a kind of assertion of meaning: a modest refusal to participate in hero-systems is a kind of heroism, a sought-out exceptionalism to this organismic problem of individuation and death. Indeed, when we talk of meaning as such, perhaps we are merely describing those symbols which exceed the individual but do not disappear into the inhuman cosmos, those ideas which are not organismic, will not die with the matter or, if they do, will somehow still suffice to justify its existence.

Becker’s work fascinates with its elucidation of how death drives this search for meaning and how the accidentally-developed and arbitrary illusions which provide meaning can both support the transcendence we require and enslave us. Indeed, Becker devotes much of the book to neurosis, which he suggests occurs when illusions fail, when hero-systems malfunction, and when the creature cannot escape his mortality:

What we call the well-adjusted man has…the capacity to partialize the world for comfortable action… [T]he “normal” man bites off what he can chew and digest of life, and no more. In other words, men aren’t built to be gods, to take in the whole world; they are built like other creatures, to take in the piece of ground in front of their noses… [A]s soon as a man lifts his nose from the ground and starts sniffing at eternal problems like life and death, the meaning of a rose or a star cluster, he is in trouble. Most men spare themselves this trouble by keeping their minds on the small problems of their lives just as their society maps out these problems for them. These are what Kierkegaard called the “immediate” men and the “Philistines.” They “tranquilize themselves with the trivial” —and so they can lead normal lives.

What we call neurosis enters at precisely this point: some people have more trouble with their lies than others. The world is too much with them, and the techniques they have developed for holding it at bay and cutting it down to size finally begin to choke the person himself. This is neurosis in a nutshell: the miscarriage of clumsy lies about reality.

Both the neurotic and the artist are people for whom society’s hero-system and culture’s roles and meanings have failed in some measure, but whereas the former responds with ineffectual or destructive compulsions —misguided efforts to control and organize the terrors of organismic life, or to imbue them with specious meanings— the latter attempts to “justify his heroism objectively, in the concrete creation.” But the two are not so far apart, as everyone familiar with the association between neurosis and creativity knows:

The neurotic exhausts himself not only in self-preoccupations like hypochondriacal fears and all sorts of fantasies, but also in others: those around him become his…work; he takes out his subjective problems on them… The neurotic’s frustration as a failed artist can’t be remedied by anything but an objective creative work of his own. Another way of looking at it is to say that the more totally one takes in the world as a problem, the more inferior or “bad” one is going to feel inside oneself. He can try to work out this “badness” by striving for perfection, and then the neurotic symptom becomes his “creative” work; or he can try to make himself perfect by means of his partner. But it is obvious to us that the only way to work on perfection is in the form of an objective work that is fully under your control and is perfectible in some real ways. Either you eat up yourself and others around you, trying for perfection, or you objectify that imperfection in a work on which you then unleash your creative powers. In this sense, some kind of objective creativity is the only answer man has to the problem of life… He takes in the world, makes a total problem out of it, and then gives out a fashioned, human answer to that problem. This, as Goethe saw in Faust, is the highest that man can achieve.

I am partial to that definition of art, incidentally: a fashioned, human answer to the problems of the interiorized world of a given artist. Becker continues with a cold, obvious, and sadly persuasive point:

From this point of view the difference between the neurotic and the artist seems to boil down to a question of talent… [The neurotic] can glorify himself only in fantasy, as he cannot fashion a creative work that speaks on his behalf… He is caught in a vicious circle because he experiences the unreality of fantasied self-glorification. There is really no conviction possible for man unless it comes from others or from outside himself in some way —at least, not for long. One simply cannot justify his own heroism in his own inner symbolic fantasy, which is what leads the neurotic to feel more unworthy and inferior.

And what gives you your sense of meaning? Into what role do you pour yourself, and by what sort of creation are you satisfied? Do you, like me, sometimes notice with horror that your idle time is spent trafficking in the most pitiful and empty fantasies —shortly to be forgotten, a waste of daydreams— and your working hours pass with your nose to the ground before you? Have you a causa sui project, or have you found your meaning on a shelf, readymade for you? Are you quick to critique the hero-systems of others, or do you feel a kinship with all who seek meaning, who at least talk of purpose, love, death, as opposed to the goddamned news?

“The reason the philosopher can be compared with the poet is that both are concerned with wonder.”

Thomas Aquinas in Commentary on Aristotle’s Metaphysics, quoted by Josef Pieper, who adds:

And because of their common power to disturb and transcend, all these basic behavioral patterns of the human being have a natural connection among themselves: the philosophical act, the religious act, the artistic act, and the special relationship with the world that comes into play with the existential disturbance of love or death. Plato, as most of us know, thought about philosophy and love in similar terms… On the basis of their common orientation toward the "wonderful" (the mirandum —something not to be found in the world of work!) — on this basis, then, of the common transcending-power, the philosophical act is related to the “wonderful,” is in fact more closely related to it than to the exact, special sciences…

If it is the case that all sciences reduce to physics, it is not the case that the liberal arts —as opposed to the servile arts— must do so as well. To what, then, do they reduce? Surely they are not mystical exceptions to reductive scientific materialism! But what epistemological framework can account for or justify the value of wonder, not as a consumed, expressed, posted emotional state but as a contemplative response to the irreducible? Indeed, can we even accept the possibility of irreducibility? No: all arts must cease to be liberal, must be made servile; this is the role of culture today: it serves ends.

The contemplation of wonder is a posture which is not inclined towards action; it is a stance of silent, self-effacing appreciation, not self-aggrandizing use. Thus, wonder is in a sense useless, but is the source of poetry and philosophy alike (and perhaps much more, perhaps even love); it can only grow within leisure, which we are laboring to eliminate.

Consciousness, Interiority, AI

Perhaps there is a relationship between how interiority defines consciousness; how artificial intelligence has thus far failed even to approach consciousness, with no clear notion of how it might; and how technologies that insist on the exteriorization of the self reduce the sense of self.

Thomas Metzinger, The Ego Tunnel: The Science of the Mind and the Myth of the Self, 2009 (quoted by the excellent Carvalhais):

“Being conscious means that a particular set of facts is available to you: that is, all those facts related to your living in a single world. Therefore, any machine exhibiting conscious experience needs an integrated and dynamical world-model.”

Josef Pieper, Leisure, The Basis of Culture, 1948:

"[W]hoever philosophizes takes a step beyond the work-a-day world and its daily routine. The meaning of taking such a step is determined less by where it starts from than by where it leads to… just where is the philosopher going when he transcends the world of work? Clearly, he steps over a boundary: what kind of region lies on the other side of this boundary? … No matter how such questions could be answered in detail, in any case, both regions, the world of work and the “other realm,” where the philosophical act takes place in its transcending of the working world —both regions belong to the world of man, which clearly has a complex structure…

It is in the nature of a living thing to have a world: to exist and live in the world, in “its” world. To live means to be “in” a world. But is not a stone also “in” a world? Is not everything that exists “in” a world? If we keep to the lifeless stone, is it not with and beside other things in the world? Now, “with,” “beside,” and “in” are prepositions, words of relationship; but the stone does not really have a relationship with the world “in” which it lives. Relationship, in the true sense, joins the inside with the outside; relationship can only exist where there is an “inside,” a dynamic center, from which all operation has its source and to which all that is received, all that is experienced, is brought… [A world can be] considered as a whole field of relationships. Only a being that has an ability to enter into relationships, only a being with an “inside,” has a “world”; only such a being can exist in the midst of a field of relations.

Consciousness is, in part, a matter of there being an “inside” which is not part of the outside world, although it can relate to it. One reason it is difficult to imagine, even in principle, how artificial intelligence could achieve consciousness is the fact that there is no inscrutable interiority to a programmed machine: there is no “inside,” only commands from without in the language of an external world.

It is not clear, of course, how the interiority of human consciousness works, but whether it is some combination of deterministic and stochastic processes which produce an emergent, irreducible phenomenon or an even less-understood mechanism —for example David Deutsch’s ideas about the role quantum computation might play— it is made obvious by the depressing absence of progress in AI research that we have no notion how to reproduce it.

A more pressing question is: how do technologies which demand the exteriorization of what is “inside” affect consciousness? Is it the case that part of why it seems more difficult to achieve real connection —real relationship, in Pieper’s sense— is that we increasingly reside online, where our selves are shaped by systems which cannot support our interiority? There can be no “inside” on Facebook or Twitter (save, perhaps, for DMs and messages which, it should be noted, are where our most sincere and authentic interactions occur); there can be no monetization of interiority, nor even its capture; it is not a post type nor data we can share.

Artificial intelligence cannot achieve consciousness without interiority and a “world” of relations; we ourselves are creatures of consciousness living on systems incapable (both technologically and because of business incentives) of permitting interiority. Perhaps this accounts for our increasing artificiality.

“[We have forgotten] leisure as “non-activity” —an inner absence of preoccupation, a calm, an ability to let things go, to be quiet. Leisure is the form of that stillness that is the necessary preparation for accepting reality; only the person who is still can hear, whoever is not still cannot hear. Such stillness as this is not mere soundlessness or a dead muteness; it means, rather, that the soul’s power, as real, of responding to the real —a co-respondence, eternally established in nature— has not yet descended into words. Leisure is the disposition of receptive understanding, of contemplative beholding, and immersion —in the real.”

Josef Pieper, Leisure: The Basis of Culture, 1948. This sort of leisure is the prey being hunted to extinction by technology in general and the Internet specifically, and it is this leisure which permits the creation of sustaining human meaning.

Leisure, Culture, Selfhood

Pieper’s thesis, unreasonably condensed, is that our interiorization of the dynamics of capitalism and the destruction of transcendental narratives of all sorts —principally religious, but not exclusively— have together made leisure of this sort alien and incomprehensible to us. Instead of real, contemplative, open, and receptive leisure, we pursue “leisure activities” which utterly mistake the purpose of leisure and as a result fail to satisfy our deepest needs. Above all, they’re incapable of connecting us to “the real” in the world or of immersing us in “the real” in ourselves.

This lost sort of immersion, this wordless confrontation with reality, is profoundly intimate, and from it we develop authentic personal and civilizational culture (as opposed to "content"). The changes such leisure catalyzes are not easily communicable or quantitatively measurable; they are not for the curriculum vitae, the business card, or the interview, nor for the cocktail party or photo album. They do not relate to intelligence or “skills” as such, and can be experienced by any person of any class; they may incidentally correlate to characteristics we deem useful, but that correlation is emphatically not their point. Indeed, they cannot be the result of pursuit; the discovery of enduring wisdom, the achievement of awareness, the maintenance of a serene relationship with the self and the world, the sensation of joy, result from an “open” and “receptive” attitude wholly at odds with that of “self-improvement.”

Leisure in this sense is both the crucible of all durable human meaning —what Pieper calls culture— and totally without transactional, measurable, economic point. The Greeks, Romans, and pre-Industrial Revolution Western societies understood this; indeed, the Greek word for leisure is the basis for the Latin scola, the German Schule, and the English school. And Pieper cites surprising passages from Aristotle and Plato as well as more contemporary thinkers which suggest that the connection between repose, wisdom, and culture was once clear, even if it now seems difficult to defend. (It should be added that much of Buddhism and Hinduism seems to embody this thesis as well, for example in the relationship between Theravada monks and society, or in the notion of the sannyasa stage of life).

In just a few centuries, however, this idea has vanished as the values on which it depends have been replaced. What cannot be communicated and measured is now felt not to exist —if you dispute this in the arts, you likely nevertheless insist on it in matters of religion, for example— and the impossibility of exteriorizing leisure or its fruits, of conveying contemplative communion or translating it into something quantitative, condemns it to irrelevance (or worse).

Pieper apportions much of the responsibility for this to capitalism, Marxism, and the transformation of individual, sacralized labor into “work” (physical or intellectual): if the majority of a society’s activity implies certain values, members of that society adopt those values. We are our utility (this is the real meaning of ideas like “self-esteem”: what is our use to others?). We think as our economies “think”; we consume and produce as they do; and we insist on fungibility, reproducibility, and exchangeability as criteria of meaning. What is valuable must enable transactions.

Pieper could not have imagined, however, the apotheosis these market values would achieve in the technology of our age, an age of “total technology,” or what Neil Postman called “technopoly.”

Technopoly and the Self

Think of culture (both in general and the micro-culture of selfhood) as we create and experience it now, and consider Postman’s description of technopoly:

"…the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology. It does not make [non-technological forms of culture or self-hood] illegal. It does not make them immoral. It does not even make them unpopular. It makes them invisible and therefore irrelevant."

No culture (or paradigm of selfhood) has ever taken its orders more directly from technology than ours; our music and visual arts, for example, are the result of technical specifications and network programming requirements above all else, and their forms rise and fall as quickly as industry needs. The purest expression of the medium being the message must be the music video, a form born of technology in search of content and fatally bound to the fortunes of a defunct broadcasting model. The art, so to speak, of the hour-long drama, the animated GIF, the “interactive installation,” or the blog post is hardly different, and hardly likelier to last.

If the tools and processes of capitalism or Marxism reduced communities to classes, creators to functionaries, makers to workers, families to consumers, our technopoly has reduced us to users and culture to media (and increasingly online content). That is to say: culture is synonymous with technology, and because we derive our sense of self from culture, so too is selfhood. Life is what can be posted; you are what can be saved and shared as data; culture is what the Internet can convey; meaning is what you perceive online.

The Medium is the Meaning

Meaning, of course, is the great problem of any human life not concerned solely with organismic survival. What is my life’s purpose? Why should I endure my hardships or enjoy my successes? Is happiness my goal, and of what does real, abiding happiness consist? Instinct is not enough, the claims of our consumer-hedonistic society notwithstanding; the satiation of urges will not sustain you through decades, even with the most exotic rotations. Generations ago, we had static, mythical sources of meaning, but no longer, and not only is there no going back to religion as a persuasive, logically-compulsory authority, but authority of any sort will not again suffice. We are now democratic in both politics and epistemology.

In the absence of persuasive transcendental belief systems —God is dead, everything is permitted— we look to one another for meaning. Smeared across vast suburban landscapes, a world of diaspora, of exile from the cities in which we live but within which there are no public spaces and no neighbors, we find one another in the only space in which social interaction is still possible: online.

What do we find there? We see Facebook photos of smiling, active couples and learn that love means shared hobbies; we inattentively scan the tweeted utterance of our purported friends and learn what matters, what is important, what counts; we note the data in each other’s profiles —a person is her favorite movies, which she selects from a licensed, partial, auto-completing list, or the hashtags he includes after remarks about arbitrated trending topics— and we form a model of what it is to be a human. We follow one another on service after service, seeking amusement, beauty, some justificatory clues, hinted potentialities, signs of meaning. But our expressions of selfhood are dictated by what we can post, share, photograph, upload, link, capture. We see culture and selfhood as shaped by market forces, technology constraints, business decisions, and arbitrary software designs. No form of meaning stands apart from the technopoly and remains relevant; there is no evidence of meaning beyond those actions which can be turned into apps or pages and made to generate profit.

In the democratic capitalist technopoly, therefore, meaning is defined by forces that take no note of meaning-in-itself and reject as irrelevant everything that cannot be made into discrete, monetizable, digital units. Technology requires user actions; leisure-as-repose cannot be initiated by a click, shared, or sold. Neither, for that matter, can love, wisdom, or joy.

(Their portrayal, however, can be, and if the primary sense one has online is of a perpetual performance, a performance in which the performers do not know they’re performing and cannot stop, this is why. A perceptual world without any conceivable instantiation of subjective interiority is a world in which only what can be portrayed exists. It’s no coincidence that the rise of simulating technologies corresponds to the ascendancy of appearance over essence. To take one example, this is why artists have been replaced by people who portray artists in their simulated mercuriality or their de rigueur vices. Creative inner-struggle perhaps once drove archetypal artistic despair, but what’s inside no longer exists, so the portrayal reigns. An artist who doesn’t “act like” one isn’t one. The same is true for politicians, the beautiful, the talented, even the ordinary.

Thus: the substitution of culture’s portrayal for culture, and thus too the pervasive sense of unreality and disconnection we experience amidst what is theoretically the most informative and connective technology in history).

Flight and Return

When one is away —away— from the technologies of portrayal which shape our lives —away from television, away from the electronic display, away from the status message and the news feed— one quickly begins to recover a sense of selfhood apart from speech or post. One again experiences the self without mediation, social dilution, distraction. And, if one is afforded sufficient time, and is perhaps immersed in the rhythms of the natural world, one can experience "a co-respondence, eternally established in nature… not yet descended into words… the disposition of receptive understanding, of contemplative beholding, and immersion —in the real." One begins to emerge.

Most are familiar with this reprieve, and as well with the regret one feels upon returning to the world, as one cedes to the essentially addictive habit of breaking one’s silence: a post about one’s vacation, perhaps. But worse is that most of us are now unable even to get away; should we be fortunate enough to lose the fetter of an Internet connection, we still insist on taking photographs, ostensibly to record the moment for ourselves but actually because at every step we imagine how our experience might be conveyed, portrayed, broadcast. We interiorize technology as it interiorizes the market’s emphases; we all search for what can be transacted upon, for attention or esteem or approval or money. We blink into a sunset, search for our phone’s camera, and imagine how the photo will play on the screens where our avatar lives, screens belonging to other selves whom we know only as representations.

And as networks extend their influence, it is ever-harder to experience real repose, the deep communion with reality that produces authentic meaning and enduring culture. We live in a de-cultured culture, subsumed beneath an avalanche of transitory, ephemeral, temporary meanings, soon to be buried by new posts, new photographs, new digital artifacts of those acquisitive, performative “leisure activities” which are now the primary source of meaning in our lives (and most of which, of course, cost money in one way or another).

None brings us closer to whatever is essential and unmediated, unadulterated inside of ourselves, nor to any ultimate reality; indeed, perhaps no one believes in such things any longer. But if the existence of something apart from postable, quantifiable, monetizable, digitally transmissible data is in doubt, one thing is not: the Internet is an expression of radically materialist and utilitarian values which stand in opposition to leisure as Pieper described it, and therefore to the source of culture as it existed for millennia. Even if one prefers the dynamic, competitive, addictive, temporary cultures of portrayal and enactment that prevail now, it is hard to imagine life without even the possibility of repose. Yet it is harder still to imagine how such repose could ever be possible without the sort of radical disconnection from the expanding technopoly which, perversely, is considered a turning-away from the world, rather than a return to it.

Design & Compromise

In a chapter on political systems in his remarkable book The Beginning of Infinity, David Deutsch notes that

…compromises —amalgams of the policies of the contributors— have an undeservedly high reputation. Though they are certainly better than immediate violence, they are generally, as I have explained, bad policies. If a policy is no one’s idea of what will work, then why should it work? But that is not the worst of it. The key defect of compromise policies is that when one of them is implemented and fails, no one learns anything because no one ever agreed with it.

Recognize at once one of the magical qualities of the American political system! Despite the fact that we live in the laboratory of the real -we can present the universe with any meaningful, properly-phrased question and reliably receive an indisputable answer- neither party ever believes that its policies have been falsified. 

Often, this is because our democracy -such as it is- requires compromise. In ten years, when America’s health care system is still a hideous, tragic mess, Republicans will believe that this is due to the faulty premises of Democratic legislation, while Democrats will believe that the legislation was fatally weakened by obstinate Republicans. While we can of course reason our way to our own hypotheses, we will lack a truly irrefutable conclusion, the sort we now have about, say, whether the sun revolves around the earth.

Thus: a real effect of compromise is that it prevents intact ideas from being tested and falsified. Instead, ideas are blended with their antitheses into policies that are "no one’s idea of what will work," allowing the perpetual political regurgitation, reinterpretation, and relational stasis that defines the governance of the United States. 

The Autocratic Artist

There has been recent occasion to recall an odd organizational fact: the putative democratic spirit notwithstanding, it is nearly always the case that real artists are autocrats. Collaborative creativity isn’t an exception to this rule; typically, in bands for example, each collaborating artist is dictatorial within his own domain, and whatever the extent of his partnership with his peers, there is rarely compromise.

This is not to say there is no persuasion. But persuasion is a radically different epistemological process:

  • to compromise is to treat competing ideas as mathematical sums whose average might be equal to (or, more preposterously, greater than) the individual ideas themselves; while
  • to persuade is merely to convince others of the soundness of an idea, often without the cost of instantiating any of the competing ideas.

Pondering the inexplicable, even disheartening superiority of the auteur over the democratic committee -considering one’s favorite tyrannical director, or weighing Google’s chances against Apple- one wonders: why do compromises not embody an aggregate of the intelligence of constituent ideas (or policies)? Why do compromises typically produce wholes pitifully less than the sum of their parts?

Deutsch’s book suggests that the real surprise is that anyone should imagine they would do otherwise. As a simple matter of epistemology, there is no reason why the blending of competing ideas would produce a better idea. Imagine if someone had proposed to Galileo and the Catholic Church that they compromise and agree that neither the sun nor the earth revolves, or that they somehow revolve around each other!

This seems obvious enough in science and other fields whose ideas we regard as being predictive, or isomorphic to physical reality in some quantifiable way. But it is no less the case in artistic and creative endeavors.

This is because creative ideas are types of explanations, and every explanation involves whole constellations of interdependent notions, speculations, assertions; a well-developed creative idea -a design, a song, a poem- is not an assembly of fungible units. It is a complete hypothesis unto itself about what will work for a given human purpose.

So while it seems perfectly natural, even morally preferable, to involve many voices and subject creative ideas to the scrutiny of committees, the result tends to be disastrous: the writer knows that his diction depends in part for its effect on his syntax, his punctuation on the typography in which it is rendered; the photographer knows that the same scene shot in a more commercially-appealing way is no longer beautiful but is now banal; the designer knows that the entire premise of his layout is undone by the substitution of a compromised header; etcetera.

That is: creative ideas embody whole explanatory and speculative matrices, even in their minor details. Compromises dilute the implicit, interdependent elements which account for the form and content of creative ideas, introducing new elements (from others, from committees) which derive from wholly different notions about the problems being solved, the relations between the elements involved, the speculations which are justified by experience and evidence, and so on.

Worse: compromise makes it impossible to sort out precisely which elements, or which implicit premises, were responsible for the success or failure of any given creative idea.

The Fault

When people discuss why small companies are more innovative than large companies, or why dictatorial creative thinkers -who are often terribly unpleasant people- produce better work than assemblies of talent, they often talk about speed, about “nimbleness,” and about bureaucracy.

But the essential problem is philosophical: creative ideas must be understood as hypotheses about certain sorts of problems. For the writer, the painter, and the designer are all trying to solve a specific problem, and their hypotheses cannot be averaged any more than Galileo’s could. While persuasion and collaboration are perfectly sensible, the real advantage the best innovators and creators have is that they understand that compromise is epistemologically invalid and procedurally fatal.

So why does compromise have its “undeservedly high reputation”? I believe it is because we are discomfited by the philosophical implications of the fact that some ideas are objectively better. We exempt science from our contemporary anxieties because its benefits are too explicit to deny, but in most creative fields we are no longer capable of accepting the superiority of some solutions to others; unable to sustain confidence in the soundness of the artistic problem-solving process, we will not provoke interpersonal or organizational conflict for the sake of mere ideas.

This sad, mistaken epistemological cowardice turns competing hypotheses into groundless, subjective opinions, and the reasonable course of action when managing conflicting, groundless opinions (about, say, what to order at a restaurant) is to compromise, because there is no better answer.

But the creative arts are not so subjective as we tend to think, which is why a talented, dictatorial auteur will produce better work than polls, focus groups, or hundreds of compromising committees.

"All evils are caused by insufficient knowledge."

So David Deutsch argues in The Beginning of Infinity, his breathtakingly profound and impossibly affecting new book. He continues:

Optimism is, in the first instance, a way of explaining failure, not of prophesying success. It says that there is no fundamental barrier, no law of nature or supernatural decree, preventing progress… If something is permitted by the laws of physics, then the only thing that can prevent it from being technologically possible is not knowing how.

A disciple of Karl Popper and a quantum physicist, Deutsch is everywhere concerned not with positive absolutes but with the process of conjecture, refutation, and the gradual improvement of our explanatory understanding of the world, as well as the corresponding ability to control it. Amidst his many lucid, remarkably direct assertions about what we can know, what we can do, and the moral repercussions which follow therefrom, he tentatively offers only one moral imperative: "…the moral imperative not to destroy the means of correcting mistakes is the only moral imperative… all other moral truths follow from it…"

If optimism is “a way of explaining failure,” it is because of another of his pronouncements, which he advises humanity to chisel on stone tablets: problems are inevitable, and problems are soluble. That is: there is no possible stasis of sustainability for humanity, or any other species, within any ecosystem or civilization. Only a continuous process of problem-solving will suffice to ensure our survival, and not only our survival but our gradual triumph over evil.

Evil! It is not a word he uses often, nor is it a word often used today, although I suspect this is less because any of us denies the existence of evil -death abounds, injustice abounds, the suffering of the innocent abounds- than because we deny the existence of the good. In any event, discussing evils caused by insufficient knowledge, Deutsch writes:

If we do not, for the moment, know how to eliminate a particular evil, or we know in theory but do not yet have enough time or resources (i.e., wealth), then, even so, it is universally true that either the laws of physics forbid eliminating it [or not]… The same must hold, equally trivially, for the evil of death -that is to say, the deaths of human beings from disease or old age. This problem… has an almost unmatched reputation for insolubility… But there is no rational basis for this reputation. It is absurdly parochial to read some deep significance into this particular failure, among so many, of the biosphere to support human life -or of medical science…

That humanity has not yet conquered death is due to one fact alone: that we have only been engaged in the critical, open-ended creation of knowledge for a few centuries, since the Enlightenment. Before it, fits and starts of such knowledge-creation are well-known, but none were sustained; all fell, all halted, some due to authoritarian political developments, some due to reactionary religious awakenings, and others due to happenstance accidents of history. Above all, Deutsch maintains, those societies in which proto-Enlightenments occurred tended to have a sense of optimism about the solubility of problems and the value of progress, an optimism more fragile than it appears, an optimism easily damaged.

He describes two heartbreaking interruptions in detail -Sparta’s defeat of Athens, and Savonarola’s campaign against the Medici’s Florentine Renaissance- before concluding his chapter on optimism with a paragraph I will never forget, particularly when considering the real value of different cultural and political systems:

The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn’t factually true. For they knew nothing of such things as the reach of explanations or the power of science or even the laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonarola might be. Like every other destruction of optimism, whether in a whole civilization or in a single individual, these must have been unspeakable catastrophes for those who had dared to expect progress. But we should feel more than sympathy for those people. We should take it personally. For if any of those earlier experiments in optimism had succeeded, our species would be exploring the stars by now, and you and I would be immortal.

I will never forget this. Conflict between those who critically examine, creatively conjecture, seek understanding and technological mastery and the atavistic and retrograde elements who believe in some holy antiquity or some savage’s noble edenic idyll is a real one, a suprapolitical one, and it has real victims. All of us who will die count among this number.