
Jargon of Authenticity, Day 5

We begin a new paragraph.

In Germany a jargon of authenticity is spoken— and even more so, written; the badge of societalized chosenness, elevated and folksy at once; a sublanguage playing the role of prestige dialect.

A bit of exegesis will get us going. On a first pass, the word most likely to cause trouble is “societalized,” which in German is a Weberian term of art, vergesellschaftet. (And yes, Adorno is not immune to using jargon in his campaign against jargon.) That word — or rather its noun form, Vergesellschaftung — plays an important role in Weber’s thinking, where it refers to social relationships that have been mediated through exchange or the contract (rather than the communitarian ties of kinship and the like). We might more readily grasp the point, and even retain most of the phrase’s Weberian flavor, if we swapped in the word “rationalized” or even “modernized”: The jargon is “the badge of rationalized, modernized chosenness.” That last word, “chosenness,” is a reminder that existentialism has its origins in Kierkegaard’s Protestantism, with perhaps a secondary reference to Judaism; Existenzphilosophie creates a secular elect or a chosen people. The elitism of the stance is irritating enough — hence, the move from “spoken” to “written”: existentialism is a set of poses struck by educated people. But what really seems to be irking Adorno is that this elitism has been reduced to a verbal code and something like a method. (The word I’ve translated as “badge” sometimes means “identifying feature,” but sometimes means “dog tags.” The phrase could thus also read “the tags of societalized chosenness,” where “tags” can mean both: standardized keywords entered as metadata and how you might identify a corpse.) In existentialism, the process by which you achieve your authenticity is supposed to be radically individual. No-one can help you. Sartre has that famous bit about how when you seek advice, you choose the person to advise you, which person probably won’t tell you something that you didn’t preselect for. In other words, you bear responsibility for whatever position you arrive at. And a lot of junk existentialism strikes heroic poses around this — the lonely-individual-struggling-to-make-meaning-in-a-fundamentally-absurd-universe. The heroic bit probably gets at the “chosenness.” Existentialists can’t help but feel that they are special, that they aren’t das Man, the They, the Anyone. Against this, Adorno means us to notice that existentialism was itself a kind of group-think, a social trend whose followers achieved a false individualism only by using standardized terms and cycling through repeatable steps.

But that’s not actually the meeting of opposites that most has him exercised here. He is plainly more interested in the twofold character of existentialist jargon, which is “at once elevated and folksy.” The German terms that Adorno uses in the last clause deserve a quick look: Untersprache and Obersprache, under-language and over-language. In English, the word “sublanguage” does sometimes get used to refer to a jargon, or to whatever, lacking a regional base, isn’t quite a dialect — “sublanguage” as in “subculture”: “The use of emojis can be considered a whole sublanguage of its own.” The German word isn’t any more common than its English equivalent, but I did find one usage that suggests that it can mean a dialect-understood-as-inferior. Obersprache is, if anything, even less common than that, but does occasionally get used to refer to the high or official form of a language as contrasted with some putatively lesser patois. At any rate, Adorno is pointing out something unusual about Existenzdeutsch, and he must be thinking in the first instance of Heidegger. Most of the time — in English as in German — we think of jargon as relying heavily on elevated, Greek and Latin roots, the way that science and medicine do: We insist on calling snails and slugs “gastropods,” which makes them sound more alien than they really are, as though we didn’t have such creatures in these parts, as though all animals came from somewhere else, when even for taxonomical purposes, we could just call them “belly-feet.” But Heidegger makes a point of using good, sturdy German words; it’s just that he uses them in non-intuitive ways — ways that have to be learned. So what Adorno means is that among the Heideggerians a down-home idiom has begun to function in mandarin ways, which — he’s right — is quite unusual.

The next six sentences are best considered as a unit:

The jargon extends from philosophy and theology—not only of Protestant academies—to departments of education, adult education centers, and youth organizations, even to the elevated diction of the representatives of business and administration. While the jargon overflows with the pretense of profound human sensitivity, it is just as standardized as the world that it officially negates; the reason for this lies partly in its mass success, partly in the fact that it sends its message automatically, just from the way it is put together, thereby sealing that message off from the very experience that should inspire it. The jargon has at its disposal a modest number of words that click into place like signals. “Authenticity” itself is hardly the most conspicuous of the bunch; it serves, rather, to light up the ether in which the jargon flourishes and the cast of mind that latently feeds it. Some examples will serve for the time being: existentiell [“existentiell” when used by a Heideggerian, “existential” in ordinary German]; in der Entscheidung [in the decision]; Auftrag [the task]; Anruf [the call]; Begegnung [the encounter]; echtes Gespräch [a genuine conversation]; Aussage [an utterance]; Anliegen [a concern]; Bindung [a commitment]; one could add to the list any number of terms-that-aren’t-terms with a similar ring.

“Terms-that-aren’t-terms”: Adorno is referring again to the existentialists’ tendency to devise jargonized uses for ordinary German words, which all of these are, with the exception of the first.

Some — like Anliegen [a concern], which is logged in the Grimms’ dictionary and which [Walter] Benjamin could still use innocently — have only taken on their changed coloring since getting drawn into this Spannungsfeld, this “field of tension” — that’s another one for the list.

An English-speaking reader will likely be put off by this paragraph’s fusillade of German. Patience is the best counsel. The details don’t matter, at least for now. You will not be quizzed on this vocabulary. Adorno is just warming up, listing in advance some of the headwords for which his devil’s dictionary will eventually supply entries. At some point, you will want to know what is interesting about the word Begegnung, but there will be time for that later.

The importance of this paragraph lies elsewhere. For if we consider what Adorno is actually doing — and not just what he is claiming — then we can extrapolate from these lines three questions; questions that Adorno has put to pop existentialism in Germany and that we might, following his example, put to any philosophy as it enters the educated mainstream:

1) What are its buzzwords? Faced with a philosophy in wide circulation, our first task will be to compile a lexicon and catalog its boilerplate: identity, intersectionality, lived experience, a particular way with participles (minoritized groups where once there were “minorities”; the unhoused where once there were “homeless people”; the variously assigned and identifying); Black and Brown bodies, where once there were persons; and, indeed, those capital B’s themselves, which undo the de-essentializing and lower-case diminution of an earlier generation in favor of a fresh round of monumentalization presumed permanent.

That’s the first question. Next we ask…

2) How have these buzzwords been taken up by the government and the corporations, “the representatives of business and administration”? A college’s Queer Student Union promises to “work to improve student life for all gender identities.” Amazon does them one better, “encouraging all Amazon employees to … ‘speak their truth'” and promising to “provide full support of all … gender identities.” Similarly, I have a pretty good idea of what the Combahee River Collective meant by “intersectionality.” But we’re still going to have to work out what the secret police mean by the word: “I am a woman of color,” says the CIA officer in the recruitment video, “I am a cis-gender millennial who has been diagnosed with generalized anxiety disorder. I am intersectional.”

3) How do these buzzwords standardize what they claim to promote? How, indeed, does verbal repetition introduce homogeneity into philosophical positions that typically promise the opposite? What are the protocols of routinized identity-assertion? What do we do when lived experience enters our texts mostly as a truncated verbal meme, i.e., as “lived experience”?

At this point, a clarification becomes necessary. It would be possible to put Adorno’s question to any intellectual scene in Western Europe or North America over the last 250 years or so — and probably to many others besides. It would be possible, I mean, to inquire for any moment about the unexpected ties between dissident philosophy and officialdom, even when these have nothing to do with existentialism. But as it happens, our American present, in the 2020s, has a lot to do with existentialism. For it is in the idiom of identity and intersectionality that Sartreanism and its cousins most obviously survive into the present — subsumed, to be sure, buried and repackaged, but no less recognizable for that. We could make the point in terms of intellectual history, by remarking that Judith Butler completed a PhD in the 1980s under an important Sartre scholar and that of the seven French theorists that Butler discusses in Subjects of Desire, their first book, only Sartre gets a chapter of his own. But we could also make the point just by examining contemporary identity practices: When a group gathers for the first time, the reason to expect each and every person in attendance to give their pronouns is an impeccably Sartrean one. To see this, one need only consider the alternative, which would be to signal that anyone who wants to is welcome to give their pronouns — welcome, but not obliged. The point of the obligatory version — the point, that is, of requiring the normie to say “he/him” — is not only to put non-binary and non-conforming people at ease, by turning the giving of pronouns into a ready habit and rule of etiquette, though of course it is that, too. Just as important, the universal introduction-by-pronoun severs the link between gender presentation and gender identity, allowing no-one to hide behind their secondary sex characteristics; and more important still, it compels every person in the room to own their gender identification, to speak it as an identification, and to do so repeatedly, in a way that makes clear that we each maintain an identity only through free and recurring acts of affirmation. “My name is Christian Thorne, and I use he/him pronouns” — until the day I don’t. Anyone with a little time on their hands can confirm the broader point for themselves. Polity’s 2014 introduction to existentialism regularly slips into the idiom of identity: “We are always a ‘not yet’ as we press forward, fashioning and re-fashioning our identities.” And conversely, K. Anthony Appiah’s 2005 book on The Ethics of Identity is brimming with existentialist formulations: identity is a matter of “making a life,” “the final responsibility” for which “is always the responsibility of the person whose life it is.” In one thought experiment, Appiah praises a person for excelling in a particular identity “because of his commitment.” It is when we recall that “lived experience” was actually Beauvoir’s term — l’expérience vécue — that it begins to seem possible that some of Adorno’s arguments, and not just his questions, will carry over to the present, as we watch the jargon of authenticity mutate into the jargon of identity.

 

Jargon of Authenticity, Day 4

Heidegger enters the scene on the third page. Here’s the paragraph in full:

This [Kracauer getting shown the door by the Rosenstock circle] was well before the publication of Being and Time. In that work, Heidegger introduced authenticity par excellence, in the context of an existential ontology and as a philosophical term of art; so, too, did he pour into the mold of philosophy the object of the Authentics’ less theoretical zeal; and in that way he won over all those in whom philosophy strikes a vague chord. It was through Heidegger that confessional demands became unnecessary. His book acquired its nimbus by describing as full of insight — by presenting to its readers as an obligation true and proper — the drift of the [German] intelligentsia’s dark compulsions before 1933. Of course in Heidegger, as in all those who followed his language, a diminished theological resonance can be heard to this very day. The theological obsessions of those years have seeped into the language, far beyond the circle of those who at that time set themselves up as the elite. Nevertheless, the sacred quality of the Authentics’ language belongs to the cult of authenticity rather than to the Christian cult, even when — for temporary lack of any other viable authority — the Authentics end up resembling Christians. Prior to any consideration of particular content, their language molds thought in such a way that it adapts to the goal of subordination even when it thinks it is resisting that goal. The authority of the absolute is overthrown by absolutized authority. Fascism was not simply a conspiracy, although it was that; it originated, rather, in a powerful current of social development. Language provides it with a refuge; in language, the still smoldering disaster speaks as though it were salvation. 

A reader might launch into this passage and think they have arrived at the main event, the smackdown, Adorno vs. Heidegger. *This*, they will think, *is what I came to see.* That would be wrong — consequentially so. For anyone trying to make sense of The Jargon of Authenticity, nothing is more important than noticing that Adorno is taking much of the onus off of Heidegger, who was at most an important relay for a malign turn in German intellectual life that happened well before he started writing. Stripped of all detail, what this page is saying is that the problem goes well beyond Heidegger. Focusing too much on Heidegger lets too many other intellectuals off the hook.

       I’d like to go ahead and extract three theses from this paragraph — they are, I think, the book’s major claims — and pause to ask what implications they might have for anyone trying to reckon with the revival of fascism in our own generation.

Thesis #1: Anti-fascists, when studying fascist thought, should be prepared to cast the net widely. At some level, Adorno’s approach isn’t all that unusual. On the basis of this paragraph alone, we could think of The Jargon of Authenticity as his attempt at an Intellectual Origins of National Socialism, and we might note that the book appeared at more or less the same time as the classic volumes on that topic: Stern’s Politics of Cultural Despair (1961); Mosse’s Crisis of German Ideology (1964). It’s just that Adorno proposes figures of his own, alongside the agrarian ethno-nationalists and anti-Semites and pan-German Wander-birds unearthed by Mosse and Stern. He thought that the early existentialists had something important to contribute to the making of fascism — an authoritarian cast of mind that typically posed as religious and sometimes even posed as free. But unlike Mosse and Stern, Adorno was also interested in the survival, after 1945, of this proto-fascism. The fundamental task of all historical study is to judge matters of continuity and rupture — to identify what in any historical constellation has been inherited and what has been made anew. And there is perhaps no period for which this most basic of historiographic questions has higher stakes than Germany in the 1950s. What did the Germans (and their minders) manage to remake after the war, and perhaps build from scratch? And what was carried over from the 1930s and ’40s? (Who rebuilt the bombed-out cities if not Nazi architects? Who staffed the reopened public schools if not Nazi teachers?) Adorno, at any rate, is offering his own version of what we might too innocuously call the Continuity Thesis.

Thesis #2: Even radical philosophy has a way of remaking itself as an idiom, a set of verbal commonplaces, a lingo for the educated classes. This indicates a break with Adorno’s usual method. You can pick up Negative Dialectics if you want to see Adorno grapple with the technicalities of Heidegger’s philosophical program — if, that is, you want to watch him crawl inside of that program and flush out its impasses and contradictions from the inside. The reason, one concludes, that Adorno decided finally that The Jargon of Authenticity did not belong in that volume is that he is in this case not interested in philosophy qua philosophy, and certainly not interested in its subtle failures. If anything, he is interested in the success of modernist German philosophy, but as something other than philosophy — interested, I mean, in the making of a Heideggerian-existentialist patois that got spoken, apparently, by a lot of people who weren’t philosophers — people “far beyond the circle” of adepts. What I’d like us to notice for now is that this position is doubtless repeatable. Adorno hands us a question that we might want to ask and at intervals re-ask: How does the late-modern Bildungs-bourgeoisie deploy to its own ends the philosophical argot with which its professors have equipped it? We might, for instance, want to chart the fate of critical theory itself as it moves from the classroom to Left Twitter and Left Tumblr and various workplaces. And when we ask that question, we will want to avoid a certain temptation, which will be to blame the speakers of this or that philosophical jargon for not getting it; we will have to choke back the lecture that we have at the ready, the one that pronounces ex cathedra that That’s not what Heidegger (or Foucault or Butler) really said. If Adorno is right, then philosophy attains its (malleable) historical force only in reduced form, as a vulgate. It’s the crude version that we should be keeping an eye on.

       Thesis #3: Fascism draws some of its intellectual energies from people who do not regard themselves as fascists and who may even take themselves to be anti-fascist. I can specify the matter like so: Many of the intellectuals that Adorno sees as preparing the way for Heidegger and for what he is content to call fascism were Jewish — either active Jews (like Rosenzweig or Buber) or men from Jewish families (like Rosenstock or the Ehrenbergs). In fact, the one figure that Adorno has cited approvingly (Borchardt) was way closer to the fascists than the unnamed figures he is now attacking. This is bound to shake up our understanding of fascism. Hans Ehrenberg was a vocal member of the movement that defied the Nazi takeover of the Protestant churches. Rosenzweig and Buber made landmark contributions to the revival of modern Jewish philosophy. Adorno’s argument is outrageous if wrong and disturbing if right: Your positions, as they enter the world, will not remain *your* positions. Even your anti-fascism can be transposed into a fascism. Call it the ruse of un-reason.

One way to capture the force of Adorno’s theses would be to update them, speculatively, for the revival of fascism in our generation. You could, if you wanted, enter the ranks as an anti-fascist philosophical watchdog. You could tell us that members of Trump’s inner circle have been reading Julius Evola, that they’ve met with Aleksandr Dugin. You could warn us again about Nick Land and the lure of Dark Enlightenment. You could bone up on de Benoist and the French Nouvelle Droite. But if we follow the page in front of us, then this isn’t nearly enough and may be something of a distraction. For a decade, Nick Land was a professor of Continental philosophy at the UK’s most famously left-wing university. Alain de Benoist was first introduced to American readers by a journal founded by Lukacsians. Fascism, if it is to succeed, will have to find a perch in ordinary discourse, including educated discourse. And some of that discourse is likely to be your own. The Benoist circle call themselves Les Identitaires.

Jargon of Authenticity, Day 2

So we are sent back to the passage already quoted, to scan it again for clues:

In the early 1920s, a number of people active in philosophy, sociology and theology were planning a gathering. Most of them had switched from one denomination to another; what they had in common was their emphasis on newly acquired religion, and not the religion itself. They were all dissatisfied with the idealism that still dominated [German] universities at the time. Philosophy moved them to choose, out of freedom and autonomy, what has been known since at least Kierkegaard as positive theology. 

What jumps out now is that the men in question were all converts (not strictly true of the Patmos circle, but perhaps true of the particular conference that Adorno is describing — who knows?). So maybe that’s the problem. But then why would that be a problem? You could begin by asking yourself: Do you typically have a beef with people who leave their religion for another — or who get God for the first time? Adorno also remarks that they had all converted in a Kierkegaardian spirit, and this might seem significant. Adorno, we know, was sufficiently interested in Kierkegaard to have written his Habilitation about him. So maybe that’s the problem — not conversion as such, just the Danish kind. Maybe that’s how we can tell the difference between the religious thinkers that Adorno warmed to and the ones he had no patience for. Rosenstock and Rosenzweig were existentialists; Walter Benjamin was not. But here, too, a complication opens up. Key figures in the Patmos circle were converts in the ordinary sense: Rosenstock was born into a secular Jewish family, but converted to Christianity as a teenager. The anti-fascist Protestant minister Hans Ehrenberg likewise converted from Judaism in his 20s. His cousin Rudolf Ehrenberg wasn’t exactly a convert, but was baptized Lutheran by his assimilated Jewish father, which is close. (Let’s poke around the family tree: Both Ehrenbergs were cousins of Franz Rosenzweig’s. And Rudolf’s niece was Olivia Newton-John’s mother. And Hans’s nephew was the father of the British comedian who created Blackadder.)

        The problem is that if we’re talking about Kierkegaard, then conversion might not have its ordinary meaning. I’ll see if I can’t put across a few core Kierkegaardian positions, and I’ll emphasize the ones that mattered most to the existentialists. We can begin with the idea that you have to choose yourself. Society is the domain of conformity and routine and a muddled, huddling thoughtlessness. Chances are that on any given day, you just do what everybody else is doing. You do the expected thing. Nobody living under these circumstances, which is of course most people, is even remotely an individual. Your task, then, is to become an individual, and Kierkegaard has some strong beliefs about how you might go about this. The first thing to know is that philosophy won’t help. Philosophy, after all, is keyed to the universal; it wants to be able to make claims that will hold true for all people at all times — and that’s not a promising path for anyone seeking to individuate. Philosophy is just a more recondite way of becoming no particular person. The only way to achieve individuality is to be committed to something, to be fully dedicated to something outside yourself — though you can already tell from what I’ve just written that your commitment won’t proceed from philosophy and to that extent won’t be rationally defensible. You won’t be able to give compelling reasons for holding the particular commitment that you do — reasons that other intelligent people would have to grant are cogent.

Now Kierkegaard’s thinking on this matter is impeccably Protestant — and not just generically evangelical, but specifically Anabaptist, though that’s one of those explanations that probably needs explaining in turn. Anabaptism is the name for the Protestant sects who oppose infant baptism — who regard it as ungodly, in other words, to baptize babies, to induct newborns into a church without their consent and with no inkling of their actual standing with God. Baptism, on this view, is meant to follow on from a conversion experience, of a kind that a young child is unlikely to have, and to serve as a seal of one’s new openness to God — and as a cleansing, obviously, and a second birth. Ana- is a Greek prefix meaning “again” and refers to the early stages of the movement, when adult believers would have had to rinse away the false and infantile sprinklings of their native churches by getting baptized anew. The German word for Anabaptist means “re-dunker” or “double dipper.”

The thing to know about Kierkegaard, then, is that he was profoundly hostile to what we might call “cultural Christianity” — a society in which most people are Christian by default, because they were raised that way. (The last book he saw through to publication was his Attack Upon Christendom.) Conversion in the specifically Kierkegaardian sense might therefore involve leaving one church for another, but it needn’t. It might just mean committing with one’s whole being to the institution of which one was already nominally a member. In this framework, conversion, commitment, and the second baptism group tightly together. Franz Rosenzweig thus belongs in Rosenstock’s company even though he didn’t convert to Christianity, precisely because he very nearly converted before deciding not to: “Also bleibe ich Jude. … It looks like I’m staying a Jew,” but for real this time — which is the sound of Judaism remaking itself on the model of Baptistry, a Judaism for born-agains.

 Adorno, in fact, points out one of the more distinctive features of conversion — that it is chosen religion, religion practiced freely and by autonomous people. But then that can hardly be the problem. Surely freedom and autonomy are the very best things about the Kierkegaardian program. Let’s stick with the Anabaptists. The Amish famously tolerate among their teenagers all manner of ungodly behavior: wearing jewelry, signing up for Instagram, playing X-Box off of farmhouse generators, binge-watching Fast and Furious movies, gathering at the end of country roads to listen to hip-hop and drink Miller Lite. There is a general understanding, in other words, that a sixteen-year-old Amish person is not yet genuinely Amish and that the ordinary rules of Amish society therefore don’t apply. What’s at stake in this is perhaps best phrased like so: Being Amish is not an ethnicity. Americans at large tend to ethnicize the Amish, because Americans ethnicize everybody. The Amish even ethnicize their non-Amish neighbors and anyone who, like, drives a car, who are known collectively as “the English.” But the Amish do not ethnicize themselves. They are not “the Pennsylvania Dutch.” Young German-speakers in Lancaster County are only in some very qualified sense Amish, and their teenaged brothers and sisters are even less Amish than that, in a visible state of suspension, with the choice in front of them whether to be Amish or not. The creed of these horse-and-buggy traditionalists, then, is in core respects anti-traditional, premised on the conviction that custom is no-one’s fate, that heritage has claimed no-one in advance; and this goes back to a few of the key tenets of all sectarian Protestantism: that no-one can make you believe anything, that no-one should be forced into a church against their will or conscience. Milton called this the promise of “Christian freedom.”

      The other point to make here is that critical theorists and their cousins have often written in favor of conversion. It is thus hardly obvious that a radical philosopher would have to look askance at the experience of re-birth. Three examples will clinch the case.

         1) The core Sartrean position begins with the idea (again) that you have to choose yourself. You have to choose yourself; *and* you have to know that you have chosen yourself, you have to keep that idea in front of you; *and* you have to be ready to keep choosing yourself ongoingly; *and* you have to grant others all possible latitude to choose themselves. Conversion actually plays two distinct roles in this scheme: First, Sartre thought of Being and Nothingness — most of it anyway — as a close description of what it was like for a person to live without having arrived at this understanding. And his word for that arrival — for breaking through the reifying and identitarian delusions of ordinary existence — was “conversion.” A person, if she’s lucky, converts out of mauvaise foi. Second, nothing guarantees that I will continue to choose my personhood in the future in just the same way that I have in the past. (That’s a silly sentence, right? — because if there were a guarantee, then it wouldn’t be a choice. Sartre framing my selfhood as a perpetual choice has already abolished the guarantee.) The possibility always stands that I will encounter an “instant,” a moment of rupture, an ending-beginning, where my earlier project — my earlier personhood — is terminated and another one opens before me. The ongoing possibility and occasional reality of conversion vouches for my freedom, for a self that cannot be made intelligible via biography or some personal past.

       2) For a while, Zizek was pushing a theory of what he called subjective suicide. Let’s start with the idea that all of us spend nearly all of our lives cocooned in ideology, social convention, and pseudo-sociological delusion — a set of more or less specious understandings of “how the world works.” Eventually, though, any one of us is going to have experiences that the social-symbolic order does not cover, and Lacanian wisdom holds that these experiences are bound to be deeply unpleasant, since anything that is compatible with that order will immediately get incorporated into it, absorbed into one of the pat narratives that we already tell about ourselves and our society, which means, in turn, that anything that lies outside that order is incompatible with it, ergo a threat to our ordinary understandings of the world and ourselves. Most of the time, these encounters with the Real don’t amount to much; they aren’t much more than eerie smudges on the screen of your experience. It is always possible, though, that an encounter with the Real will, for an interval, sunder your ties with the social-symbolic order, propelling you out of your accustomed social “reality,” and in the process obliterating all of your socially entangled perceptions of yourself: your “identity.” Lacan himself gives this experience the innocuous name of “the act,” which makes it all the more alarming to realize that the technical term for the resulting condition is “psychosis.” Zizek seems to get it right, tonally, when he calls it subjective suicide, though we might also just say: Sometimes you snap. Here’s Zizek: ‘The act differs from an active intervention (action) in that it radically transforms its bearer (agent): the act is not simply something I “accomplish”—after an act, I’m literally “not the same as before.” In this sense, we could say that the subject “undergoes” the act (“passes through” it) rather than “accomplishes” it: in it, the subject is annihilated and subsequently reborn (or not).’ This, then, is the Lacanian account of conversion: your temporary reduction to nothing; psychotic Anabaptism.

        3) Badiou’s theory of the “truth-event” is the Lacanian act’s benign double. Sometimes — not often — we have an experience that strikes us as surpassingly important, something that hits us as anomalously true and real and right, an experience that leaves us feeling transformed and that suspends the normal order of the day: “This changes everything.” For Badiou, then, ethics is simply a matter of attending to those experiences — being open to them, letting ourselves be changed by them, and then resolving to stay changed, not to let ourselves bend gradually back to the norm and expectation and set-point. Until you have been called, there is simply no way for you to live morally. All ethics is an ethics of conversion.

     All I mean to say here is that radical philosophy furnishes some rather attractive defenses of conversion, which means we’ll have to account for Adorno’s not following them.

Adorno also remarks that the Patmos group was religiously mixed; its members didn’t have a religion in common. He doubtless has this one gathering in mind, but his point would hold just as well for the New Thinkers at large. The Rosenstock-Rosenzweig correspondence is often held up as one of the twentieth century’s great feats of Judeo-Christian dialogue. Die Kreatur was published under the direction of three editors: a Protestant, a rogue Catholic, and a Jew (that last would be Martin Buber). It is hard to see these efforts as anything but a benignly ecumenical exercise. By the time Adorno wrote these words, Germans had been attacking each other on confessional grounds for more than four hundred years, often lethally: Schmalkaldic Wars and Kulturkämpfe and Judenhetze. It will come as a surprise, then, to realize that this interdenominationalism is a big part of what is bothering Adorno.

         We’ll have to keep reading to figure out why. The men in question had all taken a religious turn. A new passage:

However, they were less interested in the specific doctrine, the truth content of revelation, than in a disposition or cast of mind. To his slight annoyance, a friend [of mine], who was at that time attracted by this circle, was not invited. He was—they intimated—not authentic enough. For he hesitated before Kierkegaard’s leap, suspecting that any religion conjured up out of autonomous thinking would subordinate itself to the latter, and would negate itself as the absolute which, after all, in terms of its own conceptual nature, it wants to be.

I’m going to fill in a few details, and then we can see what they add up to. We know from Adorno’s papers that the friend in question was Siegfried Kracauer. If you’re reading that name for the first time, you could, just for now, think of him as the other Walter Benjamin, a second German-Jewish intellectual, close to Adorno, a good decade his senior, transfixed by popular culture in a way that Adorno manifestly was not, determined to remake thought around the experience of modern cities, and with a great many illuminating things to say about photography and film. (But then Adorno met Kracauer in his late teens, which means that in T.W.A.’s biography, he came first. Benjamin appears as a kind of second Kracauer. Also: You have to be able to imagine a Benjamin who was able to hold down a job at a major newspaper and who managed to make it out of France.)

        Apparently one of the Patmos people told Kracauer that he was inauthentic. I’m sure you don’t need to be told that no-one was accusing him of being watered down to suit American tastes, like he was a corrupted version of Kracauer the way they cook it back home. The existentialist doctrine of “authenticity” tends to involve variations of the following claims:

        -You need to know that you choose yourself, that you choose your identity, choose your way of living, choose your fundamental commitments. However you are, you weren’t just born that way.

          -Equally, then, you need to be willing to own yourself — to be candid about your commitments and to take responsibility for your personhood, for your way of being in the world. The German word for “authenticity” — the one in Adorno’s original title — is Eigentlichkeit, which comes from the word eigen, which is the everyday adjective that designates something of one’s own: “I brought my own book.” “I brought mein eigenes Buch.” Its closest cognate in German is Eigentum, which means “property” — what you own. To be “authentic,” then, is to be your own person and to be willing to own the person that you are.

-If your fundamental commitments line up in some sense with those of your culture — and it is the hallmark of right-wing existentialism that it considers this always your best option — then the task in front of you is to be in some now fully committed way what up until now you had been unthinkingly. If you wake up at 17 and realize that you have been raised more or less as a Jew and that other people regard you as Jewish, then it is up to you to commit to your Judaism and to adopt it as a project. (And yes, equally: If you wake up at 17 and realize that you have been raised more or less as a Russian, then it is up to you to commit to your Russianness…) This is where the existentialist notion of “authenticity” rejoins vernacular concerns with race and ethnicity and bad Chinese food: You go from being an x to being a real x.

And then there’s this bit about “the Kierkegaardian leap.” That’s what English speakers usually refer to as a “leap of faith,” though Kierkegaard scholars love to point out that he never actually used that phrase. They also tend to object that the term, having first been invented by K’s Anglo-Saxon readers, then gets extracted from the ironies and pseudonymous obliquities of his writing and turned into some kitschy, blog-ready philosopheme — which is, of course, all the more reason for us to pay attention to it. The version that has come down to us mostly gets routed through the existentialist philosopher Karl Jaspers, who was one of the key figures in the Kierkegaard revival and who will turn out to be one of Adorno’s prime targets in this book. It’s his version of the leap we need to know about here.

         The first point to make about Jaspers is that he was a dualist: He thought that the human mind was bifurcated into two fundamentally different domains. Unlike many of his existentialist cousins, Jaspers had no objection to science and reason as such. They aren’t really the problem; they’re just grand as far as they go. There’s no reason to call into question what the scientists have figured out about non-human nature. Science will pass all but the most stringent epistemological tests. But if I say that they are grand as far as they go, then I am implying, of course, that they also have limitations. They can’t go just anywhere. Crucially, science and reason can’t tell us how to live — can’t tell us what to care about — can’t tell us what kind of people we want to be. I can become quite learned — I can bone up on string theory and evolutionary biology and the latest research into the Haitian Revolution — and I can reflect carefully on what I’ve learned. But none of this amassed knowledge can tell me what to do with my life. That’s the dualism: There are some problems, a great many problems, that we can approach with the tools of science; and there are other problems that we have to learn to think about in some fundamentally different way. For anyone interested in the history of philosophy, one of the more unusual features of Jaspers is that he thought of himself as a Kantian; he knew himself to be devising an existentialist Kantianism or to be offering Kierkegaardian answers to Kantian prompts. Kant’s first Critique vindicates science (and other types of empirical knowledge), while insisting that the mind will nonetheless press on, riskily, to some non-empirical notion of the world, the self (or soul or psyche), and God. Existentialism, then, picks up where science leaves off. What, after all, are we supposed to do about all those issues where science and knowledge can’t help? Maybe now’s the moment to pause to ask yourself: What is your basic orientation to the world? What other orientations would be possible? What are your fundamental commitments? Jaspers’s point — and this is how the “leap of faith” has often been understood — is that you are going to have to choose those commitments — some commitments, maybe an overriding commitment — and make your peace with your knowledge that this commitment is in some sense groundless: sorely under-justified. You’ve chosen *this* commitment; you could have chosen another. Jaspers has a lot to say about the dualism of science and commitment: about the danger of treating existential matters as matters of knowledge and the even graver danger of trying to duck those fundamental existential choices.

      This last must be the accusation that was leveled against Kracauer, more or less: that he refused to grant that all convictions, including nominally secular ones, resemble religious belief; that he couldn’t bring himself to announce a commitment of a basically religious kind; that he thought maybe he could get by without commitments; that he was no leaper. Of course, Adorno also gives us — and implicitly endorses — Kracauer’s counterclaim: “He suspected that any religion that has been sworn to out of autonomous thinking would subordinate itself to the latter, and would negate itself as the absolute which by the light of its own concept it wants to be.” There are a few different issues that need to be teased out here. Adorno shares Kracauer’s exasperation with what looks like a contradiction in the existentialist position. To see this, we’ll need to bring into play the opposite of autonomy, which is heteronomy — the condition of being ruled from without — not giving-yourself-the-law, but having-the-law-imposed-upon-you. Adorno’s premise is plausible enough: He seems to think that religion is fundamentally an exercise in heteronomy, for which the Christian’s ordinary word is “revelation.” The law comes to the Christian from some external source: from the Catholic’s gradual training into Church tradition and discipline; from the scripturalist’s painstaking study of Holy Writ; from the spiritualist’s resolve to wait for God’s “leading.” Adorno’s point is that you can’t sincerely arrive at heteronomy non-dogmatically and via your own independent judgments. The existentialist gambit is to say that you can be a freethinker and still submit, and that your submission will not abrogate your status as a freethinker. And to this Adorno responds that if you retain a sense of yourself as a freethinker — if, following Jaspers, you never forget that you have chosen your commitments and could have chosen otherwise — then your commitment will always be provisional and indeed revocable and in that sense not really a commitment at all, certainly not an “absolute” one, one that you couldn’t imagine re-negotiating.

         Of course, we can run the contradiction the other way. The heteronomy that the young existentialist agrees to mimic will require that he relinquish his freedom, punctually, over and over again, even as he tells himself that he is doing so freely. Existentialism thus resembles nothing so much as the voluntary servitude first described by La Boétie in the 1540s — or the condition that made Spinoza shudder, the confusion of people “who fight for their bondage as though it were their freedom.” Equally, we could think of this existentialism as a bizarre reversal in the history of Left Hegelianism. In the 1840s, Bauer and Feuerbach and others began arguing that religion was the very model of alienation: Humans had invented God; assigned to him their own most distinctive powers (the powers of spirit, of creation, their ability to make and remake their world); and then subordinated themselves to this distorted avatar of their own disavowed eminence. Left Hegelianism was obviously an invitation to drop the God-act; to recognize that God was a projection of the power of thinking human activity and so to affirm that power directly. The existentialists then arrived on the scene and took these arguments on board, only to say: That! Do that! Kneel to the god of your own making, even as you freely concede that this is what you are doing. Existentialism thus enshrines the alienation that it was supposedly designed to combat. It is the resolve to stay alienated even once you have gained insight into the sources of your alienation.

There’s another problem that follows on from this last and which Adorno has already insinuated at least twice: These early existentialists weren’t interested in any one religion, weren’t interested “in any specific doctrine.” So Jaspers says that we choose our commitments freely, and this will tend to imply that commitments are always plural — not that *I* will have multiple commitments, but that other plausible commitments were available and that I could have chosen differently and that I have to be prepared to let other people choose differently. Their choosing freely means they don’t have to choose as I do. This is Jaspers’s big innovation on Kierkegaard, who when all is said and done only had this one ardent version of Protestantism in mind. And though we might congratulate Jaspers for having figured out how to make Kierkegaardianism liberal, we might for that very reason fret that he has led us straight into the quagmire of multiple and contending Absolutes. We begin to sense the scope of the problem if we look again at Badiou, whose ethics is designed above all to undo liberal society’s general neutralization of commitment — its insistence that we not really believe what we claim to believe, or its grudging permission to believe anything we want provided we promise in advance never to do anything about it. Badiou, in other words, wants to teach members of a liberal society how to be fanatics again; he wants us to recover our lost capacity for militancy and Schwärmerei. The peculiar character of his argument, though, is that he is sticking up for fanaticism in general — for no particular fanaticism — and certainly not for communism specifically, which is what you might have thought he was after. And “fanaticism in general” is, of course, a broken-backed concept, a contradictory fusing of zeal and indifference: extreme and passionate dedication to a cause that doesn’t care anything about the cause. Badiou’s ethics thus incorporates the flaccidly noncommittal pluralism that it was designed to overcome, offering only a hollowed-out militancy remade on the model of its liberal enemy. Or to put the point more plainly: A genuinely religious person can’t care about religion in general, because he will be committed to the specific claims (rites, beliefs) of some particular religion. Particularity is built into the thing. Religion can only appear as “religion” to someone whose underlying premises are secular.

Jargon of Authenticity, Day 1

[The introduction is here.]

Adorno starts with an anecdote. This already marks out The Jargon of Authenticity as a bit unusual, since anecdote is not Adorno’s usual way. He doesn’t begin the Dialectic of Enlightenment by telling you about the time that he and Horkheimer hitchhiked to Amsterdam. Even Minima Moralia, which one might reasonably regard as Theodor Adorno’s Diary of America, hides its origins in lived experience behind a veneer of abstract and depersonalized utterance: Notes from the Damaged Life, not Notes from My Damaged Life. So we shouldn’t take this first page for granted. Let’s listen to Adorno tell a story.

In the early 1920s, a number of people active in philosophy, sociology and theology were planning a gathering. Most of them had switched from one denomination to another; what they had in common was their emphasis on newly acquired religion, and not the religion itself. They were all dissatisfied with the idealism that still dominated [German] universities at the time. Philosophy moved them to choose, out of freedom and autonomy, what has been known since at least Kierkegaard as positive theology. 

A reader might pause at this point to wonder who exactly Adorno has in mind here. The Jargon is Adorno’s most polemical book, and surely we would learn something if we could put names to its targets. What he says here, on the opening page, is going to frame everything that comes after. Aren’t we being asked to see these figures as representative, and wouldn’t Adorno’s points be easier to follow if we knew who they were? Or you might just be curious: Who had sufficiently raised Adorno’s ire that he was still going on about it some forty years later?

That question has an answer. We do know who he is referring to, though The Jargon will let them remain anonymous. A few years ago, a philosopher in Germany — it turns out that scholars are good for something — discovered an unpublished notebook of Adorno’s in the Walter Benjamin archive in Berlin, and in that notebook Adorno tells the story again—and this time with identifying marks. So the people he has in mind were a group of German intellectuals who came together after World War I to remake religion in a broadly Nietzschean spirit—to devise versions of Christianity and Judaism that could withstand Nietzschean attack—and to explain, further, how this modernist religion, a religion without metaphysics, could push Europe to remake itself, apocalyptically, after Passchendaele and the Somme. The key name here is Eugen Rosenstock, though the figure that a contemporary English-speaker is most likely to know is not Rosenstock, but Franz Rosenzweig—not Rose-tree, but Rose-branch, who was the former’s closest collaborator. I should point out: Even in the notebook, Adorno doesn’t write out their names; he just calls them “the Patmos people”—Patmos being the name of the publishing house that the Rosenstock circle founded in order to spread their hopes for new life in the post-Wilhelminian Pentecost.

      The question now is whether filling in these names will help us understand why Adorno doesn’t like them. Will it make the book in front of us any easier to read? And the answer to this is less clear than one might have hoped. If anything, finding out about Rosenstock and Rosenzweig can make Adorno’s animosity harder to understand.

Let’s say we start scanning these first four sentences for clues. Adorno tells us, for one, that Rosenstock and his crew had come out against the German idealists—that they had rejected the legacy of Kant and Hegel. But it is hard to see Adorno attacking them on those grounds alone. The Jargon of Authenticity was first conceived as a series of chapters in Adorno’s Negative Dialectics and was, for a time, meant to appear in that volume. And that’s the book where Adorno remarks that “philosophical system is the belly turned mind, just as rage is the defining mark of idealism in all its forms.” Suffice it to say that Adorno is unlikely to call out a post-Nietzschean philosopher for being insufficiently respectful to Kant.

         But then there are, of course, many different ways of opposing idealism. The next step, then, would be to try to at least catch the drift of R&R’s particular anti-idealism—to try to put some substance to their discontent with the philosophical heritage. Here it will help to know about three positions that Rosenstock and Rosenzweig shared—positions that add up to a kind of anti-philosophy.

#1: Philosophers have typically erred by convincing us that we can think abstractly, outside of space and time. This is little better than a trick, a writerly illusion that falsifies the most basic coordinates of human experience and the human situation. One of the few things that we can say about human beings in general is that they have to be somewhere and that they exist in time. The task of a post-Nietzschean counter-philosophy—what Rosenzweig called the New Thinking—will be to clarify what is going on when I try to apprehend the world at some particular moment, from some particular place, and to do this in a way that resists transcendence’s every lure. Hegel, in the introduction to his smaller Logic, describes what it’s like to start studying philosophy: “The mind, denied the use of its familiar ideas, feels the ground where it once stood firm and at home taken away from beneath it.” And that, of course, is a vision of displacement and dispossession. Philosophy will take away your land; will put you on the run; will leave you homeless. Every twenty-year-old who picks up Fichte undergoes their own personal Nakba. The easiest way to understand what Rosenstock and Rosenzweig were up to, then, is simply to notice that they wanted to re-do philosophy without Hegel’s cruel threat. It doesn’t matter where you are right now, as you are reading these words. You are reading them in some particular location, and there’s a good chance that you are there by choice, because you want to be. The New Thinking is content with your remaining a terrestrial being, not that you could be otherwise. You can think carefully and still stay where you are. You don’t need to levitate, and you don’t need to leave.

 #2: A thinking that has stopped trying to abstract from time and space will have no choice but to reconstruct the primal varieties of religious experience (or else shut up about religion altogether). The idea here is that religion should be kept away from philosophy, set free from doctrine and system and argued-out theology. Once we have agreed that we are terrestrial creatures—and that we must not delude ourselves into thinking otherwise—then the question becomes whether we can discover the stuff of religion within the texture of ordinary, earth-bound experience. If we attended to experience in a more or less phenomenological fashion, could we find the raw materials of religion, non-transcendentally and before its capture by philosophy? One of the more curious consequences of that question is that it asks us to face, historically, in two directions at once. In one sense, it resembles nothing so much as the Protestant Reformation, which wanted to go back behind the whole history of the Christian church, and especially behind the tradition of Christian (Catholic) philosophy, in order to revive—to let loose upon the world again—the original spirit of Christianity (or pre-rabbinical Judaism), to turn an ossified Church back into the Jesus movement (or an archaic Israel). It can seem as though Rosenstock and Rosenzweig are proposing a radicalized version of that project, whose gambit is to get us back behind the whole history of philosophy. That’s the bit they got from Nietzsche and will share with Heidegger. At the same time, though, their point is that what we call religion is a permanent feature of human experience, to be accessed at any time. We might need to scrape away layers of philosophical accretion in order to do that, but we don’t need, each of us, to make a preposterous transhistorical leap to early antiquity.

#3: One of the best ways to retrieve the sources of religious experience—away from the latter’s codification in theology—is to pay close attention to ordinary language. The way we speak, the way we use language, has a way of pointing to those things that we ordinarily call “religion.” Examples will help: So maybe you would grant that a few people feel a religious calling, but you know equally that most people don’t. Saul, you have read, was called on the Road to Damascus, but most of us don’t expect to be transformed by a blinding light while traveling to Dallas for work. You know that priests sometimes say they felt a calling, but it seems pretty clear that cashiers and construction workers don’t. To this Rosenstock would respond that we have all, in fact, had the experience of being called — that we are called all the time and over and over again: “Hey, Remy!” “Oh, Clara, am I glad to see you!” “Yo, Julian! What’s good?” Others address us, and our orientation in the world briefly shifts. Someone speaks my name, and I am pried open. What’s more, the vocative is primary; all the other things we do with language happen after a relationship has been established via a calling. We are inclined to think otherwise only because philosophers and grammar books tend to take the indicative as the paradigmatic instance of language, but it’s not. Second case in point: We create in language—we build our cultures and our customs and our institutions and our lifeworlds; we make things happen with utterance—and every such speech act follows the example of “Let there be light.” Religion, in general, has thus tended to be more clear-eyed about the powers of language, less deceived than philosophy by the tyranny of the declarative sentence, the syllogism, the doctrine of predication. Socrates, God help us, is a man.

     The problem, I think, is the following: We know that Adorno was friendly with intellectuals who were to varying degrees religious. It is, after all, hard to imagine a close reader of Walter Benjamin rejecting these three positions out of hand. Benjamin, indeed, published in the house journal of the New Thinkers, Die Kreatur; he published alongside the very thinkers that Adorno is going after here. And for a number of years, Adorno served as informal assistant to the Protestant theologian Paul Tillich, who must rank high on any list of T.W.A.’s consistently remarkable mentors. (Adorno on Tillich: Without him, “it is very questionable whether I would be able to speak to you today; it is even questionable whether I would have survived.”) Adorno was certainly capable of taking the fight to the religious Right. He wrote an entire book, in English, about a former boxer turned radio evangelist—an anti-Semite and red-baiter who first took to the airwaves to combat the malign influence upon America of Upton Sinclair. But Rosenstock and Rosenzweig are no Martin Luther Thomas. The easiest way to figure out what they expected from religion will be to let their modern students summarize their program.

             Here’s Wayne Cristaudo on Rosenstock: He held that “today so many, including so-called Christians, failed to fathom the claims about Jesus’ divinity, which had to do with the overpowering of death, not in any mystical or Pythagorean manner of the continuity of the individual soul in a netherworld, but in the triumph over death and deadly forces through forming a body across time, the Church. For Rosenstock-Huessy, Jesus was proof that Caesar and Pharaoh and ‘great men’ were not gods and Jesus’ divinization meant that after him no one else would be God, that our redemption was universal and mutual. Jesus’ taking on the role of the crucified was to show us that we crucify God when we do evil to each other, and that we fail to achieve the maximum of our powers (our own divinity) in our failure to obey the law of love, and that to obey the commandment of love means being continually prepared to leave abodes ruled by death and to die into new forms of love and fellowship.”

         Here’s Benjamin Pollock on Rosenzweig: “According to Rosenzweig, redemption designates that future point of unity towards which all beings strive through acts of interpersonal love and recognition, through the formation of religious and political communities, and … through translation; it is a future point that orients our everyday temporal existence but that we can experience proleptically through liturgical practice; a future point toward which history unfolds, without history thereby achieving it.”

             Here’s Cristaudo again: Rosenstock “never doubted that his desire to create new forms of community, to change the education system by bringing students and workers together, and to restructure the workplace, were as much part of one calling and project as his studies on Egypt, Greece, Christianity, the tribes, the nations, the law, and every other topic he addressed in his writings. Like Rosenzweig, he saw scholarship as a contribution to life. He held that ideas are nothing without incarnation and that everything he did was all part of one life lived in devotion, service, and prayer.”

   The word you might wish to circle is “redemption”—as in: “Redemption designates that future point of unity…”—since any long-time reader of Adorno will know that it’s a word that he uses a lot: (From the final pages of Minima Moralia: “The only philosophy which can be responsibly practiced in face of despair is the attempt to contemplate all things as they would present themselves from the standpoint of redemption. Knowledge has no light but that shed on the world by redemption.”) At some point, Adorno and Horkheimer decided in tandem that Stalinism had made it impossible to keep using the old Marxist vocabulary. This is a book about jargon, right?—and Marx’s was the jargon that they were drawn to and sometimes spoke, until they dropped it, famously scrubbing the first edition of the Dialectic of Enlightenment to make it sound less like Chicago’s Voice of Labor: “exploitation” became “enslavement” (or “injustice” or “subjugation”); “capitalism” became “the economic system”; “class society” became “society”; and so on. The question is, then: What did Adorno write instead of the words “socialism” and “communism”? And my point here is that he mostly made do with variants of “redemption” and “reconciliation,” to the point where the literature on Adorno is crowded with these terms: “the redeemed future,” “the redeemed world,” “reconciled humanity,” “a reconciled society.” Adorno himself refers in Minima Moralia to those “tidings of redemption whose purest notes are heard in the Sermon on the Mount.” I could also put the matter this way: In the early 1940s, Eugen Rosenstock, by then in his New England exile, took over a recently vacated workers’ camp in Vermont. His plan was to extend the New Deal’s jobs program to college students, on the theory that the “overprivileged”—his term—needed to learn to work for the community every bit as much as the destitute and the displaced. And that project—it was called Camp William James—is sometimes cited as the most direct precursor to the Peace Corps, which was apparently proposed to the Kennedy administration by one of its alumni.

       That’s the puzzle, in other words: Adorno does not begin The Jargon of Authenticity by going after some dingbat Ariosophist or the so-called German Christians—those were the people who thought the Romans had nailed Jesus to a swastika. He begins by going after the Christian intellectual who is said to have inspired the Peace Corps. And one wishes to know why.

A Commentary on Adorno’s Jargon of Authenticity

Certificate of Authenticity

Introduction

Today, I would like to begin a project whose like I have never attempted before. Over the next several months, I will provide a detailed commentary on a short book that Theodor Adorno published in 1964, in the run-up to Negative Dialectics. That book, The Jargon of Authenticity, has never attracted much interest, in German or in English. It’s not that readers make it through the book and then decide they don’t like it. They mostly don’t read it. Or they take it up and soon set it down again, thirty pages into the thing and still unsure what Adorno is up to. This is entirely understandable. The book is a roundhouse attack on a certain intellectual scene as it took shape in Germany in the 1950s and early ’60s, the milieu of a right-leaning existentialism whose presiding gurus were Martin Heidegger and Karl Jaspers. But Adorno barely even uses the word “existentialism,” which the Sartreans had come by that point to monopolize, and he is not especially interested in his opponents’ philosophical positions. He is interested, rather, in how existentialism had, by 1964, degenerated into a set of commonplaces, and he expects the reader to be able to recognize this sub-philosophical boilerplate. But then we are emphatically not in a position to recognize that boilerplate. History (and a foreign language) has drawn a curtain over Adorno’s efforts.

Worse, the few intellectual historians who have bothered to comment on The Jargon of Authenticity have concluded that it is minor Adorno—or even unworthy of him. They miss the dialectical intricacy of his more famous engagements with Heidegger — the ones that take Heidegger seriously as a philosopher and offer to meet him on his own ground. By the standards of Negative Dialectics (or of the now published lectures on Ontology and Dialectics), The Jargon can seem merely polemical or perhaps “sociological,” for which read “Marxist.” But then this, of course, is precisely the interest of the volume. Adorno is tracking the fate of a philosophy when it gets picked up by people who aren’t exactly philosophers, and he has changed his grip accordingly. If you want to figure out the work that a philosophy does—in the world and not just at the seminar table—it won’t be enough to read the masters. You will have to take seriously the B-listers and garbled enthusiasts, the people who seize on a philosophy’s key terms and strip them of their native subtlety. This is worth our attention for at least two reasons: First, Adorno here expands his at least somewhat well-known critique of Heidegger to many other figures, including a few intellectuals (like Buber) with whom we might have expected him to have some sympathy. Heidegger, after all, makes things easy for the critical theorist, who can always just cry “Nazi!” and claim victory. But what do we say about the Existenzphilosophen who weren’t fascists, who opposed the Nazis or were almost killed by them? Second, one suspects that all successful philosophies suffer the fate that Adorno traces here; that they are all made to yield a jargon, a bundle of memes and buzzwords. One suspects, indeed, that the list of such philosophies would include critical theory itself, with or without the capital K. And we might well be grateful for Adorno’s help in thinking about this problem. Philosophy cannot realize itself unless it is taken up as a project, and by many readers at once. But if a philosophy is widely taught, the most likely effect, at least in the middle term, is that it will become the common property of the educated classes, an acquired idiom for a society’s more successful members to justify their very advantages. Existentialism, says Adorno, outs itself as the “snooty crowing of come-down gentlemen.” To which we must add: Speaking the lingo of critical theory is by now mostly just evidence that you went to a good school.

Some practicalities: Anyone wanting to read along could grab a copy of the 1973 Jargon of Authenticity, translated by Knut Tarnowski and Frederic Will. We should be grateful to anyone who completed an Adorno translation fifty years ago, without the benefit of the extensive Frankfurt apparatus now available in English. But the translation is as error-prone as one would expect of such a pioneering effort, and I will often amend it without explanation.
Also: I have a companion in this project, Justin Piccininni of Williams College, who first suggested that The Jargon deserved a closer look. There is very little in the book that I would understand if it weren’t for conversations with him.

The Deconstructive Universal 1

2. The Deconstructive Universal

2.1

Whether or not you take to deconstruction has always had a lot to do with how you feel about universals in any of that word’s related senses: how you feel, for one, about metaphysical universals, abstract characteristics shared by individual objects or persons; but also how you feel about universals in some distinctively Hegelian sense, master categories and higher abstractions, as opposed to secondary categories and lesser abstractions, the order rather than the genus; and then, too, how you feel about ethical and political universalism, which asks that our institutions give priority to characteristics that all people (in all times and all places) might be thought to have in common. Your views on such matters are germane because Derrida’s single most famous argument is, in fact, universal in scope, pullulatingly so. If you’re going to be a Derridean, the first argument that you’re going to have to take on board is that there is no philosophically defensible distinction to be drawn between writing and speech, that all language is writing, and that all people (and peoples) must be thought of as possessing écriture. That’s the universalism: Writing is everywhere; everyone has it. Derrida, of course, offers reasons for thinking this. His proposition is that we typically (and incorrectly) think of writing as more mediated than speech. I might, for instance, worry that if spoken words represent things, and writing represents spoken words, then all written documents, even original ones, are going to have the smudgy, deteriorating quality of second-generation photocopies. Speech removes me from the object; and writing removes me further still. A Derridean counters this anxiety simply by homing in on the phrases I’ve just written—that “spoken words represent things” or that “speech removes me from the object”—in order to make the point that speech is already mediation, already the arbitrary coding of the world, already constructed out of a network of differences, gaps, or non-positivities. Words emerging from a mouth aren’t any more tethered to their objects than words emerging from an ink cartridge, which means that we will have to give up the fantasy that one type of language can keep us close to things while the other will cost us the world.

Similarly, you might think of writing as uniquely decontextualizing. Once recorded, words strung together in one place and time can be encountered in any other place or (subsequent) time. But then spoken language isn’t nearly as place-bound as we unthinkingly take it to be, since people often remember speech they’ve heard and go about their lives and move around and eventually re-speak it. Writing travels, true enough, but so does quoted speech; there is no world without recording devices. Or again, you might think that spoken language keeps listeners closer to a speaker’s intentions or private understandings, if only because they can interrupt him when he’s being unclear and ask him what he was trying to say. But there aren’t any grounds for thinking that spoken language is less in need of interpretation than the written kind, and if consulted, a living, yakking, disambiguating speaker-in-the-room can only produce more speech, equidistant from his intentions and requiring interpretation in turn.

What we’ll want to notice now is that nothing in this explanation strictly requires Derrida to claim that all language is writing. In fact, the argument would probably be more perspicuous without that provocation, without, I mean, your always having mentally to substitute for the word écriture the notion that all language displays some-but-not-all of the features conventionally associated with writing. Eventually some philosopher is likely to want to reform deconstruction along these lines, by insisting on perspicuity, stripping away as gratuitous the doctrine of universal writing and then seeing what’s left or what else has to change in the absence of an ecumenicized écriture. But anyone wanting to account for the peculiarity of really existing Derrideanism doesn’t have that option. Far from seeming expendable, the needless apotheosis of écriture—that drive to say it’s-all-writing and actually mean something a little different or to say it’s-all-writing even when your argument doesn’t strictly demand it—can easily seem like one of deconstruction’s most salient features.

Writing, this is all to say, is at the center of deconstruction’s bid for universalism, and yet its status as a universal is open to question. Even within the framework generated by Derrida himself, one has to wonder whether writing hasn’t been trickishly generalized. At the very least, we’ll want to describe Derrida’s procedure here, which is to extract a particularized term from the semantic stratum where we are used to encountering it and insert it instead into the place of the universal. At the formal level, to claim that all language is writing is akin to claiming that all vehicles are pushcarts or all buildings are pyramids. That this procedure introduces problems that Derrida cannot solve should be apparent as soon as you notice that writing, even having been promoted to the status of universal, sometimes persists in his arguments as particular all the same—as writing-writing, book-and-document writing; “writing in the narrow sense,” he calls it—at which moments écriture is called upon to function as a subset of itself. In deconstruction, we have an encompassing term, writing-which-means-the-sum-of-all-language, under which we can class a second term, which is … writing. All vehicles are pushcarts, and then some of them are also pushcarts.

The consequences of this will be hard to reckon if we don’t pause first to consider the several different ways that one could deal with writing or language as a universal term—or, indeed, the different ways one could deal with universals of any kind. It will be easier, that is, to say what Derrida is up to if we know which nearby philosophical options he is refusing.

It might help, for instance, to clear up a few misconceptions about the status of universals in Hegelian philosophy. Hegel, after all, is not quite the aloof, god’s-eye philosopher of Geist and Weltgeschichte that casually hostile readers take him to be. He is in various senses a universalist, to be sure, but this point is easy to overstate, since one of the concerns that most obviously fuels dialectical thinking is a discomfort over the ways in which non-dialectical philosophers get universals wrong, mostly by approaching them too abruptly. Among the core tenets of dialectical philosophy is the notion that universals cannot manifest themselves directly in the world. You can phrase this point in illuminatingly trivial terms—that no entity can be a bird, immediately and nakedly avian, without also being, say, a goose—as long as you realize that the payoff for this claim is above all ethical and political: that no-one can be human without specification, that no-one can instantiate mind or spirit except by pursuing some particular practice, that no-one is the abstract and Vitruvian bearer of rights and freedoms, &c.[i]

From out of dialectics, therefore, even in its classical form, it is not hard to extract some moderately anti-universalist positions, the second of which would state that individuals cannot be directly linked to their universals, but are better understood as passing through an always extendable set of intermediate categories. I am standing in western Ireland in December, looking at a creature with wings and feathers, fairly big for such an alate thing, with a white face atop a long black neck, and a variously grey, elongated body. For almost no purposes will it be enough to say that this x is an “animal” or a “bird.” It probably won’t even be enough to say that it is a “goose,” once one realizes just how high a floor in the taxonomical edifice that designation actually occupies. We might loosely think of geese as forming a species, but they don’t; there are species of goose, but no species “goose.” Nor are geese properly thought of as a genus, one story up, but rather as what zoologists call a tribe or even a subfamily. An informed person, in this context, is one who can introduce additional determinations, who will know that this x is not just a bird but a goose, and not just a goose but a barnacle goose; she might even know that the latter is itself a kind of black goose. One way to appreciate what Hegel is after here is to keep alive in yourself a sense of surprise that even the word “goose” is more abstract than you probably thought and is best approached patiently and stepwise. About écriture, then, a Hegelian would have to say that there can be no writing as such, without instantiation, and further, that no collection of words can be grasped as writing without passing through a set of intermediate terms, which in this case would let the mind loose in the encyclopedia of textual genres: birthday card, saint’s life, personal ad, ransom note, presidential signing statement, silver fork novel, and so on.

Perhaps the least appreciated point about dialectics is that it is at heart an anti-reductionism, a way of combating the mind’s tendency to seek explanations at one degree of abstraction at the expense of other explanations involving other degrees of abstraction. Let’s say, to consider a Marxist offshoot of this Hegelian program, that I am sitting down to write a book about the English Revolution. And let’s say further that I want to show how Atlantic merchants—English men trading with the Caribbean and the east coast of North America—played a central and hitherto underappreciated role in the upheavals that overtook England, Scotland, and Ireland in the 1640s. I won’t be able to make that case if I can’t tell you about those merchants in individuated detail, if I don’t know their biographies, if I can’t account for the choices they made month by month, some of which choices included rising against their king and disestablishing the national church. I have to be able to tell you about Maurice Thomson and Matthew Craddock and Samuel Vassall. At the same time, though, I won’t be able to understand what these men were after if I don’t understand the groups into which they formed themselves or the institutions that housed their projects—the corporations (set off against rival enterprises), the dissenting sects (each set off against the others and all of them set off against the Church of England), the often unformalized political factions. Similarly, I’m going to need a robust account of the new colonial-capitalist economy in the Atlantic in which all of these men operated, and to which all English, Scottish, and Irish people were increasingly connected, though at meaningfully different removes—and what I will need to show about this economy is that it introduced imperatives and constraints of its own that none of the actors in the 1640s, whether grasped as individuals or as groups, could simply defy. Just as important, I will need to make clear how each of these explanatory modes requires the other two, how each, if you like, houses the others within itself. Maurice Thomson and Matthew Craddock don’t come to me as mere data or as singletons, not as “individuals,” but as individuated within various groups—within the Providence Island Company, perhaps, or English Baptistry—as also within the Atlantic economy as a whole. But those same groups, meanwhile, are plainly made up of these individuals, while also taking on individuated profiles of their own when positioned across from one another within the Atlantic economy at large. This economy at large, meanwhile, is from some perspective nothing but the networked aggregate of those individuals arranged in those groups.

The task of Hegelian (and Hegelian-Marxist) thought is thus to find the individual and the particular in the universal; but also to find the individual and the universal in the particular; and then to find the particular and the universal in the individual. The idea is precisely to avoid the reduction to the universal or impetuous argument-to-system for which Hegelianism is often mistaken. At the same time, however, Hegelianism cautions against explanations that would lock in at the level of the intermediate category; if revolutions are the day’s topic, then such part-explanations would be the usual business of social history, the history not of persons but of groups and institutions, revealed here to be a reduction to the particular. And then, of course, the methodological individualism beloved of the it’s-more-complicated school of academic history-writing, which prides itself on its own version of anti-reductionism, stands indicted here as a reduction to the deinstitutionalized and un-mediated individual.[ii]

Adorno’s philosophy of non-identity, then, is best thought of not as breaking with Hegel but rather as radicalizing the anti-universalist strain that was indigenous to dialectics all along. This isn’t to say that Adorno’s revisions don’t present subtleties of their own. The trick to coming to terms with Adorno is to grasp that he is not a nominalist, a point that requires us to concede the insufficiently considered possibility of an anti-universalism that does not go back to Ockham. Negative dialectics asks us to oppose universals, in that term’s various senses, but not because these are fake or just names. The point is complicated: There is, in fact, a nominalist moment in Adorno’s thinking, which does sometimes describe concepts as herding singular objects into undifferentiated droves, asking us to fret about the penalties we pay for this most ordinary of all cognitive procedures, the heedless aggregation involved in all naming. It’s just that Adorno is also interested in the ways in which objects (and persons) really can be deprived of their singularity, in actuality and not just in thought, by mass production or by unified institutions or by standardization across ever vaster regions of the planet. The administered society, by flooding the world with generic objects, makes real the abstraction that had hitherto been merely verbal or conceptual. The standardized planet is the world remade in the image of language, a world in which language has at last become adequate to things, but only because the latter have become as indefinite as the perfunctory mono-terms with which we have always identified them. Universals in Adorno thus occur on two levels—both as verbal abstractions and as real ones—and it is his outlandish hunch that the universals of one level are best resisted on the other level, that one might be able to turn back the accelerating protocols of standardization—that one could prevent Body Shops from being built in Warsaw or the entry of Pizza Hut into Guangdong—if only one could disable abstraction at its cognitive source, in words and concepts. The vocation of negative dialectics is thus to terminate universals, sometimes via aesthetics, mostly via a re-jigged dialectics capable of bringing thought up against the unthought specificity of things.[iii]

Any guide to critical theory will tell you that Adorno’s is one of the great anti-universalisms in the history of philosophy. And a careful reading of Hegel should show that even orthodox dialectics produces an argued-through critique of das Allgemeine. Saying as much now should bring into view the first of the features that makes Derrida distinctive, which is that he is not an anti-universalist to nearly the same degree.

[i] See Hegel in the Philosophy of History, translated by J. Sibree (Mineola, NY: Dover, 2004), p. 59: “A person is a specific existence; not man in general (a term to which no real existence corresponds).” Or in the early essay on the “Positivity of Christianity,” in the Early Theological Writings, translated by T. M. Knox (New York: Harper Torchbook, 1961), p. 169: “The general concept of human nature admits of infinite modifications, and there is no need of the makeshift of calling experience to witness that modifications are necessary and that human nature has never been present in its purity. A strict proof of this is possible; all that is necessary is to settle the question: ‘What is human nature in its purity?’ This expression, ‘human nature in its purity,’ should imply no more than accordance with the general concept. But the living nature of humanity is always other than the concept of the same, and hence what for the concept is a bare modification, a pure accident, a superfluity, becomes a necessity, something living, perhaps the only thing which is natural and beautiful.” Hence, too, the emphasis placed by many Hegelians on “concrete universality (i.e., the specific embodiment that the universality of modern philosophy receives in particular sociohistorical settings).” See Paul Piccone’s Italian Marxism (Berkeley: University of California Press, 1983), p. 18.

[ii] Hegel’s anti-reductionism is clearest in his account of the syllogism in either of his two Logics; see, e.g., The Science of Logic, translated by George di Giovanni (Cambridge, UK: Cambridge University Press, 2010), pp. 588–624. The book I’m describing is not hypothetical. See Robert Brenner’s Merchants and Revolution: Commercial Change, Political Conflict, and London’s Overseas Traders, 1550–1653 (1993) (London: Verso, 2003).

[iii] This is the goal of the demontieren I was describing earlier. See Negative Dialectics, pp. 3–28.

Immanuel Kant’s Manifesto for Dad Rock

1.

If there is one point that should be reasonably clear to anyone who has read “The Culture Industry,” it is that Adorno and Horkheimer do not reject popular culture. That essay, it’s true, gives us reasons to question any number of things that we typically hold dear: free time (for being unfree time, nearly as programmed as the work from which it nominally releases us), laughter (for being the consolation prize you get for not having a life worth living), style (for funneling all social and historical content into a pre-arranged matrix or inflexible scheme of aesthetic quirks and twitches; for holding out the promise of artistic individualism—the personal signature in literature or music—and then transposing this into its opposite, the iterative, unresponsive art-machine). Most of us remember “The Culture Industry” as anti-pop’s cahier de doléances, its encyclopedia of anathema, the night in which all bêtes sont noires. But alongside the essay’s admittedly austere bill of grievances, it is easy enough to compile a second list, an inventory of things that Adorno and Horkheimer say they like and suggest we might admire: Charlie Chaplin, the Marx Brothers, Greta Garbo, the circus, old cartoons, Felix the Cat (maybe), Gertie the Dinosaur (perhaps), Betty Boop (for sure, because they name her). Just to be clear: “The Culture Industry,” Exhibit A in any case against critical theory’s Left elitism, is also the essay in which Adorno attacks Mozart while praising “stunt films,” which we might more idiomatically translate as “Jackie Chan.” One can thus cite authentically Adornian precedent for an attitude that distrusts classical music and celebrates kung fu movies, and this will be hard to believe only if you prefer a critical theory shorn of its dialectics, stripped of the contradictory judgments that thought renders upon contradictory material—only, that is, if you prefer the Adorno of joke Twitter feeds and scowling author photos: bald, moon-faced, a Central European frown emoji inexplicably mad at his own piano. One suspects that readers have generally refused to take seriously the essay’s central category. For the culture industry is neither an epithet nor a gratuitously Marxist synonym for popular culture, but rather a different concept, distorted every time we paraphrase it in that other, more comfortable idiom, as a calumny upon pop culture or pop. There is plenty of evidence, in the essay itself, that Adorno and Horkheimer were drawing distinctions between forms of popular culture, and not just pitting the Glenn Miller Orchestra against Alban Berg.[1]

Such, then, is one way of taking the measure of Nicholas Brown’s Autonomy.[2] This is one of those books that you might have thought no-one could write anymore: four chapters that mean to restate the old, left-wing case for art, unapologetically named as such, as the artwork—and not as text or culture or cultural production—the idea being that art represents the survival of independent human activity under conditions hostile to such a thing. No longer homogenized under those master terms, art can again take as its rival entertainment, a word whose German equivalent derives from the verb unterhalten, which even English speakers can tell means “to hold under,” as though movies and TV shows existed to keep us down, as though R&B were a ducking or a swirlie. That the English word borrows the same roots from the French only confirms the point: entre + tenir, to keep amidst or hold in position. Entertain used to mean “to hire, as a servant.”

Autonomy is also the book in which a next-generation American Marxist out-Mandarins Adorno, who, after all, begins his essay by insisting that the cultural conservatives are wrong. There has been no decline of standards, no cultural anarchy let loose by the weakening of the churches and the vanishing of the old, agrarian societies, hence no permissive culture in which anything goes. Just the contrary: Magazines and radio and Hollywood form a system with its own rigidly enforced standards, a highly regulated domain in which almost nothing goes. Adorno’s way of saying this is that there is no “cultural chaos.”[3] But Nicholas Brown prefers the chaos thesis, endorsing the position that Adorno has preemptively rejected as both reactionary and implausible: “The culture industry,” Brown writes, couching in Frankfurtese his not-at-all Adornian point, is “the confusion in which everything worth saving is lost” (135).

Similarly, readers are usually surprised to find Adorno writing in defense of “mindlessness.” His hunch is that Kantian aesthetics might find its niche among the lowest art forms and not, as we more commonly expect, among the most elevated. Sometimes I encounter an object and find it beautiful, and in that moment of wonderment, my attitude towards the object is adjusted. I stop trying to discern what the thing is for or how to use it. Where a moment ago, I was still scanning its instruction manual, I am now glad for the thing just so. Perhaps I am even moved to disenroll the beautiful thing from the inventory of useful objects, or find myself doting on it even having ascertained that it’s not good for much. But then sometimes this purposiveness without a purpose is going to strike me not as beautiful, but as stupid, and Adorno’s point is that the stupid can do the work of the beautiful, that the beaux arts are, if anything, outmatched by the imbecile kind. The activities that we do for their own sake, for the idiot joy of our own capacities, are the ones that our pragmatic selves are likely to dismiss as dopey: someone you know can play two recorders at once with her nose; a guy you once met could burp louder than a riding mower; you’ve heard about people who can vomit at will and recreationally. Kantian Zweckmäßigkeit ohne Zweck enters the vernacular every time we mutter “That was pointless.” It is in this spirit that Adorno sticks up for “entertainment free of all restraint,” “pure entertainment,” “stubbornly purposeless expertise,” and “mindless artistry.” His claim, in fact, is that the culture industry is hostile to such “meaninglessness,” that Hollywood is “making meaninglessness disappear.”[4] It might be enough here to recall the difficulties that the major studios have in making comedies that are funny all the way through, preferring as they do to recruit their clowns from improv clubs and sketch shows, to promote them to the rank of movie star, and then to impound them in the regularities of the well-made plot, complete with third-act twists and character arcs, gracelessly telegraphed in the film’s final twenty-five minutes, to make up for all the time squandered on jokes, and tending to position the buffo’s comic persona as a pathology to be cured, scripting a return to normalcy whose hallmark is a neutralized mirthlessness. Hollywood’s comic plots model the supersession of comedy and not its vindication.

But Nicholas Brown is not on the side of meaninglessness. “In commercial culture,” he writes, “there are no works to critique and no meanings to be found”—and he does not mean this as praise (10). In Autonomy, there is no liberating nonsense, but only the English professor’s compulsion to discern meaning, his impatience with any art for which one could not readily devise an essay prompt. Whatever independence the book’s title is offering us, it is not the freedom to stop making sense. It feels bracing, in fact, to read a book so willing to discard the institutionalized anti-elitism of cultural studies and 200-level seminars offering to “introduce” 20-year-olds to horror movies. When Brown rolls his eyes over Avatar because of some dumb thing its director once said in an interview, or when he calls off a wholly promising reading of True Detective by announcing that it is “nothing more than an entertainment,” we need to see him as turning his back on the aging pseudo-Gramscians of the contemporary academy, all those populists without a movement, the media-studies scholars who imagine themselves as part of a Cultural Front that no-one else can see, a two-term alliance consisting entirely of Beyoncé fans and themselves; the shopping-mall Maoists of the 1990s who couldn’t tell the difference between aller au peuple and aller au cinema (71). Adorno, of course, was concerned that the desires and tastes of ordinary audiences could be manipulated or even in some sense produced. “The Culture Industry” prompts in its readers the still Kantian project to figure out which of the many pleasures they experience are authentically their own. Which are the pleasures that will survive your reflection upon them, and which are the ones that you might reject for having made you more object-like, for having come to you as mere stimulation or conditioning? The autonomy that Adorno is trying to imagine is therefore ours, in opposition to a mass media that muscles in to tell us what we want before we have had a chance to consider what else there is to want or how a person might want differently, to work out not just different objects of desire, but different modes of desiring and of seeking satisfaction. Brown, by contrast, complains repeatedly that artists more than ever have to make things that people like. The autonomy that he is after is thus not our autonomy from an insinuating system but the artist’s autonomy from us. It is no longer surprising for a tenured literature professor to disclose, in writing, that he’s been listening to early Bruno Mars records. The unusual bit comes when Brown says he doesn’t think they’re any good (24).

[PART TWO IS HERE.]

[1] See Adorno and Horkheimer’s “Culture Industry,” in The Dialectic of Enlightenment (1944/1947), translated by Edmund Jephcott (Stanford: Stanford University Press, 2002), pp. 94–136. On free time, p. 104; on laughter, p. 112; on style, pp. 100ff; Chaplin and the Marx Brothers, p. 109; Greta Garbo, p. 106; the circus, p. 114; Betty Boop, p. 106.

[2] Nicholas Brown, Autonomy (Durham, NC: Duke University Press, 2019); subsequent citations will be given by page number in parentheses.

[3] Adorno and Horkheimer, p. 94.

[4] Ibid., p. 114.

Outward Bound: On Quentin Meillassoux’s After Finitude

 

 

Il n’y a pas de hors-texte. If post-structuralism has had a motto—a proverb and quotable provocation—then surely it is this, from Derrida’s Of Grammatology. Text has no outside. There is nothing outside the text. It is tempting to put a conventionally Kantian construction on these words—to see them, I mean, as bumping up against an old epistemological barrier: Our thinking is intrinsically verbal—in that sense, textual—and it is therefore impossible for our minds to get past themselves, to leave themselves behind, to shed words and in that shedding to encounter objects as they really are, in their own skins, even when we’re not thinking them, plastering them with language, generating little mind-texts about them. But this is not, in fact, what the sentence says. Derrida’s claim would seem to be rather stronger than that: not There are unknowable objects outside of text, but There are outside of text no objects for us to know. So we reach for another gloss—There is only text, ain’t nothing but text—except the sentence isn’t really saying that either, since to say that there is nothing outside text points to the possibility that there is, in a manner yet to be explained, something inside text, and this something would not itself have to be text, any more than caramels in a carrying bag have to be made out of cellophane.

So we look for another way into the sentence. An alternate angle of approach would be to consider the claim’s implications in institutional or disciplinary terms. The text has no outside is the sentence via which English professors get to tell everyone else in the university how righteously important they are. No academic discipline can just dispense with language. Sooner or later, archives and labs and deserts will all have to be exited. The historians will have to write up their findings; so will the anthropologists; so will the biochemists. And if that’s true, then it will be in everyone’s interest to have around colleagues who are capable of reflecting on writing—literary critics, philosophers of language, the people we used to call rhetoricians—not just to proofread the manuscripts of their fellows and supply these with their missing commas, but to think hard about whether the language typically adopted by a given discipline can actually do what the discipline needs it to do. If the text has no outside, then literature professors will always have jobs; the idea is itself a kind of tenure, since it means that writerly types can never safely be removed from the interdisciplinary mix. The idea might even establish—or seek to establish—the institutional primacy of literature programs. Il n’y a pas de hors-texte. There is nothing outside the English department, since every other department is itself engaged in a more or less literary endeavor, just one more attempt to make the world intelligible in language.

Such, then, is the interest of Quentin Meillassoux’s After Finitude, first published in French in 2006. It is the book that, more than any other of its generation, means to tell the literature professors that their jobs are not, in fact, safe. Against Derrida it banners a counter-slogan of its own: “it could be that contemporary philosophers have lost the great outdoors, the absolute outside.” It is Meillassoux’s task to restore to us what he is careful not to call nature, to lead post-structuralists out into the open country, to make sure that we are all getting enough fresh air. Meillassoux means, in other words, to wean us from text, and for anyone beginning to experience a certain eye-strain, a certain cramp of the thigh from not having moved all day from out his favorite chair, this is bound to be an appealing prospect, though if you end up unconvinced by its arguments—and there are good reasons for doubt, as the book amounts to a tissue of misunderstanding and turns, finally, on one genuinely arbitrary prohibition—then it’s all going to end up sounding like a bullying father enrolling his pansy son in the Boy Scouts against his will: Get your head out of that book! Why don’t you go in the yard and play?!

• • •

Of course, Meillassoux’s way of getting the post-structuralists to go hiking with him is by telling them which books to read first. If you start scanning After Finitude’s bibliography, what will immediately stand out is its programmatic borrowing from seventeenth- and early eighteenth-century philosophers. Meillassoux regularly cites Descartes and poses anew the question that once led to the cogito, but will here lead someplace else: What is the one thing I as a thinking person cannot disbelieve even from the stance of radical doubt? He christens one chapter after Hume and proposes, as a knowing radicalization of the latter’s arguments, that we think of the cosmos as “acausal.” In the final pages, Galileo steps forward as modern philosophy’s forgotten hero. His followers are given to saying that Meillassoux’s thinking marks out a totally new direction in the history of philosophy, but I don’t think anyone gets to make that kind of claim until they have first drawn up an exhaustive inventory of debts. At one point, he praises a philosopher publishing in the 1980s for having “written with a concision worthy of the philosophers of the seventeenth century.” That’s one way to get a bead on this book—that it resurrects the Grand Siècle as a term of praise. The movement now coalescing around Meillassoux—the one calling itself speculative realism—is a bid to get past post-structuralism by resurrecting an ante-Kantian, more or less baroque ontology, on the understanding that nearly all of European philosophy since the first Critique can be denounced as one long prelude to Derrida. There never was a “structuralism,” but only “pre-post-structuralism.”

Meillassoux, in sum, is trying to recover the Scientific Revolution and early Enlightenment, which wouldn’t be all that unusual, except he is trying to do this on radical philosophy’s behalf—trying, that is, to get intellectuals of the Left to make their peace with science again, as the better path to some of post-structuralism’s signature positions. His argument’s reliance on early science is to that extent instructive. One of the most appealing features of Meillassoux’s writing is that it restages something of the madness of natural philosophy before the age of positivism and the research grant; it retrieves, paragraph-wise, the sublimity and wonder of an immoderate knowledge. In 1712, Richard Blackmore published an epic called Creation, which you’ve almost certainly never heard of but which remained popular in Britain for several decades. That poem tells the story of the world’s awful making, before humanity’s arrival, and if you read even just its opening lines, you’ll see that this conception is premised on a rather pungent refusal of Virgil and hence on a wholesale refurbishing of the epic as genre: “No more of arms I sing.” Blackmore reclassifies what poets had only just recently been calling “heroic verse” as “vulgar”; the epic, it would seem, has degenerated into bellowing stage plays and popular romances and will have to learn from the astrophysicists if it is to regain its loft and dignity. Poets will have to accompany the natural philosophers as they set out “to see the full extent of nature” and to tally “unnumbered worlds.” The point is that there was lots of writing like this in the eighteenth century, and that it was aligned for the most part with the period’s republicans and pseudo-republicans and whatever else England had in those years instead of a Left. This means that the cosmic epic was to some extent a mutation of an early Puritan culture, a way of carrying into the eighteenth century earlier trends in radical Protestant writing, and especially the latter’s Judaizing or philo-Semitic strains. The idea here was that Hebrew poetry provided an alternative model to Greek and Roman poetry: a sublime, direct poetry of high emotion, of inspiration, ecstasy, and astonishment. The Creation is one of the things you could read if you wanted to figure out how ordinary people ever came to care about science—how science was made into something that could turn a person on—and what you’ll find in its pages is a then new aesthetic that is equal parts Longinus and Milton, or rather Longinus plus Moses plus Milton plus Newton, and not a Weberian or Purito-rationalist Newton, but a Newton supernal and thunder-charged, in which the Principia is made to yield science fiction. It is, finally, this writing that Meillassoux is channeling when he asks us—routinely—to contemplate the planet’s earliest, not-yet-human eons; when, like a boy-intellectual collecting philosophical trilobites, he demands that our minds be arrested by the fossil record or that all of modern European philosophy reconfigure itself to accommodate the dinosaurs. And it is the eighteenth-century epic’s penchant for firebolt apocalyptic that echoes in his descriptions of a cosmos beyond law:

Everything could actually collapse: from trees to stars, from stars to laws, from physical laws to logical laws; and this not by virtue of some superior law whereby everything is destined to perish, but by virtue of the absence of any superior law capable of preserving anything, no matter what, from perishing.

Meillassoux’s followers call this an idea that no-one has ever had before. The epic poets once called it Strife.

Why so many readers have discovered new political energies in Meillassoux’s argument is perhaps hard to see, since the book contains absolutely nothing that would count, in any of the ordinary senses, as political thought. There are, it’s true, a few passages in which Meillassoux lets you know he thinks of himself as a committed intellectual: a (badly underdeveloped) account of ideology critique; the faint chiming, in one sentence, of The Communist Manifesto; a few pages in tribute to Badiou. With a little effort, though, the political openings can be teased out, and they are basically twofold: 1) Meillassoux says that thought’s most pressing task is to do justice to the possibility—or, indeed, to the archaic historical reality—of a planet stripped of its humans. On at least one occasion, he even uses, in English translation, the phrase “world without us.” For anyone looking to devise a deep ecology by non-Heideggerian means—and there are permanent incentives to reach positions with as little Heidegger as possible—Meillassoux’s thinking is bound to be attractive. The book is an entry, among many other such, in the competition to design the most attractive anti-humanism. 2) The antinomian language in the sentence last quoted—laws could collapse; there is no superior law—or, indeed, the very notion of a cosmos structured only by unnecessary laws—is no doubt what has drawn to this book those who would otherwise be reading Deleuze, since Meillassoux, like this other, has designed an ontology to anarchist specifications, though he has done so, rather surprisingly, without Spinoza. Another world is possible wasn’t Marx’s slogan—it was Leibniz’s—except at this level, it has to be said, the book’s politics remain for all intents and purposes allegorical. Meillassoux’s argument operates at most as a peculiar, quasi-theological reassurance that if we set out to change the political and legal order of our nation-states, the universe will like it.

Maybe this is already enough information for us to see that After Finitude’s relationship to post-structuralism is actually quite complicated. Any brief description of the book is going to have to say that it is out to demolish German Idealism and post-structuralism and any other philosophy of discourse or mind. But if we take a second pass over After Finitude, we will have to conclude that far from flattening these latter, its chosen task is precisely to shore them up, to move anti-foundationalism itself onto sturdy ontological foundations. Meillassoux’s niftiest trick, the one that having mastered he compulsively performs, is the translating of post-structuralism’s over-familiar epistemological claims into fresh-sounding ontological ones. What readers of Foucault and Lyotard took to be claims about knowledge turn out to have been claims about Being all along, and it is through this device that Meillassoux will preserve what he finds most valuable in the radical philosophy of his parents’ generation: its anti-Hegelianism, its hard-Left anti-totalitarianism, its attack on doctrines of necessity, its counter-doctrine of contingency, its capacity for ideology critique.

Adorno was arguing as early as the mid-‘60s that thought needed to figure out some impossible way to think its other, which is the unthought, “objects open and naked,” the world out of our clutches. “The concept takes as its most pressing business everything it cannot reach.” Is it possible to devise “cognition on behalf of the non-conceptual”? This is the sense in which Meillassoux, far from breaking with post-structuralism and its cousins, is simply answering one of its central questions. It’s just that he does so in a way that any convinced Adornian or Left Heideggerian is going to find baffling. Cognition on behalf of the non-conceptual turns out to have been right in front of us all along—it is called science and math. Celestial mechanics has always been the better anti-humanism. A philosophical anarchism that has thrown its lot in with the geologists and not with the Situationists—that is the possibility for thought that After Finitude opens up. The book, indeed, sometimes seems to be borrowing some of Heidegger’s idiom of cosmic awe, but it separates this from the latter’s critique of science—such that biology and chemistry and physics can henceforth function as vehicles of ontological wonder, astonishment at the world made manifest. And with that idea there comes to an end almost a century’s worth of radical struggle against domination-through-knowledge, against bureaucracy, rule by experts, the New Class, technocracy, instrumental reason, and epistemological regimes. On the back cover of After Finitude, Bruno Latour says that Meillassoux promises to “liberate us from discourse,” but that’s not exactly right and may be exactly wrong. He wants rather to free us from having to think of discourse as a problem—precisely not to rally us against it, in the manner of Adorno and Foucault—but to license us to make our peace with, and so sink back into, it.

• • •

Lots of people will find good reasons to take this book seriously. It is, nonetheless, unconvincing on five or six fronts at once.

It is philosophically conniving. There are almost no empirical constraints placed on the argumentative enterprise of ontology. Nothing in everyday experience is ever going to suggest that one generalized account of all Being is right and another wrong, and this situation will inevitably grant the philosopher latitude. Ontologies will always be tailored to extra-philosophical considerations, any one of them elected only because a given thinker wants something to be true about the cosmos. Explanations of existence are all speculative and in that sense opportunistic. It is this opportunism we sense when we discover Meillassoux baldly massaging his sources. Here he is on p. 38: “Kant maintains that we can only describe the a priori forms of knowledge…, whereas Hegel insists that it is possible to deduce them.” Kant, we are being told, doesn’t think the categories are deducible. And then here’s Meillassoux on pp. 88 and 89: “the third type of response to Hume’s problem is Kant’s … objective deduction of the categories as elaborated in the Critique of Pure Reason.”

The leap from epistemology to ontology sometimes falls short. At one point, Meillassoux thinks he can get the better of post-structuralists like so: Imagine, he says, that an anti-foundationalist is talking to a Christian (about the afterlife, say). The Christian says: “After we die, the righteous among us will sit at the right hand of the Lord.” And the anti-foundationalist responds the way anti-foundationalists always respond: “Well, you could be right, but it could also be different.” For Meillassoux, that last clause is the ontologist’s opening. His task is now to convince the skeptic that “it could also be different” is not just a skeptical claim about what we can’t know—it is not an ignorance, but rather already an ontological position in its own right. What we know about the real cosmos, existing apart from thought, is that everything in it could also be different. And now suppose that the anti-foundationalist responds to the ontologist by just repeating the same sentence—again, because it’s really all the skeptic knows how to say: “Well, you could be right, but it could also be different.” Meillassoux at this point begins his end-zone dance. He has just claimed that Everything could be different, and the skeptic obviously can’t disagree with this by objecting that Everything could be different. The skeptic has been maneuvered round to agreeing with the ontologist’s position. But Meillassoux doesn’t yet have good reasons to triumph, because, quite simply, he is using “could be different” in two contrary senses, and he rather bafflingly thinks that their shared phrasing is enough to render them identical. He has simply routed his argument through a rigged formulation, one in which ontological claims and epistemological claims seem briefly to coincide. The skeptical, epistemological version of that sentence says: “Everything could be different from how I am thinking it.” And the ontological version says: “Everything could be different from how it really is now.” There may, in fact, occur real-world instances in which skeptics string words into ambiguous sentences that could mean either, and yet this will never indicate that they unwittingly or via logical compulsion mean the latter.

Meillassoux’s theory of language is lunatic. Another way of getting a bead on After Finitude would be to say that it is trying to shut down science studies; it wants to stop literary (and anthropological) types from reading the complicated utterances produced by science as writing (or discourse or culture). Meillassoux is bugged by anyone who reads scientific papers and gets interested in what is least scientific in them—anyone, that is, who attributes to astronomy or kinetics a political unconscious, as when one examines the great new systems devised during the seventeenth century and realizes that they all turned on new ways of understanding “laws” and “forces” and “powers.” Meillassoux’s own philosophy requires, as he puts it, “the belief that the realist meaning of [any utterance about the early history of the planet] is its ultimate meaning—that there is no other regime of meaning capable of deepening our understanding of it.” The problem is, of course, that it’s really easy to show that science writing does, in fact, contain an ideological-conceptual surcharge; that, like any other verbally intricate undertaking, it can’t help but borrow from several linguistic registers at once; and that there is always going to be some other “order of meaning” at play in statements about strontium or the Mesozoic. Science studies, after all, possesses lots of evidence of a more or less empirical kind, and Meillassoux’s response is to object that this evidence concerns nothing “ultimate.” But then what would it mean for a sentence to have an “ultimate meaning” anyway? A meaning that outlasts its rivals? Or that defeats them in televised battle? What, then, is the time that governs meanings, such that some count as final even while the others are still around? And at what point do secondary meanings just disappear? What are the periods of a meaning’s rise and fall? Meillassoux doesn’t possess the resources to answer any of those questions; nor, as best as I can tell, does he mean to try. The phrase “ultimate meaning” is not philosophically serious. It does little more than commit us to a blatant reductionism, commanding us to disregard any complexities and ambiguities that a linguistically attentive person would, upon reading Galileo, discover. We can even watch Meillassoux’s own language drift, such that “ultimate meaning” becomes, over the course of three pages, exclusive meaning. “Either [a scientific] statement has a realist sense, and only a realist sense, or it has no sense at all.” It exasperates Meillassoux that an unscientific language would so regularly worm its way into science writing; and it exasperates him, further, that English professors would take the trouble to point this language out. His response is to install a prohibition, the wholly unscientific injunction to treat scientific language as simpler than it is even when the data show otherwise. It is perhaps a special problem for Meillassoux that the ideological character of science writing is especially pronounced in the very period to which he is looking for intellectual salvation—the generations on either side of Newton, which were crammed with ontologies explicitly modeled on the political theology of the late Middle Ages—new scientific cosmologies, I mean, whose political dimensions were quite overt. 
And it is definitely a problem for Meillassoux that he has himself written a political ontology of roughly this kind—a cosmology made-to-order for the punks and the Bakuninites—since one of his opening moves is to disallow the very idea of such ontologies. After Finitude only has the implications its anarchist readership takes it to have if its language means more than it literally says, and Meillassoux himself insists that it can have no such meaning.

He poses as secular but is actually a kind of theologian. It is not just that Meillassoux is secular. He is pugnaciously secular or, if you prefer, actively anti-religious. He casually links Levinas with fanaticism and Muslim terror. He sticks up for what Adorno once called the totalitarianism of enlightenment, marveling at philosophy’s now vanished willingness to tell religious people that they’re stupid or at its determination to make even non-philosophers fight on its terms. And against our accustomed sense that liberalism is the spontaneous ideology of secular modernity, Meillassoux sees freedom of opinion instead as an outgrowth of the Counter-Reformation and Counter-Enlightenment. Liberalism, in other words, is how religion gets readmitted to the public sphere even once everyone involved has been forced to concede that it’s bunk. And yet for all that, Meillassoux has entirely underestimated how hard it is going to be to craft a consequent anti-humanism without having recourse to religious language. At the heart of After Finitude is a simple restatement of the religious mystic’s ecstatic demand that we “get out of ourselves” and thereby learn to “grasp the in-itself”; the book aches for an “outside which thought could explore with the legitimate feeling of being on foreign territory—of being entirely elsewhere.” In the place of God, Meillassoux has installed a principle he calls “hyper-Chaos,” to which, however, he then attaches all manner of conventional theological language, right down to the capital-C-of-adoration. Hyper-Chaos is an entity…

…for which nothing is or would seem to be impossible … capable of destroying both things and worlds, of bringing forth monstrous absurdities, yet also of never doing anything, of realizing every dream, but also every nightmare, of engendering random and frenetic transformations, or conversely, of producing a universe that remains motionless down to its ultimate recess, like a cloud bearing the fiercest storms, then the eeriest bright spells.

No-one reading that passage—even casually, even for the first time—is going to miss the predictable omnipotence language with which it begins: Chaos is the God of Might. Meillassoux himself acknowledges as much. What may be less apparent, though, is that this entire line of argument simply extends into the present the late medieval debate over whether God was constrained to create this particular universe, or whether he could have, at will, created another, and Meillassoux’s position in this sense resembles nothing so much as the orthodox Christian defense of miracles, theorizing a power that can, in defiance of its own quotidian regularities, “bring forth absurdities, engender transformations, cast bright spells.” There have been many different theories of contingency over the last generation, especially among philosophers of history. As a philosopheme, it has, in fact, become rather commonplace. Meillassoux is unusual in this regard only in that he has elevated contingency to the position of demiurge and so returned a full portion of metaphysics to a position that had until now been trying to get by without it. Such is the penalty after all for going back behind Kant, that you’ll have to stop your ears again against the singing of angels. Two generations before the three Critiques there stood Christian Wolff, whom Meillassoux does not name, but on whose system his metaphysics is modeled and who wrote, in the 1720s and ‘30s, that philosophy was “the study of the possible as possible.” Philosophy, in other words, is the one all-important branch of knowledge that does not study actuality. Each more circumscribed intellectual endeavor—biology, history, philology—studies what-now-is, but philosophy studies events and objects in our world only as a subset of the much vaster category of what-could-be. It tries, like some kind of interplanetary structuralism, to work out the entire system of possibilities—every hypothetical aggregate of objects or particles or substances that could combine without contradiction—and thereby reclassifies the universe we currently inhabit as just one unfolding outcome among many unseen others. Meillassoux, in this same spirit, asks us to imagine a cosmos of “open possibility, wherein no eventuality has any more reason to be realized than any other.” And this way of approaching actuality is what Wolff calls theology, which in this instance means not knowledge of God but God’s knowledge. Philosophy, for Wolff—as, by extension, for Meillassoux—is a way of transcending human knowledge in the direction of divine knowledge, when the latter is the science not just of our world but of all things that could ever be, what Hegel called “the thoughts had by God before the Creation”—sheer could-ness, vast and indistinct.

He misdescribes recent European philosophy and is thus unclear about his own place in it. Maybe this point is better made with reference to his supporters than to Meillassoux himself. Here’s how one of his closest allies explains his contribution:

With his term ‘correlationism,’ Meillassoux has already made a permanent contribution to the philosophical lexicon. The rapid adoption of this word, to the point that an intellectual movement has already assembled to combat the menace it describes suggests that ‘correlationism’ describes a pre-existent reality that was badly in need of a name. Whenever disputes arise in philosophy concerning realism and idealism, we immediately note the appearance of a third personage who dismisses both of these alternatives as solutions to a pseudo-problem. This figure is the correlationist, who holds that we can never think of the world without humans nor of humans without the world, but only of a primal correlation or rapport between the two.

As intellectual history, this is almost illiterate. We weren’t in need of a name, because the people who argue in terms of the-rapport-between-humans-and-world or subject-and-object were already called “Hegelians,” and the movement opposing them hasn’t just “sprung up,” because philosophers have been battling the Hegelians as long as there have been Hegelians to fight. Worse still is the notion, projected by Meillassoux himself, that all of European philosophy since Kant must be opposed for leading inexorably, shunt-like, to post-structuralism. This is just the melodrama to which radical philosophy is congenitally prone; the entire history of Western thought has to become a single, uninterrupted exercise in the one perhaps quite local error you would like to correct, the cost of which, in this instance, is that Meillassoux and Company have to turn every major European thinker into a second-rate idealist or vulgar Derridean and so end up glossing Wittgenstein and Heidegger and Sartre and various Marxists in ways that are tendentious to the point of unrecognizability. There are central components of Meillassoux’s project that philosophers have been attempting since the 1790s, and he occasionally gives the impression of not knowing that European philosophy has been trying for generations to get past dialectics or humanism or the philosophy of the subject or whatever else it is for which “correlationism” is simply a new term. Perhaps Meillassoux thinks that his contribution has been to show that Wittgenstein and Heidegger were more Hegelian than they themselves realized. But then this, too, seems more like a repetition than a new direction, since European philosophy has always had a propensity for auto-critique of precisely this kind. Auto-critique is in lots of ways its most fundamental move: One anti-humanist philosopher accuses another of having snuck in some humanist premise or another. One philosopher-against-the-subject accuses another of being secretly attached to theories of subjectivity. And so on. For Meillassoux to come around now and say that there are residues of Kant and Hegel all over the place in contemporary thought—well, sure: That’s just the sort of thing that European philosophers are always saying.

He is wrong about German idealism. Kant, Meillassoux says, is the one who deprived us all of the Great Outdoors, which accusation seems plausible … until you remember that bit about “the starry sky above me.” This is one more indication that Meillassoux is punching air, though the point matters more with reference to Hegel than to Kant. Hegel’s philosophy, after all, turns on a particular way of relating the history of the world: At first, human beings were just pinpricks of consciousness in a world not of their own making, mobile smudges of mind on an alien planet. But human activity gradually remade the world—it refashioned every glade and river valley—worked all the materials—to the point where there now remains nothing in the world that hasn’t to some degree been made subject to human desire and planning. The world has, in this sense, been all but comprehensively humanized; it is saturated with mind. What are we to say, then, when Meillassoux claims that no modern philosopher since Kant can even begin to deal with the existence of the world before humans; that they can’t even take up the question; that they have to duck it; that it is what will blow holes in their systems? Hegel not only has no trouble speaking of the pre-human planet; his historical philosophy downright presupposes it. The world didn’t use to be human; it is now thoroughgoingly so; the task of philosophy is to account for that change. And it is the great failing of Meillassoux’s book that, having elevated paleontology to the paradigmatic science, he can’t even begin to explain the transformation. You might ask yourself again whether Meillassoux’s account of science is more plausible than a Hegelian one. What, after all, happened when Europeans began devising modern science? What did science actually start doing? Was it or wasn’t it a rather important part of the ongoing process by which human beings subjected the non-human world to mind? Meillassoux urges us to think of science as the philosophy of the non-human, positing as it does a world separable from thought, a planet independent of humanity, laws that don’t require our enforcing. But does science, in fact, bring that world about? Meillassoux hasn’t even begun to respond to those philosophers, like Adorno and Heidegger, who wanted to pry philosophy away from science, not because they were complacently encased in the thought-bubbles of discourse and subjectivity, but more nearly the opposite—because they thought science was the philosophy of the subject, or one important version of it, the very techno-thinking by which human being secures its final dominion over the non-human. Meillassoux, in this sense, is trying to theorize, not the science that actually entered into the world in the seventeenth century, but something else, an alternate modernity, one in which aletheia and science went hand in hand, a fully non-human science or science that humans didn’t control: gelassene Wissenschaft. But the genuinely materialist position is always going to be the one that takes seriously the effects of thought and discourse upon the world; the one that knows science itself to be a practice; the one that faces up to the realization that the concept of “the non-human” can only ever be a device by which human beings do things to themselves and their surroundings.
There is nothing real about a realism that offers itself only as a utopian counter-science, a communication from the pluriverse, a knowledge that presumes our non-existence and so requires, as bearer, some alternate cosmic intelligence that it would be simplest to call divinity.

(Thanks to Jason Adams, Chris Pye, and Anita Sokolsky. My understanding of Christian Wolff I take from Werner Schneiders’s “Deus est philosophus absolute summus: Über Christian Wolffs Philosophie und Philosophiebegriff.” The ally of Meillassoux’s that I quote is Graham Harman.)


Postmodernism Is Maybe After All A Historicism, Part 3

PART ONE IS HERE.

PART TWO IS HERE.

You’re going to understand De Palma’s Body Double better if you understand why Theodor Adorno liked Mahler. Somebody might have told you once that Adorno championed difficult art in general and atonal music in particular: string quartets made to skirl; the mathematically precise caterwaul of that half-stepping dozen, the series chromatic and uncanny. This isn’t exactly wrong, and it is the regular stuff of encyclopedia entries and intro classes, but it’s not exactly right either. For Adorno did not want an art entirely without subjectivity, which is what serial music sometimes suggests, a pure and as it were automatic music that would never suggest to anyone listening a link back to human utterance or expressiveness; that would never once yield a tune that someone, at least, would want to sing; a music, in fine, that was all system. What he was seeking, rather, was an art organized around antitheses, in which the conflict between subject and system would become audible; and he worried that there was more than one way an artwork could instead obliterate any sense we had of a living person struggling to come to speech within it, and he liked none of them. Traditionalism was the obvious problem: the expert mimicry of older styles, the striking of already petrified poses, the chanting of sentences already spoken. Adorno said of Stravinsky, in effect, that he was a U2 tribute band. But then a radical aesthetic can beat its own experimental path to the same deadly place, one he identified in the fully developed versions of twelve-tone music, in Webern, that is, and the late modernists of the ‘60s: serial music become oppressive because now wholly itself, without any concession to its historical rivals or predecessors, routinized and ascetic, sealed off inside its own rigors and formulae.

It is this rejection of Webern that should clarify Adorno’s championing of both Alban Berg and Gustav Mahler, which is to say both a composer conventionally classified as atonal and one typically reckoned not, the point being that each of these two absorbed into his music the opposition that musical history tries to construct only between them. Mahler and Berg can be conceptualized together as the Composers of the Break, neither tonal nor atonal, but first-one-and-then-the-other, by turns and in shifting ratios or proportions. If it’s misleading to say that Adorno was one of the great theorists of serial music, then that’s because it was this music-at-the-cusp—and not the purity of The Twelve—that he meant to recommend. At issue were compositions in which the conflict between entire aesthetic periods or modes of cultural production was openly theatricalized, and from this perspective, a composer’s starting point was irrelevant. You could fill your music with tunes, but let them curdle on occasion into noise; or, alternately, you could plunge your listeners into noise, but remind them occasionally of what tunes used to sound like. Either way, you would be staging a face-off between the entire history of human songfulness and some other, radically new aesthetic mode in which art no longer takes our pleasure as its aim and limit. And here, perhaps, is the most curious point: These last are scenarios in which either term, tonality or atonality, can count as subject and either as structure. You can say that the fine old tunes sustain us as subjects and that the mere math of the twelve-tone series recreates for us in the concert hall the experience of structure and rationalization. But you can just as plausibly say that those tunes are sedimented and mindless convention, at which point we might welcome dissonance as the opening out of the composer’s idiom—or simply as the afflicted yowl of anyone who wishes the radio would for once play something different.

We can’t make listeners choose between Mahler and Berg, because it is really easy to find Mahler in Berg. If we want to get back to Body Double, all we need to do, then, is generalize Adorno’s argument in a direction he probably wouldn’t have; to insist that antithesis, far from being the special achievement of these two Austrians, is the inevitable condition of most artworks, nearly all of which absorb into themselves piecewise the styles and conventions of various historical periods, social classes, and political tendencies. You can call this “liminal art” if you want, as long as you are prepared to add that threshold never becomes room. The struggles that a Gramscian reader thinks go on between artworks are usually reproduced one by one within those same works, which, if patiently read, will generate maps of the broader cultural fields of which they are also a part. What we can say now of postmodern art is that it is almost never wholly itself, that in order even to be recognized as postmodern, it will have to announce its own distinctiveness, marking itself off from its modernist counterparts, which it will have to after a fashion name and in naming preserve. The sentences regularly encountered in Jameson in which x artist is declared to be a postmodern revision of y modernist are thus oddly self-defeating. How often do you find yourself wanting to remind Jameson of how the dialectic works?—stammering, in this case, that one cannot name a break between two terms without simultaneously positing their continuity. If you want to lift out what was new in the movie Body Heat, having first spotted that it was, as Jameson has it, a “remake of James M. Cain’s Double Indemnity,” then you have yourself already conceded that the one was really, actually, finally a lot like the other. When we designate a work as “postmodern,” the superseded and modernist version thereof will persist, as its not-really negated shadow, and this shadow will, in turn, vitiate our sense of postmodernism as ahistorical. You can say that Body Double is a movie about other movies, but that very reliance on other films—prior films—will be a prompt to historical thinking. Postmodern Body Double preserves within itself the memory of movies that weren’t yet postmodern. But then this or something like it is going to be true of most really existing postmodernism, which we now have to reconceive as the arena of a certain fight—the showdown between the various modernisms and a postmodernism available only as ideal type.

This point is available, first, at the level of genre. There’s a remarkable moment about an hour into Body Double when we witness our hero decide to take matters into his own hands, make his own inquiries about the murder, get to the bottom of things. The spectator-actor prepares himself to assume the detective functions of classic crime narrative. And at just that moment, when the movie seems ready at last to lead us back behind the spectacle—to, you know, strike the set—it instead amplifies the pageantry by launching into a full-fledged music video—for Frankie Goes To Hollywood’s “Relax,” complete with shots of lip-synching lead singer Holly Johnson. What makes the sequence even more compelling is that the music video stands in for hardcore porn; it’s the point in the movie when the hero is trying to infiltrate a porn set by pretending to be a hired stud, and De Palma is letting FGTH’s lubricious, post-disco electro-march substitute for the obscenities he cannot show. The movie thereby directs our attention neither to porn nor to MTV, but, rather, to whatever it is that the two share—and thus to an entire set of new or newly prevalent video genres, characteristic of the last few decades and defined by their collective willingness to abandon narrative or at least scale it back to some barely-more-than-sequential minimum. From our own vantage, we would want to add, above and beyond the raunch and the Duran Duran, YouTube shorts, initially capped at ten minutes and now majestically extended to fifteen, and new-model movie trailers, which, following Jameson, deserve to be considered as a form in their own right, with their own conventions and feature-usurping pleasures.

This is what it would mean to talk about Body Double not as postmodern but as a conflict-ridden composite of postmodernism and the pop modernism of the detective story, which still thinks of itself as a device for disclosing hidden truths. The competing genres are entirely visible within the movie. And then the all-important point to be made in this regard is that the detective story more or less wins out, and not only because the movie ends with a literal unmasking, latex pulled from a face. The movie does indeed document the spectator’s inability to act, though even here its procedure is basically satirical, in a manner that depends on our memory of other heroes having once done something, a memory counterposed to which postmodernity will register not as a schizoid intensity but only as a vacuity. Check your Jameson: The movie’s parody isn’t all that blank, because its very genre provides a set of expectations against which its innovations will be judged. But even beyond this, Body Double seems dedicated to the idea that certain forms of agency remain available even in the society of the spectacle. The movie’s hero doubles himself—he is both spectator and actor—and then this pairing is itself in some sense doubled, because spectator and actor both come in a second version that we could call juridical or epistemological, and not just inactive or image-consuming. There has after all always been an affinity between the spectator and the detective, with the latter now understood as the-one-who-watches, the one who arrives on the crime scene like an apparition, pledged to leave no mark, to pollute no object, to minimize the observer effect by leaving the murder bed unmade. To this we need merely append the observation that performer-cops are also a familiar species, called “narcs” or “undercover agents,” and that acting, too, can be a form of information gathering. Body Double does to this extent grant its cipher a certain limited effectivity, within the bounds of acting and spectating, as gumshoe and mole. The once corrosive insight that the detective is like a voyeur is thus replaced by its opposite, a reminder that the detective functions might in fact survive, that epistemological and moral purpose can still be roused from within the position of the spectator.

This last is a point to be made at the level of genre as a whole. But we can make a few similar observations if we start calling out the titles of specific movies, or at least of one specific movie. For Body Double’s relationship back to Rear Window also contains its own historical argument. De Palma updates his Hitchcock in one absolutely crucial way: In the later movie, the spectator-hero is meant to see the murder, which is to say that his spectatorship has been factored in in advance. We can think of the matter this way: Rear Window was still easily explained within the usual Enlightenment paradigm of truth and knowledge, the magical version of which is the usual stuff of crime stories, in which once the solution is announced and the murderer identified, everything automatically sets itself to right: culprits march themselves off to jail, widows and fatherless children return to their business suddenly unbereaved, &c. Hitchcock had some good questions to put to that paradigm, epistemological questions, for one—about whether one really knows what one thinks one knows—and also psychoanalytic questions—about the relationship between the knower and the peeper and hence about the sneaky way in which desire rides in on knowledge’s back. De Palma, however, radicalizes this scenario by inventing a murderer who wants to be seen, a murderer, in other words, whose plan depends on the existence of a manipulated witness. The shift from Hitchcock to De Palma thus secretes a certain periodization, marking out the difference between a society in which the media exercise independent oversight functions over the government and other major actors, like corporations, and a society in which government and corporations have already reckoned the cameras into all their calculations and so incessantly stage themselves for the public, which means that watchdogs are called upon only to play an already scripted role. Body Double is really and truly a meditation on that condition, but within the narrow parameters of the thriller.

This brings us to the big point: There was always something unresolved in Jameson’s postmodernism argument, and especially in his claim that postmodern culture tends to jettison historical thinking. It’s not just that narrative forms are never going to be able to revert to some zero degree of history-less-ness, though that’s also true. The issue is rather that Jameson was making two claims that are finally rather hard to square with one another: that under names like “retro” and “vintage,” postmodernism revived the copycat historicism of the nineteenth-century art academy … and also that it wasn’t a historicism. The best chance you’ve got of making this argument work is by making it accusatory, because you have to be able to say that postmodern historicism isn’t really historical, that it is fake history, history reduced back to image or consumer good, just so many styles for the donning, as when the ‘50s mean Formica and the ‘70s Fiestaware. Sometimes that blow is going to land. But if you’re doing anything other than designing your kitchen—if you’re making a movie or writing a novel or metering out a poem—the citations you introduce will often be, not an aping farrago, but their own path to chronology, an exercise in temporal counterpoint or Ungleichzeitigkeit, a dozen arrows pointing us outside the present, and so a request that we resume the project of historical thinking only just terminated.

On Agamben’s Signatures

Let’s say you don’t believe that wholes or totalities exist. You don’t believe that people and objects inhabit underlying structures that assign to them meanings or functions. Whatever it is that is bigger than us, the space within which we move, is neutral terrain, not exactly empty, but unstriated, a field of constantly shifting singularities. It’s going to help to have a name for this space, this wire cage in which the lottery balls blow, though it’s unclear what that name is to be. There is a lot that you can’t call it, many words that, believing as you do, you are going to have to give up. You can’t talk about structure or system or any of their derivatives: There are no ecosystems, and there is no world; there are no political or economic or legal systems, no capitalism, then, no empire, nothing global. It would be safer, conceptually purer, to shut up about the state and society. There can be no talk of rules and laws, because such things either constitute structure or are assigned by it. You’ll also want to toss out any terms that refer to big blocks of time. You can start with the word “modernity.”

That people who claim not to believe in totalities routinely talk about all these things suggests only that they are not yet disbelieving with their hearts, like Christian teenagers pretending to be more badass than they really are. Yer average copy of Anti-Oedipus is, in this sense, a prop cigarette. But it doesn’t have to be that way. It is the virtue of Giorgio Agamben’s recent book on method, The Signature of All Things, to remind us what a painstaking post-structuralism can look like. And yes, this is the first thing to know about the book: that it is post-structuralist, in some wholly precise sense of that term, still, in 2008, when it was first published in Italy, and not just because its author quotes Foucault a lot. What matters is that Agamben is still actively trying to purge the concept of “structure” from his thinking; still trying to jimmy that e from his typewriter; still scanning old volumes of philosophy so he can accusingly annotate the passages where schemes sneak in unbidden; still trying to devise something to put in their place.

We can see how this works in the second essay, from which this little book takes its title, and in which Agamben asks us to start thinking again about a basic problem in structural linguistics: How does language pass from words to utterances? Or if you like: How does the mind get from inert words, archived dictionary-like in lists, to living sentences that actually carry meaning? The usual answer to that question would have something to do with rules or laws: There are rules governing how words get combined. Your mind doesn’t only know words and their definitions; it’s absorbed the guidelines for their use. But Agamben doesn’t want to say this, because the word “rules” makes language sound like a government agency. Nor are the usual alternatives much better: Any talk about the “structure of language” is going to bring in resonances of the state or capitalism or the administered world. We could try to identify the mind’s “devices for building sentences,” but that would turn language into a technology. We could wonder how words get “processed,” but that would be either bureaucratic—words as case files or credit-card applications—or again technological—words as refined sugar. Agamben is in the market for a way of thinking about language that does not go through a juridical model of laws and rules …  or a political model of the system … or a technical model of the machine.

His proposal, derived from synopses of Paracelsus and Jakob Böhme, is that we learn to think of language as magic. Magic is what will substitute for structure, in which case one synonym for post-structuralism is “the occult.” Agamben wants magical signs; this, roughly, is what he means by “signatures,” signs that aren’t just neutral stand-ins for things, tokens or pointers, but charmed symbols vibrating with their own energies, signs that have “efficacy,” “efficacious likenesses,” not marks that you write down but marks that are written across you. Every spoken sentence changes the world and is in that sense a spell or hex. This is probably the clearest instance of the “regression” that Agamben makes central to his method: “the opposite of rationalization,” he calls it. If you are serious about your critique of enlightenment, you are going to need an enchanted epistemology.

So … at least it’s not the same old anti-foundationalism—a post-structuralism, then, with new emphases and possibilities. Indeed, one of the more conspicuous features of Agamben’s reflections on “method” is that they actually add up to some pretty strong and decidedly un-skeptical claims about the nature of social reality. How one studies the world is premised on an already robust idea about how the world really is. This is clearest in the book’s first essay, which explains Agamben’s notion of “paradigms”—it would help if you could set to one side whatever you currently think that word means and let Agamben explain it for himself. He is, above all, trying to explain what Foucault had in mind when he said that the panopticon was the nineteenth century’s representative institution, or what Agamben himself wants to say when he makes the same claim, for the twentieth century, about the concentration camp. These two, the panopticon and the camp, are paradigms—not their respective eras’ most powerful institutions, at least not by any of the usual metrics, and not their most frequently encountered institutions—but the pattern or model for all manner of other agencies, and so the key to the latter’s intelligibility. To examine in detail a Regency-era prison is actually to describe five or six other institutions all at once: hospitals, elementary schools, mental asylums, army barracks, nearly any public street in Britain in 2010. The prison itself serves as a kind of extended sociological analogy, even a kind of “allegory”—the word is Agamben’s own. Everything is now like x.

You’ll be able to make up your own mind about the “paradigm”—about how useful it is as an explanatory device—if you bring into view its competitor concepts, the notions that it most nearly resembles and so means to replace. These are basically two: the symptom and the function. We could try to discover what functions prisons or concentration camps serve in the social order at large. This would require that we attempt something like a political economy of the camps, that we try to work out what it is in the modern European state or in organized capitalism that tends to produce camps. If, alternately, we called the camps a “symptom,” we would be positing not so much function as dysfunction; the camp would be the visible mark or felt sign of an underlying sociopolitical disorder, one whose pathways and mechanisms, because not available to the eye, would still have to be analytically reconstructed. Either way, if we talk about functions or symptoms, the task in front of us is to relate camps and prisons back to the underlying order that has at least partially produced them. And this is precisely the job that Agamben is now calling off. What he likes most about the notion of “the paradigm” is that it bypasses any talk of the totality or system; it spares us from having to reconstruct anything. If you call the camp a “paradigm,” you are saying that nothing “precedes the phenomenon.” Camps and prisons are “pure occurrences” that persist “independently of reference” to other institutions—“positivities,” he calls them and doesn’t blush. They are representative institutions, and they conjure up parallel institutions, but only as a string of singularities, the relationships between which are to be left, as a matter of principle, unelucidated. It isn’t even an open question, to be settled empirically, whether prisons and this or that capitalism require one another. The question is methodologically disallowed. There is pride in not asking it. His sense is that the agencies of a given historical period might congeal into a set, might adopt similar designs and follow similar procedures, might come to resemble one another, without, however, being functionally related, and the task of the social historian is only to chart the spontaneous mutation in some free-floating logic of institutions.

Here’s Agamben: “According to Aristotle’s definition, the paradigmatic gesture moves not from the particular to the whole and from the whole to the particular but from the singular to the singular.” You can attribute that idea to Aristotle, but it also sounds an awful lot like the “constellations” of Benjamin and Adorno—assemblages of singular things, not subsumed under a category or master term, but linked all the same, except only just, minimally unified, scattered fragments carefully re-collected, scraps joined with twists of wire, like an early Rauschenberg combine, the unity-of-unity-and-difference with difference dialed high in the mix. Agamben and Adorno share the idea that singularities might be linked together directly and so circumvent the abstractions that typically manhandle them. And saying as much should help us identify what is peculiar about Agamben’s thinking. For Adorno, of course, preserves the moment of the totality or the whole—he continues to speak of “capitalism” or “the administered world”—to which the constellation of singularities nonetheless provides an alternative. Hence Adorno’s in some sense entirely conventional reliance on the aesthetic: He thinks we need a better way to cognize objects and thinks, too, that art might provide it; that in the aesthetic encounter we for once apprehend objects in their singularity, without immediately subsuming them under models or formulas. In rare moments, we stop thinking like administrators and lose the names for things. The constellation is an alternative mode of cognition, a utopian counter-term, and in that sense a project, rather than, as Agamben has it, a method—a counter-systemic thinking and not a post-structuralism. The bizarre thing about Agamben—although this is a peculiarity he shares with lots and lots of other thinkers—is that he thinks that this utopian counter-term already describes our political and economic reality. It is the mistake endemic to the breed. What in Adorno remains a political task Agamben and sundry others turn into proclamation. Fired by the idea that the world should not be organized into structures and systems, they convince themselves that the world is not so organized, though where they used to take the epistemological shortcut to singularity, they are now more likely to take the ontological one: sameness cannot exist; it is existentially excluded; there is only multiplicity.

If there is a big point here, then we’ve just hit it: You can be counter-systemic, or you can be post-structuralist, but you cannot coherently be both, because once you’ve declared that there is no structure, you cannot then say you want to overturn it. Adorno thinks that a transformed world—let’s call it communism, though he wouldn’t have, as others were hogging the word—would be one in which people and objects can exist as free but linked singularities; and he thinks that we can proleptically work out the epistemology of that world-that-is-not-yet-ours, such that we can sometimes experience objects and others as though already redeemed. Agamben thinks that this utopian epistemology—the knowing of linked singularities—accurately describes the world we already inhabit, which is the society of camps and prisons.

I don’t mean to suggest that Agamben has anything nice to say about prisons and concentration camps. This is manifestly not the case. He typically presents himself as a thinker of The Catastrophe—the destruction of experience, the permanent state of exception, the generalization of Dachau, the merging of the concentration camp with everyday life, Buchenwald without end. There is, if anything, an apocalypticism in his writing, modeled again on the late Adorno and a Benjamin-about-to-die. And yet a certain utopian misdescription of the concentration camp is built into his arguments all the same, simply because he has taken the redemptive moment from negative dialectics—Adorno’s inevitably temporary reminders of how objects would appear to us once liberated from the abstractions of the exchange relation and bureaucratic reason—and locked it in place as a uniform method. The strain of this argument is often evident, as here—Agamben is trying again to sum up what he means by “paradigm”:

We can … say … that a paradigm entails a movement that goes from singularity to singularity and, without ever leaving singularity, transforms every singular case into an exemplar of a general rule that can never be stated a priori.

This is a version of what we’ve already seen: Singularities are directly joined, flush up against each other. A certain generality can be achieved, but a miraculous generality that doesn’t come at the expense of singularity, a generality without abstraction. What Agamben is saying here really isn’t all that complicated. All he means is that when you write about a prison or a concentration camp, you are writing about our general condition, but you need never exit the detail and fine grain of your description in order to make this point separately and in its generality. You can just motor on with your individualized account, immersed in the singularity of that particular institution, confident that it will stand in for other similar institutions. The problem, in this light, is the term “a priori,” which Agamben has grabbed from Kant. The rule of prisons, like the rule of camps, cannot be stated a priori. To which one would like to reply: Of course not. Of course these “rules” can’t be formulated a priori, because Agamben and Foucault are offering us a method for historical study; they are talking about historical periods, trying to identify shifts in historical experience, and historical experience is by definition not a priori. That is one of the things one knows a priori about the term “a priori.” The claim, in other words, isn’t wrong. Quite the contrary: it is troublingly evident, because definitional. It’s the sort of truth you can’t insist on without making other people wonder whether you’ve really grasped the underlying issues. We can be certain, at least, that we are not dealing with a distinctive virtue of Agamben’s method; there is no philosophy whatsoever that could deliver to us a priori knowledge of Sachsenhausen or the Alleghany County Jail. What is true of “the paradigm”—what Agamben makes his boast—is true of every other historical methodology, without exception. One suspects, then, that this sentence cannot mean what it plainly says, that Agamben wants to use the term a priori to suggest a rather different claim: not that the general historical rule can’t be stated a priori, but that it can never be stated in its generality, as an abstraction. But Agamben can’t put it that way, because in that form the claim is just false. Anything that can be said about the panopticon paradigmatically could also be said generally, as an observation about a system or set of institutions, without our even having to mention the panopticon. So that’s one way to make it seem as though you have excised from your thought the structures or totalities that have not vanished from the world: You argue the obvious in order to insinuate the wrong.