
Žižek’s Method

Triptych

THREE SHORT ESSAYS ON Žižek

2. Žižek’s Method


Žižek is above all a Gothic writer, and the admirers who approach him as though he were Louis CK or Reggie Watts are thus falling into a kind of category error. They’ve got the genre wrong, like the people who go to slasher movies and chortle every time the knife comes out. A Gothic writer: It’s not just that Žižek publishes on the kind of accelerated schedule that we more typically associate with pulp fiction or even comic books, though some still-unfriendly readers could probably reconcile themselves to his industrial tempo if they began thinking of The Monstrosity of Christ and First as Tragedy not as free-standing volumes, nor even properly as books, but simply as the latest issues in a long-running title—a single year’s worth of Slavoj Žižek’s Adventures into Weird Worlds. The first-order evidence for Žižek’s Gothicism—the cues and triggers that invite us to read his writing as a kind of Gruselphilosophie—is not hard to find: the frequent encomia to Stephen King, to whom even his beloved Hitchcock is finally assimilated; a tendency to explicate Lacan by summarizing the plots of scary movies; a persistent concern with trauma, cataclysm, and grief. Psychoanalysis’s most fundamental insight, he writes, is that “at any moment, the most common everyday conversation, the most ordinary event can take a dangerous turn, damage can be caused that cannot be undone.” So, yes, Žižek is a magnetic and slobber-voiced goof; he is also the theorist of your life where it is going to be worst, the implacable prognosticator of your distress.

But even once we’ve spotted the jack-o’-lantern that Žižek never takes off his porch, it is going to be hard to know what to do with it or how to reckon its consequences. What, after all, does it mean to say that a given philosopher is a kind of horror writer? You might be wondering, for instance, if there is a philosophical argument attached to all of Žižek’s horror-talk. It would be possible at this point to survey the philosophy canon and compile a list of concepts or excerptable positions establishing European thought’s many different accounts of terror, trepidation, and unease. Indeed, for the philosophy graduate student, the language’s fine discriminations between panic’s various grades and modes come as it were with the names of Great Thinkers already attached: Hobbesian fear, Kierkegaardian dread, Freudian Unheimlichkeit, the angst, anxiety, or anguish of your preferred existentialist. And there is nothing stopping you from reading Žižek in this manner and so walking away with yet another philosopheme, in which case you might decide that Žižek is a fairly conventional theorist of the spooky-sublime, like so: All language involves a doubling; whenever we name something, we fashion a doppelganger for it. I open my mouth, and where before there was one thing, the object, there are now two, the object and its name, and if I’m thinking clearly I need to be able to distinguish rigorously between the word “table” and the touchable, breakable, enduring-decaying, eighteenth-century Connecticut batten door upon which I am now typing. Žižek takes the position that language thus severed from its referents is always on the side of fiction, fantasy, and ideology. You can only be sure that you are in the presence of something real if this kind of doubling hasn’t taken place, if, in other words, the object hasn’t been surrounded by verbal shadows of itself.
If you can talk about something, then it is by definition untrue; it has already been translated into a kind of derealized chatter. And if it’s true, or if it’s Real—because that’s the philosopheme you are about to pocket: the Real—then you can’t talk about it or can’t talk about it lucidly and coherently. But in that case, the only things that get to count as Real are the things that resist being named—those enormities that daunt our congenital glibness—which is to say the worst things: the torsions, the tearings, the ugliest breaks. Nearly everything can get sucked into the order of language, but some few things can’t. What remains is what’s real: the unspeakable.

But perhaps this too-fluid summary is beside the point. For to call Žižek a Gothic writer is finally to say less about the substance of his arguments than about his way of making those arguments—his philosophical style or Darstellung. It is one thing, I mean, to point out that Žižek gives an account of fear, which we could reflect on and debate at the seminar table and then agree with or not. It is another, rather more interesting thing to observe that Žižek is trying to scare you—not just to explain the uncanny to you, but to raise its pimples in your armflesh: “What unites us is that, in contrast to the classic image of proletariat who have ‘nothing to lose but their chains,’ we are in danger of losing everything.” Critical theory, of course, has always been readable as a mode of Gothic writing, just another subgenre of the dark-fantastic, with Freudianism and Foucauldianism assuming their place on the bookshelf alongside vampire novels and chronicles of crewless ghost ships and other such stories of the damned. Marx describes the commodity as “phantom-like” and calls capital a bloodsucker and attributes to it a “werewolf-hot-hunger.” Freud makes of psychoanalysis a sort of ghost story and instructs his followers to conduct therapy as though it were a séance or an exorcism—a making-the-spirits-walk. In German, the other name for the unconscious is not reassuringly distanced and Latinate, but bluntly, forbiddingly vernacular. The Ego, this is to say, does not share our person with the Id—that’s not how Freud puts it. Das Ich is chained to das Es, “the Me” to “the It,” or, if you like, to It. Walter Benjamin, meanwhile, asks us to declare our solidarity with the dead. Adorno requires that you take a shard in the eye. Foucault recasts Left Weberianism as a paranoid thriller, a story about imprisonment and surveillance and the impossibility of outrunning power.
Critical theory, this is all to say, needs to be read not only as a teaching or a storehouse of oppositional arguments, but also as a historically inventive crossbreeding of philosophy and genre fiction. The Frankfurt School Reader is, in that sense, one of the twentieth century’s great horror anthologies. If we now insert Žižek into this philosophical-literary timeline, we should feel less awkward naming some of his writing’s schlockier conventions: his direct emotional appeals to the reader; his sudden juxtapositions of opposed argumentative positions, which recall less the patient extrapolations of the dialectic than they do the jump cuts of summer-camp massacre movies; his pervasive intermingling of high and low, which marks Žižek’s arguments as postmodern productions in their own right, against which the genre experiments of Freud or Benjamin will seem, in retrospect, downright Jamesian and understated and belletristic. Das Ding an sich is just about hearable as the name of a B-movie: The Thing In Itself!

But this isn’t yet to say enough. I want you to agree that the Gothic in Žižek is something more than a reasoned-through philosophical position, offered to the reader to adopt as creed or mantra. But it is also something more than a sinister rhetoric or set of literary conventions—more than a palette of gruesome flourishes borrowed from the horror classics. In Žižek’s writing, the Gothic attains the status of a method. This will need to be explained, but it’s worth it: It is a tenet of Lacanianism that things in the world have trouble cohering or maintaining their integrity; this is true of persons, but it is every bit as true of institutions or, indeed, of entire social fields. One of the great Lacanian pastimes is thus to scan a person or a piece of writing or a historical-political scene for evidence of its (her, his) fragmentation or disintegration. To the bit of Sartrean wisdom that says that all identity is performance, the Lacanians add a qualifier: All identity is failed performance, in which case it is our task to stay on the lookout for a person’s protrusions and tells and prostheses, the incongruous features that seemingly put-together persons have not been able to absorb into their specious unity. In what specifiable ways are you least like what you claim to be? Where is your Adam’s apple, because it’s probably not on your neck? Now once you get good at asking such questions of people, the challenge will be to figure out how to ask them again of the systems in which people reside. The Real—whatever lies menacingly outside of discourse—can take several different forms: Most obviously, it can name external trauma: assaults upon your person, the bullet in your belly, your harrowing. But it might also name your own disgusting desires, the ones you are least willing to own. Or it might name the totality (of empire, say, or global capitalism).
Any concept that we form of the totality is going to be a reification, of course, something theorized, which is to say linguistically devised or even in some sense made up. But the totality-as-such, as distinct from this or that concept of totality, will persist as an unknowable limit to our efforts. It will be, to revise an old phrase, a structure palpable only in its effects, with the key proviso now being that the only effects that matter are the unpleasant ones: a structure palpable only in its humiliations. The world system is the shark in the water. Again, the Real might name a given social order’s fundamental antagonisms—the conflicts that are so basic to a set of institutions that no-one participating in those institutions can stand outside them. Or the Real might name the ungroundedness of those institutions and of our personae, their tenuous anchoring in free choices and changeable practices. So if you want to write political commentary in the style of Žižek, you really only need to do two things: 1) You scan the social scene that interests you in order to identify some absurd element within it, something that by official lights should not be in the room. Political Lacanianism in practice tends to be one big game of “Which one doesn’t belong?” or “One of these things is not like the others.” And 2) You figure out how this incongruity is an index of the Real in any of those varied senses: trauma, the drive, the totality, antagonism, or the void. You describe, in other words, how the Unspeakable is introducing anomalies and distortions into a sphere otherwise governed by speech.

So that’s one version of Žižek’s Gothic method. There are thus three distinct claims we’ll need to be able to tell apart. We can say, first, that Žižek likes to read Gothic fiction and also the eerier reaches of science fiction—and that’s true, though he precisely does not read them the way a literary critic would. It has always been one of the more idiosyncratic features of Žižek’s thought that he is willing to proclaim Pet Sematary a vehicle of genuine analytic insight or to see in horror stories more broadly a spontaneous and vernacular Lacanianism, in much the same way that old-fashioned moral philosophers used to think of Christianity as Kantianism for people without PhDs. To this observation we can easily add a second: that Žižek himself often reads as though he were writing speculative fiction, as in: You are not an upstanding member of society who dreams on occasion that he is a murderer, you are a murderer who dreams every night that he is an upstanding member of society—though keep reading in Žižek and you’ll also find: torture chambers, rape, “strange vibrating noises.” And yet if we’re taking Žižek at his word, then the point is not just to read Gothic novels, nor yet to write them. We must cultivate in ourselves, rather, a determination to read pretty much everything as Gothic. Once we’ve concluded that horror fiction offers a more accurate way of describing the world than do realist novels—that it is the better realism, a literature of the Real—then the only way to defend this insight will be to read the very world as horror show. It will no longer be enough to read Lovecraft and Shirley Jackson. The Gothic hops the border and becomes a hermeneutics rather than a genre. Anything—any poem, painting, person, or polity—will, if snuck up upon from the right angle, disclose to you its bony grimace.

This approach should help us further specify Žižek’s place on the philosophical scene. It is often complained that Hegelian thinkers—Adorno, Wallerstein, Jameson—subdue their interlocutors not by proving their arguments false but precisely by agreeing with them. Going up against a Hegelian, you find yourself less refuted than outflanked—absorbed, reduced, assigned some cramped nook in the dialectical apparatus. That’s a point we can now extend to Žižek, in whose writing the Gothic gets weaponized in precisely this Hegelian way. Horror becomes a device, a move, a way of transforming other people’s arguments. When Žižek engages in polemic with some peer, his usual tack is not to controvert his adversary’s arguments, but rather to improvise an eerie riff upon them, to re-state his opponent’s claims in their most unsettling register. You can call this the dialectic, but you might also call it pestilence. Žižek infects his rivals with Lacan and forces them to speak macabre versions of their core positions: undead Heidegger, undead Badiou, undead Judith Butler.

Three of these fiends we will want to single out:

Žižek summons zombie Deleuze. It is often remarked that critical theory in the new century has taken a vitalist turn. The trials-by-epistemology that were the day-to-day business of the long post-structuralist generation have given way to the endless policing of ontologies. Graduate students accuse each other of possessing the wrong cosmology or of performing their obeisance to the object with insufficient fervor. Deleuze and Guattari can be corrected only by those proposing counter-ontologies. Claims get to be right because Bergson made them. You are scared to admit that you wrote your whole first book without having read Spinoza. Nietzsche is still quotable, but only where he is most ebullient and alpine. You ask which description of the stars, if recited consequently to its last rhyme, will reform the banking system and unmelt the ice caps. Klassenkampf seems less interesting than theomachia. What is less often remarked is that vitalism has only returned to the fore by consenting to a major modification—a fundamental change in its program and priorities—only, that is, by agreeing not to grant precedence to those things we used to call “living.” The achievement of the various neo-vitalisms has been to extend the idiom of the old Lebensphilosophie—its egalitarian cosmos of widely shared powers, its emphasis on mutation and metamorphosis—to entire categories of object that vitalists used to think of themselves as opposing: the inanimate, the inorganic, and the dead.
It is in this sense misleading to call Deleuze a Spinozist without immediately noting that his Spinoza has been routed through La Mettrie and the various Industrial Revolutions and the Futurists, which makes of schizoanalysis less a vitalism than a profound updating of the same, such that it no longer has to exclude the machine—a techno-vitalism, then, for which engines are the better organisms, and which takes as its unnamed material prompts epochal innovations in the history of capitalism itself: the emergence of the late twentieth century’s animate industrialisms, flexible manufacturing and biotech, production producing and production produced.

So that’s one vitalism of the unliving, but there are others. Jane Bennett claims for her ontology the authority of her great lebensphilosophische forebears—Spinoza, Bergson, Hans Driesch, Bakhtin—and yet calls matter “vibrant” rather than “vital,” because she wants her list of things living and lifelike to include national electricity grids and the litter thrown from the windows of passing cars. Bennett is trying to imagine a United States that has become in a few key respects more like Japan—an America in which Midwesterners, possessed by an “irrational love of matter,” hold funeral services for their broken DVD players and pay priests to bounce adzuki beans from off the hoods of newly purchased trucks. The phrase “vibrant matter” might hearken back to William Blake’s infinite-in-everything, but Bennett uses it mostly to refer to the consumables and disposables of advanced capitalist societies: to enchanted rubbish dumps and copper tubing and other such late-industrial yōkai. The task, again, is to figure out how to be a vitalist on a planet without nature—a pantheist of the anaerobic or Spinoza for the Anthropocene. Bennett herself says that what interests her is above all the “variability” and “creativity” of “inorganic matter.”  In that context, the achievement of the adjective “vibrant” is to recall the word “vital” without entailing it: not alive, merely pulsating; not vitalist, but vitalish.

What we can now say about Žižek is that he offers his own, rather different way of dialectically revising the older vitalisms. His point is that most people already happen upon the cosmic life force—in their everyday lives and without special philosophical tutoring—and that such encounters are, on balance, terrifying. The élan vital is not your iPod’s morning workout mix; nor is it some metaphysical energy drink. It is the demiurge that makes of you “a link in the chain you serve against your will”—the formulation is Freud’s—“a mere appendage of your germ-plasm,” not life’s theorist and apostle, but its stooge and discardable instrument. Psychoanalysis is the school that takes as its starting point the repugnance that we properly feel towards life—a vitalism still, but one with all the judgments reversed, a necrovitalism, in which bios takes on the attributes that common sense more typically associates with death, its nullity, above all, and its blind stupidity. One of Žižek’s favorite ways of making this point is by reminding you of how you felt when you first saw Ridley Scott’s Alien—movie of cave-wombs and booby-trapped eggs, of male pregnancies and forced blow jobs, which ends when the undressed woman finally lures from his hidey-hole the giant penis monster, the adult alien with the taut, glossy head of an erection. But we might also think of the matter this way: In the early 1950s, Wilhelm Reich—the magus of western Maine, Paracelsus in a lab coat, the ex-Freudian who thought he could capture the cosmic life force in shoeboxes and telephone booths—organized something he called the Oranur Experiment. Reich had by that point begun styling himself the counter-Einstein, foil and counterweight to the Nobel Laureate of Dead Cities, dedicated to building the nuclear age’s new and sorely needed weapons of life. 
He had to this end procured a single needle of radium; the idea was that he would introduce this shaving of Nagasaki into a room supercharged with élan vital so that he could observe the cosmic forces of death and the cosmic forces of life fighting it out under laboratory conditions. It did not go as he’d planned. Reich panicked when he discovered, not just that the radium was in some sense stronger, but that the radioactivity had contaminated and rendered malevolent the compound’s orgone. The cosmic life force hadn’t been obliterated; it had been turned, made sinister, recruited over to do the work of death. Žižek, we might say, is the theorist of this toxic vitality; the one who thinks that orgone was bad to begin with; the philosopher of rampant and metastatic life.


Žižek summons zombie Levinas. It might be more precise to say that Žižek summons the zombie Other or the Neighbor-Wight. Either way, poring over Žižek’s response to Levinas is your best chance at learning how to replicate his achievement—how, that is, to turn philosophers you dislike into your reanimated thralls. Derrida delivers the funeral oration; Žižek returns with a shovel later that night. The spell you will read from the Lacanian grimoire has three parts:

-First, you seek out the moment in your rival’s system where his thinking is already at its creepiest. Chapbook summaries of Levinas often make him sound like a fairly conventional European moral philosopher, as though he hadn’t done anything more than cut a new path, doddering and roundabout, back to the old Kantian positions about the dignity and autonomy of other people. It is easy, I mean, to make Levinas sound inoffensive and dutiful. The wise man’s hand silently cups the chin of a stranger. It will be important to insist, then, that ethics-as-first-philosophy harbors its proper share of sublimity or even of something akin to dread. We know that Levinas’s first step was to adjust Husserl’s doctrine of intentionality: So consciousness is always consciousness of something—sure enough. And all thinking is directed outward; it cannot not refer—granted. But intention, even as it fans away from me in a wide, curving band, will meet obstacles or opacities, and it is by fixing our attention on these stains in the phenomenological field that Levinas develops what he himself calls “a philosophy of the enigma”—a kind of anti-phenomenology in which thinking begins anew by giving priority to what does not appear and in which it falls to me to sustain and shepherd this strangeness I have just discovered in the Not-I. This is a program whose uncanny and un-Kantian qualities we could restore only if we agreed to set aside Levinas’s own undarnably worn-out language—alterity, the Other, otherness—and to put “the Alien” in its stead: an ethics of the Alien would ask us to look upon the face of the Alien so that we can better understand the tasks of being-for-the-Alien. For current purposes, what we’ll want to keep in mind is that Žižek has no beef with this Levinas. He agrees after a fashion with the doctrines of alterity and can easily translate their claims about the obscurity of other people into Lacanian observations about the modes of appearance of the Real.
But again, it’s not the argument that matters; it’s the method: Žižek has to find at least one point of agreement with Levinas, because that’s how the zombie hex gains access to its mark.

-So that’s the first step. You make a point of agreeing with your rival by finding that one argument of his that is already pretty occult. The next step, then, is to show how he nonetheless runs away from the creepiness he has conjured. Žižek’s complaint against Levinas is easily summarized. He thinks that the ethics of alterity, far from demanding difficult encounters with other people, encourages me to keep my relationship to others within strict bounds—to delimit, attenuate, and finally dull such encounters. Totality and Infinity is the handbook for stage-managing a counterfeit otherness, as a moment’s reflection on two of the words we most associate with Levinas should suffice to show. Who, after all, are the people who routinely allow themselves to be “caressed”? A Levinasian ethics takes as its paradigmatic others people with cheeks at the ready: lovers and children and hospice patients. The attitude it means to cultivate in us is accordingly amorous or avuncular or perhaps candy-striping. The moral person is the one in a position to dandle and cosset. The language of “the Neighbor,” meanwhile, forfeits even the slight provocations of the word “Other,” making strangers proximate again, returning outlanders to their position of adjacency. Neighbors aren’t the ones who draw out of you your hitherto unsuspected capacities for righteousness. They are the ones-to-whom-you-loan-cordless-drills, the ones-who-could-afford-to-buy-on-your-block. Psychoanalysis, then, is where Žižek would have us look for a philosophical program that does not housebreak the Other in this way, though the phenomenologists, if they are to follow him there, will have to agree to reinstate the entire, outmoded metaphysics of appearance v. essence, since those who go into analysis are consenting to set aside public facades and facile self-perceptions and are learning instead to speak the secret language of hidden things. 
The more-than-Levinasian task, at any rate, would be to find a way to live alongside that person, the person whose unspoken desires you would doubtless find ugly. Other people would terrify you if you knew them well—that is the most remorseless, Freudian plain speech—and it is in the dying light of that claim that Levinas’s thinking looks suspiciously like an excuse not to know them. A psychoanalytically robust account of Otherness would therefore have to reintroduce you to the people next door, that “inhuman” family with whom you now share a hedge, where by inhuman Žižek means “irrational, radically evil, capricious, revolting, disgusting.” Can you hew to the ethics of neighborliness even when a vampire buys the bungalow across the street? Are you willing to caress not just an unfamiliar face but a moldering one? Methodologically, the point we will not want to miss is that Levinas now stands accused on his own terms of having replaced the Alien with the Other, of having persuaded you to stuff your ears against your neighbors’ shenanigans, of having evinced once again what he himself once called the “horror of the Other which remains Other.” We put up with other people as long as they put up a face. And here, finally, is the portable technique, which you can bring to bear against any theorist and not just against the radical ethicists: When you read a rival philosopher, you will want to take whatever creepy argument he already proposes and find a way to make it a whole lot creepier. That will be your chance to conduct a kind of body swap, to replace the philosopher with a more consequently unpleasant version of himself.

-So that’s the second step. Step three is: You welcome your rival into the army of the dead, making sure that he realizes that he is just one monster among many such. Here’s where the hoodoo gets tricky. A Levinasian ethics presents itself to us as intimate, a thought nestled between two terms, Me and the Other, where the latter means “the neighbor and his mug at strokable distance.” And yet the term “Other” is incapable of this kind of grazing approach; it is barred in its very constitution from ever rubbing noses with us. For the word indicates no particular second person but only the anonymous and shrouded Autrui. If I speak only of “the Other,” with no further specification, I could be referring to anyone but me. The concept produces no further criterion and calls no-one by name. Behind its sham-individuation there thus lurks the mathematical sublimity of the crowd, impersonal and planet-filling. At this point you have two options: You can decide that the ethics of alterity is ineffectual because self-consuming in this fashion, claiming to preserve the irreducible strangeness of the other while in fact washing such peculiarities away in a bath of equal and undifferentiated otherness. The philosophical system’s organizing term is, as ever, what betrays it. Alternately, you can decide that a Levinasian ethics can survive only by generalizing itself, by accepting its own faceless abstraction as a prompt and so by agreeing to become categorical. If we follow this second route, we will have to say without blushing that Levinas’s thought as it has come down to us was already characterized by a pressure, irregularly heeded, to all-but-universalize. The term Other directs my moral concern recklessly in all directions, sponsoring a universalism to which I am the only exception—a humanism minus one.

But then it should be easy to add the subtracted one back in. It should be easy, I mean, to get the Me to take its place among those many indistinct others and thereby to make Levinas’s universalism complete. It will be enough, in fact, to call to mind the basic dialectical idea that we do not cognize objects singly, but only relationally or in constellations. This means, among many other things, that the Me and the Other strictly imply one another. If my action in the world didn’t reach a certain limit, if I didn’t routinely knock into other objects and persons, if these latter didn’t reliably humble me, then I wouldn’t even have a sense of myself as a Me, which is to say as something that does not, in fact, coincide with the world. But then the Other and the Me are not fixed positions; they are conceptually unstable and even in some sense interchangeable. I can obviously switch places with the other; I am other to the Other, who, in turn, is a Me in her own right. As soon as I concede this, I have discovered my own alien-ness. Second, and as an intensification of these Hegelian reciprocity games, we can collapse the two terms into a single formation: not the Me and the Other, but the Other-Me or the self as foreign element. This can be managed a few different ways. My experience of becoming—of my own changeability—renders me other to myself, reconstructing the ego as watercourse or Heraclitean series. I do not shake the Other’s hand as though I didn’t know what it was like to be a stranger. But we can also travel a more direct psychoanalytic path to the same insight, simply by noting that I am not transparent to myself, not in charge of my own person, that my own desires and motives are basically incomprehensible to me—that, indeed, I am just another dimness or demonic riddle.

And with that, the terms generated by Levinas’s philosophy mutate beyond recognition. This, in case you missed it, is the culminating step in Žižek’s method: If when reading philosopher X, you hold fast to what is most Gothic in X’s thinking—if you generalize its monstrosities and don’t exempt yourself from them, if you promote Unwesen to the position of Wesen—then other core features of X’s system will break and buckle and shift, until it no longer really looks any more like X’s thinking. To stay with Levinas: The ethics of alterity rotates around a single inviolable prohibition—that I not conclude that all egos are more or less the same; that I not propose a theory of subjectivity that would hold equally for all people; that I not stipulate as the precondition of my welcoming another person that he or she be like me. But if the terms “self” and “other” cannot be maintained in their separateness—and they can’t—then this injunction will be lifted, and Žižek can improvise in its stead a paradoxical argument in which alterity becomes the vehicle of our similarity, in which I realize I am like others in their very otherness, in which the Hegelian homecoming comes to pass after all, but on the terrain of alienation and not of the self, in which what establishes our identity is not some human substance, but our inevitable distance from such substance—which distance, we, however, share. There thus arises the possibility that I will identify with the Alien, not in his humanity, but in his very monstrosity, as long as I have come to the conclusion first that the world’s most obviously damaged people only make public the inhumanity that is our common portion and my own clandestine ferment. And out of such acts of identification—and not of pity or tolerance or aid—Žižek would build, in the place of Levinas’s philosophy à deux, a global alien host or legion of the damned. Radicalize what is creepiest in your rival, in other words, and then make it universal. 
This brings us to Episode Number Three, in medias res, as they say: already in progress…


Žižek summons the zombie multitude. I want to point out two more instances of this horror-movie universalism—two more cases, that is, in which Žižek takes one of radical thought’s settled positions and contagiously expands its orbit. What you’ll want to pay attention to is how each position leads to the same conceptual destination, which is the undead horde—Levinas has just led to the horde; and now Rancière will lead to the horde, and then Agamben will, too, like characters in a Lucio Fulci movie getting picked off at twenty-minute intervals. The horde: We’ll want to consider the possibility now that the cadaver-thronged parking lot is a post-political society’s last remaining image of the unmediated collectivity, the term that, having driven from consciousness the gatherings and aggregates posited by classical political philosophy—the assembly, the demos, the popolo, the revolutionary crowd—must now be asked to absorb into itself the indispensable political energies we used to expect from these latter. Can we get the walking dead to mill about the barricades?—that is another of Žižek’s driving questions. Will they know to throw rocks?

One path to the horde begins with Rancière’s idea that politics proper belongs to “the part that has no part”—which is the philosopher’s oxymoronic term for the disenfranchised, those who are important to the system’s functioning but who don’t in the usual sense count, who don’t get to take part and who have no party. Rancière’s claim—and sometimes Žižek’s, too—is that only the agitations of such people (refugees, guest workers, the undocumented) so much as deserve to be called “politics,” because it is only at a system’s roiled margins that basic questions about a polity can be raised, questions, that is, about its scope and constitution. Anything that happens in the ordinary course of government takes the state’s functioning for granted and so isn’t really about the polis—is not, in that sense, “political.” On the face of it, this is a terrible idea. Rancière’s position is anti-constitutional and anti-utopian and indeed committed to failure. My actions only get to count as political provided the state does not recognize me, and as soon as I succeed in convincing someone in power to look me in the eye or indeed to act on my behalf, I cede my claim to be a political actor and become just another pawn of policy makers and the police. There is, in this sense, no such thing as getting the state right; every political breakthrough is actually a setback. To frame your program in terms of “the part that has no part” is to show contempt for those parts-with-parts, absolutely any parts, even though some of these portions will be quite meager. This has made Rancière ill-equipped to talk about what we might call the part that has little part: the native-born working classes, the rural poor, the jobless, the ineffectually enfranchised.

So can Rancière’s thinking be Gothically universalized? It is one of the more attractive features of Žižek’s thinking that he corrects Rancière at just this point and in just this fashion, insisting on the instability of the conceptual pair around which the politics of parts usually turns, inclusion-exclusion, as in: Politics is only ever out there; here there is only administration. That last sentence turns out to be untenable, for even the part that has no part is not simply excluded. It is one of radical thought’s lazier habits to treat the word “margins” as though it meant the outside when in fact it means the space just inside the door, the page’s extremity and not the empty air that surrounds the lifted book. More: Even the word “exclusion” never refers to simple separation or distance. You have to have had some contact with a system for me to be able to say that you are excluded from it; the very concept depends on some thread or temporary node of connection. The gauchos of the Uruguayan plains may not be represented in the Danish Folketing, but they aren’t excluded from it either. “Exclusion” contains the idea of “inclusion” within itself and is not the latter’s simple opposite. Genuine apartness would require a different concept. This observation will allow Žižek to fold the old proletariat back into the category of the part that has no part. Working people and refugees are actually in similar positions of inclusion/exclusion: the grinding, mutilating condition of being swept up in a system whose inner workings nonetheless seem closed off and impossible to fathom.

One way to think about what Žižek is doing here would be to say that he is trying, within the terms dictated by contemporary European philosophy, to get us to shake off our gauchiste habit, picked up over the social democrat decades, of seeing European workers as basically First World and coddled and deleteriously white. He wants to help us retrieve “a more radical notion of the proletarian”—where more radical means not “more militant,” at least not in the first instance, but merely “more abject.” If I say now that the doctrine of we-all-are-refugees might hold the key to the emergence of a new proletariat, you might object, mildly, that this new proletariat sounds a lot like the old one—the really old one, the one that didn’t yet drive oversized Buicks, the working class stump-armed and black-lunged and blind. There is something new, however, about Žižek’s version of the wretched ones, which is that he’s pretty sure that they include us, the people who actually read his books, the people who know who Žižek is: the second-year university students, the middle-aged art historians, the underemployed web designers, the gap-year backpackers. “Today, we are all potentially homo sacer”—and then that’s a second, unusually clear instance of his Gothic universalism right there, now keyed to Agamben, who, once whammied, will produce an image of the concentration camp victim as Everyman or bare life as Ordinary Joe. To be a new-model proletarian is simply to know that your life, if not yet ghastly, is nonetheless exposed and insecure—wholly vincible. In place of Hardt and Negri’s squatters and street-partiers and Glo-Stick communards, Žižek means to fill the streets with a multitude less than human. It might take a minute for this idea to sink in. The new proletariat will be built out of homines sacri.  
Žižek’s thrilling and preposterous idea is that having failed to organize fast-food chains or big-box retail, we might yet organize ourselves on the basis of la nuda vita—that the Muselmänner might form a union and yet remain Muselmänner, that those who have lost even the instincts of self-preservation, who have stopped swatting the flies that lay eggs in their open sores, might be made to see the point of collective bargaining.

It has become almost obligatory over the past decade to argue that fear lives on the Right, that terror is a means of social control, that one could defeat Al Qaeda and the Patriot Act at once if only one would resolve to be unafraid, if only we could make ourselves okay with not being safe. It is against the Left machismo of those arguments, so many rehashings of the old Spinozist idea that “fear makes us womanish,” that Žižek’s accomplishment over the last decade can be measured, as he has set about to reclaim terror as one possible platform for emancipation and revolutionary equality, to help us imagine a communism for the screamers and the tearful and the scared. Not that Žižek is offering to make you any less frightened. He will not give you refuge or grab your hand or quietly sing nonsense lyrics into your ear. A politics of militant fear does not begin by offering solace. Quite the contrary: Our task will be to communicate fear and to amplify it. You have a few different options as to how you might go about this. You can issue reasoned admonitions, explain to us soberly about the threats and the thresholds and the no-going-back: two degrees Celsius, go ahead tell us again. Or you can make us feel your own foreboding, as also the grief that is fear’s come-true aftermath: Show us the photographs of Katrina graffiti—“Destroy this memory,” one picture records, in white paint on a flooded brick house, in good, teacherly cursive, no less. But it has been left to Žižek to propose a radically darkened politics, a politics that, no longer content to protest the ongoing catastrophe, has taken the disaster into itself and begun to root for ruin. We are the ones they were supposed to be afraid of. In George Romero’s Land of the Dead, the zombies are for once oddly purposeful, these animate corpses with faces torn into tragic masks, whose first, returning memories are of what it was like once to work and when not working march. You are probably already hurting. 
A just politics is going to hurt a whole lot worse.

Land of the Dead

MORE SOON…

Three Essays on Zizek

Zizek Marat Joseph

•1. Žižek’s Argument

I’d like to put two questions to Slavoj Žižek, though the second question might turn out to be the first one wearing different-colored leotards. It would help, I think, if I explained first what I take to be Žižek’s core argument—the problem and puzzle driving his theoretical overproduction—both so that he can tell me if I’m wrong and because readers of Žižek are sorely in need of a map. It’s not that he never says what he is after; the problem is, rather, that the centrality of this one issue tends in his writing to get lost amidst the riffs and the endlessly re-explained Lacanianisms and the compulsive recording of everything he’s watched this year on hotel room televisions. It is possible to read an awful lot of Žižek and still not realize that he has a point. Indeed, one sometimes gets the feeling that the only people who understand him less well than his opponents are his enthusiasts.

So here, for easy reference, is his animating claim: that every political formation, in addition to generating the law, generates a particular more or less expected way of violating the law. Any set of prohibitions comes with its own accustomed transgressions, a particular way in which Law-in-the-abstract allows itself to be broken. Different laws produce different lawbreakers or different modes of rebellion. And what keeps us attached to a given political order—what makes us loyal to it—is not the law, but the transgression. We like living in a particular society because of the illicit pleasures that it affords us—because, that is, it grants us a particular set of turn-ons, and it does so not by openly trading in these latter, but precisely by seeming to disallow them. Following the law is one path to subservience; breaking it is a second. Transgression, in fact, produces in us the more powerful political obligation; it is the device by which a governing order takes hold of us for good. And Žižek, by making this argument, is merely tracking back to Freudian ground zero, to the idea that all of our relationships carry a libidinal charge or that desire and satisfaction are permanent features of our psychic lives—ineliminable, not to be overcome. The idea, further, is that law by itself couldn’t possibly work; the law alone can never be lawlike in its effects, for if some authority genuinely denied us all pleasure, we would take measures to abolish it. But authority doesn’t deny us pleasures; it creates new ones and can become, indeed, just another target for our ardor.

Enjoyment, to bottom-line it, is not the heroic alternative to discipline and convention. It is discipline’s sidekick and in some sense the authentically nomian term—the secret bearer of law’s regularities and compulsions. The libido is the vehicle of our subjection and thus the answer to why most of us, even those of us in the habit of striking defiant poses, don’t seek fundamental political changes or seek them only half-heartedly: Change would disrupt whatever erotic bargain we’ve quietly worked out with the prevailing order. Žižek’s way of putting all this is to say that every political system—every code of law or tablet of rules—comes with an “obscene supplement”; he also calls it “the inherent transgression.” And his single greatest talent as an intellectual is to survey some corner of the social scene and find the smudge of obscenity that holds it together, to smoke out its anchoring enjoyment, to help you see how people are getting off on things that they don’t seem to be getting off on.

That’s a pretty Calvinist skill as skills go. And, indeed, it is the asceticism of Žižek’s position, so unlike the prevailing tenor of radical philosophy, that we will want to underscore. In 1934, Wilhelm Reich, having recently fled to Denmark from Berlin, wrote an essay trying to make sense of the epochal victory in Germany of the leather-jacket Right. Why had the German Left failed to stand up to the fascists? How had they ceded so much ground? Reich began that essay by saying that Marxists were going to have to spend less time thinking about structure and system and historical process and more time thinking about “the subjective factor in history”—less time improvising mini-lectures on monopoly capitalism and the pseudo-democratic ruses of the bourgeois state and more time talking to ordinary people about how they feel and what they might do to feel better. The most remarkable section of the essay comes when Reich begins quoting Joseph Goebbels, not in order to document yet another National Socialist inanity, but in order to make clear that the fascists were onto something. Their success meant, by definition, that they had understood something that the Left had failed to grasp. “National Socialism, [Goebbels] said, was not a puritan movement; the people should not be robbed of their joie de vivre; the aim was to achieve more life affirmation and less hypocrisy, more morality and fewer moralistic attitudes.” This is what socialists should have been saying, but perversely weren’t. Shame sits ever on our lips. Reich perceived a basic contradiction in the political constellation of the early 1930s: The fascists successfully appealed to people at the level of pleasure and desire, even while implementing punishment. The socialists, meanwhile, had big plans for emancipating their fellows in several different senses at once, and yet comported themselves according to the petty morality of the well-cushioned parlor.
Fascism, in short, broke through in Germany because it was a lot more fun—it seemed to run on expanded erotic energies—whereas the Left, as ever, preferred to educate its potential comrades in the gross national product of India while asking them pointedly whether they fully understood that children made their shoes. Marxists, Reich concluded, needed to buy some guitars; they would have to write some better tunes.

It is this Reichian program, moreover, this determination to out-merry-make the Right, that Fredric Jameson has been trying to keep alive when arguing that Marxism must continue to strut down “the path of the subject,” that it must learn better ways to stimulate the “desire called Marx” or the “desire called utopia.” “If ideology … is a vision of the future that grips the masses, we have to admit that … no Marxist or Socialist Party anywhere has the slightest conception of what socialism or communism as a social system ought to be or can be expected to look like.” It’s just that Jameson, who was born eight months before Elvis Presley, came of age alongside the rock’n’roll Left that Reich seemed in many respects to have blueprinted, which means that his repeating of Reich’s complaint in the 1970s and ’80s has to be read as an implicit reckoning with the counterculture’s limitations, an admission that even the newly larkish Left—the Left naked and capering—had been no match for General Electric and the Nixon administration. It’s not that Reich was wrong, and yet the socialist libido was still going to need something more than a Bo Diddley beat—that’s one version of Jameson.

And of course it’s not just Jameson who has been making this case. This is one of the things that makes Žižek so important—that he hasn’t been copycatting the inherited Reichian line, and so offers an alternative to Jameson and Deleuze and the many barrelsworth of Reich and Marcuse that really existing queer theory has smuggled past its Foucauldian sentries, an alternative, that is, to the no-longer-new Left’s program for the endless expansion and intensification of sexual life. Žižek is a Freudian, to be sure, and a man of the Left, but he is not a Left Freudian, if we take that term still to refer to one who mistakes his testicles for the working class and who regards the Id as a buddy and a pet and the smothered wellspring of his creativity. So Žižek is not like Jameson and Deleuze, but this observation is itself easily misunderstood. For his version of psychoanalysis does not want you to give up on your unorthodox desires—or at least not on all of them. Quite the contrary. Žižek’s sense is that we almost all engage in unusual behavior—sexual or at least eroticized behavior—to some degree. The problem is that nearly all of that behavior takes place with reference back to authority or to the law. We develop most of our sexual quirks as a way of taking a position with regard to the Master; we carry some notion of authority around in our heads, and the ways in which we like to get off are almost always predicated on what we believe to be true about the people in charge. So Žižek does indeed reject as facile the usual anti-authoritarian thrust of radical psychoanalysis, convinced as it is that we can forthrightly strip down and hump our way to emancipation, but he does so only to reinstate that anti-authoritarianism in another, more difficult place.
Psychoanalysis in this mode doesn’t care what you get up to—it really doesn’t care how you take your pleasures—provided that these make no reference to the Master, provided, that is, that they aren’t even a rebellion against him. And to that extent there is one sense in which Žižek’s Lacanian-Hegelian system, otherwise committed to the ideas of negation and the lack, is fully invested in establishing a positivity or simple fact. Your task is to figure out the peculiar way you happen to desire when authority is entirely removed from the picture, when, that is, you no longer take the Master to be peeping from behind the curtains.

This, then, is the reason to go into analysis: The analyst has to be on the lookout for the one thing you desire—or the one way you desire, the one way you organize your satisfaction—that is not relational, not a position over and against bosses and fathers. Such is the knack that any good analyst has to develop: the ability to discriminate between Master-directed kink and kink that is truly your own. The bargain that analysis will make with you is that any enjoyment that survives the sundering of your psyche from authority is yours to keep. It’s just that most of your libidinal habits are not going to survive that sundering—or will be transformed by it into new ones. Žižek, following Lacan, calls any enjoyment thus liberated a sinthome, which, in the original French, isn’t anything more than an arch misspelling of and murky pun upon the word symptom. The Lacanian point is that the enjoyment that you take home with you at the end of a successful course of psychoanalysis is likely to look like and sound like a symptom—fevered, morbid, a “deviation from normal functioning,” the clinicians like to say. But it won’t actually be a symptom, or it will be a symptom with a difference, a symptom that is not a symptom. Analysis, in other words, aims not to cure you or return you to normal functioning, but to help you find your way to a happier disorder. Žižek’s hunch is that most people will leave analysis freakier than when they went into it.

So can we tell the difference between the raunch that unshackles us and the raunch that fixes us in place? This is one of the more pungent questions that a political psychoanalysis prompts us to ask. For Wilhelm Reich was, of course, in one sense absolutely correct. It is not hard to agree that fascism succeeded in large part by devising new gratifications for its adherents. And perhaps it was only predictable that the Western Left would decide to take Reich’s advice and compete on that ground and help build consumer society’s all-singing-all-dancing-24-hour gaudy show. But psychoanalysis allows us to take stock of where we rock’n’rollers remain least at ease—or, indeed, to describe with some precision the new forms of anxiety that have come to the fore in an age of sex-without-taboos. Žižek’s argument is, in this respect, best understood as proposing a new way to periodize recent history—a new way, that is, of identifying the novelty of the present. It bears repeating: If Žižek is right, then in the political organization of enjoyment, obscenity has always played some kind of role. Even public life organized around strong authority figures used to summon the obscene supplement in its support. But we’ll want to at least consider the possibility that in our version of consumer capitalism, the obscene supplement has become primary and so largely supplanted what it had once been asked merely to buoy. The transgression has moved into the position of the master and so instituted a kind of authoritative obscenity. This marks a comprehensive change in what we might call the regime of enjoyment. Again: What keeps you attached to a society is the forms of deviant pleasure that it winks at.
In nearly every social order that has ever existed, there has been law: state law or generally recognized prohibitions, and some people get off on breaking the law, while other people get off on the law itself, get off on enforcing it, get off on playing the cop or exasperated schoolmarm. What sets the present apart is that the prohibitions have to some considerable extent faded, which has produced a system of transgression without law or perhaps even transgression as law—what Žižek calls “the world of ordained transgression”—a society of compulsory pleasure in which you are perpetually enjoined to blow your load. You can think of this, if you like, as the flip side to another of Reich’s signature arguments. Sex-pol claimed that if you raised children in a sexually liberated way, refusing to drum inhibition into them, then they would not be willing later in life to go along with authority, because they would not be in the habit of giving up what was important to their happiness. They would be able to resist the call to renunciation, and if authority threatened their enjoyment directly, they would mutiny. Libidinally unpoliced children would become anti-authoritarian adults. The simple corollary of this argument is a catastrophe that Reich never even paused to consider—the plausibility of which advanced capitalism endlessly demonstrates—which is that if authority doesn’t threaten such people’s enjoyment, they will never rebel. If the social order gives people abundant opportunities to get off, it can abuse and exploit them in every other way.

Anyone trying to make sense of Žižek, then, will want to start tracking the ways in which ascetic and anti-ascetic arguments are knotted together in his work. He routinely speaks of “obscene enjoyment” or sometimes just of “obscenity,” and this in tones that we typically associate with anti-pornography campaigners. It’s just that what this version of psychoanalysis considers obscene is not sex, but the conjunction of sex and authority. An obscene pleasure is not one in which I gnash a ball gag or show too much areola, but one in which I imagine, however inarticulately, that I am serving the Master or emulating him or, indeed, defying him. To practice an anti-obscenity would therefore mean to devise a sexuality rigorously beyond the law. Whether or not it might also mean to devise a law beyond sexuality—a law unstained by pleasure—is one of the great open questions in Žižek’s work. You can, at any rate, accentuate this argument’s anti-asceticism, if you care to, since one of the conundrums most driving Žižek’s work is whether or not the sinthome can be turned into a politics. There is no question that Lacanianism can underwrite political positions or attitudes; it can underwrite a disconcertingly wide variety of them, in fact. The question is, rather, whether it can also produce a genuinely political practice. Could ordinary people learn en masse how to sever their desire from authority? Could we agree collectively not to fuck the police?—because if we can’t, then Lacanianism would seem condemned to remain a therapy and not a politics, to be undertaken in near isolation by the unhappy and the kithless, and producing little more than a libidinal aristocracy, the few upon whom liberated enjoyment has been bestowed, the jedi of the sinthome, an order increasingly restricted to France and Argentina and the university neighborhoods of Buffalo, NY. Can the sinthome be mass-produced?—that’s the properly hedonist version of Žižek’s project.

But then you can also, if you wish, lift out of Žižek’s arguments their fully anti-hedonist strains. Because when he tries to imagine this Lacanian politics, the models he turns to are notably austere: Kantianism, Christianity, Leninism. He says admiringly that poor teenagers with almost nothing to their name can still have discipline, an almost literal self-possession, a martial bearing and a karate chop. That most of us have met no such teenagers—that fifteen-year-olds tend, indeed, to be bywords not for discipline but for its opposite—suggests only how committed Žižek is to a certain fantasy of restraint and composure and self-command. One easy way to summarize Žižek, then, is to note that he tends to make abstemious proposals in response to libertine prompts. Liberated desire mutates inchwise into liberation from desire. It is easy for readers to find themselves wrong-footed by this. Chances are that you were first drawn to Žižek for one of two reasons: Maybe he was exactly what you always dreamed an Eastern European intellectual would be—manic, vulgar, flocculent; like a drunken peasant who just happened to be a great philosopher; not merely a Lacanian, but a gypsy-punk Lacanian. Or maybe it was enough that you found him funny, the one critical theorist whose mode of argumentation reliably recalls stand-up comedy, a programmatic tastelessness best watched on YouTube in six-minute bursts. Žižek, of course, doesn’t just retell a lot of inherited anekdoty; his most famous observations themselves have the structure of bits: Have you ever noticed that different countries have different toilets? But then there is much in his thinking that Slavophiles and comedy nerds are required to overlook: that, for instance, he regularly attacks Eastern European intellectuals and artists for playing up the hard-living, balalaika schtick or for cultivating the impression that they write their books in slivovitz instead of ink.
This, he says, is precisely the indecency on which nationalism thrives, and not only in the Balkans. Fans also fail to notice that Žižek’s first book in English already contained an attack on laughter (and the ideology of a liberated laughter)—an attack that he has never backed away from or even, to my knowledge, qualified. Obscenity might be the enemy, but comedy is its sniggering minion. Adorno used to say that anyone committed to the future would have to learn first to be unhappy in the present—that before we would so much as know to be fed up with our own exploitation, we would have to be “sated with false pleasures.” There is nothing that Žižek distrusts more than a dirty joke, which means you probably like him for the wrong reasons.

THE SECOND ESSAY IS HERE…

The Revolutionary Energy of the Outmoded

ORIGINALLY PUBLISHED IN OCTOBER, SPRING 2003.


•1.

Fredric Jameson does not like predictions. His is an owlish and retrospective Marxism, one that happily foregoes the crystal ball of some former orthodoxy. There is a Hegelian lesson that Jameson’s writing repeatedly attempts to impart, which is that wisdom only comes in the backwards glance, that we glimpse history only in the moment when our plans fail or dialectically backfire, when our actions bump up against the objective, hurtful (but never foreseeable) limits of the historical situation. You can draw up your revolutionary schemes, paint the future as gaily or grimly as you like, but only upon review will it become plain in just what way you have been Reason’s dupe. If this point is unclear, you might consider Jameson’s response to the World Trade Center attacks, which began with the following extraordinary observation: “I have been reluctant to comment on the recent ‘events’ because the event in question, as history, is incomplete and one can even say that it has not yet fully happened. … Historical events…are not punctual, but extend in a before and after of time which only gradually reveal themselves.”[1] I suspect many will find remarkable Jameson’s reluctance here to help shape the public response to September 11th. An event that has not fully happened yet is, after all, an event in which one may yet intercede, an event that one needn’t yet cede to the Right, an event to which one might yet attribute one’s own polemical and political meanings. But Jameson makes a conspicuous display here of spurning what Left criticism generally (and glibly) calls an “intervention”—as though the business of a Marxist criticism were not to intervene, but rather to bide its time, to wait until an event has been thoroughly mediated or disclosed its function, and then to identify, with the serene impotence of hindsight, history’s great game. Any event is, like revolution itself, a leap into the unknown. The owl of Minerva only flies in November.

One might wonder, then, how Jameson feels about his own writing, which has been so accidentally and accurately predictive. How does he feel, for instance, about his landmark postmodernism essay, the one that sometimes goes by the name “Postmodernism and Consumer Society”?[2] That article so neatly anticipated U.S. popular culture in the 1990s that it is hard to shake the feeling that a whole generation of artists—writers, musicians, filmmakers above all—must have mistaken it for a manifesto. (“Pastiche—check. Death of the subject—you bet. Depthlessness and disorientation—where do I sign up?”) As ridiculous as it may sound, the essay, first published in 1983, now reads like an exercise in cultural embryology, discerning the first, fetal traces of an aesthetic mode that would become fully evident only in the years that followed. One wonders, too, if young readers encountering the article for the first time now don’t therefore underestimate its savvy. One wonders if they don’t find it rather trite, since a sharp-eyed exegesis of Body Heat (1981) is really just a workaday description of L.A. Confidential (1997)—a script treatment.

We can be more precise: What has seemed so strangely prophetic about Jameson’s postmodernism argument are, oddly enough, its Benjaminian qualities. Benjamin’s fingerprints seem, in some complicated way, to be all over postmodernism. One might even say that postmodernism in America is a dismal parody of Benjaminian thought. Just cast an eye back over the last ten years, over U.S. pop culture on the cusp of the millennium—postmodernism post-Jameson. Consider, for instance, the apocalypticism that has been among its most persistent trends. The recent fin de siècle has been preoccupied with dire images of a devastated future: we might think here of the full-blown resurgence of millenarian thought and the orchestrated panic surrounding the millennium bug; of X-Files paranoia, which has told us to “fight the future”; of catastrophe movies and the resurgence of film noir and dystopian science fiction. If you were to design a course on popular culture in the 1990s, you would be teaching a survey in doom.

There is much in this culture of disaster that would merit our closest attention—there is, in fact, strangeness aplenty. Consider, for instance, the emergence as a genre of the Christian fundamentalist action thriller, the so-called rapture novel. These novels are basically an exercise in genre splicing; they begin by offering, in what for right-wing Protestantism is a fairly ordinary procedure, prophetic interpretations of world events—the collapse of the Soviet Union, the new Intifada—but they then graft onto these biblical scenarios plots borrowed from Tom Clancy techno-thrillers. The first thing that needs to be noted about rapture novels, then, is that they signal, on the part of U.S. fundamentalism, an unprecedented capitulation to pop culture, which the godly Right had until recently held in well-nigh Adornian contempt. Older forms of Christian mass culture have seized readily on new technologies—radio, say, or cable television—but they have tended to recreate within those media a gospel or revival-show aesthetic. In rapture novels, by contrast, as in the rapture movies that have followed in the novels’ wake, we are able to glimpse the first outlines of a fully commercialized, fully mediatized Christian blockbuster culture. Fundamentalist Christianity gives way at last to commodity aesthetics.

This is not yet to say enough, however, because this rapprochement inevitably holds surprises for secular and Christian audiences alike. The best-selling rapture novel to date is Jerry Jenkins and Timothy LaHaye’s Left Behind, which has served as a kind of template for the entire genre. In the novel’s opening pages, the indisputably authentic Christians are all called up to Christ—they are “raptured.” They literally disappear from earth, leaving their clothes pooled on the ground behind them, pocket change and car keys scattered across the pavement. This scene is the founding convention of the genre, the one event that no rapture novel can do without. And yet this mass vanishing, conventional though it may be, cannot help but have some curious narrative consequences. It means, for a start, that the typical rapture novel is not interested in good Christians. The heroes of these stories, in other words, are not godly people—this is true by definition, because the real Christians have all quit the scene; they have been vacuumed from the novel’s pages. In their absence, the narrative turns its attention to indifferent or not-quite Christians, who can be shown now snapping out of their spiritual ennui, rallying to God, and taking up the fight against the anti-Christ (who, in Left Behind, takes the form of an Eastern European humanitarian whose malign plans include scrapping the world’s nuclear arsenals and feeding malnourished children). Left Behind, I would go so far as to suggest, seems to work on the premise that there is something better—something more significantly Christian—about bad Christians than there is about good ones. This notion has something to do with the role of women in the novel. Left Behind, it turns out, has almost no use for women at all. They all either disappear in the novel’s opening pages or get left behind and metamorphose into the whores of anti-Christ. 
It will surprise no one to find a Christian fundamentalist novel portraying women as whores, but the former point is worth dwelling on: Left Behind cannot wait to dispense with even its virtuous women. It may hate the harlots, but it has no use for ordinary church-supper Christians either, imagined here as suburban housewives and their well-behaved young children. Anti-Christ has to be defeated at novel’s end, and for this to happen, the good Christians have to be shown the door, for smiling piety can, in the novel’s terms, sustain no narrative interest; it can enter into no conflicts. Left Behind is premised on the notion that devout Christians are cheek-turning wimps and goody-two-shoes, mere women, in which case they won’t be much good in the fight against the liberals and the Jews. What this means is that the protagonists who remain in the novel—the Christian fence-sitters—are all men, and not just any men, but rugged men with rugged, porn-star names: Rayford Steele, Buck Williams, Dirk Burton. Left Behind is a novel, in other words, that envisions the remasculinization of Christianity, that calls upon its readers to imagine a Christianity without women, but with muscle and grit instead, a Christianity that can do more than just bake casseroles for people. And such a project, of course, requires bad Christians so that they may become bad-ass Christians. Perhaps it goes without saying: A Christian action thriller is going to be interested first and foremost in action-thriller Christians.

It is with the film version of Left Behind (2001), however, that things really get curious. The film’s final moments nearly make explicit a feature of the narrative that is half-buried in the novel: The film concludes with a brief sequence that we’ve all seen a dozen times, in a dozen different action movies—the sequence, that is, in which the heroic husband returns home from his adventures to be reunited with his wife and child. Typically, this scene is staged at the front door of the suburban house with the child at the wife’s side; you might think, emblematically, of the final shots of John Woo’s Face/Off (1997), which show FBI Agent Sean Archer (John Travolta) exchanging glances with his wife (Joan Allen) over the threshold as their teenaged daughter hovers in the background. Left Behind, for its part, reproduces that scene almost exactly, almost shot for shot, except, since the women have all evaporated or gone over to anti-Christ, the film has no choice but to stage this familiar ending in an unfamiliar way—between its male heroes, between Rayford Steele, standing in the doorway with his daughter, and a bedraggled Buck Williams, freshly returned from his battles with the Beast. A remasculinized Christianity, then, cannot help but imagine that the perfect Christian family would be—two men. Such, then, is one upshot of fundamentalism’s new openness to pop culture: Christianity uncloseted.

Of course, the borrowings can go in the other direction as well. Secular apocalypse movies can deck themselves out in religious trappings, but when they do so, they risk an ideological incoherence of their own. Think first about conventional, secular catastrophe movies—Armageddon (1998), Deep Impact (1998), Volcano (1997)—so-called apocalypse films that actually make no reference to religion. These tend to be reactionary in rather humdrum and technocratic ways, full of experts and managers deploying the full resources of the nation to fend off a threat defined from the outset as non-ideological. The volcanoes and earthquakes and meteors that loom over such movies are therefore merely more refined versions of the maniacal terrorists and master thieves who normally populate action movies: they are enemies of the state whose challenge to the social order never approaches the level of the political. It is when such secular narratives reintroduce some portion of religious imagery, however, that their political character becomes pronounced. We might think here of The Seventh Sign (1988), which featured Demi Moore, or of the Arnold Schwarzenegger vehicle End of Days (1999). Like Left Behind, these last two films work by combining biblical scenarios and disaster-movie conventions, and the results are similarly confusing. To be more precise, they begin by offering luridly Baroque versions of the Christian apocalypse narrative, but then revert to the secular logic of the disaster movie, as though to say: Catastrophes are destabilizing a merciless world in preparation for Christ’s return—and this must be stopped! In a half-hearted nod to Christian ethics, each of these movies begins by depicting the world of global capitalism as brutal and unjust—the montage of squalor has become something of an apocalypse-movie cliché—before deciding that this world must be preserved at all costs. 
The characters in these films, in other words, expend their entire allotment of action-movie ingenuity trying to prevent the second coming of Christ, imagined here as the biggest disaster of all.[3]

This is not to say that contemporary American apocalypses dispense with redemptive imagery altogether, at least of some worldly kind. Carceral dystopias, for instance, films that work by trapping their characters in controlled and constricted spaces, tend to posit some utopian outside to their seemingly total systems: the characters in Dark City (1998) dream of Shell Beach, the fictitious seaside resort that supposedly lies just past their nightmarish noir metropolis, the illusory last stop on a bus line that actually runs nowhere; the man-child of Peter Weir’s Truman Show (1998) dreams, in similar ways, of Fiji, which is a rather more conventional vision of oceanic bliss; and the Horatio-Alger hero of the genetics dystopia Gattaca (1997) follows this particular utopian logic to its furthest end by dreaming of the day he will be made an astronaut, the day he will fly to outer space, which of course is no social order at all, let alone a happier one, but merely an anything-but-here, an any-place-but-this-place, the sheerest beyond. As utopias go, then, these three are remarkably impoverished; they cannot help but seem quaint and nostalgic, strangely dated, like the daydreams of some Cold War eight-year-old, all Coney Island and Polynesian hula-girls and John Glenn shoot-the-moon fantasies.

But then it is precisely the old-fashioned quality of these utopias that is most instructive; it is precisely their retrograde quality that demands an explanation. For if on the one hand, U.S. pop culture has seemed preoccupied with the apocalypse, on the other hand it has seemed every bit as obsessed with cheery images from a sanitized past. Apocalypse culture has as its companion the many-faceted retro-craze: vintage clothing; Nick at Nite; the ‘70s vogue; the ‘50s vogue; the ‘40s vogue; the ‘30s vogue; the ‘20s vogue (the ‘60s are largely missing from this tally, for reasons too obvious to enumerate; the ‘60s vogue has been stunted, almost nonexistent, at least within a U.S. framework—retro tops out about 1963 and then gets shifted over to Europe and the mods); the return of surf, lounge-music, and Latin jazz; retro-marketing and retro-design, and especially the Volkswagen Beetle and the PT Cruiser.

Retro, then, deserves careful consideration of its own, as an independent phenomenon alongside the apocalypse. Some careful distinctions will be necessary. Retro takes a hundred different forms; it has the appearance of a single and coherent phenomenon only at a very high level of generality. We could begin, then, by examining the heavily marketed ‘60s and ‘70s retro of mainstream, white youth culture. Here we would want to say, at least on first pass, that the muffled camp of Austin Powers (1997), say—or the mid-‘90s Brady Bunch revival, or Beck’s Midnite Vultures—closely approximates Jameson’s notion of postmodern pastiche: this is retro as blank parody, the affectless recycling of alien styles, worn like so many masks. But that said, we would have to counterpose against these examples the retro-culture of a dozen regional scenes, scattered across the U.S., most of which are retro in orientation, but none of which are exercises in pastiche exactly. Take, for instance, the rockabilly and honky-tonk scene in Chapel Hill, North Carolina: It is impeccably retro in its musical choices and impeccably retro in its fashions, full of redneck hipsters sporting bowling shirts and landing-pad flattops and smart-alecky tattoos. Theirs is a form of retro whose reference points are emphatically local, and in its regionalism, the Chapel Hill scene aspires to a subculture’s subversiveness, a kind of Southern-fried defiance, which stakes its ground in contradistinction to some perceived American mainstream and then gives its rebellion local color, as though to say: “We don’t work in your airless (Yankee) offices. We don’t speak your pinched (Yankee) speech. We don’t belong to your emasculated (Yankee) culture. We are hillbillies and punks in equal proportion.”  Retro, in short, can be placed in the service of a kind of spitfire regionalism, and there is little to be gained by simply conflating this form of retro with the retro-culture marketed nationwide.

In fact, even mainstream ‘70s retro can take on different valences in different hands. To cite just one further example: hip-hop sampling, which builds new tracks out of the recycled fragments of existing recordings, might seem upon first inspection to be the very paradigm of the retro-aesthetic. And yet hip-hop, which has mined the ‘70s funk back-catalog with special diligence, typically forgoes the irony that otherwise accompanies such postmodern borrowings. Indeed, hip-hop sampling generally involves something utterly unlike irony; it is often positioned as a claim to authenticity, an homage to the old school, so that when OutKast, say, channels some vintage P-Funk, that sample is meant to function as a genetic link, a recurring trait or musical cell-form. The sample is meant to serve as a tangible connection back to some originary moment in the history of soul and R&B (or funk and disco).[4]

So differences abound in retro. And yet one is tempted, all the same, to speak of something like an official retro-culture, which takes as its object the 1940s and ‘50s: diners, martinis, “swing” music (which actually refers, not to ‘30s and ‘40s swing, but to post-war jump blues), industrial-age furniture, late-deco appliances, all chrome and geometry. The most important point to be made about this form of retro is that it is an unabashedly nationalist project; it sets out to create a distinctively U.S. idiom, one redolent of Fordist prosperity, an American aesthetic culled from the American century, a version of Yankee high design able to compete, at last, with its vaunted European counterparts. In general, then, we might want to say that retro is the form that national tradition takes in a capitalist culture: Capitalism, having liquidated all customary forms of culture, will sell them back to you at $16 a pop. But then commodification has ever been the fate of national customs, which are all more or less scripted and inauthentic. What is distinctive about retro, then, is the class of objects that it chooses to burnish with the chamois of tradition. There is a remarkable scene near the beginning of Jeunet and Caro’s great retro-film Delicatessen (1991) that is instructive in this regard: Two brothers sit in a basement workshop, handcrafting moo-boxes—those small, drum-shaped toys that, once upended and then set right again, low like sorrowful cows. The brothers grind the ragged edges from the boxes, blow away the shavings as one might dust from a favorite book, rap the work-table with a tuning fork and sing along with the boxes to ensure the perfect pitch of the heifer’s bellow. 
And in that image of their care, their workman’s pride, lies one of retro-culture’s great fantasies: Retro distinguishes itself from the more or less folkish quality of most national traditions in that it elevates to the status of custom the commodities of early mass production—old Coke bottles, vintage automobiles—and it does so by imbuing them with artisanal qualities, so that, in a strange historical inversion, the first industrial assembly lines come to seem the very emblem of craftsmanship. Retro is the process by which mass-produced trinkets can be reinvented as “heritage.”[5]

The apocalypse and the retro-craze—such, then, are the twin poles of postmodernism, at least on Jameson’s account. We are all so accustomed to this twosome that it has become hard to appreciate what an odd juxtaposition it really is. Disco inferno, indeed. This is a pairing, at any rate, that finds a rather precise counterpart in the writings of Walter Benjamin. Each of the moments of our swinging apocalypse can be traced back to Benjaminian impulses, or opens itself, at least, to Benjaminian description. For in what other thinker are we going to find, in a manner that so oddly approximates the culture of American malls and American multiplexes, this combination of millenarian mournfulness and antiquarian devotion? Benjamin’s Collector seems to preside over postmodernism’s thrift-shop aesthetic, just as surely as its apocalyptic imagination is overseen by Benjamin’s Messiah, or at least by his Catastrophic Angel. It would seem, then, that Benjaminians should be right at home in postmodernism, and if this is palpably untrue—if the culture of global capitalism does not after all seem altogether hospitable to communists and the Kabbalah—then this is something we will now have to account for. Why, despite easily demonstrated affinities, does it seem a little silly to describe U.S. postmodernism as Benjaminian?

Jameson’s work is again clarifying. It is not hard to identify the Benjaminian elements in Jameson’s idiom, and especially in his utopian preoccupations, his determination to make of the future an open and exhilarating question. No living critic has done more than Jameson to preserve the will-be’s and the could-be’s in a language that would just as soon dispense altogether with its future tenses and subjunctive moods. And yet a moment’s reflection will show that Jameson is, for all that, the great anti-Benjaminian. It is Jameson who has taught us to experience pop culture’s Benjaminian qualities, not as utopian pledges, but as threats or calamities. Thus Jameson on apocalypse narratives: “It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism; perhaps that is due to some weakness in our imaginations. I have come to think that the word postmodern ought to be reserved for thoughts of this kind.”[6] It is worth calling attention to the obvious point about these sentences—that Jameson here more or less equates postmodernism and apocalypticism—if only because in his earliest work on the subject, it is not the apocalypse but retro-culture that seems to be postmodernism’s distinguishing and debilitating mark. Again Jameson: “there cannot but be much that is deplorable and reprehensible in a cultural form of image addiction which, by transforming the past into visual mirages, stereotypes, or texts, effectively abolishes any practical sense of the future and of the collective project.”[7]  Jameson, in short, is most sour precisely where Benjamin is most expectant. 
He would have us turn our backs on the most conspicuous features of Benjamin’s work; for late capitalism, it would seem, far from keeping faith with Benjamin, actually robs us of our Benjaminian tools, if only by generalizing them, by transforming them into noncommittal habits or static conventions: the Collector, fifty years on, shows himself to be just another fetishist, and even the Angel of History turns out to be a predictable and anti-utopian figure, unable to so much as train its eyes forward, foreclosing, without reprieve, on the time yet to come. U.S. postmodernism may be a culture that loves to “brush history against the grain,” but only in the way that you might brush back your ironic rockabilly pompadour.

 

•2.

But what if we refused to break with Benjamin in this way? Try this, just as an exercise: Ask yourself what these seemingly disparate trends—apocalypticism and the retro-craze—have to do with one another. Consider in particular that remarkable crop of recent films that actually unite these two trends, films that ask us to imagine an unlivable future, but do so in elegant vintage styles. These include: Ridley Scott’s Blade Runner (1982), the granddaddy of the retro-apocalypses; three oddly upbeat dystopias—Starship Troopers and the aforementioned Gattaca and Dark City—all late-1990s box-office underachievers; and, again, the cannibal slapstick Delicatessen. All of these films posit, in their very form, some profound correlation between retro and the apocalypse, but it is hard, on a casual viewing, to see what that correlation could be. Jameson, of course, offers a clear and compelling answer to this question, which is that apocalypticism and the retro-craze are the Janus faces of a culture without history, two eyeless countenances, pressed back to back, facing blankly out over the vistas they cannot survey.[8]

Some of these films, it must be noted, seem to invite a Jamesonian account of themselves. This is true of Blade Runner, for instance, or of The Truman Show—films that offer a vision of retro-as-dystopia, a realm of fabricated memory, in which history gets handed over to corporate administration, in which every madeleine is stamped “Made in Malaysia.” Perhaps it is worth pausing here, however, since we need to be wary of running these two films together. The contrast between them is actually quite revealing. Both Blade Runner and The Truman Show present retro-culture as dystopian, and in order to do this, both rely on some of the basic conventions of science fiction. Think about what makes science fiction distinctive as a mode—think, that is, about what distinguishes it from those genres with which it seems otherwise affiliated, such as the horror movie. Horror movies, especially since the 1970s, have typically worked by introducing some terrifying, unpredictable element into apparently safe and ordinary spaces. Monsters are nearly always intruders—slashers in the suburbs, zombies forcing their way past the barricaded door. But dystopian science fiction is, in this respect, nearly the antithesis of horror. It does not depict a familiar setting into which something frightening then gets inserted. What is frightening in dystopian science fiction is rather the setting itself. Now, this point holds for both Blade Runner and The Truman Show, but it holds in rather different ways. The first observation that needs to be made about The Truman Show is that it is more or less a satire, which is to say that, though it takes retro as its object, it is not itself a retro-film. It portrays a world that has handed itself over entirely to retro, a New Urbanist idyll of gleaming clapboard houses on mixed-use streets; but the film itself is not, by and large, retro in its narrative forms or cinematic techniques. 
Quite the contrary: the film wants to teach its viewers how to read retro in a new way; it wishes, polemically, to loosen the hold of retro upon them. The Truman Show takes a setting that initially seems like some American Eden, and then through the menacing comedy of its mise-en-scène—the falling lights and incomplete sets, the scenery that Truman stumbles upon or that springs disruptively to life—makes this retro-town come slowly to seem ominous. To give the film the cheap Lacanian description it is just begging for: The Truman Show charts the unraveling of the symbolic order. Every klieg light that comes crashing down from the sky is a warning shot fired from the Real. The simpler point, however, is that The Truman Show rests on a deflationary argument about American mass culture—a media-governed retro-culture depicted here as restrictive, counterfeit, and infantilizing—and its form is accordingly rather conventional. It is essentially a cinematic Bildungsroman, which ends once the protagonist steps forward to take full responsibility for his own life, and this, of course, tends to compromise the film’s own Lacanian premise: It suggests that any of us could simply step out of the symbolic order, step boldly out into the Real, if only we could muster sufficient resolve.[9]

Having a compromised and conventional form, however, is not the same thing as having a retro-form. In Blade Runner, by contrast, the setting—a dismal and degenerate Los Angeles—is self-evidently dystopian, but it is itself retro; it is retro as a matter of style or form. The film’s vision of L.A., as has often been observed, is equal parts Metropolis and ‘40s film noir, and the effect of the film is thus rather different from The Truman Show, though it is equally curious: Blade Runner may recycle earlier styles or narrative forms in a manner typical of retro, but the films that it mimics are themselves all more or less dystopian. If Blade Runner is a pastiche, it is a pastiche of other dystopias, and this has the effect of establishing the correlation between retro and the apocalypse in a distinctive way: Blade Runner posits a historical continuum between a bleak past and an equally bleak future, between the corrupt and stratified modernist city (of German expressionism and hardboiled fiction) and the coming reign of corporate capital (envisioned by so much science fiction), between the bad world we’ve survived and the bad world that awaits.

Such, then, are the films that seem ready to make Jameson’s argument for him. But there is good reason, I think, to set Jameson temporarily to one side. For present purposes, it would be more revealing to direct our attention back to Delicatessen, which, of all the retro-apocalypses, is perhaps the most winning and Benjaminian. The question that confronts any viewer of Delicatessen is why this film—which, after all, depicts an utterly dismal world in which men and women are literally butchered for meat—should be so delightful to watch, and not just wry or darkly humorous, but giddy and dithyrambic. I would suggest that the pleasure peculiar to Delicatessen has everything to do with the status of objects in the film—that is, with the extravagant and festive care that Jeunet and Caro bring to the filming of objects, which take on the appearance here of so many found and treasured items. One might call to mind the hand-crank coffee grinder, which doubles as a radio transmitter; or the cherry-red bellboy’s outfit; or simply the splendid opening credits—this slow pan over broken records and torn photographs—in which the picture swings open like a case of curiosities. It is as though the film took as its most pressing task the re-enchantment of the object-world, as though it were going to lift objects to the camera one by one and reattach to them their auras—not their fetishes, now, as happens in most commercial films, with their product placements and designer outfits—but their auras, as though the objects at hand had never passed through a marketplace at all. This is tricky: The objects in Delicatessen are recognizably of the same type as American retro-commodities—an antique wind-up toy, an old gramophone, stand-alone black-and-white television sets. 
At this point, then, the argumentative alternatives become clear: Either we can dismiss Delicatessen as ideologically barren, as just another pretext for retro-consumption, just another flyer for the flea market of postmodernism. Or we can muster a little more patience, tend to the film a little more closely, in which case we might discover in Delicatessen the secret of all retro-culture: its desire, delusional and utopian in equal proportion, for a relationship to objects as something other than commodities.

To follow the latter course is to raise an obvious question: How does the film direct our attention to objects in a new way? How does it reinvigorate our affection for the object world? This is a question, first of all, of the film’s visual style, although it turns out that nothing all that unusual is going on cinematographically: In a manner characteristic of French art-film since the New Wave, Delicatessen keeps the spectator’s eye on its objects simply by cutting to them at every opportunity and thus giving them more screen time than household artifacts typically claim. By the usual standards of analytical editing, in other words—within the familiar breakdown of a scene into detailed views of faces, gestures, and props—the props get a disproportionate number of shots. The objects, like so many Garbos, hog all the close-ups. “By permitting thought to get, as it were, too close to its object,” Adorno once said of Benjamin’s critical method, “the object becomes as foreign as an everyday, familiar thing under a microscope.”[10] Delicatessen works, in these terms, by taking Adorno’s linguistic figure at face value and returning it back to something like its literal meaning, back to the visual. The film permits the camera to get too close to its object. It forces the spectator to scrutinize objects anew simply by bringing them into sustained proximity.

The camerawork, however, is just the start of it, for in addition to the question of cinematic style, there is the related question of form or genre. Delicatessen, it turns out, is playing a crafty game with genre, and it is through this formal frolic that the film most insistently places itself in the service of its objects. For Delicatessen is retro not only in its choice of props—it is, like Blade Runner, formally or generically retro, as well. This point may not be immediately apparent, however, since Delicatessen resurrects a genre largely shunned by recent U.S. film. One occasionally gets the feeling from American cinema that film noir is the only genre ripe for recycling. The 1990s have delivered a whole paddy wagon full of old-fashioned crime stories and heist pics, but where are all the other classic Hollywood genres? Where are the retro-Westerns and the retro war movies? Where are the retro-screwballs?[11] Neo-noir, of course, is relatively easy to pull off—dim the lights and fire a gun and some critic or another will call it noir. Delicatessen, for its part, attempts something altogether more difficult or, at least, sets in motion a less reliable set of cinematic conventions: pratfalls, oversized shoes, madcap chase scenes. Early on, in fact, the film has one of its characters say that, in its post-apocalyptic world, people are so hungry they “would eat their shoes”; and with this one line—an unambiguous reference to the celebrated shoe-eating of Chaplin’s The Gold Rush—it becomes permissible to find references to silent comedy at every turn: in the hero’s suspenders, in the film’s several clownish dances, in the near-demolition of the apartment building in which all the action is set, a demolition that, once read as slapstick, will call to mind Buster Keaton’s wrecking-ball comedy, the crashing houses of Steamboat Bill, Jr. (1928), say. Delicatessen, in sum, is retro-slapstick, and noting as much will allow us to ask a number of valuable questions.

The most compelling of these questions will return us to the matter at hand. We are trying to figure out how Delicatessen gets the viewer to pay attention to its objects, and so the question now must be: What does slapstick have to do with the status of objects in the film? It is hardly intuitive, after all, that slapstick should bring about the redemption of objects, should reattach objects to their auras. A cursory survey of classic slapstick, in fact, might suggest just the opposite—a world, not of enchanted objects, but of aggressive and adversarial ones. Banana peels and cream pies spring mischievously to mind. And yet we need to approach these icons with caution, lest we take a conceptual pratfall of our own; for Delicatessen draws on slapstick in at least two different ways, or rather, it draws on two distinct trends in early American slapstick, and each of these trends grants a different status to its objects. Everything rides on this distinction:

1) When we think of slapstick, we think first of all of roughhouse comedy, of the pie in the face and the kick in the pants, the endless assault on ass and head. Classic slapstick of this kind is what we might call the comedy of Newtonian physics. It is a farce of gravity and force, and as such, it is based on the premise that the object world is fundamentally intransigent, hostile to the human body. In this Krazy-Kat or Keystone world, every brick, every mop is a tightly wound spring of kinetic energy, always ready to uncoil, violently and without motivation.[12] It is worth remarking, then, that Delicatessen contains its share of knockabout: the Rube Goldberg suicide machines, the postman always tumbling down the stairs. In its most familiar moments, Delicatessen, in keeping with its comic predecessors, seems to suggest that the human body is irreparably out of joint with its environment.

A first distinction is necessary here, for though Delicatessen may embrace the sadism of slapstick, it does so with a historical specificity of its own. Classic slapstick typically addresses itself to the place of the body under urban and industrial capitalism; one is pretty much obliged at this point to adduce Chaplin’s Modern Times (1936), with its scenes of working-class mayhem and man-eating machines. Delicatessen, by contrast, contains man-eaters of its own, but they are not metaphorical man-eaters, as Chaplin’s machines are—they are cannibals true and proper, and their presence adds a certain complexity to the question of the film’s genre, for there have appeared so many films about cannibalism over the last twenty years that they virtually constitute a minor genre of their own.[13] One way to describe Delicatessen’s achievement, then, is to say that it splices together classic slapstick with the cannibal film. There will be no way to appreciate what this means, however, until we have determined the status of the cannibal in contemporary cinema. Broadly speaking, images of the cannibal tend to participate in one of two discourses: Historically, they have played a rather repugnant role in the racist repertoire of colonial clichés. Cannibalism is one of the more extreme versions of the imperial Other, the savage who does not respect even the most basic of civilization’s taboos. Increasingly, however, in films such as Eat the Rich (1987) or Dawn of the Dead (1978), cannibalism has become a conventional (and more or less satirical) image of Europeans and Americans themselves—an image, that is, of consumerism gone awry, of a consumerism that has liquidated all ethical boundaries, that has sunk into raw appetite, without restraint.[14] For present purposes, this point is nowhere clearer than in Delicatessen’s final chase scene, in which the cannibalistic tenants of the film’s apartment house gather to hunt down the film’s hero. 
The important point here is that, within the conventions of classic Hollywood comedy, the film makes a conspicuous substitution, for our comic hero is not on the run from some exasperated factory foreman or broad-shouldered cop on the beat, as silent slapstick would have it. He is fleeing, rather, from a consumer mob, E.P. Thompson’s worst nightmare, some degraded, latter-day bread riot. It is important that we appreciate the full ideological force of this switchover: By staffing the old comic scenarios with kannibals instead of kops, the film is able to transform slapstick in concrete and specifiable ways. The cannibals mean that when Delicatessen revives Chaplin-era slapstick, it does so without Chaplin’s factories or Chaplin’s city. This is slapstick for some other, later stage of capitalism—modernist comedy from which modernist industry has disappeared, leaving only consumption in its place.

2) Slapstick, then, announces a pressing political problem, in Delicatessen as in silent comedy. It sounds an alarm on behalf of the besieged human body. Delicatessen’s project, in this sense, is to imagine that problem’s solution, to mount a counterattack, to ward off the principle of slapstick by shielding the human body from its batterings. The deranged, consumption-mad crowd, in this light, is one, decidedly sinister version of the collective, but it finds its counterimage here in a second collective, a radical collective—the vegetarian insurgency that serves as ethico-political anchor to the film. Or to be more precise: The film is a fantasy about the conditions under which an advanced consumer capitalism could be superseded, and to imagine those conditions, it follows two different tracks: One of the film’s subplots follows the efforts of the anti-consumerist underground, the Troglodytes, while a second subplot stages a fairly ordinary romance between the clown-hero and a butcher’s daughter. Delicatessen thus divides its utopian energies between the revolutionary collective, depicted here as some lunatic version of La Resistance, and the heterosexual couple, imagined in impeccably Adornian fashion as the last, desperate repository of human solidarity, the faint afterimage of a non-instrumental relationship in a world otherwise given over to instrumentality.[15]

But this pairing does not exhaust the film’s political imagination, if only because knockabout does not exhaust the possibilities of slapstick. Delicatessen, in fact, is more revealing when it refuses roughhouse and shifts instead into one of slapstick’s other modes. Consider the key scene, early in the film, when the clown-hero, who has been hired as a handyman in the cannibal house, hauls out a bucket of soapy water to wash down the stairwell. The bucket, of course, is another slapstick icon, and anyone already cued in to the film’s generic codes might be able to predict how the scene will play out. Classic slapstick would dictate that the hero’s foot get wedged inside the bucket, that he skid helplessly across the ensuing puddle, that the mop pivot into the air and crack him in the forehead, that he somersault finally down the stairs. The important point, of course, is that no such thing happens. The clown does not get his pummeling. On the contrary, he uses his cleaning bucket to fill the hallway of this drear and half-inhabited house with giant, wobbling soap-bubbles, with which he then dances a kind of shimmy. It is in this moment, when the film pointedly repudiates the comedy of abuse, that it modulates into a different tradition of screen comedy, what Mark Winokur has called “transformative” or “tramp” comedy.

The hallway scene, in other words, is Chaplin through and through. It is important, then, to specify the basic structure of the typical Chaplin gag—and to specify, in particular, what distinguishes Chaplin from the generalized brutality and bedlam of the Keystone shorts. Chaplin’s bits are so many visual puns: they work by taking an everyday object and finding a new and exotic use for it, turning a roast chicken into a funnel, or a tuba into an umbrella stand, or dinner rolls into two dancing feet.[16] In Delicatessen, such transformative comedy is apparent in the New Year’s Eve noisemaker that the frog-man uses as a tongue, to catch flies; or in the hero’s musical saw, which, in fact, is the very emblem of the film’s many objects—an implement liberated from its pedestrian uses, a tool that yields melody, a dumb commodity suddenly able to speak again, and not just to shill, but to murmur of new possibilities. It is in transformative comedy, then, in the spectacle of objects whose use has been transposed, that slapstick takes on a utopian function. Slapstick becomes, so to speak, its own solution: Knockabout slapstick, in which objects are perpetually in revolt against the human body, finds its redemption in transformative slapstick, in which the human body discovers a new and unexpected affinity with objects. The pleasure that is distinctive of Delicatessen is thus actually some grand comic version of Kant’s aesthetics, of Kant’s Beauty, premised as it is on the dawning and grateful realization that objects are ultimately and against all reasonable expectation suited to human capacities. Delicatessen reimagines the world as a perpetual pas de deux with the inanimate.[17]

Transformative slapstick, this is all to say, functions in Delicatessen as a kind of antidote to cannibalistic forms of consumption. At its most schematic, the film presents its viewers with a choice between two different ways of relating to objects: a cannibalistic relationship, in which the object will be destroyed by the consumer’s unchecked hunger, or a Chaplinesque relationship, in which the object will be kept alive and continually reinvented. And so at a moment when cinematic realism has fallen into a state of utter disrepair, when realism finds it can do nothing but script elegies for the working class—when even fine films like Ken Loach’s Ladybird Ladybird (1994) and Zonca’s Dream Life of Angels (1998) have opted for the funereal, with even the protest drubbed out of them—it falls to Delicatessen’s grotesquerie to fulfill realism’s great utopian function, to keep faith, as Bazin said, with mere things, “to allow them first of all to exist for their own sakes, freely, to love them in their singular individuality.”[18]

It is crucial, however, that we not confine this observation to Delicatessen, because in that film’s endeavor lies the buried aspiration of all retro-culture, even (or especially) at its most fetishistic. If you examine the signs that hang next to the objects at Restoration Hardware and other such retro-marts—these small placards that invent elaborate and fictional histories for the objects stacked there for sale—you will discover a culture recoiling from its commodities in the very act of acquiring them, a culture that thinks it can drag objects back into the magic circle if only it can learn to consume them in the right way. Retro-commodities utterly collapse our usual Benjaminian distinctions between the fetish and the aura, and they do so by taking as their fundamental promise what Benjamin calls  “the revolutionary energies that appear in the ‘outmoded,’” the notion that if you know the history of an item or if you can aestheticize even the most ordinary of objects—a well-wrought dustpan, perhaps, or a chrome toaster—then you are never merely buying an object; you are salvaging it from the sphere of circulation, and perhaps even from the tawdriness of use.[19]

This is not yet to say enough, however, because it is the achievement of Delicatessen to demonstrate that this retro-utopia is unthinkable without the apocalypse. For if the objects in Delicatessen achieve a luminosity that is denied even the most exquisite retro-commodities, then this is only because they occupy a ruined landscape, in which they come to seem singular and irreplaceable. Delicatessen is a film whose characters are forever scavenging for objects, scrapping over parcels that have gone astray, rooting through the trash like so many hobos or German Greens. It is the film’s fundamental premise, then, that in a time of shortage, and in a time of shortage alone, objects will slough off their commodity status. They will crawl out from under the patina of mediocrity that the exchange relationship ordinarily imposes on them. If faced with shortage, each object will come to seem unique again, fully deserving of our attention. There is a startling lesson here for anyone interested in the history of utopian forms: that utopia can require suffering, or at least scarcity, and not abundance; that the classical utopias of plenty—those Big Rock Candy mountains with their lemonade springs and cigarette trees and smoked hams raining from the sky—are, under late capital, little more than hideous afterimages of the marketplace itself, spilling over with redundant and misdistributed goods, stripped of their revolutionary energy; that a society of consumption must, however paradoxically, find utopia in its antithesis, which is dearth.[20] And so we come round, finally, to my original point: that we must have, alongside Jameson, a second way of positing the identity of retro-culture and the apocalypse, one that will take us straight back to Benjamin: Underlying retro-culture is a vision of a world in which commodity production has come to a halt, in which objects have been handed down, not for our consumption, but for our care. 
The apocalypse is retro-culture’s deepest fantasy, its enabling wish.

 


[1] Jameson’s full comments can be found in the London Review of Books (Volume 23, Number 19, October 4, 2001). See also “Architecture and the Critique of Ideology,” in The Ideologies of Theory, Volume 2: The Syntax of History, pp. 35-60, esp. p. 41: “dialectical interpretation is always retrospective, always tells the necessity of an event, why it had to happen the way it did; and to do that, the event must already have happened, the story must already have come to an end.”

[2] This essay is available in multiple versions. The easiest to come by is perhaps “Postmodernism and Consumer Society,”  in The Cultural Turn (London: Verso, 1998), pp. 1-20; and the most densely argued “The Cultural Logic of Late Capitalism” in Postmodernism, or The Cultural Logic of Late Capitalism (Durham: Duke, 1991), pp. 1-54.

[3] The Seventh Sign, for what it’s worth, draws on at least four different genres: 1) It is, at the most general level, a Christian apocalypse narrative; its nominal subject is the End Time, the series of catastrophes set in motion by God in preparation for His final judgment. 2) But in doing so, it deploys most of the conventions of the occult horror film. Even though the film expressly states that God is responsible for the disasters depicted, it cannot help but stage those disasters as supernatural and scary, in sequences borrowed more or less wholesale from the exorcism and devil-child movies of the 1970s, which is to say that viewers are expected to experience God’s actions as essentially diabolical. The film may adorn itself with Christian trappings, but in a manner typical of the Gothic, it cannot, finally, represent religion as anything but frightening. 3) This last point is clearest in the film’s depiction of Jesus Christ, who actually appears as a character and is almost always filmed in shots lifted from serial-killer films—Jesus stands alone, isolated in ominous medium long-shots, his face half in shadow, lit starkly from the side. Jesus’ menace is also a plot point: Christ, in the film, rents a room from Demi Moore and, in a manner that recalls Pacific Heights (1990) or The Hand That Rocks the Cradle (1992), becomes the intruder in the suburban home, the malevolent force that the white professional family has mistakenly welcomed under its roof. 4) In its final logic, then, the film reveals itself to be just a disaster movie in disguise: The Apocalypse must be scuttled. Christ must be sent back to heaven (and thus evicted from the suburban home). Justice must be averted.

[4] I owe this point to a conversation with Roger Beebe. Even here, though, matters are more complicated than they at first seem. Hip-hop, after all, hardly dispenses with irony and pastiche altogether: Jay-Z has sampled “It’s the Hard-Knock Life” (from Annie) and Missy Elliott has sampled Mozart’s Requiem, but no one is likely to suggest that hip-hop is establishing a genetic link back to the Broadway musical or Viennese classicism.

[5] Of course, as a nationalist project, retro will play out differently in different national contexts. Perhaps a related cinematic example will make this clear. Consider Jeunet’s Fabuleux destin d’Amélie Poulain (2001). At the level of diegesis—as a plain matter of plot and dialogue and character—the film has nothing at all to do with nationalism. On the contrary, it dedicates an entire subplot to undermining the provincialism of one of its characters, Amélie’s father, who resolves at movie’s end to become more cosmopolitan. The entire film is directed towards getting him to leave France. But at the level of form, things look rather different. Formally, the film is retro through and through. It won’t take a cinephile to notice the overt references to Jules et Jim (1962) and Zazie dans le Metro (1960), at which point it becomes clear that Amélie is a pastiche of the French New Wave, which is thereby transformed into a historical artifact of its own. Amélie, then, attempts to recreate the nouvelle vague, not with an eye to making it vital again as an aesthetic and political project, but merely to cycle exhaustively through its techniques, its stylistic tics, as though it were compiling some kind of visual compendium. The nationalism that the film’s narrative explicitly rejects thus reappears as a matter of form. Amélie works to draw our attention to the Frenchness of the New Wave, to codify it as a national style, and the presumed occasion for the film is therefore the ongoing battle, in France, over the Americanization of la patrie. Amélie is a bulldozer looking for its McDonald’s.

[6] See Jameson’s “The Antinomies of Postmodernism,” in The Cultural Turn, pp. 50-72, quotation p. 50.

[7] See “The Cultural Logic of Late Capitalism,” in Postmodernism or, The Cultural Logic of Late Capitalism (Durham: Duke, 1991), pp. 1-54, quotation p. 46.

[8] The second quotation cited here goes on to make this point clear: Retro-culture, Jameson continues, “abandon(s) the thinking of future change to fantasies of sheer catastrophe and inexplicable cataclysm, from visions of ‘terrorism’ on the social level to those of cancer on the personal.”

[9] The Truman Show, to be fair, does hedge the matter somewhat. The film’s numerous cutaways to the show’s viewers reveal a “real world” that is itself populated by TV-thralls, Truman Burbanks of a lower order. So when Truman steps out of his videodrome, we have a choice: We can either conclude, in proper Lacanian fashion, that Truman has simply traded one media-governed pseudo-reality for another. Or we can conclude that the film is asking us to distinguish between those, like Truman, who are able to shrug off their media masters, and those, like his viewers, who aren’t. I take this to be the film’s constitutive hesitation, its undecidable question.

[10] See Adorno’s “Portrait of Walter Benjamin” in Prisms, translated by Samuel and Shierry Weber (Cambridge: MIT, 1981, pp. 227-241), here p. 240.

[11] Examples of these last can be found, but it takes some looking: Paul Verhoeven’s Starship Troopers (1997) is a retro World War II movie, more so than Pearl Harbor (2001) or Saving Private Ryan (1998), which aspire to be historical dramas; and the Coen brothers’ Hudsucker Proxy (1994) is unmistakably a retro-screwball (and such a lovely thing that it’s a wonder others haven’t followed its lead). But they are virtually the lone examples of their kinds, singular members of non-existent sets. Neo-noir, by contrast, has become too extensive a genre to list comprehensively.

[12] Perhaps a rare instance of literary slapstick, manifestly modeled on cinematic examples, will drive this point home. The following is from Martin Amis’s Money (London: Penguin, 198?), p. 289: “What is it with me and the inanimate, the touchable world? Struggling to unscrew the filter, I elbowed the milk carton to the floor. Reaching for the mop, I toppled the trashcan. Swivelling to steady the trashcan, I barked my knee against the open fridge door and copped a pickle-jar on my toe, slid in the milk, and found myself on the deck with the trashcan throwing up in my face … Then I go and good with the grinder. I took the lid off too soon, blinding myself and fine-spraying every kitchen cranny.”

[13] See, for instance, Eating Raoul (1982); Parents (1989); The Cook, The Thief, His Wife, and Her Lover (1989); and, in a different mood, Silence of the Lambs (1991) and Hannibal (2001).

[14] On the cultural uses of cannibalism, see Cannibalism and the Colonial World, edited by Francis Barker, Peter Hulme, and Margaret Iversen (Cambridge: Cambridge, 1998), especially Crystal Bartolovich’s “Consumerism, or the cultural logic of late cannibalism” (pp. 204-237).

[15] For a discussion of Delicatessen that pays closer attention to the film’s narrowly French contexts—its nostalgia for wartime, its debt to French comedies—see Naomi Greene’s Landscapes of Loss: The National Past in Postwar French Cinema (Princeton: Princeton, 1999).

[16] See, respectively, Modern Times; The Pawnshop (1916); The Gold Rush (1925).

[17] There’s a sense in which this operation is at work even in the most vicious knockabout. Even the most paradigmatically abusive comedies—the Keystone shorts, say—are redemptive in that the staging of abuse itself discloses a joyous physical dexterity. The staging of bodies out of synch with the inanimate world relies on bodies that are secretly very much in synch with that world—and this small paradox characterizes the pleasure peculiar to those films.

[18] Bazin, What is Cinema?, translated by Hugh Gray (Berkeley: UCalifornia, 1967); see also Siegfried Kracauer’s Theory of Film: The Redemption of Physical Reality (New York: Oxford, 1965).

[19] See Benjamin’s “Surrealism: The Last Snapshot of the European Intelligentsia,” translated by Edmund Jephcott in the Selected Writings: Volume 2, 1927-1934, edited by Michael Jennings, Howard Eiland, and Gary Smith (Cambridge: Belknap, 1999, pp. 207-221), here p. 210.

[20] Compare Langle and Vanderburch’s utopia of abundance, as noted by Benjamin himself, in the 1935 Arcades-Project Exposé (in The Arcades Project, translated by Howard Eiland and Kevin McLaughlin—Cambridge: Belknap, 1999, pp. 3-13), here p. 7:

“Yes, when all the world from Paris to China
Pays heed to your doctrine, O divine Saint-Simon,
The glorious Golden Age will be reborn.
Rivers will flow with chocolate and tea,
Sheep roasted whole will frisk on the plain,
And sautéed pike will swim in the Seine.
Fricasseed spinach will grow on the ground,
Garnished with crushed fried croutons;
The trees will bring forth stewed apples,
And farmers will harvest boots and coats.
It will snow wine, it will rain chickens,
And ducks cooked with turnips will fall from the sky.”

(Translation altered)

To the Political Ontologists

The political ontologists have their work cut out for them. Let’s say you believe that the entire world is made out of fire: Your elms and alders are fed by the sky’s titanic cinder; your belly is a metabolic furnace; your lungs draw in the pyric aether; the air that hugs the earth is a slow flame—a blanket of chafing-dish Sterno—shirring exposed bumpers and cast iron fences; water itself is a mingling of fire air with burning air. The cosmos is ablaze. The question is: How are you going to derive a political program from this insight, and in what sense could that program be a politics of fire? How, that is, are you going to get from your ontology to your political proposals? For if fire is not just a political good, but is in fact the very stuff of existence, the world’s primal and universal substance, then it need be neither produced nor safeguarded. No merely human arrangement—no parliament, no international treaty, no tax policy—could dislodge it from its primacy. It will no longer make sense to describe yourself as a partisan of fire, since you cannot be said to defend something that was never in danger, and you cannot be said to promote something that is everywhere already present. Your ontology, in other words, has already precluded the possibility that fire is a choice or that it is available only in certain political frameworks. This is the fate of all political ontologies: The philosophy of all-being ends up canceling the politics to which it is only superficially attached. The –ology swallows its adjective.

The task, then, when reading the radical ontologists—the Spinozists, the Left Heideggerians, the speculative realists—is to figure out how they think they can get politics back into their systems; to determine by which particular awkwardness they will make room for politics amidst the spissitudes of being. In its structure, this problem repeats an old theological question, which the political ontologists have merely dressed in lay clothes—the question, that is, of whether we are needed by God or the gods. If you have given in to the pressure to subscribe to an ontology, then this is the first question you should ask: Whatever is at the center of your ontology—does it need you? Does Becoming need you? Is Being incomplete without you? Has the cosmic fire deputized you? And if you decide that, no, the fire does not need you—if, that is, you resist the temptation to appoint yourself that astounding entity upon which even the Absolute depends—then you will have yourself already concluded that there is nothing exactly to be gained from getting your ontology right, and you will be free to think about other and more interesting things.

If, on the other hand, you are determined to ontologize, and determined additionally that your ontology yield a politics, there are, roughly speaking, three ways you can make this happen.

First, you could determine that even though fire is the primal stuff of the universe, it is nonetheless unevenly distributed across it; or that the cosmos’s seemingly discrete objects embody fire to greater and lesser degrees. The heavy-gauge universalism of your ontology will prevent you from saying outright that water isn’t fire, but you might conclude all the same that it isn’t very good fire. This, in turn, would allow you to start drawing up league tables, the way that eighteenth-century vitalists, convinced that the whole world was alive, nonetheless distinguished between vita maxima and vita minima. And if you possess ontological rankings of this kind, you should be able to set some political priorities on their basis, finding ways to reward the objects (and people? and groups?) that carry their fiery qualities close to the surface, corona-like, and, equally, to punish those objects and people who burn but slowly and in secret. You might even decide that it is your vocation to help the world’s minimally fiery things—trout ponds, shale—become more like its maximally fiery things—volcanoes, oil-drum barbecue pits. The pyro-Hegelian takes it upon himself to convert the world to fire one timber-framed building at a time.

Alternately—and herewith a second possibility—you can proclaim that the cosmos is made of fire, but then attribute to humanity an appalling power not to know this. “Power” is the important word here, since the worry would have to be that human ignorance on this point could become so profound that it would damage or dampen the world-flame itself. Perhaps you have concluded that fire is not like an ordinary object. We know in some approximate and unconsidered way what it is; we are around it every day, walking in its noontide light, enlisting it to pop our corn, conjuring it from our very pockets with a roll of the thumb or knuckly pivot. And yet we don’t really understand the blaze; we certainly do not grasp its primacy or fathom the ways we are called upon to be its Tenders. You might even have discovered that we are the only beings, the only guttering flames in a universe of flame, capable of defying the fire, proofing the world against it, rebuilding the burning earth in gypsum and asbestos, perversely retarding what we have been given to accelerate. This argument expresses clear misgivings about humanity; it doesn’t trust us to keep the fire stoked; and to that extent it partakes of the anti-humanism that is all but obligatory among political ontologists. And yet it shares with humanism the latter’s sense that human beings are singular, a species apart, the only beings in existence capable of living at odds with the cosmos, capable, that is, of some fundamental ontological misalignment, and this to a degree that could actually abrogate an ontology’s most basic guarantees. From a rigorously anti-humanist perspective, this position could easily seem like a lapse—the residue of the very anthropocentrism that one is pledged to overcome—but it is in fact the most obvious opening for an anti-humanist politics (as opposed, say, to an anti-humanist credo), since you really only get a politics once the creedal guarantees have been lifted. 
If human beings are capable of forgetting the fire, someone will have to call to remind them. Someone, indeed, will have to ward off the ontological catastrophe—the impossible-but-somehow-still-really-happening nihilation of the fire—the Dousing.

That said, a non-catastrophic version of this last position is also possible, though its politics will be accordingly duller. Maybe duller is even a good thing. Such, at any rate, is the third pathway to a political ontology: You might consider arguments about being politically germane even if you don’t think that humanity’s metaphysical obtuseness can rend the very tissue of existence. You don’t have to say that we are damaging the cosmic fire; it will be enough to say that we are damaging ourselves, though having said that, you are going to have to stop trying to out-anti-humanize your peers. Your position will now be that not knowing the truth about the fire-world deforms our policies; that if we mistake the cosmos for something other than flame, we are likely to attempt impossible feats—its cooling; its petrification—and will then grow resentful when these inevitably fail. You might, in the same vein, determine that there are entire institutions dedicated to broadcasting the false ontologies that underwrite such doomed projects, doctrines of air and doxologies of stone, and you might think it best if such institutions were dismantled. If it’s politics we’re talking about, you might even have plans for their dismantling. Even so, you will have concluded by this point that the problem is in its essentials one of belief—the problem is simply that some people believe in water—in which case, ontology isn’t actually at issue, since nothing can happen ontologically; the fire will crackle on regardless of what we think of it, indifferent to our denials and our elemental philandering. You have thus gotten the politics you asked for, but only by having in a certain sense bracketed the ontology or placed it beyond political review. And your political program will accordingly be rather modest: a new framework of conviction—a clarification—an illumination.

Still, even a modest politics sometimes shows its teeth. William Connolly, in a book published in 2011, says that the world-fire is burning hotter than it has ever burnt; the problem is, though, that some “territories … resist” the flame. What we don’t want to miss is the basically militarized language of that claim: “resisting territories” suggests backwaters full of ontological rednecks; Protestant Austrian provinces; the Pyrenees under Napoleon; Anbar. Connolly’s notion is that these districts will need to be enlightened and perhaps even pacified, whereupon political ontology outs itself as just another program of philosophical modernization, a mopping up operation, the People of the Fire’s concluding offensive against the People of the Ice. “Don’t fight it,” Connolly, in this way too an irenicist, instructs the existentially retrograde. “Let it burn.”

The all-important point, then, is that there is absolutely no reason to get hung up on the word “fire,” in the sense that there is no more sophisticated concept you can put in its place that will make these problems go away: not Being, not Becoming, not Contingency, not Life, not Matter, not Living Matter. Go ahead: Choose your ontological term or totem and mad-lib it back into the last six paragraphs. Nothing else about them will change.

• • •

Anyone wanting to read Connolly’s A World of Becoming, or Jane Bennett’s Vibrant Matter, its companion piece from 2010, now has some questions they can ask. The two books share a program:

-to survey theories of chaos, complexity; to repeat the pronouncements of Belgian chemists who declare the end of determinism; and then to resurrect under the cover of this new science a much older intellectual program—a variously Aristotelian, Paracelsian, and hermetic strain in early modern natural philosophy, which once posited and will now posit again a living cosmos a-go-go with active forces, a universe whose intricate assemblages of self-organizing systems will frustrate any attempt to reduce them back to a few teachable formulas;

-or, indeed, to trade in “science” altogether in favor of what used to be called “natural history,” the very name of which strips nature of its pretense to permanence and pattern and nameable laws and finds instead a universe existing wholly in time, as fully exposed to contingency, mutation, and the event as any human invention, with alligators and river valleys and planets now occupying the same ontological horizon as two-field crop rotation and the Lombard Leagues;

-to recklessly anthropomorphize this historical cosmos, to the point where that entirely humanist device, which everywhere it looks sees only persons, tips over into its opposite, as humanity begins divesting itself of its specialness, giving away its privileges and distinguishing features one by one, and so produces a cosmos full of more or less human things, active, volatile, underway—a universe enlivened and maybe even cartoonish, precisely animated, staffed by singing toasters and jitterbugging hedge clippers.

I wouldn’t blame anyone for finding this last idea rather winning, though one problem should be noted right away, which is that Connolly, in particular, despite getting a lot of credit for bringing the findings of the natural sciences into political theory—and despite repeating in A World of Becoming his earlier admonition to radical philosophers for failing to keep up with neurobiology and chemistry and such—really only quotes science when it repeats the platitudes of the old humanities. The biologist Stuart Kauffman has, Connolly notes, “identified real creativity” in the history of the cosmos or of nature. Other research has identified “degrees of real agency” in a “variety of natural-social processes.” The last generation of neuroscience has helped specify the “complexity of experience,” the lethal and Leavisite vagueness of which phrase should be enough to put us on our guard. It turns out that the people who will save the world are still the old aesthetes; it’s just that their banalities can now borrow the authority of Nobel Laureates (always, in Connolly, named as such). Of one scientific finding Connolly notes: “Mystics have known this for centuries, but the neuroscience evidence is nice to have too.” That will tell you pretty much everything you need to know about the role of science in the new vitalism, which is that it gets adduced only to ratify already held positions. This is interdisciplinarity as narcissistic mirror.

But we can grant Connolly his fake science—or rather, his fake deployment of real science. The position he and Bennett share—that the cosmos is full of living matter in a constant state of becoming—isn’t wrong just because it’s warmed-over Ovid. What really needs explaining is just which problems the political philosophers think this neuro-metamorphism is going to solve. More to the point, one wonders which problems a vitalist considers still unsolved. If Bennett and Connolly are right, then is there anything left for politics to do? Has Becoming bequeathed us any tasks? Won’t Living Matter get by just fine without us? And if there is no political business yet to be undertaken, then in what conceivable sense is this a political philosophy and not an anti-political one?

The real dilemma is this: There are those three options for getting a politics back into ontology—you can devise an ontological hierarchy; you can combat ontological Vergessenheit; or you can promote ontological enlightenment. Bennett and Connolly don’t like two of these, and the third one—the one they opt for—ends up canceling the ontology they mean to advocate. I’ll explain.

Option #1: Hierarchy could work. Bennett and Connolly could try to distinguish between more and less dynamic patches of the universe—or between more and less animate versions of matter—but they don’t want to do that. The entire point of their philosophical program is a metaphysical leveling; witness that defense of anthropomorphism. Bennett, indeed, uses the word “hierarchical” only as an insult, the way that liberals and anarchists and post-structuralists have long been accustomed to doing. Having only just worked out that all of matter has the characteristics of life, she is not about to proclaim that some life forms are more important than others. Her thinking discloses a problem here, if only because it reminds one of how difficult it has been for the neo-vitalists to figure out when to propose hierarchies and when to level them, since each seems to come with political consequences that most readers will find unpalatable. Bennett herself worries that a philosophy of life might remove certain protections historically afforded humans and thus expose them to “unnecessary suffering.” She positions herself as another trans- or post-humanist, but she doesn’t want to give up on Kant and the never really enforced guarantees of a Kantian humanism; she thinks she can go over to Spinoza and Nietzsche and still arrive at a roughly Left-Kantian endpoint. “Vital materialism would … set up a kind of safety net for those humans who are now … routinely made to suffer.” That idea—which sounds rather like the Heidegger of the “Letter on Humanism”—is, of course, wrong. Bennett is right to fret. A vitalist anti-humanism is indeed rather cavalier about persons, as her immediate predecessors and philosophical mentors make amply clear. The hierarchies it erects are the old ones: Michael Hardt and Toni Negri think it is a good thing that entire populations of peasants and tribals were wiped out because their extermination increased the vital energies of the system as a whole.
And if vitalism’s hierarchies produce “unnecessary suffering,” well, then so do its levelings: Deleuze and Guattari think that French-occupied Africa was an “open social field” where black people showed how sexually liberated they were by fantasizing about “being beaten by a white man.”

Option #2: They could follow the Heideggerian path, which would require them to show that humanity is a species with weird powers—that humans (and humans alone) can fundamentally distort the universe’s most basic feature or hypokeimenon. That would certainly do the political trick. Vitalism would doubtless take on an urgency if it could make the case that human beings were capable of dematerializing vibrant matter—or of making it less vibrant—or of pouring sugar into the gas tank of Becoming. But Bennett and Connolly are not going to follow this path either, for the simple reason that they don’t believe anything of the sort. Their books are designed in large part to attest the opposite—that humanity has no superpowers, no special role to play nor even to refuse to play. Early on, Bennett praises Spinoza for “rejecting the idea that man ‘disturbs rather than follows Nature’s order.’” We’ll want to note that Spinoza’s claim has no normative force; it’s a statement of fact. We don’t need to be talked out of disturbing nature’s order, because we already don’t. The same grammatical mood obtains when Bennett quotes a modern student of Spinoza: “human beings do not form a separate imperium unto themselves.” We “do not”—the claim in its ontological form means could not—stand apart and so await no homecoming or reunion.

Those sentences sound entirely settled, but there are other passages in Vibrant Matter where you can watch in real time as such claims visibly neutralize the political programs they are being called upon to motivate. Here’s Bennett: “My hunch is that the image of dead or thoroughly instrumentalized matter feeds human hubris and our earth-destroying fantasies of conquest and consumption.” On a quick read you might think that this is nothing more than a little junk Heideggerianism—that techno-thinking turns the world into a lumberyard, &c. But on closer inspection, the sentence sounds nothing like Heidegger and is, indeed, entirely puzzling. For if it is “hubris” to think that human beings could “conquer and consume” the world—not hubris to do it, but hubris only to think it, hubris only in the form of “fantasy”—then in what danger is the earth of actually being destroyed? How could mere imagination have world-negating effects and still remain imagination? Bennett’s position seems to be that I have to recognize that consuming the world is impossible, because if I don’t, I might end up consuming the world. Her argument only gains political traction by crediting the fantasy that she is putatively out to dispel. Or there’s this: Bennett doesn’t like it when a philosopher, in this instance Hannah Arendt, “positions human intentionality as the most important of all agential factors, the bearer of an exceptional kind of power.” Her book’s great unanswered question, in this light, is whether she can account for ecological calamity, which is perhaps her central preoccupation, without some notion of human agency as potent and malign, if only in the sense that human beings have the capacity to destroy entire ecosystems and striped bass don’t.
The incoherence that underlies the new vitalism can thus be telegraphed in two complementary questions: If human beings don’t actually possess exceptional power, then why is it important to convince them to adopt a language that attributes to them less of it? But if they do possess such power, then on what grounds do I tell them that their language is wrong?

Option #3: Enlightenment it is, then. What remains, I mean, for both Connolly and Bennett, is the simple idea that most people subscribe to a false ontology and are accordingly in need of re-education. Connolly describes himself and his fellow vitalists as “seers”—he also calls them “those exquisitely sensitive to the world”—and he more than once quotes Nietzsche referring to everyone else, the non-seers, the foggy-eyed, as “apes.” I don’t much like being called an orangutan and know others who will like it even less, but at least this rendering of Bennett/Connolly has the possible merit of making the object-world genuinely autonomous and so getting the cosmos out from under the coercions of thought. Our thinking might affect us, but it cannot affect the universe. But there is a difficulty even here—the most injurious of political ontology’s several problems, I think—which is that via this observation philosophy returns magnetically to its proper object—or non-object—which is thought, and we realize with a start that the only thing that is actually up for grabs in these new realist philosophies of the object is in fact our thinking personhood. This is really quite remarkable. Bennett says that the task facing contemporary philosophy is to “shift from epistemology to ontology,” but she herself undertakes the dead opposite. She has precisely misnamed her procedure: “We are vital materiality,” she writes, “and we are surrounded by it, though we do not always see it that way. The ethical task at hand here is to cultivate the ability to discern nonhuman vitality, to become perceptually open to it.” There is nothing about her ontology that Bennett feels she needs to work out; it is entirely given.
The philosopher’s commission is instead to devise the moralized epistemology that will vindicate this ontology, and which will, in its students, produce “dispositions” or “moods” or, as Connolly has it, a “working upon the self” or the “cultivation of a capacity” or a “sensibility” or maybe even just another intellectual “stance.” Connolly and Bennett have lots of language for describing mindsets and almost no language for describing objects. Their arguments take shape almost entirely on the terrain of Geist. They really just want to get the subjectivity right.

There are various ways one might bring this betrayal of the object into view, in addition to quoting Bennett and Connolly’s plain statements on the matter. Among the great self-defeating deficiencies of these books are the fully pragmatist argumentative procedures adopted by their authors, who adduce no arguments in favor of their chosen ontology. Bennett points out that her position is really just an “experiment” with different ways of “narrating”; an “experiment with an idea”; a “thought experiment,” Connolly says. “What would happen to our thinking about nature if…” The post-structuralism that both philosophers think they’ve put behind them thus survives intact. But such play with discourse is, of course, entirely inconsistent with a robust philosophy of objects, premised as it is on the idea that the object exerts no pressure on the language we use to describe it, which indeed we elect at will. The mind, as convinced of its freedom as it ever was, chooses a philosophical idiom just to see what it can do.

This problem—the problem, I mean of an object-philosophy that can’t stop talking about the subject—then redoubles itself in two ways:

– The problem is redoubled, first, in the blank epiphanies of Bennett’s prose style, and especially when she makes like Novalis on the streets of Baltimore, putting in front of readers an assemblage of objects the author encountered beneath a highway underpass so that we can imagine ourselves beside her watching them pulsate. The problem is that she literally tells us nothing about these items except that she heard them chime. One wants to say that she chose four particular objects—a glove, pollen, a dead rat, and a bottle cap—except that formulation is already misleading, since lacking further description, these four objects really aren’t particular at all. They are sham specificities, for which any other four objects could have served just as well. She could have changed any or all of them—could have improvised any Borgesian quartet—and she would have written that page in exactly the same manner. You can suggest your own, like this:

-a sock, some leaves, a lame squirrel, and a soda can

-a castoff T-shirt, a fallen tree limb, a hungry kitten, and an empty Cheetos bag

-a bowler hat, a beehive, a grimy parasol, and Idi Amin

These aren’t objects; these are slots; and Bennett’s procedure is to that extent entirely abstract. This is what it means to say that materialism, too, is just another philosophy of the subject. It does no more or less than any other intellectual system, maintaining the word “object” only as a vacancy onto which to project its good intentions.

– The problem is redoubled, second, in the nakedly religious idiom in which these two books solemnize their arguments. That idiom, indeed, is really just pragmatism in cassock and cope. The final page of Bennett’s book prints a “Nicene Creed for would-be vital materialists.” Connolly’s book begins by offering its readers “glad tidings.” Nor does the latter build arguments or gather evidence; he “confesses” a “philosophy/faith,” which is also a “faith/conviction,” which is also a “philosophy/creed.” Bennett and Connolly hold vespers for the teeming world. Eager young materialists, turning to these books to help round out their still developing views, must be at least somewhat alarmed to discover that our relationship to matter is actually one of “faith” or “conviction.” A philosophical account of the object is replaced by a pledge—a deferral—a promise, by definition tentative, offered in a mood of expectancy, to take the object on trust. Nor is this in any way a gotcha point. Connolly is completely open about his (Deleuzian) aim “to restore belief in the world.” It’s just that no sooner is this aim uttered than the world undergoes the fate of anything in which we believe, since if you name your belief as belief, then you are conceding that your position is optional and to some considerable degree unfounded and that you do not, in that sense, believe it at all.

It’s not difficult, at any rate, to show that Connolly for one does not believe in his own book. The stated purpose of A World of Becoming is to show us how to “affirm” that condition. That’s really all that’s left for us to do, once one has determined that Becoming will go on becoming even without our help and even if we work against it. Connolly’s writing, it should be said, is generally short on case studies or named examples of emergent conjunctures, leaving readers to guess what exactly they are being asked to affirm. For many chapters on end, one gets the impression that the only important way in which the world is currently becoming is that more people from Somalia are moving to the Netherlands, and that the phrase “people who resist Becoming” is really just Connolly’s idiosyncratically metaphysical synonym for “racists.” But near the end of the book, three concrete examples do appear, all at once—three Acts of Becoming—two completed, one still in train: the 2003 invasion of Iraq; the 2008 financial collapse; and global warming. All three, if regarded from the middle distance, seem to confirm the vitalist position in that they have been transformative and destabilizing and will for the foreseeable future produce unpredictable and ramifying consequences. What is surprising—but then really, no, finally not the least bit surprising—is that Connolly uses a word in regard to these three cases that a Nietzschean committed to boundless affirmation shouldn’t be able to so much as write: “warning.” Melting icecaps are not to be affirmed—that’s Connolly’s own view of the matter. Mass foreclosure is not to be affirmed. Quite the contrary: If you know that the cosmos is capable of shifting suddenly, then you might be able to get the word out. The responsibility borne by philosophers shifts from affirmation to its opposite: Vitalists must caution others about what rushes on. 
The philosopher of Becoming thus asks us to celebrate transformation only until he runs up against the first change he doesn’t like.

This is tough to take in. Lots of things are missing from political ontology: politics, objects, an intelligible metaphilosophy. But surely one had the right to expect from a theorist of systemic and irreversible change, one with politics on his mind, some reminder of the possibility of revolution, some evocation, since evocations remain needful, of the joy of that mutation, the elation reserved for those moments when Event overtakes Circumstance. But in Connolly, where one might have glimpsed the grinning disbelief of experience unaccounted for, one finds only the bombed-out cafés of Diyala, hence fear, hence the old determination to fight the future. The philosopher of fire grabs the extinguisher. The philosopher of water walks in with a mop.

Thanks to Jason Josephson and everyone in the critical theory group at Williams College.

Outward Bound: On Quentin Meillassoux’s After Finitude
Il n’y a pas de hors-texte. If post-structuralism has had a motto—a proverb and quotable provocation—then surely it is this, from Derrida’s Of Grammatology. Text has no outside. There is nothing outside the text. It is tempting to put a conventionally Kantian construction on these words—to see them, I mean, as bumping up against an old epistemological barrier: Our thinking is intrinsically verbal—in that sense, textual—and it is therefore impossible for our minds to get past themselves, to leave themselves behind, to shed words and in that shedding to encounter objects as they really are, in their own skins, even when we’re not thinking them, plastering them with language, generating little mind-texts about them. But this is not, in fact, what the sentence says. Derrida’s claim would seem to be rather stronger than that: not There are unknowable objects outside of text, but There are outside of text no objects for us to know. So we reach for another gloss—There is only text, ain’t nothing but text—except the sentence isn’t really saying that either, since to say that there is nothing outside text points to the possibility that there is, in a manner yet to be explained, something inside text, and this something would not itself have to be text, any more than caramels in a carrying bag have to be made out of cellophane.

So we look for another way into the sentence. An alternate angle of approach would be to consider the claim’s implications in institutional or disciplinary terms. The text has no outside is the sentence via which English professors get to tell everyone else in the university how righteously important they are. No academic discipline can just dispense with language. Sooner or later, archives and labs and deserts will all have to be exited. The historians will have to write up their findings; so will the anthropologists; so will the biochemists. And if that’s true, then it will be in everyone’s interest to have around colleagues who are capable of reflecting on writing—literary critics, philosophers of language, the people we used to call rhetoricians—not just to proofread the manuscripts of their fellows and supply these with their missing commas, but to think hard about whether the language typically adopted by a given discipline can actually do what the discipline needs it to do. If the text has no outside, then literature professors will always have jobs; the idea is itself a kind of tenure, since it means that writerly types can never safely be removed from the interdisciplinary mix. The idea might even establish—or seek to establish—the institutional primacy of literature programs. Il n’y a pas de hors-texte. There is nothing outside the English department, since every other department is itself engaged in a more or less literary endeavor, just one more attempt to make the world intelligible in language.

Such, then, is the interest of Quentin Meillassoux’s After Finitude, first published in French in 2006. It is the book that, more than any other of its generation, means to tell the literature professors that their jobs are not, in fact, safe. Against Derrida it banners a counter-slogan of its own: “it could be that contemporary philosophers have lost the great outdoors, the absolute outside.” It is Meillassoux’s task to restore to us what he is careful not to call nature, to lead post-structuralists out into the open country, to make sure that we are all getting enough fresh air. Meillassoux means, in other words, to wean us from text, and for anyone beginning to experience a certain eye-strain, a certain cramp of the thigh from not having moved all day from out his favorite chair, this is bound to be an appealing prospect, though if you end up unconvinced by its arguments—and there are good reasons for doubt, as the book amounts to a tissue of misunderstanding and turns, finally, on one genuinely arbitrary prohibition—then it’s all going to end up sounding like a bullying father enrolling his pansy son in the Boy Scouts against his will: Get your head out of that book! Why don’t you go in the yard and play?!

• • •

Of course, Meillassoux’s way of getting the post-structuralists to go hiking with him is by telling them which books to read first. If you start scanning After Finitude’s bibliography, what will immediately stand out is its programmatic borrowing from seventeenth- and early eighteenth-century philosophers. Meillassoux regularly cites Descartes and poses anew the question that once led to the cogito, but will here lead someplace else: What is the one thing I as a thinking person cannot disbelieve even from the stance of radical doubt? He christens one chapter after Hume and proposes, as a knowing radicalization of the latter’s arguments, that we think of the cosmos as “acausal.” In the final pages, Galileo steps forward as modern philosophy’s forgotten hero. His followers are given to saying that Meillassoux’s thinking marks out a totally new direction in the history of philosophy, but I don’t think anyone gets to make that kind of claim until they have first drawn up an exhaustive inventory of debts. At one point, he praises a philosopher publishing in the 1980s for having “written with a concision worthy of the philosophers of the seventeenth century.” That’s one way to get a bead on this book—that it resurrects the Grand Siècle as a term of praise. The movement now coalescing around Meillassoux—the one calling itself speculative realism—is a bid to get past post-structuralism by resurrecting an ante-Kantian, more or less baroque ontology, on the understanding that nearly all of European philosophy since the first Critique can be denounced as one long prelude to Derrida. There never was a “structuralism,” but only “pre-post-structuralism.”

Meillassoux, in sum, is trying to recover the Scientific Revolution and early Enlightenment, which wouldn’t be all that unusual, except he is trying to do this on radical philosophy’s behalf—trying, that is, to get intellectuals of the Left to make their peace with science again, as the better path to some of post-structuralism’s signature positions. His argument’s reliance on early science is to that extent instructive. One of the most appealing features of Meillassoux’s writing is that it restages something of the madness of natural philosophy before the age of positivism and the research grant; it retrieves, paragraph-wise, the sublimity and wonder of an immoderate knowledge. In 1712, Richard Blackmore published an epic called Creation, which you’ve almost certainly never heard of but which remained popular in Britain for several decades. That poem tells the story of the world’s awful making, before humanity’s arrival, and if you read even just its opening lines, you’ll see that this conception is premised on a rather pungent refusal of Virgil and hence on a wholesale refurbishing of the epic as genre: “No more of arms I sing.” Blackmore reclassifies what poets had only just recently been calling “heroic verse” as “vulgar”; the epic, it would seem, has degenerated into bellowing stage plays and popular romances and will have to learn from the astrophysicists if it is to regain its loft and dignity. Poets will have to accompany the natural philosophers as they set out “to see the full extent of nature” and to tally “unnumbered worlds.” The point is that there was lots of writing like this in the eighteenth century, and that it was aligned for the most part with the period’s republicans and pseudo-republicans and whatever else England had in those years instead of a Left. 
This means that the cosmic epic was to some extent a mutation of an early Puritan culture, a way of carrying into the eighteenth century earlier trends in radical Protestant writing, and especially the latter’s Judaizing or philo-Semitic strains. The idea here was that Hebrew poetry provided an alternative model to Greek and Roman poetry: a sublime, direct poetry of high emotion, of inspiration, ecstasy, and astonishment. The Creation is one of the things you could read if you wanted to figure out how ordinary people ever came to care about science—how science was made into something that could turn a person on—and what you’ll find in its pages is a then new aesthetic that is equal parts Longinus and Milton, or rather Longinus plus Moses plus Milton plus Newton, and not a Weberian or Purito-rationalist Newton, but a Newton supernal and thunder-charged, in which the Principia is made to yield science fiction. It is, finally, this writing that Meillassoux is channeling when he asks us—routinely—to contemplate the planet’s earliest, not-yet-human eons; when, like a boy-intellectual collecting philosophical trilobites, he demands that our minds be arrested by the fossil record or that all of modern European philosophy reconfigure itself to accommodate the dinosaurs. And it is the eighteenth-century epic’s penchant for firebolt apocalyptic that echoes in his descriptions of a cosmos beyond law:

Everything could actually collapse: from trees to stars, from stars to laws, from physical laws to logical laws; and this not by virtue of some superior law whereby everything is destined to perish, but by virtue of the absence of any superior law capable of preserving anything, no matter what, from perishing.

Meillassoux’s followers call this an idea that no-one has ever had before. The epic poets once called it Strife.

Why so many readers have discovered new political energies in Meillassoux’s argument is perhaps hard to see, since the book contains absolutely nothing that would count, in any of the ordinary senses, as political thought. There are, it’s true, a few passages in which Meillassoux lets you know he thinks of himself as a committed intellectual: a (badly underdeveloped) account of ideology critique; the faint chiming, in one sentence, of The Communist Manifesto; a few pages in tribute to Badiou. With a little effort, though, the political openings can be teased out, and they are basically twofold: 1) Meillassoux says that thought’s most pressing task is to do justice to the possibility—or, indeed, to the archaic historical reality—of a planet stripped of its humans. On at least one occasion, he even uses, in English translation, the phrase “world without us.” For anyone looking to devise a deep ecology by non-Heideggerian means—and there are permanent incentives to reach positions with as little Heidegger as possible—Meillassoux’s thinking is bound to be attractive. The book is an entry, among many other such, in the competition to design the most attractive anti-humanism. 2) The antinomian language in the sentence last quoted—laws could collapse; there is no superior law—or, indeed, the very notion of a cosmos structured only by unnecessary laws—is no doubt what has drawn to this book those who would otherwise be reading Deleuze, since Meillassoux, like this other, has designed an ontology to anarchist specifications, though he has done so, rather surprisingly, without Spinoza. Another world is possible wasn’t Marx’s slogan—it was Leibniz’s—except at this level, it has to be said, the book’s politics remain for all intents and purposes allegorical. Meillassoux’s argument operates at most as a peculiar, quasi-theological reassurance that if we set out to change the political and legal order of our nation-states, the universe will like it.

Maybe this is already enough information for us to see that After Finitude’s relationship to post-structuralism is actually quite complicated. Any brief description of the book is going to have to say that it is out to demolish German Idealism and post-structuralism and any other philosophy of discourse or mind. But if we take a second pass over After Finitude, we will have to conclude that far from flattening these latter, its chosen task is precisely to shore them up, to move anti-foundationalism itself onto sturdy ontological foundations. Meillassoux’s niftiest trick, the one that having mastered he compulsively performs, is the translating of post-structuralism’s over-familiar epistemological claims into fresh-sounding ontological ones. What readers of Foucault and Lyotard took to be claims about knowledge turn out to have been claims about Being all along, and it is through this device that Meillassoux will preserve what he finds most valuable in the radical philosophy of his parents’ generation: its anti-Hegelianism, its hard-Left anti-totalitarianism, its attack on doctrines of necessity, its counter-doctrine of contingency, its capacity for ideology critique.

Adorno was arguing as early as the mid-‘60s that thought needed to figure out some impossible way to think its other, which is the unthought, “objects open and naked,” the world out of our clutches. “The concept takes as its most pressing business everything it cannot reach.” Is it possible to devise “cognition on behalf of the non-conceptual”? This is the sense in which Meillassoux, far from breaking with post-structuralism and its cousins, is simply answering one of its central questions. It’s just that he does so in a way that any convinced Adornian or Left Heideggerian is going to find baffling. Cognition on behalf of the non-conceptual turns out to have been right in front of us all along—it is called science and math. Celestial mechanics has always been the better anti-humanism. A philosophical anarchism that has thrown its lot in with the geologists and not with the Situationists—that is the possibility for thought that After Finitude opens up. The book, indeed, sometimes seems to be borrowing some of Heidegger’s idiom of cosmic awe, but it separates this from the latter’s critique of science—such that biology and chemistry and physics can henceforth function as vehicles of ontological wonder, astonishment at the world made manifest. And with that idea there comes to an end almost a century’s worth of radical struggle against domination-through-knowledge, against bureaucracy, rule by experts, the New Class, technocracy, instrumental reason, and epistemological regimes. On the back cover of After Finitude, Bruno Latour says that Meillassoux promises to “liberate us from discourse,” but that’s not exactly right and may be exactly wrong. He wants rather to free us from having to think of discourse as a problem—precisely not to rally us against it, in the manner of Adorno and Foucault—but to license us to make our peace with, and so sink back into, it.

• • •

Lots of people will find good reasons to take this book seriously. It is, nonetheless, unconvincing on five or six fronts at once.

It is philosophically conniving. There are almost no empirical constraints placed on the argumentative enterprise of ontology. Nothing in everyday experience is ever going to suggest that one generalized account of all Being is right and another wrong, and this situation will inevitably grant the philosopher latitude. Ontologies will always be tailored to extra-philosophical considerations, any one of them elected only because a given thinker wants something to be true about the cosmos. Explanations of existence are all speculative and in that sense opportunistic. It is this opportunism we sense when we discover Meillassoux baldly massaging his sources. Here he is on p. 38: “Kant maintains that we can only describe the a priori forms of knowledge…, whereas Hegel insists that it is possible to deduce them.” Kant, we are being told, doesn’t think the categories are deducible. And then here’s Meillassoux on pp. 88 and 89: “the third type of response to Hume’s problem is Kant’s … objective deduction of the categories as elaborated in the Critique of Pure Reason.”

The leap from epistemology to ontology sometimes falls short. At one point, Meillassoux thinks he can get the better of post-structuralists like so: Imagine, he says, that an anti-foundationalist is talking to a Christian (about the afterlife, say). The Christian says: “After we die, the righteous among us will sit at the right hand of the Lord.” And the anti-foundationalist responds the way anti-foundationalists always respond: “Well, you could be right, but it could also be different.” For Meillassoux, that last clause is the ontologist’s opening. His task is now to convince the skeptic that “it could also be different” is not just a skeptical claim about what we can’t know—it is not an ignorance, but rather already an ontological position in its own right. What we know about the real cosmos, existing apart from thought, is that everything in it could also be different. And now suppose that the anti-foundationalist responds to the ontologist by just repeating the same sentence—again, because it’s really all the skeptic knows how to say: “Well, you could be right, but it could also be different.” Meillassoux at this point begins his end-zone dance. He has just claimed that Everything could be different, and the skeptic obviously can’t disagree with this by objecting that Everything could be different. The skeptic has been maneuvered round to agreeing with the ontologist’s position. But Meillassoux doesn’t yet have good reasons to triumph, because, quite simply, he is using “could be different” in two contrary senses, and he rather bafflingly thinks that their shared phrasing is enough to render them identical. He has simply routed his argument through a rigged formulation, one in which ontological claims and epistemological claims seem briefly to coincide. 
The skeptical, epistemological version of that sentence says: “Everything could be different from how I am thinking it.” And the ontological version says: “Everything could be different from how it really is now.” There may, in fact, be real-world instances in which skeptics string words into ambiguous sentences that could mean either, and yet this will never indicate that they unwittingly or via logical compulsion mean the latter.

Meillassoux’s theory of language is lunatic. Another way of getting a bead on After Finitude would be to say that it is trying to shut down science studies; it wants to stop literary (and anthropological) types from reading the complicated utterances produced by science as writing (or discourse or culture). Meillassoux is bugged by anyone who reads scientific papers and gets interested in what is least scientific in them—anyone, that is, who attributes to astronomy or kinetics a political unconscious, as when one examines the great new systems devised during the seventeenth century and realizes that they all turned on new ways of understanding “laws” and “forces” and “powers.” Meillassoux’s own philosophy requires, as he puts it, “the belief that the realist meaning of [any utterance about the early history of the planet] is its ultimate meaning—that there is no other regime of meaning capable of deepening our understanding of it.” The problem is, of course, that it’s really easy to show that science writing does, in fact, contain an ideological-conceptual surcharge; that, like any other verbally intricate undertaking, it can’t help but borrow from several linguistic registers at once; and that there is always going to be some other “order of meaning” at play in statements about strontium or the Mesozoic. Science studies, after all, possesses lots of evidence of a more or less empirical kind, and Meillassoux’s response is to object that this evidence concerns nothing “ultimate.” But then what would it mean for a sentence to have an “ultimate meaning” anyway? A meaning that outlasts its rivals? Or that defeats them in televised battle? What, then, is the time that governs meanings, such that some count as final even while the others are still around? And at what point do secondary meanings just disappear? What are the periods of a meaning’s rise and fall? 
Meillassoux doesn’t possess the resources to answer any of those questions; nor, as best as I can tell, does he mean to try. The phrase “ultimate meaning” is not philosophically serious. It does little more than commit us to a blatant reductionism, commanding us to disregard any complexities and ambiguities that a linguistically attentive person would, upon reading Galileo, discover. We can even watch Meillassoux’s own language drift, such that “ultimate meaning” becomes, over the course of three pages, exclusive meaning. “Either [a scientific] statement has a realist sense, and only a realist sense, or it has no sense at all.” It exasperates Meillassoux that an unscientific language would so regularly worm its way into science writing; and it exasperates him, further, that English professors would take the trouble to point this language out. His response is to install a prohibition, the wholly unscientific injunction to treat scientific language as simpler than it is even when the data show otherwise. It is perhaps a special problem for Meillassoux that the ideological character of science writing is especially pronounced in the very period to which he is looking for intellectual salvation—the generations on either side of Newton, which were crammed with ontologies explicitly modeled on the political theology of the late Middle Ages—new scientific cosmologies, I mean, whose political dimensions were quite overt. And it is definitely a problem for Meillassoux that he has himself written a political ontology of roughly this kind—a cosmology made-to-order for the punks and the Bakuninites—since one of his opening moves is to disallow the very idea of such ontologies. After Finitude only has the implications its anarchist readership takes it to have if its language means more than it literally says, and Meillassoux himself insists that it can have no such meaning.

He poses as secular but is actually a kind of theologian. It is not just that Meillassoux is secular. He is pugnaciously secular or, if you prefer, actively anti-religious. He casually links Levinas with fanaticism and Muslim terror. He sticks up for what Adorno once called the totalitarianism of enlightenment, marveling at philosophy’s now vanished willingness to tell religious people that they’re stupid or at its determination to make even non-philosophers fight on its terms. And against our accustomed sense that liberalism is the spontaneous ideology of secular modernity, Meillassoux sees freedom of opinion instead as an outgrowth of the Counter-Reformation and Counter-Enlightenment. Liberalism, in other words, is how religion gets readmitted to the public sphere even once everyone involved has been forced to concede that it’s bunk. And yet for all that, Meillassoux has entirely underestimated how hard it is going to be to craft a consequent anti-humanism without having recourse to religious language. At the heart of After Finitude is a simple restatement of the religious mystic’s ecstatic demand that we “get out of ourselves” and thereby learn to “grasp the in-itself”; the book aches for an “outside which thought could explore with the legitimate feeling of being on foreign territory—of being entirely elsewhere.” In the place of God, Meillassoux has installed a principle he calls “hyper-Chaos,” to which, however, he then attaches all manner of conventional theological language, right down to the capital-C-of-adoration. Hyper-Chaos is an entity…

…for which nothing is or would seem to be impossible … capable of destroying both things and worlds, of bringing forth monstrous absurdities, yet also of never doing anything, of realizing every dream, but also every nightmare, of engendering random and frenetic transformations, or conversely, of producing a universe that remains motionless down to its ultimate recess, like a cloud bearing the fiercest storms, then the eeriest bright spells.

No-one reading that passage—even casually, even for the first time—is going to miss the predictable omnipotence language with which it begins: Chaos is the God of Might. Meillassoux himself acknowledges as much. What may be less apparent, though, is that this entire line of argument simply extends into the present the late medieval debate over whether God was constrained to create this particular universe, or whether he could have, at will, created another, and Meillassoux’s position in this sense resembles nothing so much as the orthodox Christian defense of miracles, theorizing a power that can, in defiance of its own quotidian regularities, “bring forth absurdities, engender transformations, cast bright spells.” There have been many different theories of contingency over the last generation, especially among philosophers of history. As a philosopheme, it has, in fact, become rather commonplace. Meillassoux is unusual in this regard only in that he has elevated contingency to the position of demiurge and so returned a full portion of metaphysics to a position that had until now been trying to get by without it. Such is the penalty after all for going back behind Kant, that you’ll have to stop your ears again against the singing of angels. Two generations before the three Critiques there stood Christian Wolff, whom Meillassoux does not name, but on whose system his metaphysics is modeled and who wrote, in the 1720s and ‘30s, that philosophy was “the study of the possible as possible.” Philosophy, in other words, is the one all-important branch of knowledge that does not study actuality. Each more circumscribed intellectual endeavor—biology, history, philology—studies what-now-is, but philosophy studies events and objects in our world only as a subset of the much vaster category of what-could-be. 
It tries, like some kind of interplanetary structuralism, to work out the entire system of possibilities—every hypothetical aggregate of objects or particles or substances that could combine without contradiction—and thereby reclassifies the universe we currently inhabit as just one unfolding outcome among many unseen others. Meillassoux, in this same spirit, asks us to imagine a cosmos of “open possibility, wherein no eventuality has any more reason to be realized than any other.” And this way of approaching actuality is what Wolff calls theology, which in this instance means not knowledge of God but God’s knowledge. Philosophy, for Wolff—as, by extension, for Meillassoux—is a way of transcending human knowledge in the direction of divine knowledge, when the latter is the science not just of our world but of all things that could ever be, what Hegel called “the thoughts had by God before the Creation”—sheer could-ness, vast and indistinct.

He misdescribes recent European philosophy and is thus unclear about his own place in it. Maybe this point is better made with reference to his supporters than to Meillassoux himself. Here’s how one of his closest allies explains his contribution:

With his term ‘correlationism,’ Meillassoux has already made a permanent contribution to the philosophical lexicon. The rapid adoption of this word, to the point that an intellectual movement has already assembled to combat the menace it describes suggests that ‘correlationism’ describes a pre-existent reality that was badly in need of a name. Whenever disputes arise in philosophy concerning realism and idealism, we immediately note the appearance of a third personage who dismisses both of these alternatives as solutions to a pseudo-problem. This figure is the correlationist, who holds that we can never think of the world without humans nor of humans without the world, but only of a primal correlation or rapport between the two.

As intellectual history, this is almost illiterate. We weren’t in need of a name, because the people who argue in terms of the-rapport-between-humans-and-world or subject-and-object were already called “Hegelians,” and the movement opposing them hasn’t just now “assembled,” because philosophers have been battling the Hegelians as long as there have been Hegelians to fight. Worse still is the notion, projected by Meillassoux himself, that all of European philosophy since Kant must be opposed for leading inexorably, shunt-like, to post-structuralism. This is just the melodrama to which radical philosophy is congenitally prone; the entire history of Western thought has to become a single, uninterrupted exercise in the one perhaps quite local error you would like to correct, the cost of which, in this instance, is that Meillassoux and Company have to turn every major European thinker into a second-rate idealist or vulgar Derridean and so end up glossing Wittgenstein and Heidegger and Sartre and various Marxists in ways that are tendentious to the point of unrecognizability. There are central components of Meillassoux’s project that philosophers have been attempting since the 1790s, and he occasionally gives the impression of not knowing that European philosophy has been trying for generations to get past dialectics or humanism or the philosophy of the subject or whatever else it is for which “correlationism” is simply a new term. Perhaps Meillassoux thinks that his contribution has been to show that Wittgenstein and Heidegger were more Hegelian than they themselves realized. But then this, too, seems more like a repetition than a new direction, since European philosophy has always had a propensity for auto-critique of precisely this kind. Auto-critique is in lots of ways its most fundamental move: One anti-humanist philosopher accuses another of having snuck in some humanist premise or another.
One philosopher-against-the-subject accuses another of being secretly attached to theories of subjectivity. And so on. For Meillassoux to come around now and say that there are residues of Kant and Hegel all over the place in contemporary thought—well, sure: That’s just the sort of thing that European philosophers are always saying.

He is wrong about German idealism. Kant, Meillassoux says, is the one who deprived us all of the Great Outdoors, which accusation seems plausible … until you remember that bit about “the starry heavens above me.” This is one more indication that Meillassoux is punching air, though the point matters more with reference to Hegel than to Kant. Hegel’s philosophy, after all, turns on a particular way of relating the history of the world: At first, human beings were just pinpricks of consciousness in a world not of their own making, mobile smudges of mind on an alien planet. But human activity gradually remade the world—it refashioned every glade and river valley—worked all the materials—to the point where there now remains nothing in the world that hasn’t to some degree been made subject to human desire and planning. The world has, in this sense, been all but comprehensively humanized; it is saturated with mind. What are we to say, then, when Meillassoux claims that no modern philosopher since Kant can even begin to deal with the existence of the world before humans; that they can’t even take up the question; that they have to duck it; that it is what will blow holes in their systems? Hegel not only has no trouble speaking of the pre-human planet; his historical philosophy downright presupposes it. The world didn’t use to be human; it is now thoroughgoingly so; the task of philosophy is to account for that change. And it is the great failing of Meillassoux’s book that, having elevated paleontology to the paradigmatic science, he can’t even begin to explain the transformation. You might ask yourself again whether Meillassoux’s account of science is more plausible than a Hegelian one. What, after all, happened when Europeans began devising modern science? What did science actually start doing? Was it or wasn’t it a rather important part of the ongoing process by which human beings subjected the non-human world to mind?
Meillassoux urges us to think of science as the philosophy of the non-human, positing as it does a world separable from thought, a planet independent of humanity, laws that don’t require our enforcing. But does science, in fact, bring that world about? Meillassoux hasn’t even begun to respond to those philosophers, like Adorno and Heidegger, who wanted to pry philosophy away from science, not because they were complacently encased in the thought-bubbles of discourse and subjectivity, but more nearly the opposite—because they thought science was the philosophy of the subject, or one important version of it, the very techno-thinking by which human being secures its final dominion over the non-human. Meillassoux, in this sense, is trying to theorize, not the science that actually entered into the world in the seventeenth century, but something else, an alternate modernity, one in which aletheia and science went hand in hand, a fully non-human science or science that humans didn’t control: gelassene Wissenschaft. But the genuinely materialist position is always going to be the one that takes seriously the effects of thought and discourse upon the world; the one that knows science itself to be a practice; the one that faces up to the realization that the concept of  “the non-human” can only ever be a device by which human beings do things to themselves and their surroundings. There is nothing real about a realism that offers itself only as a utopian counter-science, a communication from the pluriverse, a knowledge that presumes our non-existence and so requires, as bearer, some alternate cosmic intelligence that it would be simplest to call divinity.

(Thanks to Jason Adams, Chris Pye, and Anita Sokolsky. My understanding of Christian Wolff I take from Werner Schneiders’s “Deus est philosophus absolute summus: Über Christian Wolffs Philosophie und Philosophiebegriff.” The ally of Meillassoux’s that I quote is Graham Harman.)


Postmodernism Is Maybe After All A Historicism, Part 3

PART ONE IS HERE.

PART TWO IS HERE.

You’re going to understand De Palma’s Body Double better if you understand why Theodor Adorno liked Mahler. Somebody might have told you once that Adorno championed difficult art in general and atonal music in particular: string quartets made to skirl; the mathematically precise caterwaul of that half-stepping dozen, the series chromatic and uncanny. This isn’t exactly wrong, and it is the regular stuff of encyclopedia entries and intro classes, but it’s not exactly right either. For Adorno did not want an art entirely without subjectivity, which is what serial music sometimes suggests, a pure and as it were automatic music that would never suggest to anyone listening a link back to human utterance or expressiveness; that would never once yield a tune that someone, at least, would want to sing; a music, in fine, that was all system. What he was seeking, rather, was an art organized around antitheses, in which the conflict between subject and system would become audible; and he worried there were different ways an artwork could instead obliterate any sense we had of a living person struggling to come to speech within it, and he didn’t like any of these. Traditionalism was the obvious problem: the expert mimicry of older styles, the striking of already petrified poses, the chanting of sentences already spoken. Adorno said of Stravinsky that he was a U2 tribute band. But then a radical aesthetic can beat its own experimental path to the same deadly place, one he identified in the fully developed versions of twelve-tone music, in Webern, that is, and the late modernists of the ‘60s: serial music become oppressive because now wholly itself, without any concession to its historical rivals or predecessors, routinized and ascetic, sealed off inside its own rigors and formulae.

It is this rejection of Webern that should clarify Adorno’s championing of both Alban Berg and Gustav Mahler, which is to say both a composer conventionally classified as atonal and one typically reckoned not, the point being that each of these two absorbed into his music the opposition that musical history tries to construct only between them. Mahler and Berg can be conceptualized together as the Composers of the Break, neither tonal nor atonal, but first-one-and-then-the-other, by turns and in shifting ratios or proportions. If it’s misleading to say that Adorno was one of the great theorists of serial music, then that’s because it was this music-at-the-cusp—and not the purity of The Twelve—that he meant to recommend. At issue were compositions in which the conflict between entire aesthetic periods or modes of cultural production was openly theatricalized, and from this perspective, a composer’s starting point was irrelevant. You could fill your music with tunes, but let them curdle on occasion into noise; or, alternately, you could plunge your listeners into noise, but remind them occasionally of what tunes used to sound like. Either way, you would be staging a face-off between the entire history of human songfulness and some other, radically new aesthetic mode in which art no longer takes our pleasure as its aim and limit. And here, perhaps, is the most curious point: These last are scenarios in which either term, tonality or atonality, can count as subject and either as structure. You can say that the fine old tunes sustain us as subjects and that the mere math of the twelve-tone series recreates for us in the concert hall the experience of structure and rationalization. But you can just as plausibly say that those tunes are sedimented and mindless convention, at which point we might welcome dissonance as the opening out of the composer’s idiom—or simply as the afflicted yowl of anyone who wishes the radio would for once play something different.

We can’t make listeners choose between Mahler and Berg, because it is really easy to find Mahler in Berg. If we want to get back to Body Double, all we need to do, then, is generalize Adorno’s argument in a direction he probably wouldn’t have; to insist that antithesis, far from being the special achievement of these two Austrians, is the inevitable condition of most artworks, nearly all of which absorb into themselves piecewise the styles and conventions of various historical periods, social classes, and political tendencies. You can call this “liminal art” if you want, as long as you are prepared to add that threshold never becomes room. The struggles that a Gramscian reader thinks go on between artworks are usually reproduced one by one within those same works, which, if patiently read, will generate maps of the broader cultural fields of which they are also a part. What we can say now of postmodern art is that it is almost never wholly itself, that in order even to be recognized as postmodern, it will have to announce its own distinctiveness, marking itself off from its modernist counterparts, which it will have to after a fashion name and in naming preserve. The sentences regularly encountered in Jameson in which x artist is declared to be a postmodern revision of y modernist are thus oddly self-defeating. How often do you find yourself wanting to remind Jameson of how the dialectic works?—stammering, in this case, that one cannot name a break between two terms without simultaneously positing their continuity. If you want to lift out what was new in the movie Body Heat, having first spotted that it was, as Jameson has it, a “remake of James M. Cain’s Double Indemnity,” then you have yourself already conceded that the one was really, actually, finally a lot like the other. 
When we designate a work as “postmodern,” the superseded and modernist version thereof will persist, as its not-really negated shadow, and this shadow will, in turn, vitiate our sense of postmodernism as ahistorical. You can say that Body Double is a movie about other movies, but that very reliance on other films—prior films—will be a prompt to historical thinking. Postmodern Body Double preserves within itself the memory of movies that weren’t yet postmodern. But then this or something like it is going to be true of most really existing postmodernism, which we now have to reconceive as the arena of a certain fight—the showdown between the various modernisms and a postmodernism available only as ideal type.

This point is available, first, at the level of genre. There’s a remarkable moment about an hour into Body Double when we witness our hero decide to take matters into his own hands, make his own inquiries about the murder, get to the bottom of things. The spectator-actor prepares himself to assume the detective functions of classic crime narrative. And at just that moment, when the movie seems ready at last to lead us back behind the spectacle—to, you know, strike the set—it instead amplifies the pageantry by launching into a full-fledged music video—for Frankie Goes To Hollywood’s “Relax,” complete with shots of lip-synching lead singer Holly Johnson. What makes the sequence even more compelling is that the music video stands in for hardcore porn; it’s the point in the movie when the hero is trying to infiltrate a porn set by pretending to be a hired stud, and De Palma is letting FGTH’s lubricious, post-disco electro-march substitute for the obscenities he cannot show. The movie thereby directs our attention neither to porn nor to MTV, but to whatever it is rather that the two share—and thus to an entire set of new or newly prevalent video genres, characteristic of the last few decades and defined by their collective willingness to abandon narrative or at least scale it back to some barely-more-than-sequential minimum. From our own vantage, we would want to add, above and beyond the raunch and the Duran Duran, YouTube shorts, initially capped at ten minutes and now majestically extended to fifteen, and new-model movie trailers, which, following Jameson, deserve to be considered as a form in their own right, with their own conventions and feature-usurping pleasures.

This is what it would mean to talk about Body Double not as postmodern but as a conflict-ridden composite of postmodernism and the pop modernism of the detective story, which still thinks of itself as a device for disclosing hidden truths. The competing genres are entirely visible within the movie. And then the all-important point to be made in this regard is that the detective story more or less wins out, and not only because the movie ends with a literal unmasking, latex pulled from a face. The movie does indeed document the spectator’s inability to act, though even here its procedure is basically satirical, in a manner that depends on our memory of other heroes having once done something, a memory counterposed to which postmodernity will register not as a schizoid intensity but only as a vacuity. Check your Jameson: The movie’s parody isn’t all that blank, because its very genre provides a set of expectations against which its innovations will be judged. But even beyond this, Body Double seems dedicated to the idea that certain forms of agency remain available even in the society of the spectacle. The movie’s hero doubles himself—he is both spectator and actor—and then this pairing is itself in some sense doubled, because spectator and actor both come in a second version that we could call juridical or epistemological, and not just inactive or image-consuming. There has after all always been an affinity between the spectator and the detective, with the latter now understood as the-one-who-watches, the one who arrives on the crime scene like an apparition, pledged to leave no mark, to pollute no object, to minimize the observer effect by leaving the murder bed unmade. To this we need merely append the observation that performer-cops are also a familiar species, called “narcs” or “undercover agents,” and that acting, too, can be a form of information gathering. 
Body Double does to this extent grant its cipher a certain limited effectivity, within the bounds of acting and spectating, as gumshoe and mole. The once corrosive insight that the detective is like a voyeur is thus replaced by its opposite, a reminder that the detective functions might in fact survive, that epistemological and moral purpose can still be roused from within the position of the spectator.

This last is a point to be made at the level of genre as a whole. But we can make a few similar observations if we start calling out the titles of specific movies, or at least of one specific movie. For Body Double’s relationship back to Rear Window also contains its own historical argument. De Palma updates his Hitchcock in one absolutely crucial way: In the later movie, the spectator-hero is meant to see the murder, which is to say that his spectatorship has been factored in in advance. We can think of the matter this way: Rear Window was still easily explained within the usual Enlightenment paradigm of truth and knowledge, the magical version of which is the usual stuff of crime stories, in which once the solution is announced and the murderer identified, everything automatically sets itself to right: culprits march themselves off to jail, widows and fatherless children return to their business suddenly unbereaved, &c. Hitchcock had some good questions to put to that paradigm, epistemological questions, for one—about whether one really knows what one thinks one knows—and also psychoanalytic questions—about the relationship between the knower and the peeper and hence about the sneaky way in which desire rides in on knowledge’s back. De Palma, however, radicalizes this scenario by inventing a murderer who wants to be seen, a murderer, in other words, whose plan depends on the existence of a manipulated witness. The shift from Hitchcock to De Palma thus secretes a certain periodization, marking out the difference between a society in which the media exercise independent oversight functions over the government and other major actors, like corporations, and a society in which government and corporations have already reckoned the cameras into all their calculations and so incessantly stage themselves for the public, which means that watchdogs are called upon only to play an already scripted role.
Body Double is really and truly a meditation on that condition, but within the narrow parameters of the thriller.

This brings us to the big point: There was always something unresolved in Jameson’s postmodernism argument, and especially in his claim that postmodern culture tends to jettison historical thinking. It’s not just that narrative forms are never going to be able to revert back to some zero degree of history-less-ness, though that’s also true. The issue is rather that Jameson was making two claims that are finally rather hard to square with one another: that under names like “retro” and “vintage,” postmodernism revived the copycat historicism of the nineteenth-century art academy … and also that it wasn’t a historicism. The best chance you’ve got of making this argument work is by making it accusatory, because you have to be able to say that postmodern historicism isn’t really historical, that it is fake history, history reduced back to image or consumer good, just so many styles for the donning, as when the ‘50s mean Formica and the ‘70s Fiestaware. Sometimes that blow is going to land. But if you’re doing anything other than designing your kitchen—if you’re making a movie or writing a novel or metering out a poem—the citations you introduce will often be, not an aping farrago, but their own path to chronology, an exercise in temporal counterpoint or Ungleichzeitigkeit, a dozen arrows pointing us outside the present, and so a request that we resume the project of historical thinking only just terminated.

Postmodernism Is Maybe After All A Historicism, Part 1

Can you make a movie about postmodernity?

That probably sounds like a pretty stupid question. The scholars who first proposed the term “postmodernity” wanted it to mean something like the Age of Developed Capitalism, the global and all-consuming version, driven by its own distinctive and world-transforming technologies—long-distance communication, the media, computers, the Internet—and facing no obvious competitors. One clarification is immediately required: These scholars—Fredric Jameson, mostly—thought of postmodernity not as breaking with capitalism’s basic and long-term trends but precisely as intensifying them, which intensification we will begin to register if we simply list some of the things that have gone missing over the last half century: socialism, organized anti-imperialism, nature, the Left—capitalism’s historical rivals, in other words—the various attempted counter-modernities. This means that the term “postmodernity” was always something of a mess and bound to spread confusion, because on most accounts capitalism is one of modernity’s chief features—in lots of contexts, the word “modern” is a near-synonym or even euphemism for “capitalist”—in which case “postmodern” actually means something like “fully modern” or “hyper-modern.” Postmodernity comes after lots of things, but modernity isn’t one of them.

Now if you accept that periodization, the question, again, is going to seem pretty pointless. The word “postmodernity” is a way of naming our present and of marking out some of its more salient features. And since most movies are unselfconsciously set in the present—and since all of them engage with the present even when set in the past or the future or in unreal worlds with made-up histories—they are all to that extent “about postmodernity.” Maybe you take the word “postmodern” to mean something more bounded; maybe it inevitably calls up for you memories of 1982 and the first time you heard Cabaret Voltaire; but then there are movies for you, too, movies about the years when people started describing themselves as postmodern, movies that work to produce “the Eighties” as an object of historical scrutiny or puzzlement: The Squid and The Whale, say, and especially Donnie Darko.

I still think the question is a viable one, but it is up to me to explain why. One of postmodernity’s most pronounced features has been what Jameson calls “the cultural turn.” The argument here follows on from Debord and Baudrillard. Commercial media and the new technologies have created a world that is, to a historically unprecedented degree, saturated with “culture”—completely soaked with images and stories and music. This suggests an unusual process of de-differentiation, in which “culture” is no longer a special realm unto itself, governed by its own institutions, with its own rules and idioms (museums, libraries, philosophical aesthetics, &c.), but has become the universal medium for all other spheres—the economy, the law, the state, religion, &c—all of which must now learn to stage themselves, again to a historically unprecedented degree, at the level of image and story.

This “rise of culture” has in some sense meant the end of art—its apotheosis, yes, but also its termination—the end of art, that is, as something to be pursued in redemptive isolation, away from the state and the marketplace. Postmodernism—and we can now, at last, swap out suffixes—arrived as the liquidation of certain valuable aesthetic projects. It had once been the project of realist literature to help us cognize the composite and dispersed social systems of capitalism; realism broke with the experience of everyday life, allowing readers to hold in their heads the complexity of a capitalist city in a way that no person could do spontaneously. Modernism, meanwhile, which is usually thought of as having been consecrated to the New, is perhaps better conceived as a series of failed rescue projects, so many bids to preserve a realm of experience outside of the workplace and the shopping arcade; to get back to the objects so that they might be boosted by the doting armful from the market stalls and boutique vitrines; to give back to choking people their swallowed tongues; to salvage language … and sound … and paint … by reinventing them; to use each Adamically and as though for the first time; to model for us all an expanded realm of freedom, in which persons and objects would exist without function or fixed purpose. Postmodernism marked the collapse of all that—the end of a certain hard-won intelligibility, the end of search-and-rescue—and so the triumph of a generalized market culture.

We can say now that when Jameson started talking about “postmodern art,” what he meant was something like “fully capitalist art”—though he was more cunning than that and would never have put it that baldly. And “fully capitalist art” isn’t quite right anyway, because even postmodern art retained a complex and transitional character, cultivating some minimal allegiance to art’s inherited forms and institutions—paintings hung in galleries, long novels published by prestige presses—while nonetheless opening these latter up to Hollywood and rock & roll and comic books and advertising. What we witnessed in postmodernism was not, in this sense, the final abandonment of art—not the old avant garde’s rather more liberating fantasy of actually burning down the museums, thereby forcing artists to paint the streets—but a process still visibly underway and captured in freeze-frame—commercial culture’s ongoing expansion into the regions of its former quarantine. Marilyn vanquishes the naiads.

Another quick way to get a handle on what was going on in postmodern art is to imagine that it all began with a realist operation. Even impeccably realist novelists would, if trying to itemize the everyday life of contemporary North Americans, have to register the massive presence of the media in the lives of such people and so introduce into their realism the shadow world of television and the Internet, codes and whispers and images and memes, all taken now as social facts in their own right, at which point the accustomed distinction between realism and meta-fiction would become untenable, because postmodernity promotes meta-fiction to the status of realism. The least realistic thing about most horror movies is that, when the beasties attack, no one shouts: “This is just like a horror movie”—which is, of course, the very first thing you or I would say. There is no getting around the Realm of Appearance; everything travels through it. The hallmark of High Postmodernism, then, at the level of style, was its commitment to the Code or to Seeming, not to seeming this way or that way, but to seeming as such; its wholly deliberate and upfront play with media images; its bracketing of the world’s objects; its bracketing, too, of what in other circumstances we might have called self-expression; its sense that we are all living in an enclosed videodrome where the signs will ever chatter.

What Jameson wanted to do, back in 1983, was lay out a certain trade-off. It’s not that postmodernism didn’t have its pleasures. Postmodern art offered its admirers a sequence of free-floating and discontinuous intensities—this was its delight and its achievement—though we’ll want to note right away that such an achievement basically repeated the experience of channel surfing or listening to FM radio. The problem as Jameson saw it was this: Anyone wanting to pursue these joyous shavings or shards of vividness would have to give up on some of our older ways of trying to make sense of the world—entire vast and intricate modes of historical or structural understanding. For a while there, Jameson was especially drawn to aesthetic artifacts where you could actively experience the swapping of intelligibility for schizoid intensity, where you could sense some inherited expectation of understanding being violated and then feel that ticklish vertigo or camp sublimity creep in behind: great big buildings that make no effort to orient their visitors, that cheerfully allow guests to get lost in them, the luxury hotel as corn maze; historical novels in which the past is never properly retrieved, never allowed to march in review, in which distant events keep slipping away from readers until they realize finally that they are stuck in the present.

It’s that last we’ll want to hang on to: Postmodernism gave up on historical thinking and sometimes seemed to give up on narrative as such. Of course Marx was making the point as early as the 1860s that capitalism made it hard to think historically, simply by introducing into our daily lives an unprecedented degree of social complexity and so blocking our customary understanding of where objects come from. Factory production and long-distance trade fill our lives with mysterious things. And Lukács, similarly, was trying as early as the 1920s to describe an order in which commodities were entirely “constitutive of … society,” in which “the commodity structure [penetrated] society in all its aspects and [remolded] it in its own image”—a society, that is, in which capitalism had completed its historical mission to rob us of our bearings. That’s Jameson’s postmodernism, and there is a certain tone you need to hear in his argument, as though spoken back to Lukács: You thought you had it bad… Surely the sharpest bit of literary criticism that Jameson has ever written is those three pages on Doctorow’s Ragtime in the landmark postmodernism essay: “This historical novel can no longer set out to represent the historical past; it can only ‘represent’ our ideas and stereotypes about that past (which thereby at once becomes ‘pop history’).” There is high drama in that sentence—postmodern ahistoricism comes crashing in to historical thinking’s last literary redoubt—though Jameson could have made matters easier on himself, since there was a whole string of straightforwardly anti-historical novels published between the late ‘60s and the early ‘90s, novels about professional historians and history teachers who abandon the practice of history, who conclude that historical knowledge has decisively eluded them: Grass’s Local Anaesthetic (1969), Swift’s Waterland (1983), Updike’s Memories of the Ford Administration (1992), Gass’s The Tunnel (1995).
There’s no mistaking what’s going on in those novels. Doctorow, on the other hand, you could misread as Walter Scott with an oddly clipped prose style.

So the question I really want to ask is: Can you make a postmodern movie about postmodernity? And that isn’t a stupid question because the term postmodernity is fully historical in a manner that is inimical to postmodernism itself. What we’ve been asking is: Can you make a movie about a historical period in a style that isn’t designed for recording history? And our hunch has got to be no. An artwork that is postmodern should not be able to register its own postmodernity, should not be able to draw attention to what is historically novel about its own condition.

More soon, because I think I’ve found the movie that fits the bill….

PART TWO IS HERE.

PART THREE IS HERE.

On Agamben’s Signatures

Let’s say you don’t believe that wholes or totalities exist. You don’t believe that people and objects inhabit underlying structures that assign to them meanings or functions. Whatever it is that is bigger than us, the space within which we move, is neutral terrain, not exactly empty, but unstriated, a field of constantly shifting singularities. It’s going to help to have a name for this space, this wire cage in which the lottery balls blow, though it’s unclear what that name is to be. There is a lot that you can’t call it, many words that, believing as you do, you are going to have to give up. You can’t talk about structure or system or any of their derivatives: There are no ecosystems, and there is no world; there are no political or economic or legal systems, no capitalism, then, no empire, nothing global. It would be safer, conceptually purer, to shut up about the state and society. There can be no talk of rules and laws, because such things either constitute structure or are assigned by it. You’ll also want to toss out any terms that refer to big blocks of time. You can start with the word “modernity.”

That people who claim not to believe in totalities routinely talk about all these things suggests only that they are not yet disbelieving with their hearts, like Christian teenagers pretending to be more badass than they really are. Yer average copy of Anti-Oedipus is, in this sense, a prop cigarette. But it doesn’t have to be that way. It is the virtue of Giorgio Agamben’s recent book on method, The Signature of All Things, to remind us what a painstaking post-structuralism can look like. And yes, this is the first thing to know about the book: that it is post-structuralist, in some wholly precise sense of that term, still, in 2008, when it was first published in Italy, and not just because its author quotes Foucault a lot. What matters is that Agamben is still actively trying to purge the concept of “structure” from his thinking; still trying to jimmy that e from his typewriter; still scanning old volumes of philosophy so he can accusingly annotate the passages where schemes sneak in unbidden; still trying to devise something to put in their place.

We can see how this works in the second essay, from which this little book takes its title, and in which Agamben asks us to start thinking again about a basic problem in structural linguistics: How does language pass from words to utterances? Or if you like: How does the mind get from inert words, archived dictionary-like in lists, to living sentences that actually carry meaning? The usual answer to that question would have something to do with rules or laws: There are rules governing how words get combined. Your mind doesn’t only know words and their definitions; it’s absorbed the guidelines for their use. But Agamben doesn’t want to say this, because the word “rules” makes language sound like a government agency. Nor are the usual alternatives much better: Any talk about the “structure of language” is going to bring in resonances of the state or capitalism or the administered world. We could try to identify the mind’s “devices for building sentences,” but that would turn language into a technology. We could wonder how words get “processed,” but that would be either bureaucratic—words as case files or credit-card applications—or again technological—words as refined sugar. Agamben is in the market for a way of thinking about language that does not go through a juridical model of laws and rules …  or a political model of the system … or a technical model of the machine.

His proposal, derived from synopses of Paracelsus and Jakob Böhme, is that we learn to think of language as magic. Magic is what will substitute for structure, in which case one synonym for post-structuralism is “the occult.” Agamben wants magical signs; this, roughly, is what he means by “signatures,” signs that aren’t just neutral stand-ins for things, tokens or pointers, but charmed symbols vibrating with their own energies, signs that have “efficacy,” “efficacious likenesses,” not marks that you write down but marks that are written across you. Every spoken sentence changes the world and is in that sense a spell or hex. This is probably the clearest instance of the “regression” that Agamben makes central to his method: “the opposite of rationalization,” he calls it. If you are serious about your critique of enlightenment, you are going to need an enchanted epistemology.

So … at least it’s not the same old anti-foundationalism—a post-structuralism, then, with new emphases and possibilities. Indeed, one of the more conspicuous features of Agamben’s reflections on “method” is that they actually add up to some pretty strong and decidedly un-skeptical claims about the nature of social reality. How one studies the world is premised on an already robust idea about how the world really is. This is clearest in the book’s first essay, which explains Agamben’s notion of “paradigms”—it would help if you could set to one side whatever you currently think that word means and let Agamben explain it for himself. He is, above all, trying to explain what Foucault had in mind when he said that the panopticon was the nineteenth century’s representative institution, or what Agamben himself wants to say when he makes the same claim, for the twentieth century, about the concentration camp. These two, the panopticon and the camp, are paradigms—not their respective eras’ most powerful institutions, at least not by any of the usual metrics, and not their most frequently encountered institutions—but the pattern or model for all manner of other agencies, and so the key to the latter’s intelligibility. To examine in detail a Regency-era prison is actually to describe five or six other institutions all at once: hospitals, elementary schools, mental asylums, army barracks, nearly any public street in Britain in 2010. The prison itself serves as a kind of extended sociological analogy, even a kind of “allegory”—the word is Agamben’s own. Everything is now like x.

You’ll be able to make up your own mind about the “paradigm”—about how useful it is as an explanatory device—if you bring into view its competitor concepts, the notions that it most nearly resembles and so means to replace. These are basically two: the symptom and the function. We could try to discover what functions prisons or concentration camps play in the social order at large. This would require that we attempt something like a political economy of the camps, that we try to work out what it is in the modern European state or in organized capitalism that tends to produce camps. If, alternatively, we called the camps a “symptom,” we would be positing not so much function as dysfunction; the camp would be the visible mark or felt sign of an underlying sociopolitical disorder, one whose pathways and mechanisms, because not available to the eye, would still have to be analytically reconstructed. Either way, if we talk about functions or symptoms, the task in front of us is to relate camps and prisons back to the underlying order that has at least partially produced them. And this is precisely the job that Agamben is now calling off. What he likes most about the notion of “the paradigm” is that it bypasses any talk of the totality or system; it spares us from having to reconstruct anything. If you call the camp a “paradigm,” you are saying that nothing “precedes the phenomenon.” Camps and prisons are “pure occurrences” that persist “independently of reference” to other institutions—“positivities,” he calls them and doesn’t blush. They are representative institutions, and they conjure up parallel institutions, but only as a string of singularities, the relationships between which are to be left, as a matter of principle, unelucidated. It isn’t even an open question, to be settled empirically, whether prisons and this or that capitalism require one another. The question is methodologically disallowed. There is pride in not asking it. 
His sense is that the agencies of a given historical period might congeal into a set, might adopt similar designs and follow similar procedures, might come to resemble one another, without, however, being functionally related, and the task of the social historian is only to chart the spontaneous mutation in some free-floating logic of institutions.

Here’s Agamben: “According to Aristotle’s definition, the paradigmatic gesture moves not from the particular to the whole and from the whole to the particular but from the singular to the singular.” You can attribute that idea to Aristotle, but it also sounds an awful lot like the “constellations” of Benjamin and Adorno—assemblages of singular things, not subsumed under a category or master term, but linked all the same, except only just, minimally unified, scattered fragments carefully re-collected, scraps joined with twists of wire, like an early Rauschenberg combine, the unity-of-unity-and-difference with difference dialed high in the mix. Agamben and Adorno share the idea that singularities might be linked together directly and so circumvent the abstractions that typically manhandle them. And saying as much should help us identify what is peculiar about Agamben’s thinking. For Adorno, of course, preserves the moment of the totality or the whole—he continues to speak of “capitalism” or “the administered world”—to which the constellation of singularities nonetheless provides an alternative. Hence Adorno’s in some sense entirely conventional reliance on the aesthetic: He thinks we need a better way to cognize objects and thinks, too, that art might provide it; that in the aesthetic encounter we for once apprehend objects in their singularity, without immediately subsuming them under models or formulas. In rare moments, we stop thinking like administrators and lose the names for things. The constellation is an alternative mode of cognition, a utopian counter-term, and in that sense a project, rather than, as Agamben has it, a method—a counter-systemic thinking and not a post-structuralism. The bizarre thing about Agamben—although this is a peculiarity he shares with lots and lots of other thinkers—is that he thinks that this utopian counter-term already describes our political and economic reality. It is the mistake endemic to the breed. 
What in Adorno remains a political task Agamben and sundry others turn into proclamation. Fired by the idea that the world should not be organized into structures and systems, they convince themselves that the world is not so organized, though where they used to take the epistemological shortcut to singularity, they are now more likely to take the ontological one: sameness cannot exist; it is existentially excluded; there is only multiplicity.

If there is a big point here, then we’ve just hit it: You can be counter-systemic, or you can be post-structuralist, but you cannot coherently be both, because once you’ve declared that there is no structure, you cannot then say you want to overturn it. Adorno thinks that a transformed world—let’s call it communism, though he wouldn’t have, as others were hogging the word—would be one in which people and objects can exist as free but linked singularities; and he thinks that we can proleptically work out the epistemology of that world-that-is-not-yet-ours, such that we can sometimes experience objects and others as though already redeemed. Agamben thinks that this utopian epistemology—the knowing of linked singularities—accurately describes the world we already inhabit, which is the society of camps and prisons.

I don’t mean to suggest that Agamben has anything nice to say about prisons and concentration camps. This is manifestly not the case. He typically presents himself as a thinker of The Catastrophe—the destruction of experience, the permanent state of exception, the generalization of Dachau, the merging of the concentration camp with everyday life, Buchenwald without end. There is, if anything, an apocalypticism in his writing, modeled again on the late Adorno and a Benjamin-about-to-die. And yet a certain utopian misdescription of the concentration camp is built into his arguments all the same, simply because he has taken the redemptive moment from negative dialectics—Adorno’s inevitably temporary reminders of how objects would appear to us once liberated from the abstractions of the exchange relation and bureaucratic reason—and locked it in place as a uniform method. The strain of this argument is often evident, as here—Agamben is trying again to sum up what he means by “paradigm”:

We can … say … that a paradigm entails a movement that goes from singularity to singularity and, without ever leaving singularity, transforms every singular case into an exemplar of a general rule that can never be stated a priori.

This is a version of what we’ve already seen: Singularities are directly joined, flush up against each other. A certain generality can be achieved, but a miraculous generality that doesn’t come at the expense of singularity, a generality without abstraction. What Agamben is saying here really isn’t all that complicated. All he means is that when you write about a prison or a concentration camp, you are writing about our general condition, but you need never exit the detail and fine grain of your description in order to make this point separately and in its generality. You can just motor on with your individualized account, immersed in the singularity of that particular institution, confident that it will stand in for other similar institutions. The problem, in this light, is the term “a priori,” which Agamben has grabbed from Kant. The rule of prisons, like the rule of camps, cannot be stated a priori. To which one would like to reply: Of course not. Of course these “rules” can’t be formulated a priori, because Agamben and Foucault are offering us a method for historical study; they are talking about historical periods, trying to identify shifts in historical experience, and historical experience is by definition not a priori. That is one of the things one knows a priori about the term “a priori.” The claim, in other words, isn’t wrong. Quite the contrary: it is troublingly evident, because definitional. It’s the sort of truth you can’t insist on without making other people wonder whether you’ve really grasped the underlying issues. We can be certain, at least, that we are not dealing with a distinctive virtue of Agamben’s method; there is no philosophy whatsoever that could deliver to us a priori knowledge of Sachsenhausen or the Allegheny County Jail. What is true of “the paradigm”—what Agamben makes his boast—is true of every other historical methodology, without exception. 
One suspects, then, that this sentence cannot mean what it plainly says, that Agamben wants to use the term a priori to suggest a rather different claim: not that the general historical rule can’t be stated a priori, but that it can never be stated in its generality, as an abstraction. But Agamben can’t put it that way, because in that form the claim is just false. Anything that can be said about the panopticon paradigmatically could also be said generally, as an observation about a system or set of institutions, without our even having to mention the panopticon. So that’s one way to make it seem as though you have excised from your thought the structures or totalities that have not vanished from the world: You argue the obvious in order to insinuate the wrong.