
The Revolutionary Energy of the Outmoded

ORIGINALLY PUBLISHED IN THE JOURNAL OCTOBER, SPRING 2003.


•1.

Fredric Jameson does not like predictions. His is an owlish and retrospective Marxism, one that happily forgoes the crystal ball of some former orthodoxy. There is a Hegelian lesson that Jameson’s writing repeatedly attempts to impart, which is that wisdom only comes in the backwards glance, that we glimpse history only in the moment when our plans fail or dialectically backfire, when our actions bump up against the objective, hurtful (but never foreseeable) limits of the historical situation. You can draw up your revolutionary schemes, paint the future as gaily or grimly as you like, but only upon review will it become plain in just what way you have been Reason’s dupe. If this point is unclear, you might consider Jameson’s response to the World Trade Center attacks, which began with the following extraordinary observation: “I have been reluctant to comment on the recent ‘events’ because the event in question, as history, is incomplete and one can even say that it has not yet fully happened. … Historical events … are not punctual, but extend in a before and after of time which only gradually reveal themselves.”[1] I suspect many will find remarkable Jameson’s reluctance here to help shape the public response to September 11th. An event that has not fully happened yet is, after all, an event in which one may yet intercede, an event that one needn’t yet cede to the Right, an event to which one might yet attribute one’s own polemical and political meanings. But Jameson makes a conspicuous display here of spurning what Left criticism generally (and glibly) calls an “intervention”—as though the business of a Marxist criticism were not to intervene, but rather to bide its time, to wait until an event has been thoroughly mediated or disclosed its function, and then to identify, with the serene impotence of hindsight, history’s great game. Any event is, like revolution itself, a leap into the unknown. The owl of Minerva only flies in November.

One might wonder, then, how Jameson feels about his own writing, which has been so accidentally and accurately predictive. How does he feel, for instance, about his landmark postmodernism essay, the one that sometimes goes by the name “Postmodernism and Consumer Society”?[2] That article so neatly anticipated U.S. popular culture in the 1990s that it is hard to shake the feeling that a whole generation of artists—writers, musicians, filmmakers above all—must have mistaken it for a manifesto. (“Pastiche—check. Death of the subject—you bet. Depthlessness and disorientation—where do I sign up?”) As ridiculous as it may sound, the essay, first published in 1983, now reads like an exercise in cultural embryology, discerning the first, fetal traces of an aesthetic mode that would become fully evident only in the years that followed. One wonders, too, if young readers encountering the article for the first time now don’t therefore underestimate its savvy. One wonders if they don’t find it rather trite, since a sharp-eyed exegesis of Body Heat (1981) is really just a workaday description of L.A. Confidential (1997)—a script treatment.

We can be more precise: What have seemed so strangely prophetic about Jameson’s postmodernism argument are, oddly enough, its Benjaminian qualities. Benjamin’s fingerprints seem, in some complicated way, to be all over postmodernism. One might even say that postmodernism in America is a dismal parody of Benjaminian thought. Just cast an eye back over the last ten years, over U.S. pop culture on the cusp of the millennium—postmodernism post-Jameson. Consider, for instance, the apocalypticism that has been among its most persistent trends. The recent fin de siècle has been preoccupied with dire images of a devastated future: we might think here of the full-blown resurgence of millenarian thought and the orchestrated panic surrounding the millennium bug; of X-Files paranoia, which has told us to “fight the future”; of catastrophe movies and the resurgence of film noir and dystopian science fiction. If you were to design a course on popular culture in the 1990s, you would be teaching a survey in doom.

There is much in this culture of disaster that would merit our closest attention—there is, in fact, strangeness aplenty. Consider, for instance, the emergence as a genre of the Christian fundamentalist action thriller, the so-called rapture novel. These novels are basically an exercise in genre splicing; they begin by offering, in what for right-wing Protestantism is a fairly ordinary procedure, prophetic interpretations of world events—the collapse of the Soviet Union, the new Intifada—but they then graft onto these biblical scenarios plots borrowed from Tom Clancy techno-thrillers. The first thing that needs to be noted about rapture novels, then, is that they signal, on the part of U.S. fundamentalism, an unprecedented capitulation to pop culture, which the godly Right had until recently held in well-nigh Adornian contempt. Older forms of Christian mass culture have seized readily on new technologies—radio, say, or cable television—but they have tended to recreate within those media a gospel or revival-show aesthetic. In rapture novels, by contrast, as in the rapture movies that have followed in the novels’ wake, we are able to glimpse the first outlines of a fully commercialized, fully mediatized Christian blockbuster culture. Fundamentalist Christianity gives way at last to commodity aesthetics.

This is not yet to say enough, however, because this rapprochement inevitably holds surprises for secular and Christian audiences alike. The best-selling rapture novel to date is Tim LaHaye and Jerry Jenkins’s Left Behind, which has served as a kind of template for the entire genre. In the novel’s opening pages, the indisputably authentic Christians are all called up to Christ—they are “raptured.” They literally disappear from earth, leaving their clothes pooled on the ground behind them, pocket change and car keys scattered across the pavement. This scene is the founding convention of the genre, the one event that no rapture novel can do without. And yet this mass vanishing, conventional though it may be, cannot help but have some curious narrative consequences. It means, for a start, that the typical rapture novel is not interested in good Christians. The heroes of these stories, in other words, are not godly people—this is true by definition, because the real Christians have all quit the scene; they have been vacuumed from the novel’s pages. In their absence, the narrative turns its attention to indifferent or not-quite Christians, who can be shown now snapping out of their spiritual ennui, rallying to God, and taking up the fight against the anti-Christ (who, in Left Behind, takes the form of an Eastern European humanitarian whose malign plans include scrapping the world’s nuclear arsenals and feeding malnourished children). Left Behind, I would go so far as to suggest, seems to work on the premise that there is something better—something more significantly Christian—about bad Christians than there is about good ones. This notion has something to do with the role of women in the novel. Left Behind, it turns out, has almost no use for women at all. They all either disappear in the novel’s opening pages or get left behind and metamorphose into the whores of anti-Christ. It will surprise no-one to find a Christian fundamentalist novel portraying women as whores, but the former point is worth dwelling on: Left Behind cannot wait to dispense with even its virtuous women. It may hate the harlots, but it has no use for ordinary church-supper Christians either, imagined here as suburban housewives and their well-behaved young children. Anti-Christ has to be defeated at novel’s end, and for this to happen, the good Christians have to be shown the door, for smiling piety can, in the novel’s terms, sustain no narrative interest; it can enter into no conflicts. Left Behind is premised on the notion that devout Christians are cheek-turning wimps and goody-two-shoes, mere women, in which case they won’t be much good in the fight against the liberals and the Jews. What this means is that the protagonists who remain in the novel—the Christian fence-sitters—are all men, and not just any men, but rugged men with rugged, porn-star names: Rayford Steele, Buck Williams, Dirk Burton. Left Behind is a novel, in other words, that envisions the remasculinization of Christianity, that calls upon its readers to imagine a Christianity without women, but with muscle and grit instead, a Christianity that can do more than just bake casseroles for people. And such a project, of course, requires bad Christians so that they may become bad-ass Christians. Perhaps it goes without saying: A Christian action thriller is going to be interested first and foremost in action-thriller Christians.

It is with the film version of Left Behind (2001), however, that things really get curious. The film’s final moments nearly make explicit a feature of the narrative that is half-buried in the novel: The film concludes with a brief sequence that we’ve all seen a dozen times, in a dozen different action movies—the sequence, that is, in which the heroic husband returns home from his adventures to be reunited with his wife and child. Typically, this scene is staged at the front door of the suburban house with the child at the wife’s side; you might think, emblematically, of the final shots of John Woo’s Face/Off (1997), which show FBI Agent Sean Archer (John Travolta) exchanging glances with his wife (Joan Allen) over the threshold as their teenaged daughter hovers in the background. Left Behind, for its part, reproduces that scene almost exactly, almost shot for shot, except, since the women have all evaporated or gone over to anti-Christ, the film has no choice but to stage this familiar ending in an unfamiliar way—between its male heroes, between Rayford Steele, standing in the doorway with his daughter, and a bedraggled Buck Williams, freshly returned from his battles with the Beast. A remasculinized Christianity, then, cannot help but imagine that the perfect Christian family would be—two men. Such, then, is one upshot of fundamentalism’s new openness to pop culture: Christianity uncloseted.

Of course, the borrowings can go in the other direction as well. Secular apocalypse movies can deck themselves out in religious trappings, but when they do so, they risk an ideological incoherence of their own. Think first about conventional, secular catastrophe movies—Armageddon (1998), Deep Impact (1998), Volcano (1997)—so-called apocalypse films that actually make no reference to religion. These tend to be reactionary in rather humdrum and technocratic ways, full of experts and managers deploying the full resources of the nation to fend off a threat defined from the outset as non-ideological. The volcanoes and earthquakes and meteors that loom over such movies are therefore merely more refined versions of the maniacal terrorists and master thieves who normally populate action movies: they are enemies of the state whose challenge to the social order never approaches the level of the political. It is when such secular narratives reintroduce some portion of religious imagery, however, that their political character becomes pronounced. We might think here of The Seventh Sign (1988), which featured Demi Moore, or of the Arnold Schwarzenegger vehicle End of Days (1999). Like Left Behind, these last two films work by combining biblical scenarios and disaster-movie conventions, and the results are similarly confusing. To be more precise, they begin by offering luridly Baroque versions of the Christian apocalypse narrative, but then revert to the secular logic of the disaster movie, as though to say: Catastrophes are destabilizing a merciless world in preparation for Christ’s return—and this must be stopped! In a half-hearted nod to Christian ethics, each of these movies begins by depicting the world of global capitalism as brutal and unjust—the montage of squalor has become something of an apocalypse-movie cliché—before deciding that this world must be preserved at all costs. The characters in these films, in other words, expend their entire allotment of action-movie ingenuity trying to prevent the second coming of Christ, imagined here as the biggest disaster of all.[3]

This is not to say that contemporary American apocalypses dispense with redemptive imagery altogether, at least of some worldly kind. Carceral dystopias, for instance, films that work by trapping their characters in controlled and constricted spaces, tend to posit some utopian outside to their seemingly total systems: the characters in Dark City (1998) dream of Shell Beach, the fictitious seaside resort that supposedly lies just past their nightmarish noir metropolis, the illusory last stop on a bus line that actually runs nowhere; the man-child of Peter Weir’s Truman Show (1998) dreams, in similar ways, of Fiji, which is a rather more conventional vision of oceanic bliss; and the Horatio-Alger hero of the genetics dystopia Gattaca (1997) follows this particular utopian logic to its furthest end by dreaming of the day he will be made an astronaut, the day he will fly to outer space, which of course is no social order at all, let alone a happier one, but merely an anything-but-here, an any-place-but-this-place, the sheerest beyond. As utopias go, then, these three are remarkably impoverished; they cannot help but seem quaint and nostalgic, strangely dated, like the daydreams of some Cold War eight-year-old, all Coney Island and Polynesian hula-girls and John-Glenn, shoot-the-moon fantasies.

But then it is precisely the old-fashioned quality of these utopias that is most instructive; it is precisely their retrograde quality that demands an explanation. For if on the one hand, U.S. pop culture has seemed preoccupied with the apocalypse, on the other hand it has seemed every bit as obsessed with cheery images from a sanitized past. Apocalypse culture has as its companion the many-faceted retro-craze: vintage clothing; Nick at Nite; the ‘70s vogue; the ‘50s vogue; the ‘40s vogue; the ‘30s vogue; the ‘20s vogue (the ‘60s are largely missing from this tally, for reasons too obvious to enumerate; the ‘60s vogue has been stunted, almost nonexistent, at least within a U.S. framework—retro tops out about 1963 and then gets shifted over to Europe and the mods); the return of surf, lounge-music, and Latin jazz; retro-marketing and retro-design, and especially the Volkswagen Beetle and the PT Cruiser.

Retro, then, deserves careful consideration of its own, as an independent phenomenon alongside the apocalypse. Some careful distinctions will be necessary. Retro takes a hundred different forms; it has the appearance of a single and coherent phenomenon only at a very high level of generality. We could begin, then, by examining the heavily marketed ‘60s and ‘70s retro of mainstream, white youth culture. Here we would want to say, at least on first pass, that the muffled camp of Austin Powers (1997), say—or the mid-‘90s Brady Bunch revival, or Beck’s Midnite Vultures—closely approximates Jameson’s notion of postmodern pastiche: this is retro as blank parody, the affectless recycling of alien styles, worn like so many masks. But that said, we would have to counterpose against these examples the retro-culture of a dozen regional scenes, scattered across the U.S., most of which are retro in orientation, but none of which are exercises in pastiche exactly. Take, for instance, the rockabilly and honky-tonk scene in Chapel Hill, North Carolina: It is impeccably retro in its musical choices and impeccably retro in its fashions, full of redneck hipsters sporting bowling shirts and landing-pad flattops and smart-alecky tattoos. Theirs is a form of retro whose reference points are emphatically local, and in its regionalism, the Chapel Hill scene aspires to a subculture’s subversiveness, a kind of Southern-fried defiance, which stakes its ground in contradistinction to some perceived American mainstream and then gives its rebellion local color, as though to say: “We don’t work in your airless (Yankee) offices. We don’t speak your pinched (Yankee) speech. We don’t belong to your emasculated (Yankee) culture. We are hillbillies and punks in equal proportion.”  Retro, in short, can be placed in the service of a kind of spitfire regionalism, and there is little to be gained by simply conflating this form of retro with the retro-culture marketed nationwide.

In fact, even mainstream ‘70s retro can take on different valences in different hands. To cite just one further example: hip-hop sampling, which builds new tracks out of the recycled fragments of existing recordings, might seem upon first inspection to be the very paradigm of the retro-aesthetic. And yet hip-hop, which has mined the ‘70s funk back-catalog with special diligence, typically forgoes the irony that otherwise accompanies such postmodern borrowings. Indeed, hip-hop sampling generally involves something utterly unlike irony; it is often positioned as a claim to authenticity, an homage to the old school, so that when OutKast, say, channels some vintage P-Funk, that sample is meant to function as a genetic link, a recurring trait or musical cell-form. The sample is meant to serve as a tangible connection back to some originary moment in the history of soul and R&B (or funk and disco).[4]

So differences abound in retro. And yet one is tempted, all the same, to speak of something like an official retro-culture, which takes as its object the 1940s and ‘50s: diners, martinis, “swing” music (which actually refers, not to ‘30s and ‘40s swing, but to post-war jump blues), industrial-age furniture, late-deco appliances, all chrome and geometry. The most important point to be made about this form of retro is that it is an unabashedly nationalist project; it sets out to create a distinctively U.S. idiom, one redolent of Fordist prosperity, an American aesthetic culled from the American century, a version of Yankee high design able to compete, at last, with its vaunted European counterparts. In general, then, we might want to say that retro is the form that national tradition takes in a capitalist culture: Capitalism, having liquidated all customary forms of culture, will sell them back to you at $16 a pop. But then commodification has ever been the fate of national customs, which are all more or less scripted and inauthentic. What is distinctive about retro, then, is the class of objects that it chooses to burnish with the chamois of tradition. There is a remarkable scene near the beginning of Jeunet and Caro’s great retro-film Delicatessen (1991) that is instructive in this regard: Two brothers sit in a basement workshop, handcrafting moo-boxes—those small, drum-shaped toys that, once upended and then set right again, low like sorrowful cows. The brothers grind the ragged edges from the boxes, blow away the shavings as one might dust from a favorite book, rap the work-table with a tuning fork and sing along with the boxes to ensure the perfect pitch of the heifer’s bellow. And in that image of their care, their workman’s pride, lies one of retro-culture’s great fantasies: Retro distinguishes itself from the more or less folkish quality of most national traditions in that it elevates to the status of custom the commodities of early mass production—old Coke bottles, vintage automobiles—and it does so by imbuing them with artisanal qualities, so that, in a strange historical inversion, the first industrial assembly lines come to seem the very emblem of craftsmanship. Retro is the process by which mass-produced trinkets can be reinvented as “heritage.”[5]

The apocalypse and the retro-craze—such, then, are the twin poles of postmodernism, at least on Jameson’s account. We are all so accustomed to this twosome that it has become hard to appreciate what an odd juxtaposition it really is. Disco inferno, indeed. This is a pairing, at any rate, that finds a rather precise counterpart in the writings of Walter Benjamin. Each of the moments of our swinging apocalypse can be traced back to Benjaminian impulses, or opens itself, at least, to Benjaminian description. For in what other thinker are we going to find, in a manner that so oddly approximates the culture of American malls and American multiplexes, this combination of millenarian mournfulness and antiquarian devotion? Benjamin’s Collector seems to preside over postmodernism’s thrift-shop aesthetic, just as surely as its apocalyptic imagination is overseen by Benjamin’s Messiah, or at least by his Catastrophic Angel. It would seem, then, that Benjaminians should be right at home in postmodernism, and if this is palpably untrue—if the culture of global capitalism does not after all seem altogether hospitable to communists and the Kabbalah—then this is something we will now have to account for. Why, despite easily demonstrated affinities, does it seem a little silly to describe U.S. postmodernism as Benjaminian?

Jameson’s work is again clarifying. It is not hard to identify the Benjaminian elements in Jameson’s idiom, and especially in his utopian preoccupations, his determination to make of the future an open and exhilarating question. No living critic has done more than Jameson to preserve the will-be’s and the could-be’s in a language that would just as soon dispense altogether with its future tenses and subjunctive moods. And yet a moment’s reflection will show that Jameson is, for all that, the great anti-Benjaminian. It is Jameson who has taught us to experience pop culture’s Benjaminian qualities, not as utopian pledges, but as threats or calamities. Thus Jameson on apocalypse narratives: “It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism; perhaps that is due to some weakness in our imaginations. I have come to think that the word postmodern ought to be reserved for thoughts of this kind.”[6] It is worth calling attention to the obvious point about these sentences—that Jameson here more or less equates postmodernism and apocalypticism—if only because in his earliest work on the subject, it is not the apocalypse but retro-culture that seems to be postmodernism’s distinguishing and debilitating mark. Again Jameson: “there cannot but be much that is deplorable and reprehensible in a cultural form of image addiction which, by transforming the past into visual mirages, stereotypes, or texts, effectively abolishes any practical sense of the future and of the collective project.”[7] Jameson, in short, is most sour precisely where Benjamin is most expectant. He would have us turn our backs on the most conspicuous features of Benjamin’s work; for late capitalism, it would seem, far from keeping faith with Benjamin, actually robs us of our Benjaminian tools, if only by generalizing them, by transforming them into noncommittal habits or static conventions: the Collector, fifty years on, shows himself to be just another fetishist, and even the Angel of History turns out to be a predictable and anti-utopian figure, unable to so much as train its eyes forward, foreclosing, without reprieve, on the time yet to come. U.S. postmodernism may be a culture that loves to “brush history against the grain,” but only in the way that you might brush back your ironic rockabilly pompadour.


•2.

But what if we refused to break with Benjamin in this way? Try this, just as an exercise: Ask yourself what these seemingly disparate trends—apocalypticism and the retro-craze—have to do with one another. Consider in particular that remarkable crop of recent films that actually unite these two trends, films that ask us to imagine an unlivable future, but do so in elegant vintage styles. These include: Ridley Scott’s Blade Runner (1982), the granddaddy of the retro-apocalypses; three oddly upbeat dystopias—Starship Troopers (1997) and the aforementioned Gattaca and Dark City—all late-‘90s box-office underachievers; and, again, the cannibal slapstick Delicatessen. All of these films posit, in their very form, some profound correlation between retro and the apocalypse, but it is hard, on a casual viewing, to see what that correlation could be. Jameson, of course, offers a clear and compelling answer to this question, which is that apocalypticism and the retro-craze are the Janus faces of a culture without history, two eyeless countenances, pressed back to back, facing blankly out over the vistas they cannot survey.[8]

Some of these films, it must be noted, seem to invite a Jamesonian account of themselves. This is true of Blade Runner, for instance, or of The Truman Show—films that offer a vision of retro-as-dystopia, a realm of fabricated memory, in which history gets handed over to corporate administration, in which every madeleine is stamped “Made in Malaysia.” Perhaps it is worth pausing here, however, since we need to be wary of running these two films together. The contrast between them is actually quite revealing. Both Blade Runner and The Truman Show present retro-culture as dystopian, and in order to do this, both rely on some of the basic conventions of science fiction. Think about what makes science fiction distinctive as a mode—think, that is, about what distinguishes it from those genres with which it seems otherwise affiliated, such as the horror movie. Horror movies, especially since the 1970s, have typically worked by introducing some terrifying, unpredictable element into apparently safe and ordinary spaces. Monsters are nearly always intruders—slashers in the suburbs, zombies forcing their way past the barricaded door. But dystopian science fiction is, in this respect, nearly the antithesis of horror. It does not depict a familiar setting into which something frightening then gets inserted. What is frightening in dystopian science fiction is rather the setting itself. Now, this point holds for both Blade Runner and The Truman Show, but it holds in rather different ways. The first observation that needs to be made about The Truman Show is that it is more or less a satire, which is to say that, though it takes retro as its object, it is not itself a retro-film. It portrays a world that has handed itself over entirely to retro, a New Urbanist idyll of gleaming clapboard houses on mixed-use streets; but the film itself is not, by and large, retro in its narrative forms or cinematic techniques. Quite the contrary: the film wants to teach its viewers how to read retro in a new way; it wishes, polemically, to loosen the hold of retro upon them. The Truman Show takes a setting that initially seems like some American Eden, and then through the menacing comedy of its mise-en-scène—the falling lights and incomplete sets, the scenery that Truman stumbles upon or that springs disruptively to life—makes this retro-town come slowly to seem ominous. To give the film the cheap Lacanian description it is just begging for: The Truman Show charts the unraveling of the symbolic order. Every klieg light that comes crashing down from the sky is a warning shot fired from the Real. The simpler point, however, is that The Truman Show rests on a deflationary argument about American mass culture—a media-governed retro-culture depicted here as restrictive, counterfeit, and infantilizing—and its form is accordingly rather conventional. It is essentially a cinematic Bildungsroman, which ends once the protagonist steps forward to take full responsibility for his own life, and this, of course, tends to compromise the film’s own Lacanian premise: It suggests that any of us could simply step out of the symbolic order, step boldly out into the Real, if only we could muster sufficient resolve.[9]

Having a compromised and conventional form, however, is not the same thing as having a retro-form. In Blade Runner, by contrast, the setting—a dismal and degenerate Los Angeles—is self-evidently dystopian, but it is itself retro; it is retro as a matter of style or form. The film’s vision of L.A., as has often been observed, is equal parts Metropolis and ‘40s film noir, and the effect of the film is thus rather different from The Truman Show, though it is equally curious: Blade Runner may recycle earlier styles or narrative forms in a manner typical of retro, but the films that it mimics are themselves all more or less dystopian. If Blade Runner is a pastiche, it is a pastiche of other dystopias, and this has the effect of establishing the correlation between retro and the apocalypse in a distinctive way: Blade Runner posits a historical continuum between a bleak past and an equally bleak future, between the corrupt and stratified modernist city (of German expressionism and hardboiled fiction) and the coming reign of corporate capital (envisioned by so much science fiction), between the bad world we’ve survived and the bad world that awaits.

Such, then, are the films that seem ready to make Jameson’s argument for him. But there is good reason, I think, to set Jameson temporarily to one side. For present purposes, it would be more revealing to direct our attention back to Delicatessen, which, of all the retro-apocalypses, is perhaps the most winning and Benjaminian. The question that confronts any viewer of Delicatessen is why this film—which, after all, depicts an utterly dismal world in which men and women are literally butchered for meat—should be so delightful to watch, and not just wry or darkly humorous, but giddy and dithyrambic. I would suggest that the pleasure peculiar to Delicatessen has everything to do with the status of objects in the film—that is, with the extravagant and festive care that Jeunet and Caro bring to the filming of objects, which take on the appearance here of so many found and treasured items. One might call to mind the hand-crank coffee grinder, which doubles as a radio transmitter; or the cherry-red bellboy’s outfit; or simply the splendid opening credits—this slow pan over broken records and torn photographs—in which the picture swings open like a case of curiosities. It is as though the film took as its most pressing task the re-enchantment of the object-world, as though it were going to lift objects to the camera one by one and reattach to them their auras—not their fetishes, now, as happens in most commercial films, with their product placements and designer outfits—but their auras, as though the objects at hand had never passed through a marketplace at all. This is tricky: The objects in Delicatessen are recognizably of the same type as American retro-commodities—an antique wind-up toy, an old gramophone, stand-alone black-and-white television sets. At this point, then, the argumentative alternatives become clear: Either we can dismiss Delicatessen as ideologically barren, as just another pretext for retro-consumption, just another flyer for the flea market of postmodernism. Or we can muster a little more patience, tend to the film a little more closely, in which case we might discover in Delicatessen the secret of all retro-culture: its desire, delusional and utopian in equal proportion, for a relationship to objects as something other than commodities.

To follow the latter course is to raise an obvious question: How does the film direct our attention to objects in a new way? How does it reinvigorate our affection for the object world? This is a question, first of all, of the film’s visual style, although it turns out that nothing all that unusual is going on cinematographically: In a manner characteristic of French art-film since the New Wave, Delicatessen keeps the spectator’s eye on its objects simply by cutting to them at every opportunity and thus giving them more screen time than household artifacts typically claim. By the usual standards of analytical editing, in other words—within the familiar breakdown of a scene into detailed views of faces, gestures, and props—the props get a disproportionate number of shots. The objects, like so many Garbos, hog all the close-ups. “By permitting thought to get, as it were, too close to its object,” Adorno once said of Benjamin’s critical method, “the object becomes as foreign as an everyday, familiar thing under a microscope.”[10] Delicatessen works, in these terms, by taking Adorno’s linguistic figure at face value and returning it back to something like its literal meaning, back to the visual. The film permits the camera to get too close to its object. It forces the spectator to scrutinize objects anew simply by bringing them into sustained proximity.

The camerawork, however, is just the start of it, for in addition to the question of cinematic style, there is the related question of form or genre. Delicatessen, it turns out, is playing a crafty game with genre, and it is through this formal frolic that the film most insistently places itself in the service of its objects. For Delicatessen is retro not only in its choice of props—it is, like Blade Runner, formally or generically retro, as well. This point may not be immediately apparent, however, since Delicatessen resurrects a genre largely shunned by recent U.S. film. One occasionally gets the feeling from American cinema that film noir is the only genre ripe for recycling. The 1990s have delivered a whole paddy wagon full of old-fashioned crime stories and heist pics, but where are all the other classic Hollywood genres? Where are the retro-Westerns and the retro war movies? Where are the retro-screwballs?[11] Neo-noir, of course, is relatively easy to pull off—dim the lights and fire a gun and some critic or another will call it noir. Delicatessen, for its part, attempts something altogether more difficult or, at least, sets in motion a less reliable set of cinematic conventions: pratfalls, oversized shoes, madcap chase scenes. Early on, in fact, the film has one of its characters say that, in its post-apocalyptic world, people are so hungry they “would eat their shoes”; and with this one line—an unambiguous reference to the celebrated shoe-eating of Chaplin’s The Gold Rush—it becomes permissible to find references to silent comedy at every turn: in the hero’s suspenders, in the film’s several clownish dances, in the near-demolition of the apartment building in which all the action is set, a demolition that, once read as slapstick, will call to mind Buster Keaton’s wrecking-ball comedy, the crashing houses of Steamboat Bill, Jr. (1928), say. Delicatessen, in sum, is retro-slapstick, and noting as much will allow us to ask a number of valuable questions.

The most compelling of these questions will return us to the matter at hand. We are trying to figure out how Delicatessen gets the viewer to pay attention to its objects, and so the question now must be: What does slapstick have to do with the status of objects in the film? It is hardly intuitive, after all, that slapstick should bring about the redemption of objects, should reattach objects to their auras. A cursory survey of classic slapstick, in fact, might suggest just the opposite—a world, not of enchanted objects, but of aggressive and adversarial ones. Banana peels and cream pies spring mischievously to mind. And yet we need to approach these icons with caution, lest we take a conceptual pratfall of our own; for Delicatessen draws on slapstick in at least two different ways, or rather, it draws on two distinct trends in early American slapstick, and each of these trends grants a different status to its objects. Everything rides on this distinction:

1) When we think of slapstick, we think first of all of roughhouse comedy, of the pie in the face and the kick in the pants, the endless assault on ass and head. Classic slapstick of this kind is what we might call the comedy of Newtonian physics. It is a farce of gravity and force, and as such, it is based on the premise that the object world is fundamentally intransigent, hostile to the human body. In this Krazy-Kat or Keystone world, every brick, every mop is a tightly wound spring of kinetic energy, always ready to uncoil, violently and without motivation.[12] It is worth remarking, then, that Delicatessen contains its share of knockabout: the Rube Goldberg suicide machines, the postman always tumbling down the stairs. In its most familiar moments, Delicatessen, in keeping with its comic predecessors, seems to suggest that the human body is irreparably out of joint with its environment.

A first distinction is necessary here, for though Delicatessen may embrace the sadism of slapstick, it does so with a historical specificity of its own. Classic slapstick typically addresses itself to the place of the body under urban and industrial capitalism; one is pretty much obliged at this point to adduce Chaplin’s Modern Times (1936), with its scenes of working-class mayhem and man-eating machines. Delicatessen, by contrast, contains man-eaters of its own, but they are not metaphorical man-eaters, as Chaplin’s machines are—they are cannibals true and proper, and their presence adds a certain complexity to the question of the film’s genre, for there have appeared so many films about cannibalism over the last twenty years that they virtually constitute a minor genre of their own.[13] One way to describe Delicatessen’s achievement, then, is to say that it splices together classic slapstick with the cannibal film. There will be no way to appreciate what this means, however, until we have determined the status of the cannibal in contemporary cinema. Broadly speaking, images of the cannibal tend to participate in one of two discourses: Historically, they have played a rather repugnant role in the racist repertoire of colonial clichés. Cannibalism is one of the more extreme versions of the imperial Other, the savage who does not respect even the most basic of civilization’s taboos. Increasingly, however, in films such as Eat the Rich (1987) or Dawn of the Dead (1978), cannibalism has become a conventional (and more or less satirical) image of Europeans and Americans themselves—an image, that is, of consumerism gone awry, of a consumerism that has liquidated all ethical boundaries, that has sunk into raw appetite, without restraint.[14] For present purposes, this point is nowhere clearer than in Delicatessen’s final chase scene, in which the cannibalistic tenants of the film’s apartment house gather to hunt down the film’s hero. The important point here is that, within the conventions of classic Hollywood comedy, the film makes a conspicuous substitution, for our comic hero is not on the run from some exasperated factory foreman or broad-shouldered cop on the beat, as silent slapstick would have it. He is fleeing, rather, from a consumer mob, E.P. Thompson’s worst nightmare, some degraded, latter-day bread riot. It is important that we appreciate the full ideological force of this switchover: By staffing the old comic scenarios with kannibals instead of kops, the film is able to transform slapstick in concrete and specifiable ways. The cannibals mean that when Delicatessen revives Chaplin-era slapstick, it does so without Chaplin’s factories or Chaplin’s city. This is slapstick for some other, later stage of capitalism—modernist comedy from which modernist industry has disappeared, leaving only consumption in its place.

2) Slapstick, then, announces a pressing political problem, in Delicatessen as in silent comedy. It sounds an alarm on behalf of the besieged human body. Delicatessen’s project, in this sense, is to imagine that problem’s solution, to mount a counterattack, to ward off the principle of slapstick by shielding the human body from its batterings. The deranged, consumption-mad crowd, in this light, is one, decidedly sinister version of the collective, but it finds its counterimage here in a second collective, a radical collective—the vegetarian insurgency that serves as ethico-political anchor to the film. Or to be more precise: The film is a fantasy about the conditions under which an advanced consumer capitalism could be superseded, and in order to imagine this, it follows two different tracks: One of the film’s subplots follows the efforts of the anti-consumerist underground, the Troglodytes, while a second subplot stages a fairly ordinary romance between the clown-hero and a butcher’s daughter. Delicatessen thus divides its utopian energies between the revolutionary collective, depicted here as some lunatic version of La Resistance, and the heterosexual couple, imagined in impeccably Adornian fashion as the last, desperate repository of human solidarity, the faint afterimage of a non-instrumental relationship in a world otherwise given over to instrumentality.[15]

But this pairing does not exhaust the film’s political imagination, if only because knockabout does not exhaust the possibilities of slapstick. Delicatessen, in fact, is more revealing when it refuses roughhouse and shifts instead into one of slapstick’s other modes. Consider the key scene, early in the film, when the clown-hero, who has been hired as a handyman in the cannibal house, hauls out a bucket of soapy water to wash down the stairwell. The bucket, of course, is another slapstick icon, and anyone already cued in to the film’s generic codes might be able to predict how the scene will play out. Classic slapstick would dictate that the hero’s foot get wedged inside the bucket, that he skid helplessly across the ensuing puddle, that the mop pivot into the air and crack him in the forehead, that he somersault finally down the stairs. The important point, of course, is that no such thing happens. The clown does not get his pummeling. On the contrary, he uses his cleaning bucket to fill the hallway of this drear and half-inhabited house with giant, wobbling soap-bubbles, with which he then dances a kind of shimmy. It is in this moment, when the film pointedly repudiates the comedy of abuse, that the film modulates into a different tradition of screen comedy, what Mark Winokur has called “transformative” or “tramp” comedy.

The hallway scene, in other words, is Chaplin through and through. It is important, then, to specify the basic structure of the typical Chaplin gag—and to specify, in particular, what distinguishes Chaplin from the generalized brutality and bedlam of the Keystone shorts. Chaplin’s bits are so many visual puns: they work by taking an everyday object and finding a new and exotic use for it, turning a roast chicken into a funnel, or a tuba into an umbrella stand, or dinner rolls into two dancing feet.[16] In Delicatessen, such transformative comedy is apparent in the New Year’s Eve noisemaker that the frog-man uses as a tongue, to catch flies; or in the hero’s musical saw, which, in fact, is the very emblem of the film’s many objects—an implement liberated from its pedestrian uses, a tool that yields melody, a dumb commodity suddenly able to speak again, and not just to shill, but to murmur of new possibilities. It is in transformative comedy, then, in the spectacle of objects whose use has been transposed, that slapstick takes on a utopian function. Slapstick becomes, so to speak, its own solution: Knockabout slapstick, in which objects are perpetually in revolt against the human body, finds its redemption in transformative slapstick, in which the human body discovers a new and unexpected affinity with objects. The pleasure that is distinctive of Delicatessen is thus actually some grand comic version of Kant’s aesthetics, of Kant’s Beauty, premised as it is on the dawning and grateful realization that objects are ultimately and against all reasonable expectation suited to human capacities. Delicatessen reimagines the world as a perpetual pas de deux with the inanimate.[17]

Transformative slapstick, this is all to say, functions in Delicatessen as a kind of antidote to cannibalistic forms of consumption. At its most schematic, the film faces its viewers with a choice between two different ways of relating to objects: a cannibalistic relationship, in which the object will be destroyed by the consumer’s unchecked hunger, or a Chaplinesque relationship, in which the object will be kept alive and continually reinvented. And so at a moment when cinematic realism has fallen into a state of utter disrepair, when realism finds it can do nothing but script elegies for the working class—when even fine films like Ken Loach’s Ladybird Ladybird (1994) and Zonca’s Dream Life of Angels (1998) have opted for the funereal, with even the protest drubbed out of them—it falls to Delicatessen’s grotesquerie to fulfill realism’s great utopian function, to keep faith, as Bazin said, with mere things, “to allow them first of all to exist for their own sakes, freely, to love them in their singular individuality.”[18]

It is crucial, however, that we not confine this observation to Delicatessen, because in that film’s endeavor lies the buried aspiration of all retro-culture, even (or especially) at its most fetishistic. If you examine the signs that hang next to the objects at Restoration Hardware and other such retro-marts—these small placards that invent elaborate and fictional histories for the objects stacked there for sale—you will discover a culture recoiling from its commodities in the very act of acquiring them, a culture that thinks it can drag objects back into the magic circle if only it can learn to consume them in the right way. Retro-commodities utterly collapse our usual Benjaminian distinctions between the fetish and the aura, and they do so by taking as their fundamental promise what Benjamin calls  “the revolutionary energies that appear in the ‘outmoded,’” the notion that if you know the history of an item or if you can aestheticize even the most ordinary of objects—a well-wrought dustpan, perhaps, or a chrome toaster—then you are never merely buying an object; you are salvaging it from the sphere of circulation, and perhaps even from the tawdriness of use.[19]

This is not yet to say enough, however, because it is the achievement of Delicatessen to demonstrate that this retro-utopia is unthinkable without the apocalypse. For if the objects in Delicatessen achieve a luminosity that is denied even the most exquisite retro-commodities, then this is only because they occupy a ruined landscape, in which they come to seem singular and irreplaceable. Delicatessen is a film whose characters are forever scavenging for objects, scrapping over parcels that have gone astray, rooting through the trash like so many hobos or German Greens. It is the film’s fundamental premise, then, that in a time of shortage, and in a time of shortage alone, objects will slough off their commodity status. They will crawl out from under the patina of mediocrity that the exchange relationship ordinarily imposes on them. If faced with shortage, each object will come to seem unique again, fully deserving of our attention. There is a startling lesson here for anyone interested in the history of utopian forms: that utopia can require suffering, or at least scarcity, and not abundance; that the classical utopias of plenty—those Big Rock Candy Mountains with their lemonade springs and cigarette trees and smoked hams raining from the sky—are, under late capital, little more than hideous afterimages of the marketplace itself, spilling over with redundant and misdistributed goods, stripped of their revolutionary energy; that a society of consumption must, however paradoxically, find utopia in its antithesis, which is dearth.[20] And so we come round, finally, to my original point: that we must have, alongside Jameson, a second way of positing the identity of retro-culture and the apocalypse, one that will take us straight back to Benjamin: Underlying retro-culture is a vision of a world in which commodity production has come to a halt, in which objects have been handed down, not for our consumption, but for our care. The apocalypse is retro-culture’s deepest fantasy, its enabling wish.



[1] Jameson’s full comments can be found in the London Review of Books (Volume 23, Number 19, October 4, 2001). See also “Architecture and the Critique of Ideology,” in The Ideologies of Theory, Volume 2: The Syntax of History, pp. 35-60, esp. p. 41: “dialectical interpretation is always retrospective, always tells the necessity of an event, why it had to happen the way it did; and to do that, the event must already have happened, the story must already have come to an end.”

[2] This essay is available in multiple versions. The easiest to come by is perhaps “Postmodernism and Consumer Society,” in The Cultural Turn (London: Verso, 1998), pp. 1-20; the most densely argued is “The Cultural Logic of Late Capitalism,” in Postmodernism, or The Cultural Logic of Late Capitalism (Durham: Duke, 1991), pp. 1-54.

[3] The Seventh Sign, for what it’s worth, draws on at least four different genres: 1) It is, at the most general level, a Christian apocalypse narrative; its nominal subject is the End Time, the series of catastrophes set in motion by God in preparation for His final judgment. 2) But in doing so, it deploys most of the conventions of the occult horror film. Even though the film expressly states that God is responsible for the disasters depicted, it cannot help but stage those disasters as supernatural and scary, in sequences borrowed more or less wholesale from the exorcism and devil-child movies of the 1970s, which is to say that viewers are expected to experience God’s actions as essentially diabolical. The film may adorn itself with Christian trappings, but in a manner typical of the Gothic, it cannot, finally, represent religion as anything but frightening. 3) This last point is clearest in the film’s depiction of Jesus Christ, who actually appears as a character and is almost always filmed in shots lifted from serial-killer films—Jesus stands alone, isolated in ominous medium long-shots, his face half in shadow, lit starkly from the side. Jesus’ menace is also a plot point: Christ, in the film, rents a room from Demi Moore and, in a manner that recalls Pacific Heights (1990) or The Hand That Rocks the Cradle (1992), becomes the intruder in the suburban home, the malevolent force that the white professional family has mistakenly welcomed under its roof. 4) In its final logic, then, the film reveals itself to be just a disaster movie in disguise: The Apocalypse must be scuttled. Christ must be sent back to heaven (and thus evicted from the suburban home). Justice must be averted.

[4] I owe this point to a conversation with Roger Beebe. Even here, though, matters are more complicated than they at first seem. Hip-hop, after all, hardly dispenses with irony and pastiche altogether: Jay-Z has sampled “It’s the Hard-Knock Life” (from Annie) and Missy Elliott has sampled Mozart’s Requiem, but no-one is likely to suggest that hip-hop is establishing a genetic link back to the Broadway musical or Viennese classicism.

[5] Of course, as a nationalist project, retro will play out differently in different national contexts. Perhaps a related cinematic example will make this clear. Consider Jeunet’s Fabuleux destin d’Amélie Poulain (2001). At the level of diegesis—as a plain matter of plot and dialogue and character—the film has nothing at all to do with nationalism. On the contrary, it dedicates an entire subplot to undermining the provincialism of one of its characters, Amélie’s father, who resolves at movie’s end to become more cosmopolitan. The entire film is directed towards getting him to leave France. But at the level of form, things look rather different. Formally, the film is retro through and through. It won’t take a cinephile to notice the overt references to Jules et Jim (1962) and Zazie dans le Metro (1960), at which point it becomes clear that Amélie is a pastiche of the French New Wave, which is thereby transformed into a historical artifact of its own. Amélie, then, attempts to recreate the nouvelle vague, not with an eye to making it vital again as an aesthetic and political project, but merely to cycle exhaustively through its techniques, its stylistic tics, as though it were compiling some kind of visual compendium. The nationalism that the film’s narrative explicitly rejects thus reappears as a matter of form. Amélie works to draw our attention to the Frenchness of the New Wave, to codify it as a national style, and the presumed occasion for the film is therefore the ongoing battle, in France, over the Americanization of la patrie. Amélie is a bulldozer looking for its McDonald’s.

[6] See Jameson’s “The Antinomies of Postmodernity,” in The Cultural Turn, pp. 50-72, quotation p. 50.

[7] See “The Cultural Logic of Late Capitalism,” in Postmodernism, or The Cultural Logic of Late Capitalism (Durham: Duke, 1991), pp. 1-54, quotation p. 46.

[8] The second quotation cited above (note 7) goes on to make this point clear: Retro-culture, Jameson continues, “abandon(s) the thinking of future change to fantasies of sheer catastrophe and inexplicable cataclysm, from visions of ‘terrorism’ on the social level to those of cancer on the personal.”

[9] The Truman Show, to be fair, does hedge the matter somewhat. The film’s numerous cutaways to the show’s viewers show a “real world” that is itself populated by TV-thralls, Truman Burbanks of a lower order. So when Truman steps out of his videodrome, we have a choice: We can either conclude, in proper Lacanian fashion, that Truman has simply traded one media-governed pseudo-reality for another. Or we can conclude that the film is asking us to distinguish between those, like Truman, who are able to shrug off their media masters, and those, like his viewers, who aren’t. I take this to be the film’s constitutive hesitation, its undecidable question.

[10] See Adorno’s “Portrait of Walter Benjamin” in Prisms, translated by Samuel and Shierry Weber (Cambridge: MIT, 1981, pp. 227-241), here p. 240.

[11] Examples of these last can be found, but it takes some looking: Paul Verhoeven’s Starship Troopers is a retro World War II movie, more so than Pearl Harbor (2001) or Saving Private Ryan (1998), which aspire to be historical dramas; and the Coen brothers’ Hudsucker Proxy (1994) is unmistakably a retro-screwball (and such a lovely thing that it’s a wonder others haven’t followed its lead). But they are virtually the lone examples of their kinds, singular members of non-existent sets. Neo-noir, by contrast, has become too extensive a genre to list comprehensively.

[12] Perhaps a rare instance of literary slapstick, manifestly modeled on cinematic examples, will drive this point home. The following is from Martin Amis’s Money (London: Penguin, 198?), p. 289: “What is it with me and the inanimate, the touchable world? Struggling to unscrew the filter, I elbowed the milk carton to the floor. Reaching for the mop, I toppled the trashcan. Swivelling to steady the trashcan, I barked my knee against the open fridge door and copped a pickle-jar on my toe, slid in the milk, and found myself on the deck with the trashcan throwing up in my face … Then I go and good with the grinder. I took the lid off too soon, blinding myself and fine-spraying every kitchen cranny.”

[13] See, for instance, Eating Raoul (1982); Parents (1989); The Cook, The Thief, His Wife, and Her Lover (1989); and, in a different mood, Silence of the Lambs (1991) and Hannibal (2001).

[14] On the cultural uses of cannibalism, see Cannibalism and the Colonial World, edited by Francis Barker, Peter Hulme, and Margaret Iversen (Cambridge: Cambridge, 1998), especially Crystal Bartolovich’s “Consumerism, or the cultural logic of late cannibalism” (pp. 204-237).

[15] For a discussion of Delicatessen that pays closer attention to the film’s narrowly French contexts—its nostalgia for wartime, its debt to French comedies—see Naomi Greene’s Landscapes of Loss: The National Past in Postwar French Cinema (Princeton: Princeton, 1999).

[16] See, respectively, Modern Times; The Pawnshop (1916); The Gold Rush (1925).

[17] There’s a sense in which this operation is at work even in the most vicious knockabout. Even the most paradigmatically abusive comedies—the Keystone shorts, say—are redemptive in that the staging of abuse itself discloses a joyous physical dexterity. The staging of bodies out of synch with the inanimate world relies on bodies that are secretly very much in synch with that world—and this small paradox characterizes the pleasure peculiar to those films.

[18] Bazin, What is Cinema?, translated by Hugh Gray (Berkeley: California, 1967); see also Siegfried Kracauer’s Theory of Film: The Redemption of Physical Reality (New York: Oxford, 1965).

[19] See Benjamin’s “Surrealism: The Last Snapshot of the European Intelligentsia,” translated by Edmund Jephcott in the Selected Writings: Volume 2, 1927-1934, edited by Michael Jennings, Howard Eiland, and Gary Smith (Cambridge: Belknap, 1999, pp. 207-221), here p. 210.

[20] Compare Langlé and Vanderburch's utopia of abundance, as noted by Benjamin himself, in the 1935 Arcades-Project Exposé (in The Arcades Project, translated by Howard Eiland and Kevin McLaughlin—Cambridge: Belknap, 1999, pp. 3-13), here p. 7:

“Yes, when all the world from Paris to China

Pays heed to your doctrine, O divine Saint-Simon,

The glorious Golden Age will be reborn.

Rivers will flow with chocolate and tea,

Sheep roasted whole will frisk on the plain,

And sautéed pike will swim in the Seine.

Fricasseed spinach will grow on the ground,

Garnished with crushed fried croutons;

The trees will bring forth stewed apples,

And farmers will harvest boots and coats.

It will snow wine, it will rain chickens,

And ducks cooked with turnips will fall from the sky.”

(Translation altered)

To the Political Ontologists

The political ontologists have their work cut out for them. Let’s say you believe that the entire world is made out of fire: Your elms and alders are fed by the sky’s titanic cinder; your belly is a metabolic furnace; your lungs draw in the pyric aether; the air that hugs the earth is a slow flame—a blanket of chafing-dish Sterno—shirring exposed bumpers and cast iron fences; water itself is a mingling of fire air with burning air. The cosmos is ablaze. The question is: How are you going to derive a political program from this insight, and in what sense could that program be a politics of fire? How, that is, are you going to get from your ontology to your political proposals? For if fire is not just a political good, but is in fact the very stuff of existence, the world’s primal and universal substance, then it need be neither produced nor safeguarded. No merely human arrangement—no parliament, no international treaty, no tax policy—could dislodge it from its primacy. It will no longer make sense to describe yourself as a partisan of fire, since you cannot be said to defend something that was never in danger, and you cannot be said to promote something that is everywhere already present. Your ontology, in other words, has already precluded the possibility that fire is a choice or that it is available only in certain political frameworks. This is the fate of all political ontologies: The philosophy of all-being ends up canceling the politics to which it is only superficially attached. The –ology swallows its adjective.

The task, then, when reading the radical ontologists—the Spinozists, the Left Heideggerians, the speculative realists—is to figure out how they think they can get politics back into their systems; to determine by which particular awkwardness they will make room for politics amidst the spissitudes of being. In its structure, this problem repeats an old theological question, which the political ontologists have merely dressed in lay clothes—the question, that is, of whether we are needed by God or the gods. If you have given in to the pressure to subscribe to an ontology, then this is the first question you should ask: Whatever is at the center of your ontology—does it need you? Does Becoming need you? Is Being incomplete without you? Has the cosmic fire deputized you? And if you decide that, no, the fire does not need you—if, that is, you resist the temptation to appoint yourself that astounding entity upon which even the Absolute depends—then you will have yourself already concluded that there is nothing exactly to be gained from getting your ontology right, and you will be free to think about other and more interesting things.

If, on the other hand, you are determined to ontologize, and determined additionally that your ontology yield a politics, there are, roughly speaking, three ways you can make this happen.

First, you could determine that even though fire is the primal stuff of the universe, it is nonetheless unevenly distributed across it; or that the cosmos’s seemingly discrete objects embody fire to greater and lesser degrees. The heavy-gauge universalism of your ontology will prevent you from saying outright that water isn’t fire, but you might conclude all the same that it isn’t very good fire. This, in turn, would allow you to start drawing up league tables, the way that eighteenth-century vitalists, convinced that the whole world was alive, nonetheless distinguished between vita maxima and vita minima. And if you possess ontological rankings of this kind, you should be able to set some political priorities on their basis, finding ways to reward the objects (and people? and groups?) that carry their fiery qualities close to the surface, corona-like, and, equally, to punish those objects and people who burn but slowly and in secret. You might even decide that it is your vocation to help the world’s minimally fiery things—trout ponds, shale—become more like its maximally fiery things—volcanoes, oil-drum barbecue pits. The pyro-Hegelian takes it upon himself to convert the world to fire one timber-framed building at a time.

Alternately—and herewith a second possibility—you can proclaim that the cosmos is made of fire, but then attribute to humanity an appalling power not to know this. “Power” is the important word here, since the worry would have to be that human ignorance on this point could become so profound that it would damage or dampen the world-flame itself. Perhaps you have concluded that fire is not like an ordinary object. We know in some approximate and unconsidered way what it is; we are around it every day, walking in its noontide light, enlisting it to pop our corn, conjuring it from our very pockets with a roll of the thumb or knuckly pivot. And yet we don’t really understand the blaze; we certainly do not grasp its primacy or fathom the ways we are called upon to be its Tenders. You might even have discovered that we are the only beings, the only guttering flames in a universe of flame, capable of defying the fire, proofing the world against it, rebuilding the burning earth in gypsum and asbestos, perversely retarding what we have been given to accelerate. This argument expresses clear misgivings about humanity; it doesn’t trust us to keep the fire stoked; and to that extent it partakes of the anti-humanism that is all but obligatory among political ontologists. And yet it shares with humanism the latter’s sense that human beings are singular, a species apart, the only beings in existence capable of living at odds with the cosmos, capable, that is, of some fundamental ontological misalignment, and this to a degree that could actually abrogate an ontology’s most basic guarantees. From a rigorously anti-humanist perspective, this position could easily seem like a lapse—the residue of the very anthropocentrism that one is pledged to overcome—but it is in fact the most obvious opening for an anti-humanist politics (as opposed, say, to an anti-humanist credo), since you really only get a politics once the creedal guarantees have been lifted. If human beings are capable of forgetting the fire, someone will have to call to remind them. Someone, indeed, will have to ward off the ontological catastrophe—the impossible-but-somehow-still-really-happening nihilation of the fire—the Dousing.

That said, a non-catastrophic version of this last position is also possible, though its politics will be accordingly duller. Maybe duller is even a good thing. Such, at any rate, is the third pathway to a political ontology: You might consider arguments about being politically germane even if you don't think that humanity's metaphysical obtuseness can rend the very tissue of existence. You don't have to say that we are damaging the cosmic fire; it will be enough to say that we are damaging ourselves, though having said that, you are going to have to stop trying to out-anti-humanize your peers. Your position will now be that not knowing the truth about the fire-world deforms our policies; that if we mistake the cosmos for something other than flame, we are likely to attempt impossible feats—its cooling; its petrification—and will then grow resentful when these inevitably fail. You might, in the same vein, determine that there are entire institutions dedicated to broadcasting the false ontologies that underwrite such doomed projects, doctrines of air and doxologies of stone, and you might think it best if such institutions were dismantled. If it's politics we're talking about, you might even have plans for their dismantling. Even so, you will have concluded by this point that the problem is in its essentials one of belief—the problem is simply that some people believe in water—in which case, ontology isn't actually at issue, since nothing can happen ontologically; the fire will crackle on regardless of what we think of it, indifferent to our denials and our elemental philandering. You have thus gotten the politics you asked for, but only by having, in a certain sense, bracketed the ontology or placed it beyond political review. And your political program will accordingly be rather modest: a new framework of conviction—a clarification—an illumination.

Still, even a modest politics sometimes shows its teeth. William Connolly, in a book published in 2011, says that the world-fire is burning hotter than it has ever burnt; the problem is, though, that some "territories … resist" the flame. What we don't want to miss is the basically militarized language of that claim: "resisting territories" suggests backwaters full of ontological rednecks; Protestant Austrian provinces; the Pyrenees under Napoleon; Anbar. Connolly's notion is that these districts will need to be enlightened and perhaps even pacified, whereupon political ontology outs itself as just another program of philosophical modernization, a mopping-up operation, the People of the Fire's concluding offensive against the People of the Ice. "Don't fight it," Connolly, in this way too an irenicist, instructs the existentially retrograde. "Let it burn."

The all-important point, then, is that there is absolutely no reason to get hung up on the word "fire," in the sense that there is no more sophisticated concept you can put in its place that will make these problems go away: not Being, not Becoming, not Contingency, not Life, not Matter, not Living Matter. Go ahead: Choose your ontological term or totem and mad-lib it back into the last six paragraphs. Nothing else about them will change.

• • •

Anyone wanting to read Connolly's A World of Becoming, or Jane Bennett's Vibrant Matter, its companion piece from 2010, now has some questions they can ask. The two books share a program:

-to survey theories of chaos, complexity; to repeat the pronouncements of Belgian chemists who declare the end of determinism; and then to resurrect under the cover of this new science a much older intellectual program—a variously Aristotelian, Paracelsian, and hermetic strain in early modern natural philosophy, which once posited and will now posit again a living cosmos a-go-go with active forces, a universe whose intricate assemblages of self-organizing systems will frustrate any attempt to reduce them back to a few teachable formulas;

-or, indeed, to trade in “science” altogether in favor of what used to be called “natural history,” the very name of which strips nature of its pretense to permanence and pattern and nameable laws and finds instead a universe existing wholly in time, as fully exposed to contingency, mutation, and the event as any human invention, with alligators and river valleys and planets now occupying the same ontological horizon as two-field crop rotation and the Lombard Leagues;

-to recklessly anthropomorphize this historical cosmos, to the point where that entirely humanist device, which everywhere it looks sees only persons, tips over into its opposite, as humanity begins divesting itself of its specialness, giving away its privileges and distinguishing features one by one, and so produces a cosmos full of more or less human things, active, volatile, underway—a universe enlivened and maybe even cartoonish, precisely animated, staffed by singing toasters and jitterbugging hedge clippers.

I wouldn't blame anyone for finding this last idea rather winning, though one problem should be noted right away, which is that Connolly, in particular, despite getting a lot of credit for bringing the findings of the natural sciences into political theory—and despite repeating in A World of Becoming his earlier admonition to radical philosophers for failing to keep up with neurobiology and chemistry and such—really only quotes science when it repeats the platitudes of the old humanities. The biologist Stuart Kauffman has, Connolly notes, "identified real creativity" in the history of the cosmos or of nature. Other research has identified "degrees of real agency" in a "variety of natural-social processes." The last generation of neuroscience has helped specify the "complexity of experience," the lethal and Leavisite vagueness of which phrase should be enough to put us on our guard. It turns out that the people who will save the world are still the old aesthetes; it's just that their banalities can now borrow the authority of Nobel Laureates (always, in Connolly, named as such). Of one scientific finding Connolly notes: "Mystics have known this for centuries, but the neuroscience evidence is nice to have too." That will tell you pretty much everything you need to know about the role of science in the new vitalism, which is that it gets adduced only to ratify already held positions. This is interdisciplinarity as narcissistic mirror.

But we can grant Connolly his fake science—or rather, his fake deployment of real science. The position he and Bennett share—that the cosmos is full of living matter in a constant state of becoming—isn't wrong just because it's warmed-over Ovid. What really needs explaining is just which problems the political philosophers think this neuro-metamorphism is going to solve. More to the point, one wonders which problems a vitalist considers still unsolved. If Bennett and Connolly are right, then is there anything left for politics to do? Has Becoming bequeathed us any tasks? Won't Living Matter get by just fine without us? And if there is no political business yet to be undertaken, then in what conceivable sense is this a political philosophy and not an anti-political one?

The real dilemma is this: There are those three options for getting a politics back into ontology—you can devise an ontological hierarchy; you can combat ontological Vergessenheit; or you can promote ontological enlightenment. Bennett and Connolly don’t like two of these, and the third one—the one they opt for—ends up canceling the ontology they mean to advocate. I’ll explain.

Option #1: Hierarchy could work. Bennett and Connolly could try to distinguish between more and less dynamic patches of the universe—or between more and less animate versions of matter—but they don't want to do that. The entire point of their philosophical program is a metaphysical leveling; witness that defense of anthropomorphism. Bennett, indeed, uses the word "hierarchical" only as an insult, the way that liberals and anarchists and post-structuralists have long been accustomed to doing. Having only just worked out that all of matter has the characteristics of life, she is not about to proclaim that some life forms are more important than others. Her thinking discloses a problem here, if only because it reminds one of how difficult it has been for the neo-vitalists to figure out when to propose hierarchies and when to level them, since each seems to come with political consequences that most readers will find unpalatable. Bennett herself worries that a philosophy of life might remove certain protections historically afforded humans and thus expose them to "unnecessary suffering." She positions herself as another trans- or post-humanist, but she doesn't want to give up on Kant and the never really enforced guarantees of a Kantian humanism; she thinks she can go over to Spinoza and Nietzsche and still arrive at a roughly Left-Kantian endpoint. "Vital materialism would … set up a kind of safety net for those humans who are now … routinely made to suffer." That idea—which sounds rather like the Heidegger of the "Letter on Humanism"—is, of course, wrong. Bennett is right to fret. A vitalist anti-humanism is indeed rather cavalier about persons, as her immediate predecessors and philosophical mentors make amply clear. The hierarchies it erects are the old ones: Michael Hardt and Toni Negri think it is a good thing that entire populations of peasants and tribals were wiped out because their extermination increased the vital energies of the system as a whole. And if vitalism's hierarchies produce "unnecessary suffering," well, then so do its levelings: Deleuze and Guattari think that French-occupied Africa was an "open social field" where black people showed how sexually liberated they were by fantasizing about "being beaten by a white man."

Option #2: They could follow the Heideggerian path, which would require them to show that humanity is a species with weird powers—that humans (and humans alone) can fundamentally distort the universe's most basic feature or hypokeimenon. That would certainly do the political trick. Vitalism would doubtless take on an urgency if it could make the case that human beings were capable of dematerializing vibrant matter—or of making it less vibrant—or of pouring sugar into the gas tank of Becoming. But Bennett and Connolly are not going to follow this path either, for the simple reason that they don't believe anything of the sort. Their books are designed in large part to attest the opposite—that humanity has no superpowers, no special role to play nor even to refuse to play. Early on, Bennett praises Spinoza for "rejecting the idea that man 'disturbs rather than follows Nature's order.'" We'll want to note that Spinoza's claim has no normative force; it's a statement of fact. We don't need to be talked out of disturbing nature's order, because we already don't. The same grammatical mood obtains when Bennett quotes a modern student of Spinoza: "human beings do not form a separate imperium unto themselves." We "do not"—the claim in its ontological form means could not—stand apart and so await no homecoming or reunion.

Those sentences sound entirely settled, but there are other passages in Vibrant Matter where you can watch in real time as such claims visibly neutralize the political programs they are being called upon to motivate. Here's Bennett: "My hunch is that the image of dead or thoroughly instrumentalized matter feeds human hubris and our earth-destroying fantasies of conquest and consumption." On a quick read you might think that this is nothing more than a little junk Heideggerianism—that techno-thinking turns the world into a lumberyard, &c. But on closer inspection, the sentence sounds nothing like Heidegger and is, indeed, entirely puzzling. For if it is "hubris" to think that human beings could "conquer and consume" the world—not hubris to do it, but hubris only to think it, hubris only in the form of "fantasy"—then in what danger is the earth of actually being destroyed? How could mere imagination have world-negating effects and still remain imagination? Bennett's position seems to be that I have to recognize that consuming the world is impossible, because if I don't, I might end up consuming the world. Her argument only gains political traction by crediting the fantasy that she is putatively out to dispel. Or there's this: Bennett doesn't like it when a philosopher, in this instance Hannah Arendt, "positions human intentionality as the most important of all agential factors, the bearer of an exceptional kind of power." Her book's great unanswered question, in this light, is whether she can account for ecological calamity, which is perhaps her central preoccupation, without some notion of human agency as potent and malign, if only in the sense that human beings have the capacity to destroy entire ecosystems and striped bass don't. The incoherence that underlies the new vitalism can thus be telegraphed in two complementary questions: If human beings don't actually possess exceptional power, then why is it important to convince them to adopt a language that attributes to them less of it? But if they do possess such power, then on what grounds do I tell them that their language is wrong?

Option #3: Enlightenment it is, then. What remains, I mean, for both Connolly and Bennett, is the simple idea that most people subscribe to a false ontology and are accordingly in need of re-education. Connolly describes himself and his fellow vitalists as "seers"—he also calls them "those exquisitely sensitive to the world"—and he more than once quotes Nietzsche referring to everyone else, the non-seers, the foggy-eyed, as "apes." I don't much like being called an orangutan and know others who will like it even less, but at least this rendering of Bennett/Connolly has the possible merit of making the object-world genuinely autonomous and so getting the cosmos out from under the coercions of thought. Our thinking might affect us, but it cannot affect the universe. But there is a difficulty even here—the most injurious of political ontology's several problems, I think—which is that via this observation philosophy returns magnetically to its proper object—or non-object—which is thought, and we realize with a start that the only thing that is actually up for grabs in these new realist philosophies of the object is in fact our thinking personhood. This is really quite remarkable. Bennett says that the task facing contemporary philosophy is to "shift from epistemology to ontology," but she herself undertakes the dead opposite. She has precisely misnamed her procedure: "We are vital materiality," she writes, "and we are surrounded by it, though we do not always see it that way. The ethical task at hand here is to cultivate the ability to discern nonhuman vitality, to become perceptually open to it." There is nothing about her ontology that Bennett feels she needs to work out; it is entirely given. The philosopher's commission is instead to devise the moralized epistemology that will vindicate this ontology, and which will, in its students, produce "dispositions" or "moods" or, as Connolly has it, a "working upon the self" or the "cultivation of a capacity" or a "sensibility" or maybe even just another intellectual "stance." Connolly and Bennett have lots of language for describing mindsets and almost no language for describing objects. Their arguments take shape almost entirely on the terrain of Geist. They really just want to get the subjectivity right.

There are various ways one might bring this betrayal of the object into view, in addition to quoting Bennett and Connolly's plain statements on the matter. Among the great self-defeating deficiencies of these books are the fully pragmatist argumentative procedures adopted by their authors, who adduce no arguments in favor of their chosen ontology. Bennett points out that her position is really just an "experiment" with different ways of "narrating"; an "experiment with an idea"; a "thought experiment," Connolly says. "What would happen to our thinking about nature if…" The post-structuralism that both philosophers think they've put behind them thus survives intact. But such play with discourse is, of course, entirely inconsistent with a robust philosophy of objects, since it is premised on the idea that the object exerts no pressure on the language we use to describe it, which indeed we elect at will. The mind, as convinced of its freedom as it ever was, chooses a philosophical idiom just to see what it can do.

This problem—the problem, I mean, of an object-philosophy that can't stop talking about the subject—then redoubles itself in two ways:

– The problem is redoubled, first, in the blank epiphanies of Bennett's prose style, and especially when she makes like Novalis on the streets of Baltimore, putting in front of readers an assemblage of objects the author encountered beneath a highway underpass so that we can imagine ourselves beside her watching them pulsate. The problem is that she literally tells us nothing about these items except that she heard them chime. One wants to say that she chose four particular objects—a glove, pollen, a dead rat, and a bottle cap—except that formulation is already misleading, since, lacking further description, these four objects really aren't particular at all. They are sham specificities, for which any other four objects could have served just as well. She could have changed any or all of them—could have improvised any Borgesian quartet—and she would have written that page in exactly the same manner. You can suggest your own, like this:

-a sock, some leaves, a lame squirrel, and a soda can

-a castoff T-shirt, a fallen tree limb, a hungry kitten, and an empty Cheetos bag

-a bowler hat, a beehive, a grimy parasol, and Idi Amin

These aren’t objects; these are slots; and Bennett’s procedure is to that extent entirely abstract. This is what it means to say that materialism, too, is just another philosophy of the subject. It does no more or less than any other intellectual system, maintaining the word “object” only as a vacancy onto which to project its good intentions.

– The problem is redoubled, second, in the nakedly religious idiom in which these two books solemnize their arguments. That idiom, indeed, is really just pragmatism in cassock and cope. The final page of Bennett's book prints a "Nicene Creed for would-be vital materialists." Connolly's book begins by offering its readers "glad tidings." Nor does the latter build arguments or gather evidence; he "confesses" a "philosophy/faith," which is also a "faith/conviction," which is also a "philosophy/creed." Bennett and Connolly hold vespers for the teeming world. Eager young materialists, turning to these books to help round out their still developing views, must be at least somewhat alarmed to discover that our relationship to matter is actually one of "faith" or "conviction." A philosophical account of the object is replaced by a pledge—a deferral—a promise, by definition tentative, offered in a mood of expectancy, to take the object on trust. Nor is this in any way a gotcha point. Connolly is completely open about his (Deleuzian) aim "to restore belief in the world." It's just that no sooner is this aim uttered than the world undergoes the fate of anything in which we believe, since if you name your belief as belief, then you are conceding that your position is optional and to some considerable degree unfounded and that you do not, in that sense, believe it at all.

It’s not difficult, at any rate, to show that Connolly for one does not believe in his own book. The stated purpose of A World of Becoming is to show us how to “affirm” that condition. That’s really all that’s left for us to do, once one has determined that Becoming will go on becoming even without our help and even if we work against it. Connolly’s writing, it should be said, is generally short on case studies or named examples of emergent conjunctures, leaving readers to guess what exactly they are being asked to affirm. For many chapters on end, one gets the impression that the only important way in which the world is currently becoming is that more people from Somalia are moving to the Netherlands, and that the phrase “people who resist Becoming” is really just Connolly’s idiosyncratically metaphysical synonym for “racists.” But near the end of the book, three concrete examples do appear, all at once—three Acts of Becoming—two completed, one still in train: the 2003 invasion of Iraq; the 2008 financial collapse; and global warming. All three, if regarded from the middle distance, seem to confirm the vitalist position in that they have been transformative and destabilizing and will for the foreseeable future produce unpredictable and ramifying consequences. What is surprising—but then really, no, finally not the least bit surprising—is that Connolly uses a word in regard to these three cases that a Nietzschean committed to boundless affirmation shouldn’t be able to so much as write: “warning.” Melting icecaps are not to be affirmed—that’s Connolly’s own view of the matter. Mass foreclosure is not to be affirmed. Quite the contrary: If you know that the cosmos is capable of shifting suddenly, then you might be able to get the word out. The responsibility borne by philosophers shifts from affirmation to its opposite: Vitalists must caution others about what rushes on. The philosopher of Becoming thus asks us to celebrate transformation only until he runs up against the first change he doesn’t like.

This is tough to take in. Lots of things are missing from political ontology: politics, objects, an intelligible metaphilosophy. But surely one had the right to expect from a theorist of systemic and irreversible change, one with politics on his mind, some reminder of the possibility of revolution, some evocation, since evocations remain needful, of the joy of that mutation, the elation reserved for those moments when Event overtakes Circumstance. But in Connolly, where one might have glimpsed the grinning disbelief of experience unaccounted for, one finds only the bombed-out cafés of Diyala, hence fear, hence the old determination to fight the future. The philosopher of fire grabs the extinguisher. The philosopher of water walks in with a mop.

Thanks to Jason Josephson and everyone in the critical theory group at Williams College.

Illegals, Part 2

PART ONE IS HERE. 

ALLEGORICAL COMPLEXITY #1—Super 8, eventually:

You can think of this as a tip for reading: When you are trying to make sense of an allegory, it is not enough to list the resemblances between the allegorical construct and its real-world referent, between the spaceman and the Jewish fugitive; you’ll need to catalogue their divergences, as well. For excess is the permanent condition of allegory. An invented creature never fully disappears into its literal equivalents; the alien is not exhausted by the designation “Jewish.” The reader’s task, then, is not to vaporize a given movie’s specificities, not to absorb them into some higher meaning that, once decrypted, would render the movie itself superfluous. Part of the task is to account, rather, for the allegory’s remainders, the scraps of significance that are left over even once the allegorical identification has been successfully announced. These unattached features are the mark of a contradiction that is internal to allegory; they disclose desires that the world’s already existing names cannot satisfy.

An alien invasion movie of a different kind, then, before we get to Super 8, just to make clear that this point is specific to no one film. The allegory in James Cameron’s Avatar, from 2009, is open-and-shut and, one might object, mostly shut—entirely too neat—elementary and plodding. The movie’s aliens are Indigenous People, a blue-skinned cross between the Chinook and the Zulu, called the Na’vi, which sounds like Navajo + Hopi. But the very obviousness of the allegory ends up producing some interesting effects of its own, for Avatar is so unoverlookably anti-imperialist—anti-imperialist in such a thorough-going way—that no-one who cares about such a politics can afford to just skip it or to write it off too quickly. Its story is certainly familiar; it’s just the twice-told tale about a white guy crossing sides, going native, turning Turk. But a comparative approach would show that the movie actually blows clean past the hedges and outs that typically blight such narratives, and especially the famous recent ones: Dances with Wolves, say, or The Last Samurai. Those movies are easy to hate. The really foul thing about Dances is that Kevin Costner falls in love with an Indian woman, except she isn’t really Indian—she’s the only other white person in the tribe—and you know this because she wears her hair differently, as though the Sioux kept on staff a special whites-only beautician. This only nominally pro-Indian movie goes to completely absurd lengths to prevent inter-racial sex. It is in this sense that the people who insisted that Avatar was nothing more than a live-action replay of FernGully or Disney’s Pocahontas weren’t paying attention. Sure, Avatar borrows from other movies, and yet it distinguishes itself even so by its open-throttle commitment to indigenism and racial treason. Quick—list for me all the other Hollywood movies you’ve seen that end with a vision of white people getting sent back to Europe for good. The movie baptizes everyone who watches it into the end of the American empire.

It does more than that. One of Avatar's first-order complexities is that the opposing forces on the two sides of its central conflict—the human invaders and the indigenous aliens—have been borrowed from very different periods in the history of empire. The Na'vi call to mind the precolonial Kikuyu or the Algonquin before Columbus, but the movie's humans are neither Puritan nor pith-helmeted; they are new-model conquistadors, Halliburton types, the corporate mercenaries of the War on Terror. Avatar asks us to imagine how it would look if the current US army were invading North America or Africa for the first time—What if the Massachusetts Bay Company had employed Blackwater?—which means that, at the level of the image, the movie manages to insert the Iraq War into some much longer histories, folding Bush-era adventurism into an overarching account of European colonization. To that extent, James Cameron is actually rather smarter about empire than the run-of-the-mill American liberals who talk as though 2003 were some kind of shocking deviation from the fundamental patterns of US history, a freedom-loving nation's unprecedented lapse into expansion and conquest. And in a similar vein, the movie is willing to dwell, to a quite unusual degree for a blockbuster, on images of imperial atrocity—familiar images, doubtless, if you know that history, but replayed for a global audience with immediacy and renewed grief: The Smurf-Seminoles walk the Trail of Tears.

I also think the movie’s length, about which those prone to headaches might rightfully complain, turns out to be its great asset. And the best thing about those 160 minutes is this: Avatar is a utopia hiding in an action movie. The movie is so indulgent that it can afford to give us a protracted utopian sequence, itself almost as long as an ordinary feature film, when, in fact, there is no genre that commercial film avoids more studiously than utopia. My friends who study the form will get huffy at this point: So yes, absolutely, the utopia in Avatar is badly underspecified; it is not much interested in how the Na’vi feed or govern themselves. It approaches the better society almost only through the natives’ theology. But in some respects, this is actually where the movie is at its most ingenious. Cameron, who as I write is crawling on his hands and knees around the Mariana Trench, has found a way to put his pricey 3D-technology in the service of utopia—or at least of a certain pantheism, which in this case is almost the same thing. As a sensory experience, the movie obviously feels new and exhilarating, and I want to say that in some almost Ruskinite way, the film is determined to revitalize your sensorium, to create a constant sense of wonder at the simple fact that we all live in a three-dimensional world. The movie obviously makes a big deal of the characters being connected, being able to interface with nature, to plug into it, in a way that is both technological and shamanistic, and I think the movie thereby provides its own gloss on its technological ambitions: It’s as though Cameron thinks he can use the most advanced technology that a director has ever commanded to approximate in the viewer a basically vitalist and world-adoring attitude.

But then it is precisely here that instability takes over. It is here, I mean, that we have to shift from naming the ways in which the Na’vi are most like Amazonians to naming where they are least so. Avatar is not only putting in front of us an indigenism; it is putting in front of us a technologized indigenism, and there is something about this latter that is odd and finally unsatisfying. That point comes in a specific form and a general one. Here’s the specific one. The biggest innovation in twentieth-century warfare was air power: the bi-plane, the bomber, firebombing, the atomic bomb, napalm, no-fly zones, shock and awe, assassin drones, death from above. Air power is what has permanently shifted the global balance of power to the hyper-technological nations. And the movie’s trick—ingenious in a sense, but also silly—is to give the indigenous a Luftwaffe: Dragons! The flying monsters, in other words, are the equalizer that makes the movie’s political allegory work, but they are themselves entirely non-allegorizable, which means that the entire system of correspondences actually starts coming unglued around them.

In other words, the movie’s politics are at heart fake, because it is trying to imagine a people who live in harmony with nature, who get by without advanced technology, but it has to give them the equivalent of helicopters, because if they didn’t have the equivalent of helicopters, they would get wiped out by the Helicopter People of Earth. But then the movie is ducking the really hard political question, which is: How might a non-technological people actually survive? How could they defend themselves against the cyborg nations who would steal their land and resources? Avatar dodges those questions, and so ends up being just another impotent historical fantasia.

The broader version of that point, meanwhile, is this: It’s well known that the sci-fi movies that most distrust technology are the ones that rely on it most extensively, but Avatar radicalizes that paradox in both directions. It was upon release the most technologically advanced movie ever made, and yet it is utterly, committedly elfin and eco- in its ideology. But then in another sense, that very antithesis is breached, because the movie devises ways to comprehensively sneak technology back into nature itself. The forest paths light up, as though electrically, when the Na’vi tread on them. The aborigines plug their ponytails into animals and trees as into Ethernet ports or wall sockets. Their manes have slim, wavy organic tendrils, which however also look like fibers or cables. And the Sigourney Weaver character at one point openly compares all this to a computer: the natives are jacking into the planet and downloading information from it. On the one hand, this is itself just allegory for what we take to be “the tribal worldview”—being in touch with nature or what have you—and if we accept the entirely plausible idea that indigenous and stateless peoples have been extraordinarily attentive to ecologies—that they were really good at reading landscapes, &c—then this could merely serve as science-fiction shorthand for that skill. What’s remarkable, though, is that Cameron has translated this into a technological image. That’s the other hand. The non-technological understanding of the world gets its technological allegory. So this is what it means to say that allegory yields contradiction. Is the image of plugging into nature technological or not? It is and it isn’t—and this speaks volumes about the movie’s bad faith. A global viewership sides with a pre-technological people only when it emerges that they have the newest gadgets. Avatar reassures its audience that they could go back to the land and actually give up on nothing—that they could go off the grid and still have the grid—that they could move to the Gallatin Range and keep their every last iPhone.

PART THREE IS HERE…

Special thanks to Crystal Bartolovich, who convinced me to take the role of technology in Avatar much more seriously than I was initially inclined to and who has much more to say on the topic in her forthcoming Natural History of the Common. For a preview of her argument, see also this interview.

Illegals, Part 1

I've been thinking a lot about alien invasion movies, and especially about the ones that feature human children, boy-explorers or pre-teen ambassadors to the talking bugs. I suppose it would just be easier to say that I've been thinking about ET and its recent imitators: Super 8, Attack the Block. But even this would be a way of sidestepping the truth, which is that mostly I've been thinking about ALF. I have, in fact, been thinking about ALF for a very long time. In the very late '80s, as a teenager, I spent a year in Frankfurt, and there was nothing that bothered me more in that period of my life than the centrality of ALF to modern German culture. I had gone to the Rhine to learn about Günter Grass and anarchism and was still under the impression that I could outrun network television. I suppose I was mildly surprised that the Germans had, like, vacuum cleaners. ALF was at that point a pretty fair summation of everything I thought I was leaving safely back home in New England. But that show was way more popular in Germany than it ever had been in Massachusetts: Ninja-Turtle-early-Bart-Simpson-eat-my-shorts popular. It seemed like it was always running in the background in every house I visited. The stalls at small-town German street fairs were crowded with long-snooted, rusty yellow puppets, in all the places that a visitor might have expected to see hand-made Christmas decorations or tankards in the shape of castle towers. I should point out that it wasn't just the Federal Republic; a Eurail pass revealed to me that the series had a pan-continental following. But only in Germany did the puppet's voice actor spend three months in the pop charts, with a single called "Hallo ALF – hier ist Rhonda." And the thing is, when I went back to Germany for a year after college—to Berlin in the mid-90s—ALF, having been off the air in the US for half a decade, was still around, still on T-shirts and decals and school folders. The Germans left stranded by the show's American cancellation had taken to producing ALF radio plays. Project ALF—a one-off TV movie that ran on NBC in 1996—got a theatrical release and a big rollout in Germany: ALF—Der Film. It played in Berlin's showcase theaters. Garfield-reimagined-as-warthog looked down from on high upon the Kurfürstendamm.

So the question that posed itself ever more insistently was: Why were the Germans so hung up on this show? And one night in Berlin, an American buddy and I drank our way to clarity. ALF, of course, is a Holocaust story—you knew that already; you’re irritated I didn’t see it sooner—a sitcom about a family hiding someone in its attic, someone the government wants to seize, a permanent exile with no homeland to which he can return. Those oversized ALF dolls turned out to be the only way that a young German could take a Jewish proxy home and fantasmatically keep him safe in a wardrobe or nighttime embrace. They belonged at one remove to the history of extravagantly racialized children’s toys — plastic figurines of Native American braves, Black rag dolls. They were the stuffed animals of genocide comedy. The original NBC production hadn’t gone to any lengths to disguise this: those bushy eyebrows; that schnozz; that gruff, Catskills shtick. The show’s lone and improbable joke was that if the fascists ever took power in America, someone would have to agree to shelter Don Rickles. And with this insight in mind, I made a special trip to the university library in Berlin to chase down a hunch, and it was right: Anne Frank was not the girl’s real name, or at least not her full name. Her name was Annelies Frank: A … L … F.

The show, which premiered in 1986, was also directly derived from—or a Muppet-y riff upon—ET, released in 1982. And in that case, most of what we have to say about ALF can simply be repeated about the movie. Spielberg did not wait until the 1990s to start making films about the Holocaust. When ET came out, he had already just made one—Raiders of the Lost Ark, which ends when the insulted might of ancient Israel obliterates a small army’s worth of Nazis. Light flashes and German flesh renders like tallow: Raiders presents an alternate history in which the Jews possessed a small A-bomb of their own, a game-changer and plague of radioactive locusts for the European war. ET, then, was itself just an extrapolation from a Dutch Holocaust diary and perhaps the first narrative in which suburban Americans were invited to imagine keeping Jews as pets.

Something about this argument we will want to generalize, since alien invasion movies are always going to be, to some degree or another, racial allegories. That can't come as a surprise to anyone who speaks English, a language in which the word "alien" means both "squid creature from another solar system" and "non-citizen." But then I should say, too, that lots of serious readers think that allegories—or allegorical habits of interpretation—are conceptually pretty low-rent, the literary equivalent of rebuses. They're wrong. If you really and truly give up on allegorical reading, you're going to miss too much of importance—too much of what makes storytelling compelling to us—which means that most literary critics don't, in fact, give up on it. They just waste a lot of time reinventing it piecemeal under other names. Nor is allegory as straightforward as the sophisticates claim; it generates its own forms of complexity and its own revelatory instabilities. But then this last point partially vindicates the people who don't like allegory. Naming the allegory is the easy part; it's really just the beginning. Allegories tell us one thing when they work, but they tell us something else—something arguably more valuable—when they don't. And allegories never work perfectly. They can't work perfectly. An impeccably rendered allegorical Jew would no longer be recognizable as allegory. He would just be a Jew. Like a dying werewolf shriveling back into its naked human form, he would revert to literalness, from extraterrestrial to Ashkenazi. Distortion and mismatch are the preconditions of allegory, the dysfunctions that make it function. If you are reading allegorically, you can never just whip out the decoder ring.

So I want to look over the next few days at those recent homages to ET—one from the US, one from the UK—and I want to name their allegories, but I want to underscore from the outset that these are most interesting where least steady.

PART TWO IS HERE.

Outward Bound: On Quentin Meillassoux’s After Finitude

Il n'y a pas de hors-texte. If post-structuralism has had a motto—a proverb and quotable provocation—then surely it is this, from Derrida's Of Grammatology. Text has no outside. There is nothing outside the text. It is tempting to put a conventionally Kantian construction on these words—to see them, I mean, as bumping up against an old epistemological barrier: Our thinking is intrinsically verbal—in that sense, textual—and it is therefore impossible for our minds to get past themselves, to leave themselves behind, to shed words and in that shedding to encounter objects as they really are, in their own skins, even when we're not thinking them, plastering them with language, generating little mind-texts about them. But this is not, in fact, what the sentence says. Derrida's claim would seem to be rather stronger than that: not There are unknowable objects outside of text, but There are outside of text no objects for us to know. So we reach for another gloss—There is only text, ain't nothing but text—except the sentence isn't really saying that either, since to say that there is nothing outside text points to the possibility that there is, in a manner yet to be explained, something inside text, and this something would not itself have to be text, any more than caramels in a carrying bag have to be made out of cellophane.

So we look for another way into the sentence. An alternate angle of approach would be to consider the claim’s implications in institutional or disciplinary terms. The text has no outside is the sentence via which English professors get to tell everyone else in the university how righteously important they are. No academic discipline can just dispense with language. Sooner or later, archives and labs and deserts will all have to be exited. The historians will have to write up their findings; so will the anthropologists; so will the biochemists. And if that’s true, then it will be in everyone’s interest to have around colleagues who are capable of reflecting on writing—literary critics, philosophers of language, the people we used to call rhetoricians—not just to proofread the manuscripts of their fellows and supply these with their missing commas, but to think hard about whether the language typically adopted by a given discipline can actually do what the discipline needs it to do. If the text has no outside, then literature professors will always have jobs; the idea is itself a kind of tenure, since it means that writerly types can never safely be removed from the interdisciplinary mix. The idea might even establish—or seek to establish—the institutional primacy of literature programs. Il n’y a pas de hors-texte. There is nothing outside the English department, since every other department is itself engaged in a more or less literary endeavor, just one more attempt to make the world intelligible in language.

Such, then, is the interest of Quentin Meillassoux's After Finitude, first published in French in 2006. It is the book that, more than any other of its generation, means to tell the literature professors that their jobs are not, in fact, safe. Against Derrida it banners a counter-slogan of its own: "it could be that contemporary philosophers have lost the great outdoors, the absolute outside." It is Meillassoux's task to restore to us what he is careful not to call nature, to lead post-structuralists out into the open country, to make sure that we are all getting enough fresh air. Meillassoux means, in other words, to wean us from text, and for anyone beginning to experience a certain eye-strain, a certain cramp of the thigh from not having moved all day out of his favorite chair, this is bound to be an appealing prospect, though if you end up unconvinced by its arguments—and there are good reasons for doubt, as the book amounts to a tissue of misunderstanding and turns, finally, on one genuinely arbitrary prohibition—then it's all going to end up sounding like a bullying father enrolling his pansy son in the Boy Scouts against his will: Get your head out of that book! Why don't you go in the yard and play?!

• • •

Of course, Meillassoux’s way of getting the post-structuralists to go hiking with him is by telling them which books to read first. If you start scanning After Finitude’s bibliography, what will immediately stand out is its programmatic borrowing from seventeenth- and early eighteenth-century philosophers. Meillassoux regularly cites Descartes and poses anew the question that once led to the cogito, but will here lead someplace else: What is the one thing I as a thinking person cannot disbelieve even from the stance of radical doubt? He christens one chapter after Hume and proposes, as a knowing radicalization of the latter’s arguments, that we think of the cosmos as “acausal.” In the final pages, Galileo steps forward as modern philosophy’s forgotten hero. His followers are given to saying that Meillassoux’s thinking marks out a totally new direction in the history of philosophy, but I don’t think anyone gets to make that kind of claim until they have first drawn up an exhaustive inventory of debts. At one point, he praises a philosopher publishing in the 1980s for having “written with a concision worthy of the philosophers of the seventeenth century.” That’s one way to get a bead on this book—that it resurrects the Grand Siècle as a term of praise. The movement now coalescing around Meillassoux—the one calling itself speculative realism—is a bid to get past post-structuralism by resurrecting an ante-Kantian, more or less baroque ontology, on the understanding that nearly all of European philosophy since the first Critique can be denounced as one long prelude to Derrida. There never was a “structuralism,” but only “pre-post-structuralism.”

Meillassoux, in sum, is trying to recover the Scientific Revolution and early Enlightenment, which wouldn't be all that unusual, except he is trying to do this on radical philosophy's behalf—trying, that is, to get intellectuals of the Left to make their peace with science again, as the better path to some of post-structuralism's signature positions. His argument's reliance on early science is to that extent instructive. One of the most appealing features of Meillassoux's writing is that it restages something of the madness of natural philosophy before the age of positivism and the research grant; it retrieves, paragraph-wise, the sublimity and wonder of an immoderate knowledge. In 1712, Richard Blackmore published an epic called Creation, which you've almost certainly never heard of but which remained popular in Britain for several decades. That poem tells the story of the world's awful making, before humanity's arrival, and if you read even just its opening lines, you'll see that this conception is premised on a rather pungent refusal of Virgil and hence on a wholesale refurbishing of the epic as genre: "No more of arms I sing." Blackmore reclassifies what poets had only just recently been calling "heroic verse" as "vulgar"; the epic, it would seem, has degenerated into bellowing stage plays and popular romances and will have to learn from the astrophysicists if it is to regain its loft and dignity. Poets will have to accompany the natural philosophers as they set out "to see the full extent of nature" and to tally "unnumbered worlds." The point is that there was lots of writing like this in the eighteenth century, and that it was aligned for the most part with the period's republicans and pseudo-republicans and whatever else England had in those years instead of a Left. This means that the cosmic epic was to some extent a mutation of an early Puritan culture, a way of carrying into the eighteenth century earlier trends in radical Protestant writing, and especially the latter's Judaizing or philo-Semitic strains. The idea here was that Hebrew poetry provided an alternative model to Greek and Roman poetry: a sublime, direct poetry of high emotion, of inspiration, ecstasy, and astonishment. The Creation is one of the things you could read if you wanted to figure out how ordinary people ever came to care about science—how science was made into something that could turn a person on—and what you'll find in its pages is a then new aesthetic that is equal parts Longinus and Milton, or rather Longinus plus Moses plus Milton plus Newton, and not a Weberian or Purito-rationalist Newton, but a Newton supernal and thunder-charged, in which the Principia is made to yield science fiction. It is, finally, this writing that Meillassoux is channeling when he asks us—routinely—to contemplate the planet's earliest, not-yet-human eons; when, like a boy-intellectual collecting philosophical trilobites, he demands that our minds be arrested by the fossil record or that all of modern European philosophy reconfigure itself to accommodate the dinosaurs. And it is the eighteenth-century epic's penchant for firebolt apocalyptic that echoes in his descriptions of a cosmos beyond law:

Everything could actually collapse: from trees to stars, from stars to laws, from physical laws to logical laws; and this not by virtue of some superior law whereby everything is destined to perish, but by virtue of the absence of any superior law capable of preserving anything, no matter what, from perishing.

Meillassoux’s followers call this an idea that no-one has ever had before. The epic poets once called it Strife.

Why so many readers have discovered new political energies in Meillassoux's argument is perhaps hard to see, since the book contains absolutely nothing that would count, in any of the ordinary senses, as political thought. There are, it's true, a few passages in which Meillassoux lets you know he thinks of himself as a committed intellectual: a (badly underdeveloped) account of ideology critique; the faint chiming, in one sentence, of The Communist Manifesto; a few pages in tribute to Badiou. With a little effort, though, the political openings can be teased out, and they are basically twofold: 1) Meillassoux says that thought's most pressing task is to do justice to the possibility—or, indeed, to the archaic historical reality—of a planet stripped of its humans. On at least one occasion, he even uses, in English translation, the phrase "world without us." For anyone looking to devise a deep ecology by non-Heideggerian means—and there are permanent incentives to reach positions with as little Heidegger as possible—Meillassoux's thinking is bound to be attractive. The book is an entry, among many other such, in the competition to design the most attractive anti-humanism. 2) The antinomian language in the sentence last quoted—laws could collapse; there is no superior law—or, indeed, the very notion of a cosmos structured only by unnecessary laws—is no doubt what has drawn to this book those who would otherwise be reading Deleuze, since Meillassoux, like this other, has designed an ontology to anarchist specifications, though he has done so, rather surprisingly, without Spinoza. Another world is possible wasn't Marx's slogan—it was Leibniz's—except at this level, it has to be said, the book's politics remain for all intents and purposes allegorical. Meillassoux's argument operates at most as a peculiar, quasi-theological reassurance that if we set out to change the political and legal order of our nation-states, the universe will like it.

Maybe this is already enough information for us to see that After Finitude’s relationship to post-structuralism is actually quite complicated. Any brief description of the book is going to have to say that it is out to demolish German Idealism and post-structuralism and any other philosophy of discourse or mind. But if we take a second pass over After Finitude, we will have to conclude that far from flattening these philosophies, its chosen task is precisely to shore them up, to move anti-foundationalism itself onto sturdy ontological foundations. Meillassoux’s niftiest trick, the one that, once mastered, he compulsively performs, is the translation of post-structuralism’s over-familiar epistemological claims into fresh-sounding ontological ones. What readers of Foucault and Lyotard took to be claims about knowledge turn out to have been claims about Being all along, and it is through this device that Meillassoux will preserve what he finds most valuable in the radical philosophy of his parents’ generation: its anti-Hegelianism, its hard-Left anti-totalitarianism, its attack on doctrines of necessity, its counter-doctrine of contingency, its capacity for ideology critique.

Adorno was arguing as early as the mid-‘60s that thought needed to figure out some impossible way to think its other, which is the unthought, “objects open and naked,” the world out of our clutches. “The concept takes as its most pressing business everything it cannot reach.” Is it possible to devise “cognition on behalf of the non-conceptual”? This is the sense in which Meillassoux, far from breaking with post-structuralism and its cousins, is simply answering one of their central questions. It’s just that he does so in a way that any convinced Adornian or Left Heideggerian is going to find baffling. Cognition on behalf of the non-conceptual turns out to have been right in front of us all along—it is called science and math. Celestial mechanics has always been the better anti-humanism. A philosophical anarchism that has thrown its lot in with the geologists and not with the Situationists—that is the possibility for thought that After Finitude opens up. The book, indeed, sometimes seems to be borrowing some of Heidegger’s idiom of cosmic awe, but it separates this from the latter’s critique of science—such that biology and chemistry and physics can henceforth function as vehicles of ontological wonder, astonishment at the world made manifest. And with that idea there comes to an end almost a century’s worth of radical struggle against domination-through-knowledge, against bureaucracy, rule by experts, the New Class, technocracy, instrumental reason, and epistemological regimes. On the back cover of After Finitude, Bruno Latour says that Meillassoux promises to “liberate us from discourse,” but that’s not exactly right and may be exactly wrong. He wants rather to free us from having to think of discourse as a problem—not to rally us against it, in the manner of Adorno and Foucault, but to license us to make our peace with it, and so to sink back into it.

• • •

Lots of people will find good reasons to take this book seriously. It is, nonetheless, unconvincing on five or six fronts at once.

It is philosophically conniving. There are almost no empirical constraints placed on the argumentative enterprise of ontology. Nothing in everyday experience is ever going to suggest that one generalized account of all Being is right and another wrong, and this situation will inevitably grant the philosopher latitude. Ontologies will always be tailored to extra-philosophical considerations, any one of them elected only because a given thinker wants something to be true about the cosmos. Explanations of existence are all speculative and in that sense opportunistic. It is this opportunism we sense when we discover Meillassoux baldly massaging his sources. Here he is on p. 38: “Kant maintains that we can only describe the a priori forms of knowledge…, whereas Hegel insists that it is possible to deduce them.” Kant, we are being told, doesn’t think the categories are deducible. And then here’s Meillassoux on pp. 88 and 89: “the third type of response to Hume’s problem is Kant’s … objective deduction of the categories as elaborated in the Critique of Pure Reason.” The categories, in other words, are undeducible on p. 38 and duly deduced fifty pages later, as the argument of the moment requires.

The leap from epistemology to ontology sometimes falls short. At one point, Meillassoux thinks he can get the better of post-structuralists like so: Imagine, he says, that an anti-foundationalist is talking to a Christian (about the afterlife, say). The Christian says: “After we die, the righteous among us will sit at the right hand of the Lord.” And the anti-foundationalist responds the way anti-foundationalists always respond: “Well, you could be right, but it could also be different.” For Meillassoux, that last clause is the ontologist’s opening. His task is now to convince the skeptic that “it could also be different” is not just a skeptical claim about what we can’t know—it is not a confession of ignorance, but rather already an ontological position in its own right. What we know about the real cosmos, existing apart from thought, is that everything in it could also be different. And now suppose that the anti-foundationalist responds to the ontologist by just repeating the same sentence—again, because it’s really all the skeptic knows how to say: “Well, you could be right, but it could also be different.” Meillassoux at this point begins his end-zone dance. He has just claimed that Everything could be different, and the skeptic obviously can’t disagree with this by objecting that Everything could be different. The skeptic has been maneuvered round to agreeing with the ontologist’s position. But Meillassoux doesn’t yet have good reasons to triumph, because, quite simply, he is using “could be different” in two contrary senses, and he rather bafflingly thinks that their shared phrasing is enough to render them identical. He has simply routed his argument through a rigged formulation, one in which ontological claims and epistemological claims seem briefly to coincide. The skeptical, epistemological version of that sentence says: “Everything could be different from how I am thinking it.” And the ontological version says: “Everything could be different from how it really is now.” There may, in fact, occur real-world instances in which skeptics string words into ambiguous sentences that could mean either, and yet this will never indicate that they unwittingly or via logical compulsion mean the latter.
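The equivocation is compact enough to set out in notation. What follows is a minimal sketch in modal shorthand that is mine, not Meillassoux’s: write \(\Diamond_{e}\) for the epistemic “for all I know, possibly,” \(\Diamond_{m}\) for the metaphysical “really possibly,” and \(p\) for any claim about how things stand.

\[ \Diamond_{e}\,\neg p \qquad \text{(everything could be different from how I am thinking it)} \]
\[ \Diamond_{m}\,\neg p \qquad \text{(everything could be different from how it really is now)} \]

The skeptic asserts only the first; Meillassoux needs the second; and nothing licenses the inference from \(\Diamond_{e}\,\neg p\) to \(\Diamond_{m}\,\neg p\), however alike the two sentences sound in conversation.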

Meillassoux’s theory of language is lunatic. Another way of getting a bead on After Finitude would be to say that it is trying to shut down science studies; it wants to stop literary (and anthropological) types from reading the complicated utterances produced by science as writing (or discourse or culture). Meillassoux is bugged by anyone who reads scientific papers and gets interested in what is least scientific in them—anyone, that is, who attributes to astronomy or kinetics a political unconscious, as when one examines the great new systems devised during the seventeenth century and realizes that they all turned on new ways of understanding “laws” and “forces” and “powers.” Meillassoux’s own philosophy requires, as he puts it, “the belief that the realist meaning of [any utterance about the early history of the planet] is its ultimate meaning—that there is no other regime of meaning capable of deepening our understanding of it.” The problem is, of course, that it’s really easy to show that science writing does, in fact, contain an ideological-conceptual surcharge; that, like any other verbally intricate undertaking, it can’t help but borrow from several linguistic registers at once; and that there is always going to be some other “order of meaning” at play in statements about strontium or the Mesozoic. Science studies, after all, possesses lots of evidence of a more or less empirical kind, and Meillassoux’s response is to object that this evidence concerns nothing “ultimate.” But then what would it mean for a sentence to have an “ultimate meaning” anyway? A meaning that outlasts its rivals? Or that defeats them in televised battle? What, then, is the time that governs meanings, such that some count as final even while the others are still around? And at what point do secondary meanings just disappear? What are the periods of a meaning’s rise and fall? Meillassoux doesn’t possess the resources to answer any of those questions; nor, as best I can tell, does he mean to try. The phrase “ultimate meaning” is not philosophically serious. It does little more than commit us to a blatant reductionism, commanding us to disregard any complexities and ambiguities that a linguistically attentive person would, upon reading Galileo, discover. We can even watch Meillassoux’s own language drift, such that “ultimate meaning” becomes, over the course of three pages, exclusive meaning. “Either [a scientific] statement has a realist sense, and only a realist sense, or it has no sense at all.” It exasperates Meillassoux that an unscientific language would so regularly worm its way into science writing; and it exasperates him, further, that English professors would take the trouble to point this language out. His response is to install a prohibition, the wholly unscientific injunction to treat scientific language as simpler than it is even when the data show otherwise. It is perhaps a special problem for Meillassoux that the ideological character of science writing is especially pronounced in the very period to which he is looking for intellectual salvation—the generations on either side of Newton, which were crammed with ontologies explicitly modeled on the political theology of the late Middle Ages—new scientific cosmologies, I mean, whose political dimensions were quite overt.
And it is definitely a problem for Meillassoux that he has himself written a political ontology of roughly this kind—a cosmology made-to-order for the punks and the Bakuninites—since one of his opening moves is to disallow the very idea of such ontologies. After Finitude only has the implications its anarchist readership takes it to have if its language means more than it literally says, and Meillassoux himself insists that it can have no such meaning.

He poses as secular but is actually a kind of theologian. It is not just that Meillassoux is secular. He is pugnaciously secular or, if you prefer, actively anti-religious. He casually links Levinas with fanaticism and Muslim terror. He sticks up for what Adorno once called the totalitarianism of enlightenment, marveling at philosophy’s now vanished willingness to tell religious people that they’re stupid or at its determination to make even non-philosophers fight on its terms. And against our accustomed sense that liberalism is the spontaneous ideology of secular modernity, Meillassoux sees freedom of opinion instead as an outgrowth of the Counter-Reformation and Counter-Enlightenment. Liberalism, in other words, is how religion gets readmitted to the public sphere even once everyone involved has been forced to concede that it’s bunk. And yet for all that, Meillassoux has entirely underestimated how hard it is going to be to craft a consequent anti-humanism without recourse to religious language. At the heart of After Finitude is a simple restatement of the religious mystic’s ecstatic demand that we “get out of ourselves” and thereby learn to “grasp the in-itself”; the book aches for an “outside which thought could explore with the legitimate feeling of being on foreign territory—of being entirely elsewhere.” In the place of God, Meillassoux has installed a principle he calls “hyper-Chaos,” to which, however, he then attaches all manner of conventional theological language, right down to the capital-C-of-adoration. Hyper-Chaos is an entity…

…for which nothing is or would seem to be impossible … capable of destroying both things and worlds, of bringing forth monstrous absurdities, yet also of never doing anything, of realizing every dream, but also every nightmare, of engendering random and frenetic transformations, or conversely, of producing a universe that remains motionless down to its ultimate recess, like a cloud bearing the fiercest storms, then the eeriest bright spells.

No-one reading that passage—even casually, even for the first time—is going to miss the predictable omnipotence language with which it begins: Chaos is the God of Might. Meillassoux himself acknowledges as much. What may be less apparent, though, is that this entire line of argument simply extends into the present the late medieval debate over whether God was constrained to create this particular universe, or whether he could have, at will, created another, and Meillassoux’s position in this sense resembles nothing so much as the orthodox Christian defense of miracles, theorizing a power that can, in defiance of its own quotidian regularities, “bring forth absurdities, engender transformations, cast bright spells.” There have been many different theories of contingency over the last generation, especially among philosophers of history. As a philosopheme, contingency has, in fact, become rather commonplace. Meillassoux is unusual in this regard only in that he has elevated contingency to the position of demiurge and so restored a full portion of metaphysics to a philosophical scene that had until now been trying to get by without it. Such is the penalty, after all, for going back behind Kant: You’ll have to stop your ears again against the singing of angels. Two generations before the three Critiques there stood Christian Wolff, whom Meillassoux does not name, but on whose system his metaphysics is modeled and who wrote, in the 1720s and ‘30s, that philosophy was “the study of the possible as possible.” Philosophy, in other words, is the one all-important branch of knowledge that does not study actuality. Each more circumscribed intellectual endeavor—biology, history, philology—studies what-now-is, but philosophy studies events and objects in our world only as a subset of the much vaster category of what-could-be. It tries, like some kind of interplanetary structuralism, to work out the entire system of possibilities—every hypothetical aggregate of objects or particles or substances that could combine without contradiction—and thereby reclassifies the universe we currently inhabit as just one unfolding outcome among many unseen others. Meillassoux, in this same spirit, asks us to imagine a cosmos of “open possibility, wherein no eventuality has any more reason to be realized than any other.” And this way of approaching actuality is what Wolff calls theology, which in this instance means not knowledge of God but God’s knowledge. Philosophy, for Wolff—as, by extension, for Meillassoux—is a way of transcending human knowledge in the direction of divine knowledge, when the latter is the science not just of our world but of all things that could ever be, what Hegel called “the thoughts had by God before the Creation”—sheer could-ness, vast and indistinct.

He misdescribes recent European philosophy and is thus unclear about his own place in it. Maybe this point is better made with reference to his supporters than to Meillassoux himself. Here’s how one of his closest allies explains his contribution:

With his term ‘correlationism,’ Meillassoux has already made a permanent contribution to the philosophical lexicon. The rapid adoption of this word, to the point that an intellectual movement has already assembled to combat the menace it describes, suggests that ‘correlationism’ describes a pre-existent reality that was badly in need of a name. Whenever disputes arise in philosophy concerning realism and idealism, we immediately note the appearance of a third personage who dismisses both of these alternatives as solutions to a pseudo-problem. This figure is the correlationist, who holds that we can never think of the world without humans nor of humans without the world, but only of a primal correlation or rapport between the two.

As intellectual history, this is almost illiterate. We weren’t in need of a name, because the people who argue in terms of the-rapport-between-humans-and-world or subject-and-object were already called “Hegelians,” and the movement opposing them hasn’t just “sprung up,” because philosophers have been battling the Hegelians for as long as there have been Hegelians to fight. Worse still is the notion, projected by Meillassoux himself, that all of European philosophy since Kant must be opposed for leading inexorably, shunt-like, to post-structuralism. This is just the melodrama to which radical philosophy is congenitally prone; the entire history of Western thought has to become a single, uninterrupted exercise in the one perhaps quite local error you would like to correct, the cost of which, in this instance, is that Meillassoux and Company have to turn every major European thinker into a second-rate idealist or vulgar Derridean and so end up glossing Wittgenstein and Heidegger and Sartre and various Marxists in ways that are tendentious to the point of unrecognizability. There are central components of Meillassoux’s project that philosophers have been attempting since the 1790s, and he occasionally gives the impression of not knowing that European philosophy has been trying for generations to get past dialectics or humanism or the philosophy of the subject or whatever else it is for which “correlationism” is simply a new term. Perhaps Meillassoux thinks that his contribution has been to show that Wittgenstein and Heidegger were more Hegelian than they themselves realized. But then this, too, seems more like a repetition than a new direction, since European philosophy has always had a propensity for auto-critique of precisely this kind. Auto-critique is in lots of ways its most fundamental move: One anti-humanist philosopher accuses another of having snuck in some humanist premise or other. One philosopher-against-the-subject accuses another of being secretly attached to theories of subjectivity. And so on. For Meillassoux to come around now and say that there are residues of Kant and Hegel all over the place in contemporary thought—well, sure: That’s just the sort of thing that European philosophers are always saying.

He is wrong about German idealism. Kant, Meillassoux says, is the one who deprived us all of the Great Outdoors, which accusation seems plausible … until you remember that bit about “the starry sky above me.” This is one more indication that Meillassoux is punching air, though the point matters more with reference to Hegel than to Kant. Hegel’s philosophy, after all, turns on a particular way of relating the history of the world: At first, human beings were just pinpricks of consciousness in a world not of their own making, mobile smudges of mind on an alien planet. But human activity gradually remade the world—it refashioned every glade and river valley—worked all the materials—to the point where there now remains nothing in the world that hasn’t to some degree been made subject to human desire and planning. The world has, in this sense, been all but comprehensively humanized; it is saturated with mind. What are we to say, then, when Meillassoux claims that no modern philosopher since Kant can even begin to deal with the existence of the world before humans; that they can’t even take up the question; that they have to duck it; that it is what will blow holes in their systems? Hegel not only has no trouble speaking of the pre-human planet; his historical philosophy downright presupposes it. The world didn’t use to be human; it is now thoroughgoingly so; the task of philosophy is to account for that change. And it is the great failing of Meillassoux’s book that, having elevated paleontology to the paradigmatic science, he can’t even begin to explain the transformation. You might ask yourself again whether Meillassoux’s account of science is more plausible than a Hegelian one. What, after all, happened when Europeans began devising modern science? What did science actually start doing? Was it or wasn’t it a rather important part of the ongoing process by which human beings subjected the non-human world to mind? Meillassoux urges us to think of science as the philosophy of the non-human, positing as it does a world separable from thought, a planet independent of humanity, laws that don’t require our enforcing. But does science, in fact, bring that world about? Meillassoux hasn’t even begun to respond to those philosophers, like Adorno and Heidegger, who wanted to pry philosophy away from science, not because they were complacently encased in the thought-bubbles of discourse and subjectivity, but more nearly the opposite—because they thought science was the philosophy of the subject, or one important version of it, the very techno-thinking by which human being secures its final dominion over the non-human. Meillassoux, in this sense, is trying to theorize, not the science that actually entered into the world in the seventeenth century, but something else, an alternate modernity, one in which aletheia and science went hand in hand, a fully non-human science or science that humans didn’t control: gelassene Wissenschaft. But the genuinely materialist position is always going to be the one that takes seriously the effects of thought and discourse upon the world; the one that knows science itself to be a practice; the one that faces up to the realization that the concept of “the non-human” can only ever be a device by which human beings do things to themselves and their surroundings.
There is nothing real about a realism that offers itself only as a utopian counter-science, a communication from the pluriverse, a knowledge that presumes our non-existence and so requires, as bearer, some alternate cosmic intelligence that it would be simplest to call divinity.

(Thanks to Jason Adams, Chris Pye, and Anita Sokolsky. My understanding of Christian Wolff I take from Werner Schneiders’s “Deus est philosophus absolute summus: Über Christian Wolffs Philosophie und Philosophiebegriff.” The ally of Meillassoux’s that I quote is Graham Harman.)

 

Staying Alive, Part 2.3

 

 

Three Theses on Fright Night

 

THE LONG INTRO IS HERE.

THESIS #1 IS HERE.

THESIS #2 IS HERE.

 

•THESIS #3: John Travolta must die.

There are three bits of evidence we need to line up. First, the vampire in Fright Night is played by Chris Sarandon, given name Sarondonethes, which means he’s Greek, the darker side of white, not easily confused with Robert Redford or Owen Wilson. Second, the vampire ensnares the hero’s young girlfriend on the main floor of a throbbing disco, wading into the crowd to dance his gorgon’s boogaloo. Third, he is almost always wearing a man’s dress scarf, which generically marks him out as a swell and specifically, in 1985, seemed to insinuate the ultra-wide collars that had just gone out of style: an amplitude of collar spreading out from the neck.

More precisely, it was the combination of scarf and popped collar that approximated the polyester wingspan of a few years back. And approximation is very much the point, since Chris Sarandon was plainly cast in Fright Night because he made a passable surrogate for John Travolta. One of the names for the demon-seducer who engrosses to himself all the women is “father,” but another is “Tony Manero.” And you can, if you like, think of this figure—the Travolta vampire-dad—in terms of a precise historical moment: The entire movie takes shape in the headspace of a child of the late ‘70s and early ‘80s, someone who has grown up under the strains of “You Should Be Dancing” and “If I Can’t Have You” and who has therefore latched onto Vinnie Barbarino and Danny Zuko as the standard of the masculinity that he will never meet. All of Fright Night is premised on a bowel-shaking fear of John Travolta, the dreadful realization that no American man will ever have sex again until Travolta is destroyed. The struggle that Fright Night stages is in this sense something more than Oedipal; it isn’t just a conflict between an under-ripe masculinity and a fully adult one, since its junk Freudianism has been given such an obvious ethnic overlay: a whitebread masculinity squares off against sheerest Ionian potency. The movie’s adolescent fear of older men is intensified by a worry that a preppy, suburban kid—a 15-year-old in a tweed jacket!?—is never going to be able to compete with Travolta’s goombah swank. And this obviously brings us back to Valentino and the Lugosi Dracula. Something we said earlier we’ll want to repeat now as a general point: Not just that Lugosi tapped into a fear of Valentino, but that vampire movies as a genre periodically inculcate a fear of Italian actors. And with this in mind, we can return to the clip from Ken Russell’s Valentino and gawp again at its unlikeliness: Nureyev is playing Valentino as Dracula, but Travolta is the scene’s third term, or, if you like, the proximate double of its devil-sheikh. Lugosi gives us Dracula + Valentino, and Chris Sarandon Dracula + Travolta, but only Nureyev delivers Dracula + Valentino + Travolta in one. The Russell biopic came out in October of 1977, Saturday Night Fever two months later. And Fright Night, at eight years’ remove, is Disco Demolition Night restaged as a vampire story: A Mediterranean fop dies so that his WASP neighbors will sleep better. A crate of records explodes on a baseball field.

Staying Alive, Part One

 

What I have to explain this time round is a little strange, and the road we’ll have to walk to get there is, I think, even stranger. I should note first that I’ve been thinking a lot about vampire movies, about which we might, after rooting around, be able to say something that no-one else has ever said. And if you are to understand this New Thing About Vampire Movies—except it’s not a New Thing; it’s an Old and Secret Thing—then you are going to need to watch a short clip from a movie you’ve almost certainly never heard of, and when you watch it, you’re not going to think that it could possibly hold the key to anything. The movie is so obscure that I could only find the relevant scene dubbed into Russian, and even that sentence, once written, requires two intensifying corrections: I didn’t find the clip so much as fluke upon it while chasing down some other hunch. And the movie isn’t exactly dubbed into anything. It features some Russian language-school dropout—one guy; alone; an unaided Petersburg grumble—spot-translating all the dialogue, with the original soundtrack still running audibly in the background, such that he has to shout. Running this clip will be like trying to watch television in the company of a mean drunk. Plus it’s not even a vampire movie, which is what you were just promised. This is all pretty discouraging, I realize, but you’ll see: The clip does weirdly speak.

The film is Ken Russell’s Valentino, as in Rudy, as in hair anointed with jelly and liniment. It’s a biopic released in 1977, and starring Rudolph Nureyev as Rudolph V. At issue is a short scene in which Nureyev takes Carol Kane out onto a ballroom floor to dance the tango. Give it sixty seconds, and you’ll have seen everything important:

A spare cinematic minute—and yet the clip demands our attention by putting on display three things at once, three things that are intertwined even outside of this movie but whose intertwining is here oddly visible, as though lifted up for our examination. I’ll just count them off.

#1) The first thing you’ll want to bear in mind is who Valentino was. The basic facts will do: that he was Hollywood’s first superstar; that he was considered the prettiest man of his generation; and that he wasn’t American—he was born in Italy. The important point is that nothing in this thumbnail is wholly innocuous. A lot of people were unnerved by Valentino. Each of those bare data could and did yield something uncanny. That he struck so many American women as desirable was unusual precisely because he was Italian. He was the first non-Anglo man, after the big wave of southern and eastern European immigration, that large numbers of Americans deigned to think of as beautiful. People remarked on that a lot; the term “Latin lover” was apparently coined for him, even though, given the racial ductility of early Hollywood, he was most famous for playing an Arab. And there was if anything even more handwringing about Valentino the lover than there was about Valentino the Latin. Lots of male commentators said he wasn’t manly enough to represent their kind: that he was a dandy; that he was too polished; that he looked too soft; that he was a screen David sculpted out of talcum and pomade—and this, not as compared to John Wayne or Clint Eastwood—but as compared to Douglas Fairbanks, who agreed not to wear tights only when offered pantaloons.

But then the resentment of the nation’s swashbucklers did nothing to dent Valentino’s popularity. We’ve become accustomed, I guess, to how overtly libidinal the culture of female fandom is; we don’t much pause to remark on the orgiastic qualities of Justin Bieber’s every public appearance, their improbable pre-teen staging of the Dionysian Mysteries, but it might help to pretend that you’ve never seen archival footage of the Beatles and are thus having to face the squalling girl-crowds for the first time. When Valentino died unexpectedly in 1926—he was 31—there were riots in the streets of New York City. Lady fans started smashing windows and battling the hundred or so cops who were called out to restore order. Reports went out that women were killing themselves. That someone also ordered four actors to dress up as Italian blackshirts and tromp around the Upper East Side, to make it seem as though Mussolini himself had personally sent over an honor guard in Valentino’s memory, begins to sound like one of the day’s more pedestrian details.

#2) This should all help explain what anybody who’s just watched the clip will already have noticed, which is that Ken Russell has plainly instructed Nureyev to play Valentino as though he were Dracula: He silences the band just by raising his magical, mesmeric hand, tearing the sound from the very air…

…he activates what seem to be laser eyes; he leads a transfixed woman away from her circle of helpless male guardians and onto the dance floor, where he strut-hunches over her, arcing his shoulders into an insinuated cape…

…he mimes various attacks upon her neck.

A complicated series of observations follows on from this: We’ll want to say that the figure of Valentino has been filtered back through Dracula, and we can feel the force of that revision if we point out that Valentino was actually half-French and generically Continental-looking—you would not pause if someone told you he was German—and seems to have been typecast in Moorish roles only on account of a Mediterranean accent that no silent-moviegoer would ever hear anyway. Nureyev, on the other hand, is sweltering and Slavic and basically looks way more vampiric than the man he’s playing ever did. This could all easily seem like Ken Russell’s inspiration—to recreate, for audiences in the 1970s, the lost effect of Valentino’s magnetism by wrapping it in the easily read conventions of the vampire movie, with which, after all, it was roughly contemporaneous. You make one icon of early Hollywood intelligible by translating him into a second. It would be like deciding to make a movie about Greta Garbo, but then scripting her as Steamboat Willie.

There’s clearly something to this. But if we adhere tenaciously to that line, what are we going to say about the following images?

There is no mistaking the issue. Tod Browning’s Dracula came out in 1931, just five years after the Sheikh’s passing, and the stage versions that the movie was based on were running throughout the 1920s, when the oversized head of Valentino was first smoldering greyly down upon the bodies of American women. We can say that Nureyev was, in 1977, playing Valentino as Dracula, but we have to set against this the observation that Lugosi was already, in 1931, playing Dracula as Valentino. This is itself strong evidence that people were once scared of Valentino, but then we already knew that people—some people—were scared of Valentino, because he flaunted that off-white and insufficiently rugged form of masculinity, and because American women were really into it—or they weren’t just into it—they seemed hypnotized and made freaky by it. So the 1977 movie makes Valentino look more like a vampire than the real man actually did, but that’s because someone involved in the production intuited that Valentino had been one of the inspirations for the screen vampire to begin with. Heartthrob could be the name of a horror movie.

This all matters, because it helps us specify the contribution of Lugosi’s Dracula to the vampire mythos. This isn’t as easy as it sounds. Nearly everything that makes the 1931 movie tick was taken over directly from Stoker’s 1897 novel, and for most purposes, you would be better off bypassing the movie and going straight to the source. The most efficient, if not perhaps the most perspicuous, way of naming Stoker’s achievement would be to say that he turned the vampire story into an ongoing referendum on the philosophy of Friedrich Nietzsche. For real: Nearly every vampire movie that has ever been made is in one way or another a meditation on Nietzscheanism, deliberating on the idea that some people, the rare ones, might yet overcome morality and thereby form a new caste—or race or even species—a breed that never even pauses to consider what ordinary people think of as right and wrong.  Here’s all the Nietzsche you need:

•The great epochs of our lives come when we gather the courage to reconceive our evils as what is best in us.

•Every exquisite person strives instinctively for a castle and a secrecy where he is rescued from the crowds, the many, the vast majority; where, as the exception, he can forget the norm called “human.”

•We think that harshness, violence, slavery, danger in the streets and in the heart, concealment, Stoicism, the art of seduction and experiment, and devilry of every sort; that everything evil, terrible, tyrannical, predatory, and snakelike in humanity serves just as well as its opposite to enhance the species of “man.”

Enhanced and predatory un-humans living in castles, exquisite people who have turned wickedness into a virtue or an accomplishment—if you’re in an intro philosophy class, and you’re trying to make sense of The Genealogy of Morals for the first time, the easiest way to get a handle on Nietzsche will be to realize that he wants to turn you into a vampire, which is superman’s nearest synonym, another word for Übermensch. Or the other way around now: Modern vampire stories work by mulishly literalizing Nietzsche’s language, making you stare the superman in the face on the expectation that you will be sent running by his anaconda grin.

This should all become clearer if we break Stoker’s Dracula back into his component parts. What are the several things that the classic vampire story wants you to be scared of?

•Stoker’s novel wants you to be scared of aristocracy. This is perhaps the most glaring point—that vampire stories are the one horror genre driven by naked class animus. The novel makes Dracula seem wiggy even before he starts doing anything supernatural, and it does this simply by making him lord of the manor. His comportment is excessively formal. He is, the first-time reader is surprised to note, seldom referred to as Dracula; the novel almost only ever calls him “the Count,” as though the key to understanding the character lay in his title. It is the very existence of the old-fashioned nobleman that has come to seem unnatural, which no doubt has something to do with his literally feeding upon the blood of the poor, peasant children stuffed into sacks. The movie updates all this, in some pleasingly goofy way, by putting the vampire in ’20s-era evening wear, the lost joke being that he never wears anything else, that he sports white tie everywhere—a tail-coat to play softball in, an opera cloak for when he’s bathing the dog—as though tuxedos were the only threads he owned. Dracula is the character who, having once put on the Ritz, can never again remove it. The vampire, we are licensed to conclude, is our most enduring image of aristocratic tyranny, generated by a paradigmatically liberal and middle-class fever-dream about the character of the old peerage, and anchored in the simple idea that it isn’t even safe to be in the same room as an aristocrat, so driven are such people to dominate others, so unwilling to tolerate a partner or co-equal. “Come here!”: A duke is the name for the kind of person who barks orders at free men as though they were his subordinates. That’s a routine observation, and it’s what ties Dracula back to the early Gothic novel or even to Richardson’s Pamela. But what’s peculiar all the same about Stoker’s novel is its timing, since by the 1890s, the traditional aristocracy in England was, if not exactly obsolete, then at least much weakened. The novel actually registers this historical turn, since the vampire famously lives not in a castle, but in the ruins of a castle, in the rubble of a superannuated class hierarchy, and—this really is an inspired flourish—he has no servants: he drives his own coach, carries his own bags. The Count is what they used to call come-down gentry, accustomed to apologizing to guests for serving them dinner on chipped porcelain. And the threat he poses is therefore not the menace of one who actually possesses power—this is how he is unlike Richardson’s Mr B or William Godwin’s Falkland—but of one who might yet regain it, the name for which regaining would be “reaction” or “counter-revolution.” Stoker’s Dracula is the greatest of right-wing horror stories, scared of foreigners and queer people and women and sex in general, but it nonetheless harbors a certain curdled Jacobinism, the exasperated sense that the European aristocracy should be dead but isn’t, and that the French Revolution is going to have to be staged over and over again.

So much for aristocracy. About those others…

•Stoker’s novel wants you to be scared of foreigners. This goes back to a simple plot point: Dracula sneaks into England from abroad—hides on a ship—slips past customs officers and curious locals. The vampire, in other words, is an illegal immigrant. You might object that this last is a late twentieth-century category, illicitly projected back onto the 1890s, and that’s true—but “stowaway” isn’t an anachronism, and neither is “smuggling.” What’s more, Stoker expressly aligns vampires, via their bats, with colonies and the Third World. Such creatures come from the “islands of the Western seas” or from South America. One character is pretty sure that this is no English bat! It “may be some wild specimen from the South of a more malignant species.” Perhaps most important, the screen Dracula is the figure who has single-handedly made life miserable for generations of Eastern European immigrants, who have had to endure endless rounds of “I vant … to sahk … your bludd!” in roughly the same way that teenaged Asian-American girls have been, since 1987, routinely subjected to obnoxious white boys quoting “Me so horny.”

•Stoker’s novel wants you to be scared of sex in general, though we can also make the point via the film: The first time we see Dracula attack a woman, all he really does is lean in for a kiss, though the street is dim and London-ish, and his victim is a flower-girl-for-which-read-prostitute, and these details inevitably summon overtones of Jack the Ripper, especially if you think Jack was a gentleman or the Prince of Wales.

The point is extended when, later in the film, one weeping survivor uses rape language to describe her evening with the Count:

Survivor: After what’s happened, I can’t…

Fiancé: What’s happened? What’s happened?!

Survivor: I can’t bear to tell you. I can’t.

At this point we need to make a careful distinction. Those scenes both trigger images of sexual violence. And yet one of the vampire story’s more remarkable features is that it communicates a fear of sex even when that violence is largely removed. Indeed, an encompassing fear of sex—and not just of rape—is coded into some of the genre’s most basic conventions. Nothing in the entire history of the horror film is more iconic than the vampire bite, which, if you pause to think about it, is entirely peculiar: Imagine that vampire stories didn’t already exist … and now imagine trying to convince a Hollywood executive to greenlight your new movie about a creature who kills people by giving them hickeys, an honest-to-Christ Cuddle Monster, but scary, you promise him, enemy of scarves and turtlenecks. Or ask yourself for once why so many movies allow vampires to be repelled by garlic. That’s a simple extrapolation from the idea that if you eat too much spicy food—if you go to bed fetid, the reek of sofrito still on your ungargled breath—no-one will want to sleep with you.

But there’s more…

•Stoker’s novel wants you to be scared of sexual women in particular. There’s an underlying point here that is worth reviewing first: Most viewers think that vampires are foxy, which makes them really unlike other classic monsters. If that point is the least bit unclear to you, you might take a moment now to close your eyes and pretend briefly that you are making out with a zombie. But the most clarifying difference is the one we can draw between the vampire and the werewolf, both of whom are canonically shown perpetrating savage violence upon the bodies of women. What I’d like to bring into view is that both werewolf movies and vampire movies deviate from what is perhaps the most routine scenario in a horror movie—a rampaging monster lumbering after a panicked victim—but they deviate in opposite directions. Werewolf stories are the one horror genre that has a certain reluctance or regret or stop-me-before-I-kill-again shame built right into them. Slashers, who otherwise resemble werewolves, never wake up the next morning hating themselves for what they’ve done. No-one casts a chainsaw to one side in self-loathing. But in a werewolf movie, not even the monster is wholly willing. In a vampire movie, then, the point just gets flipped, in that not even the victim is wholly unwilling. Vampire victims collaborate in their own destruction, for the simple reason that men in capes have game. This means that certain types of utterly common horror sequences are largely excluded from the vampire film: People almost never flee from vampires, which means that the vampire flick is the horror subgenre least likely to borrow from action movies; most likely, in other words, to commit to a languid pacing—no chase scenes!—or rather, if a vampire movie does for once break out into a chase scene, you can be pretty sure it’s the vamp and not the victim who is on the run.

What we can now say is that this little myth about willing victims is most often told, in the vampire classics themselves, about women. The form’s conviction that highborn men are predators is counterbalanced by its confidence that this is exactly what many women want—to be preyed upon. The he-vamp awakens the woman to sexual rapaciousness, and the audience is expected to find this creepy. The survivor does sob and say “I can’t bear to tell you what happened,” but she has also just said: “I feel wonderful. I’ve never felt better in my life.” In Stoker, the woman who proves most susceptible to Dracula’s advances is the one who has already asked, even before the vampire has made his move: “Why can’t they let a girl marry three men, or as many as want her?” More important, the novel makes it clear that becoming a vampire is one good way of getting that wish granted. Once she turns, the sexual woman does indeed get all the men—every major male character in the novel willingly opens his veins to give her blood transfusions—she becomes a kind of sponge, allegorically loose, soaking up all this male donation: a “polyandrist,” one of the men calls her. When the men, bearing whale-oil candles, go to visit her in her crypt, they “drop sperm in white patches” across the floor, like pornographic bread crumbs. They finally put her to rest by assaulting her as a group, standing in a circle while one of their number drives “deeper and deeper” into the “dint in [her] white flesh.” In the novel’s opening sections, three women stand over a young Englishman in the Carpathians: “He is young and strong. There are kisses for us all.”

•Stoker’s novel wants you to be scared of deviant sex above all. One point can be made without qualification: All the vampires in the original Dracula are gender-benders. That this is true of those kiss-hungry Transylvaniennes should be immediately apparent, since it will be true of nearly any she-vamp—these lady-penetrators busting the jugular cherries of straight men.

The vampiress is how the very possibility of a certain rather sweeping gender reversal comes out into the open—becomes visible in everyday life, available for the contemplation of suburbanites and middle schoolers. She and her male victims are pop culture’s only iconic image of pegging. In Stoker, the man “waits in languorous ecstasy” while he assesses for the first time the feeling of “hard dents” against his “super sensitive skin.” The point will seem accordingly less clear with regard to Dracula himself, since a man-vamp sinking into a crumpled woman preserves orthodox sexual roles. That Dracula’s manhood is nonetheless unstable discloses the intensity of the novel’s preoccupation with sexual confusion: In one of the book’s more striking scenes, its several heroes bust into the bedroom of a woman they’ve been guarding and find Dracula clasping her head to his naked breast, which he has just gashed open so that she can lap at his blood. The image is not only a riff on oral rape—though it is that, too: a forced blow job. It is also—and rather more literally—a breast feeding, a demonic nursing, with the vampire willing to set aside all his usual male roles in order to take up the position of the monstrous mother, with a chest that runs red and a child at his bosom struggling to be reborn.

So that’s a dense set of associations—aristocracy, foreigners, sex, women, and queer people—and the film does a reasonably good job of preserving this tissue of meaning, a much better job than, say, Whale’s Frankenstein does at protecting the many-sided allegory that had originally been built up around its monster. But the movie isn’t just a translation, because to those established associations it adds one of its own. The screen Dracula isn’t just an aristocratic holdover. The vampire is the movie star himself, and in all the famous images of Lugosi we see early film beginning to meditate on itself and on its own eerie power. Or perhaps it would be more accurate to say, not that Browning’s Dracula has simply added a new association to Stoker’s list, but that it has found an innovative way of encapsulating that list’s concerns. The Valentino vampire isn’t just a supplement to or replacement for the queer and foreign aristocrat; he is the queer and foreign aristocrat, issued in a new format. What we see in Dracula is film recoiling from its new modes of supercharged male charisma, and you can begin to make sense of Lugosi’s performance if you think of it in terms of any film set’s hierarchy of actors: Van Helsing kills Dracula; Edward Van Sloan, whom you’ve never heard of, kills Bela Lugosi; a character actor kills the leading man on behalf of the drab, male masses for the overriding reason that the women who’ve come to the theater with them find him too dishy.

#3) So those are two of the things that the Nureyev clip intertwines: Valentino and vampires. The third thing has everything to do with Carol Kane’s hair.

There’s a real problem here. The movie has been careful to give Nureyev a tallowy comb-back; he would hardly be credible as Valentino without it. But what’s striking about his partner’s tresses is that they are so obviously of the 1970s. The movie, after all, is set in the 1920s, whose iconic hairstyles for women were all short—bobs and Dutch boys and such—but Carol Kane’s hair has been frizzed and teased into fiberglass—it is simultaneously long and fro-like, a headdress of cotton candy. For comparison…

Valentino with Natacha Rambova

The biopic dancer’s most unflapperish do, in other words, breaks the movie’s historical frame, anchoring the production in its own present of 1977 and allowing that decade to worm back into the Coolidge era. More precisely, it tends to transform the ballroom into a disco and the tango into a proto-Hustle. Look again at that shot of Carol Kane and especially at the lighting: One doesn’t typically think of the 1920s as spangly. What we can say now is that Nureyev isn’t just playing Valentino as a vampire—that idea, at least, we’ve been able to explain; he is playing Valentino as a disco vampire, and this is going to reopen the puzzle of the clip. We know that some people really hated disco, but was anybody actually scared of it? This brings us to another movie—the movie we actually need to be thinking about—which is 1985’s Fright Night. Disco, they once said, sucks.

PART 2 BEGINS HERE…

 

The New Way Forward in the Middle West

 

A few quick observations about Zowie Bowie’s Source Code, from earlier this year.

But first, the plot: A terrorist has just blown up a commuter train on the outskirts of Chicago, killing hundreds, and is headed downtown to hit Play on a dirty bomb, which will kill thousands more. Government scientists send a US soldier back in time—onto the train, ante-boom—and instruct him to identify the bomber. The soldier, however, is operating under two major constraints: First, he hasn’t exactly been teleported onto the train. He is, in fact, already dead; portions of his brain are being kept alive; and it’s only his consciousness that has been lobbed backwards into the day’s bad start. In order to conduct his investigation, therefore, he will have to occupy the body of some civilian already on the train; he will have to take as his avatar one of the attack’s imminent victims. Second, the government’s time-travel technology can only project him back eight minutes before the event, which interval he will have to relive over and over again until he can give the government a name: eight minutes—whoosh!—mass death—almost had it—and again, please—a fresh eight minutes are on the clock, like injury time….

 

•OBSERVATION #1:

The movie is set almost entirely in Chicago, and yet its plot is closely modeled on the invasions of Afghanistan and, especially, Iraq. That the detective-soldier is actually an Army helicopter pilot recently shot down by the Taliban is enough to establish that the movie has the war on terror on its mind. But it’s the soldier’s character arc—the transformation he has to undergo in the course of the film—that most powerfully channels the history of the past decade. What’s notable about Source Code—what makes it rather unlike an ordinary action movie—is that its hero keeps failing; he keeps letting the train blow up. The movie thinks it can provide an explanation for this, that it can make clear why an American soldier might be rather bad at stopping terrorists. Or rather, it thinks it can teach you—by teaching him—the difference between anti-terrorism and hapless, counterproductive bullying. At first, the soldier panics; he starts yelling at people; he engages in a little racial profiling; he throws a few punches and before long has drawn a gun on the other passengers. One onlooker asks: “You’re military? You spend a lot of time beating up civilians?” The turning point comes when the living officer running the mission from a government super-computer tells our undead hero: “This time try to get to know the other people on the train.” And from that point on, he just keeps ratcheting it down; stops confronting people; gets in nobody’s face; begins coolly collecting information; and finally, in one last triumphant replay of those endlessly fatal eight minutes, slips handcuffs onto the terrorist before anyone else on the train even knows they’re living amidst emergency. The movie, in other words, thinks it knows the right way to prevent a terrorist attack, and in this regard it simply mirrors David Petraeus, whose film this is. The soldier succeeds, that is, only because halfway through he is given a new counterinsurgency manual, and the difference between hero-at-beginning-of-movie and hero-at-end-of-movie is meant to communicate the difference between Iraq in 2004 and Iraq in 2008. Source Code is, in sum, a Surge movie—it is, to my knowledge, the only Surge movie—with the New Way Forward staging itself in Illinois instead of Anbar, and with science-fiction conventions serving to communicate the panic and steep learning curve of the early occupation. The film’s hyper-repetitive structure is quite peculiar here. It could—and perhaps for a few minutes in the movie’s middle depths even does—convey the infernal quality of the war on terror, the way in which the “vigilance” to which we are enjoined is already a doom: One gets up every morning required again to avert Armageddon. But that’s not really Source Code’s vibe. Repetition in this movie soon stops seeming demonic and becomes instead the medium for learning and self-improvement—this is more somber Groundhog Day than it is trashy Sisyphus—and the film’s understanding of recurrence as basically harmless gets at the first of its interlinked fantasies, which is that the US should be able, at no cost, to keep trying to round up the terrorists until it gets it right. The movie to that extent signs on to the central myth of the Surge, which is that it was empire’s magic do-over in Iraq, a geopolitical mulligan.

 

•OBSERVATION #2:

That first point requires that we read Chicago as Baghdad in disguise, but if we instead take the movie’s North American setting at face value, then the movie’s politics become somewhat harder to parse. This difficulty goes back to the military-civilian mish-mash that is at the story’s core: The US soldier has requisitioned the body of some suburban schoolteacher—deputized the dead schmo—drafted his virtual corpse into war without end. Like any such in-between or crossbred figure, this character can be described in two contradictory ways at once, such that Source Code is simultaneously a story about a military guy becoming less militarized and a story about a civilian conscripted into special ops without his even knowing it. At the end of the movie, the soldier, having just arrested the madman and saved morning drive-time, gets to stay in his host body; he just skips off into the city with a pretty girl. At that level, the movie is an innocuous fairy tale about undoing some of the damage the US government is inflicting on a generation—not just giving a soldier his discharge papers and sending him honorably back into street life—but unkilling him, making stupid amends. But the equal-and-opposite story of the civilian who can suddenly break up terror plots sponsors a rather different fantasy, bespeaking the desire for a less obtrusive war on terror, a war less punishing to the Iraqis and the Afghans, and kinder to Americans, as well—a war on terror without full body scanners at airports or the kind of heavy police presence that makes even white people nervous. In this sense, the movie gets us to wish that the war on terror were even more covert than it already is—that it were all undercover—its representative figure the plainclothes air marshal, the old-fashioned name for whom is Secret Police. Let me repeat a sentence I’ve already written: At the end of the movie, the soldier gets to stay in his host body, which means that the schoolteacher never gets his person back, and Source Code’s happy ending requires not that civilian life be rescued, but that it be negated.

 

•OBSERVATION #3:

Even by the low standards of Hollywood sci-fi, the movie’s fake science is notably addled and underexplained. Worse, having already committed to bushwa in its first act, it just ups and changes the rules in the last ten minutes, which I generally imagine is the one thing that a science-fiction screenwriter has got to promise you he’s not going to do. The audience has been told throughout the movie that the hero cannot change history; he is not really in the past; he has been inserted, rather, into a simulation built up from the memories of dead people; he can therefore only retrieve information; he will never actually save the train. But then in the last ten minutes we discover that each simulation has created an alternate universe after all, and the viewer has had the good fortune to arrive at last in the lone scenario in which every American gets to work on time. That’s feeble, to be sure, and irritating, but there’s something remarkable about it all the same. The single most striking thing about Source Code is that it brings to bear all the dopey arcana of cut-rate science fiction—the full arsenal of time-travel pataphysics and pop Leibniz—in order to generate … the world we already live in. It has maneuvered American normalcy—the AM commute, a commonplace Tuesday, just another trek to the office—into the position of the bizarro world or utopia you might otherwise have expected. The movie’s happy ending feels entirely rote, yeah, until, that is, you realize that it exists only in ontological brackets. By the time Source Code finishes, the Midwestern everyday—the one in which trains don’t blow into the sky—has become thinkable only as a science-fiction scenario, a bit of extravagant speculation. It has shriveled down to the implausible thing that a genre movie must scramble unconvincingly to achieve.

Tarantino, Nazis, and Movies That Can Kill You – Part 2

PART 1 IS HERE

Again, if you want to make sense of Inglourious Basterds, the questions are three: 1) Why take the triumphalist American history of WWII and make it even more triumphalist? 2) Why channel our perceptions of the 1940s via the 1970s? 3) And why commit mass murder upon the audience?

Here are some answers.

Tarantino is on record as saying that Inglourious Basterds is his “bunch-of-guys-on-a-mission film”—which would mean that it’s a version of The Dirty Dozen or The Guns of Navarone. Like almost everything else that Tarantino says in interviews, I think that sentence is a lie or a trick, which should become clear if you pause to consider how uninterested the movie is in the Basterds as Nazi hunters; we see them fighting Nazis almost not at all. In fact, the Shosanna plot is entirely separate from the Basterds plot and commands our attention every bit as intently. I’d like to say this isn’t really a men-on-a-mission movie; this is first and foremost a revenge movie; and you might say Why can’t it be both?—and yeah, sure, it’s both, but Tarantino has also decided to make nearly all the Basterds Jewish, which means that the revenge framework actually spills over from the Shosanna plot and colonizes the mission plot, too. It’s like the revenge movie is sucking the war movie into its field of gravity. Revenge is the common term that unites the two separate plots. Plus we know that Tarantino is deeply engaged with revenge movies, which were a staple of the ‘70s grindhouse circuit: Last House on the Left, Death Wish, Thriller: En Grym Film, I Spit On Your Grave, movies like that. Tarantino, in fact, has already made an epic revenge movie—that’s Kill Bill—so we can’t be all that surprised to see him returning to the form here.

OK—but if it’s a revenge movie, it’s an unusual one, because it has that oddly doubled narrative—not just one, but two revenge plots, unspooling side by side, and eventually converging, though without either revenge-party ever knowing about the other. And what you think is at stake in the revenge plot will depend in large part on whether you decide to emphasize the Basterds or Shosanna. So ask yourself which agent of revenge your heart favors.

If you emphasize the Basterds, then what really jumps out in the movie is the image of the tough-guy Jew. There’s a word that is common in Hebrew slang—and that Hebrew has bequeathed to Israeli English—and that’s frier, which means something like “pushover” or “sucker”—and it’s become one of the most distinctive Israeli insults. Nobody in Israel wants to be a frier; nobody wants to be a pushover. My Israeli friends boast proudly that the country has the world’s highest incidence of fatal car crashes—and I don’t know if that’s true—but I do know that my friends brag about it, which tells me all I need to know—and the explanation they always give is that no Israeli in a car will ever back down, as in: yield the right of way. So all I want to say is that testosterone has become a very big deal in some corners of modern Jewish culture, for reasons that are not hard to reconstruct, and you could think of Inglourious Basterds as playing into this, by projecting an IDF-style masculinity back into the 1940s. And this curious notion obviously goes back to one of the classic, nagging questions in the historiography of the Second World War: Why didn’t European Jews resist the fascists in larger numbers? If Inglourious Basterds generates a compensatory fantasy, it is surely here; it’s not fantasizing about Americans winning the war; it’s fantasizing about Jews winning the war; and this is a fantasy it shares, roughly, with other tough-Jew movies, like Defiance, which features Daniel Craig as the Bärenjude. Those movies ask the question: What if the Warsaw Ghetto Uprising had spread? Or: What if there had already been a Mossad to counteract the SS?

Here’s the thing: If we focus instead on Shosanna, the movie will look rather different. Shosanna of course is also Jewish and also tough, so we can to some extent just fold her into that last point. But only to some extent. Why? Because the image of Eli Roth one-handing a baseball bat is obviously an image of Jewish machismo, but the image of a burning movie theater is not.

What I mean is that Shosanna’s method of taking revenge is so different from the Basterds’ that it raises some new issues for us to think about. The blazing screen does not trigger the same set of real-world associations. Shosanna gets her revenge through film: She makes a movie passing judgment on the fascists, whom she then immolates in the flames of burning nitrate reels. So it’s not just that we see a filmmaker killing Nazis; it’s as though film itself were able to strike fascists dead. There are, I think, two different ways of clarifying what Tarantino is up to here.

1) One way to understand the film that Shosanna makes, and that we eventually see, is as Tarantino’s homage to postwar French cinema—and to the kind of anti-fascist film that people like Buñuel were making even before the war. She makes a guerilla film, on the cheap: a technically rough, experimental, low-budget and anti-fascist film. It’s as though Tarantino were trying to engineer a history in which Buñuel never left for Mexico, or trying to backdate Godard by about fifteen years. The movie literally stages a showdown between fascist film and the anti-fascist film of the postwar Left. And this alone licenses us to say that Tarantino is deeply invested in the possibility of anti-fascist film. He has just given us, as hero, an anti-fascist director. Now would be the moment to point out that he and his associates often seem to think that trash cinema is the continuation of anti-fascist film. If you’ve seen Robert Rodriguez’s Machete—or even just the fake trailer for the non-existent ‘70s drive-in movie that was the movie’s original incarnation—the point will not be lost on you: An army of illegal immigrants rises up against white bosses and politicians by repurposing as weapons the garden tools of a day laborer.

There’s plenty of precedent for this: One of the key blaxploitation movies is a film from 1976 called Brotherhood of Death, which is about a group of black Vietnam vets who return to the US and start using what the army taught them to fight the Klan. So we know that Tarantino and Rodriguez are fixated on grindhouse, but what they’re too cool to say out loud is that they basically think of grindhouse as a people’s cinema—crude and insurgent—a precious collection of movies about black people taking out the Klan and women turning the knife back against the men who attack them and kung fu masters sticking up for Native Americans.

2) What I’m saying, basically, is that Quentin Tarantino is our Woody Guthrie; he is the Woody Guthrie of mondo and the midnight movie. That is not a joke. The most famous picture of Woody Guthrie gives the viewer a clear look at the folk-singer’s guitar, across which is scrawled: “This machine kills fascists.”

We need to think hard about the fantasy that is communicated by that sentence—because we’re trying to make sense of this image—

—and that sentence provides the second important clarification. Woody Guthrie didn’t just want to sing about justice; he didn’t just want to “inspire his listeners” or get them to raise their voices in the spirit of peace or whatever it is that we usually think folk singers do; he was trying to imagine a music so powerful that it would actually bring justice into the world; he wanted to strum justice into existence; wanted an art that wouldn’t just be in the service of revolution, but that would itself be the completed revolutionary act. And that’s exactly what Tarantino gives us at the end of the movie: “This movie screen kills fascists.” That fantasy—the fantasy of a fully revolutionary art—turns out to be very old. As early as the 1590s, some English poets were trying to write plays that not only depicted revenge, but actually achieved it; they were trying to imagine plays that could actually kill corrupt courtiers and oppressive princes, as though blank verse could draw blood. Or if we flash-forward to 1969, we will find Amiri Baraka writing these lines, in a poem called “Black Art”:


We want ‘poems that kill.’

Assassin poems, Poems that shoot

guns. Poems that wrestle cops into alleys

and take their weapons leaving them dead.


What we can say now is that Tarantino is paying homage to the history of anti-fascist film; and he is also trying to imagine a movie that could not only describe justice but actually achieve it. And of course, we need to put those points together and say that he is trying to imagine the perfect anti-fascist film—a film so righteously anti-fascist that it literally levels any fascist who wanders into its projected light; a film that fascists cannot watch; a film that turns fascists to dust. So maybe now we can say, or begin to say, why Tarantino has rewritten the history of 1944. Inglourious Basterds wants to give credit for the victory in World War II to someone other than the US and Soviet armies; to nominate, as the virtual heroes of some secret history, badass Jews and cinema itself. It’s an extraordinary idea.

…except I think that’s all wrong. None of what I’ve just written actually works; or rather, the movie does in fact put in play the two fantasies I’ve been describing—the fantasy of a muscular Judaism and the fantasy of the perfect anti-fascist film—but then it takes them back—or at least makes them harder to occupy. First it gets us to share those fantasies, and then it starts calling them into question. There are two good reasons to think this.

The first I will mention only briefly and ask you to think about on your own time. One of the plain ways we have to describe who Shosanna is and what she does in this movie is to say that she is a suicide bomber. If you want to get fancy, you will say that she is a twentieth-century Samson, pulling the roof down on the heads of the Jews’ celebrating enemies, but if you go back and read the Samson story, you’ll be forced to conclude before long that he, too, was a suicide bomber, so it’s really the same point anyway. At that point we will recall that there was a bomb attack on a movie theater in northern India in 2007; another in Mumbai during the wave of coordinated attacks in 2008; an especially bad movie theater bombing in Algeria in 1998; and so on. The movie undoubtedly produces an image of a heroic Judaism, but only at the cost of letting it mutate visibly into one of its putative opposites, which is the Muslim terrorist.

That’s one of the big surprises hidden away in the movie’s fantasies. The second is easiest to communicate through a series of paired images:

1)

2)

“You know something, Utivich? I think this might just be my masterpiece.”

3)

Here’s my gloss on that sequence. 1) We see a Nazi soldier, shot from below, mowing down an improbable number of the gathered enemy. Then we see an American soldier doing the same thing—and in a similar shot. 2) We see an American soldier mutilating an enemy officer and calling it his masterpiece; and we see Hitler telling Goebbels that he has made his masterpiece. 3) We see a fascist turn to the camera in black-and-white and address the audience directly, speaking English for the first time. And then we see the anti-fascist turn to the camera in black-and-white and address the audience directly, speaking English for the first time. We can see what this adds up to. Tarantino has built in unmistakable visual rhymes between the fascist movie and its putatively anti-fascist alternatives. Just to be clear: There are three movies in play here—the movie we are watching, Tarantino’s movie; the fascist movie; and Shosanna’s anti-fascist movie. So two anti-fascist movies and a fascist movie. And the point is that each of the two anti-fascist movies plainly, demonstrably resembles the fascist movie. Everything in the movie starts bleeding into fascism. Two more pairings, to coax over the disbelieving:

4)

An American soldier carves a swastika with a Bowie knife.

A German soldier carves a swastika with a Bowie knife.

5)

“Our battle plan will be that of an Apache resistance.”

But of course what’s true in miniature is also true globally: The fascists are watching a patriotic war movie about the grotesquely exaggerated exploits of a national hero. And you can’t even get that sentence out of your mouth without realizing that, yes, we too have been watching a patriotic war movie about the grotesquely exaggerated exploits of our national heroes. The anti-fascist movie we thought we were watching outs itself as fascism’s secret twin. There’s a lot to say here, but the short version is that I think we are in the presence of a filmmaker losing his confidence in grindhouse as a people’s cinema and trying to find a way to make trash cinema yield a critique of itself instead. This all comes down to the audience: What I find most striking about the shots of the audience in this movie is how attentive they are to the immediate effects of screen violence upon a group of viewers. Let me put it this way: I saw the movie twice in a theater, and each time I saw it, when the movie screen went up in flames, someone in the room clapped—not a full-palmed ovation, just three fingers of one hand in the heel of the other, the quick little rat-a-tat of a person overcome by excitement. But then of course Inglourious Basterds, in four or five different shots, shows a movie audience of fascists whoop-whooping to a blood orgy. Let me come at it from another angle. In the movie, we see one audience member laughing. I’m guessing many people were laughing when you saw the movie; you might have laughed yourself. This gets at something important, because as long as Tarantino has been making movies, high-minded critics have fretted that he makes violence entirely too pleasurable: Michael Madsen slices off a man’s ear, and the audience are bopping in their seats because “Stuck in the Middle With You” is chiming on the soundtrack. You grin as Bruce Willis trades up from hammer to baseball bat to chainsaw to samurai sword. The only movie I have ever walked out on because of the audience was the Coen brothers’ Blood Simple—close cousin to Reservoir Dogs or Pulp Fiction—and I left it because the rest of my row was cracking up while Dan Hedaya was getting buried alive, shrieking keen through mouthfuls of dirt. So how dare anyone make death funny? You have to imagine that Tarantino has always shrugged off that accusation; you can call up YouTube videos of him shrugging it off in interviews—except now he has conceded it. And we know he has conceded it because here’s the one person we see laughing at the violence:

There is only one person laughing, and it is mother-loving Hitler. That is the sight of a filmmaker profoundly alienated from his own fans, wigging out at the ability of the movies he most loves to produce in us a quasi-fascist joy in violence. So why does Tarantino hate us so much? He hates us for liking his movies the way we do; he hates us because he can so easily bring us round to enjoying the sight of people being gathered into a closed space so that they can be exterminated. He hates you for how easily you can be pushed into the Nazi position, as long as the people getting killed are themselves Nazis. He hates you because you are the fascist and you don’t even know it. And he proposes the self-consuming grindhouse solution to this grindhouse dilemma, which is that people like you have to die. You will uphold your death sentence with your applause.


Tarantino, Nazis, and Movies That Can Kill You – Part 1

I think I can show that Inglourious Basterds is not really a revenge movie, which, if you’ve seen the movie—well, you’re not going to believe me. It’s an implausible point, hard to make stick—and I’d rather start easy. So maybe I’ll just ask a few questions about the film and then try to answer them, though the questions may really be the hard part: it will be harder, I think, to get the questions right than to get the answers right. Basterds is so diabolically entertaining that a person could easily overlook how complicated a thing it really is. So I’m thinking that if we can just name the movie’s complications—if we can lift out its puzzles—the answers might start taking care of themselves.

My questions are three.

First question: Is Inglourious Basterds a historical movie? Is it a period piece? …or not? In some sense, yes, plainly, of course it is. It takes place at a specified moment in history—1944; the story unfolds against the backdrop of a major world event—World War II; it transforms real historical personages into minor fictional characters—Hitler, Goebbels, and the like—and it freely intermixes these “real people” with characters of its own invention. Those are the hallmarks of historical fiction in the mode of Walter Scott or Tolstoy. Scott’s Waverley features the real Scottish prince who, in the middle of the C18, tried to seize the throne of England and Scotland. War and Peace, in turn, actually has Napoleon as a character—a fairly central character, even, at least for part of the novel.

But there’s an obvious problem with this comparison, which is that Tarantino’s movie completely rewrites the history it has chosen to recount. And I can already hear the English professors amidst whom I work murmuring: But wait, historical fiction always, in myriad subtle ways, rewrites the history that it recounts. And they’re right. But Inglourious Basterds is not subtle about this; it does not even pretend to historical insight. It gleefully concocts an alternate history, in a manner that is impossible to overlook. In case anyone has forgotten: American Jews did not storm the Nazi high command and gun Hitler down in an act of heroic retribution. This is not a historical fiction in the usual sense, but rather a kind of fantasia or historical reverie—and the movie makes no effort to hide this. Not even in Tolstoy does Napoleon keep hold of Moscow.

But then this is where things really get strange. So the movie is a flight of fancy on a historical subject. OK; I think I can take that on board, because I’ve seen it before. In science-fiction circles, alternate histories have become a genre in their own right: What would England look like in the C20 if it had stayed Catholic—if, that is, there had never been a Protestant Church of England? What would the world look like today if Europeans had been wiped out in the fourteenth century by the Black Death?—a world without white people; I’ve always rather liked that one. Or closest to the day’s concerns: What would the US look like now if Hitler had never been defeated? Those books all exist, and lots more like them: historical novels about histories that never happened. But then we need to think about which event the movie has chosen to rescript: It doctors the end of World War II, and if we’re going to think about that, then let us call to mind another obvious thing: America actually defeated the Germans in World War II; or rather the Allies did. And Americans defeat the Nazis in the movie, too, with some help from French resisters. It’s worth pausing to register how odd that is. I mean, it’s not like the movie has taken a tale of American failure or hesitation and turned it into an American triumph. If you try to imagine Inglourious Basterds as a Vietnam movie, you’ll begin to see what I mean. There was a period in the mid-‘80s when Hollywood started churning out movies—like Delta Force or the second Rambo joint—in which the US Army was granted some kind of magic do-over in South-East Asia. In Rambo, Sylvester Stallone actually speaks the question: “Do we get to win this time?” And his commanding officer responds: “Yes, Rambo. You get to win this time.” What’s going on there isn’t especially hard to grasp. The historical record—or, if you prefer, popular historical pseudo-memory—contains, in reference to Vietnam, all sorts of ambivalence: feelings of failure, complicity, shame, and so on—and those feelings are a breeding ground for compensatory fantasies. But Tarantino has scripted an alternative to D-Day, of all things, which means he has replaced the most heroic moment in twentieth-century US history—a history that is already fully triumphalist, entirely devoid of ambivalence—with something even more triumphalist, but weirdly, ferociously so. He has scripted a fictional way of winning a war that the US won anyway. So what’s going on? That’s the first question.

I have a second question that also involves the ways this is not a straightforward historical movie. I want to be careful here: Historical fictions are always complicated, because they always require you to think at the same time about two different historical moments; if you’re reading a historical novel, you need to think about when the book was set, but you also need to think about when the book was written. So take Toni Morrison’s Beloved, which is the one recent historical novel you can count on someone having read. That book is set in the 1870s, but it was written in the 1980s. And a person might ask: What’s the difference between a book written in the 1870s, like Thomas Hardy’s Far From the Madding Crowd, and one set in the 1870s? That second book, Beloved, has a historical shadow dimension that the first book doesn’t. Historical novels belong, as it were, to two historical moments at once. They are always implicitly putting two historical moments in front of you and asking you what connects them or what they share. So Beloved is a novel about America in the nineteenth century—it’s about the aftermath of slavery—but it is also a novel of the 1980s. The 1870s and the 1980s get held up next to each other. If you want to understand Beloved, you have to understand both what Toni Morrison is saying about the past and what she is saying to her contemporaries. It’s Reconstruction; and it’s the Reagan era; and they’re side by side. Same deal with Inglourious Basterds. Tarantino was talking about this movie as early as 2001; he wrote different versions of the screenplay across the last decade; two or three times, he announced he was going into production only to change his mind; and then he finally began filming in October 2008—a month before the Obama-McCain election, if you want to think of it that way. So this movie is about 1944, but we can also think of it as pretty much the last movie of the Bush administration. And it’s a war movie—and we mustn’t lose sight of this—which recasts WWII as a settling of scores. And few viewers will have overlooked that it’s also a Western. The opening scene has a French farmer living in what you could mistake for the timber shack of a Montana frontiersman; there’s a shootout in a saloon where desperadoes are drinking whiskey; and so on. So who thinks about war as a Western? Six days after 9/11, George Bush stood up in front of the press corps and said: “I want justice. And there’s an old poster out West, I recall, that said: ‘Wanted, Dead or Alive.’”

We seem to be making headway. But the point I’m after is that Inglourious Basterds is actually more complicated than this. Historical fictions are always complicated, and this movie is more complicated still, not least because it is so obviously stitched together out of parts from other movies. Now we know that this is what Tarantino likes to do; he’s got a mash-up aesthetic. So that opening scene?—it’s borrowed from John Ford; and the scene where the French Jewish beauty and the young Nazi hero kill each other?—that’s ripped from a John Woo movie. Now again, movies and novels are always borrowing from other movies and novels, so maybe you’re thinking Big deal. But most movies and novels take some pains to cover their tracks; they don’t want you to spot their borrowings; they invite you to sink into the story, so that you can trick yourself into thinking that you are watching the past unfold organically before you. And Tarantino simply will not let you sink into the story. He does not hide his sources. The most obvious example is the moment when the movie introduces Hugo Stiglitz for the first time; suddenly the movie has a narrator, and the narrator is Sam Jackson, in voiceover, and with an underlay of boom chicka wawa, and every time you hear those pimped-out cadences, you get airlifted briefly out of 1944 and deposited in the mid-‘70s instead—so Sam Jackson, but Sam Jackson in his incarnation as latter-day soul brother.

That’s the single most intrusive moment in the movie; the visible incursion of another film genre into the World War II movie; but it’s hardly the only one. There’s the spaghetti Western soundtrack, which provides an ongoing temporal counterpoint to the action. Or there’s the title. I dutifully went and watched the 1978 Italian movie from which the title Inglourious Basterds has been filched only to discover that it bears absolutely no resemblance to the movie Tarantino made. The later film is in no way a remake of the earlier one. But then knowing that should help us see how programmatic Tarantino’s retro aesthetic is: He wants you to think his movie is a remake even when it isn’t a remake. In the event, the title is something like an all-purpose footnote; it doesn’t do much more than point you, broadly, to the entire body of late ‘60s and ‘70s-era trash movies that we all know Tarantino loves; and the music does the same thing; and so does Sam Jackson. Someone out there was disappointed to discover that Richard Roundtree wasn’t playing Hitler. So the movie doesn’t just whisk us back to 1944; and it doesn’t even really whisk us back to its alternate-reality 1944. Rather, it forces us to contemplate 1944 through a scrim of other movies, and I want us to think of this as an almost geological act of historical layering. This is how Inglourious Basterds is different from an ordinary historical fiction: There aren’t just two historical moments in play, there are at least three. Hence my second question: Why, in 2009, make a ‘70s-style movie about 1944?

One quick point to make, in passing, because it will be important to some people’s experience of the movie: This might be a trash movie; and it might rewrite history in preposterous ways; but its use of historical detail is nonetheless meticulous. The movie’s evident precision begins with its attention to language. It’s a tri-lingual movie, and the German in the movie is impeccable—entirely unlike the Halt!-und-Schnell! that you get in Schindler’s List and other graduates from the Hogan’s Heroes School of War Cinema. And beyond that, the movie is full of historical references that aren’t in the least offhand—references, I mean, that are knowing and apt. Tarantino works in references to early twentieth-century German children’s literature; he briefly introduces, as a character, a cat named Emil Jannings, who was 1) a real German actor of the period; 2) the first person ever to win an Oscar; 3) and a prominent Nazi. And on and on. Now if you’re in a position to appreciate these details—which basically means if you’re German—the experience of the movie has got to be all the more bewildering. The puzzles I’ve been describing intensify, because in lots of ways the movie seems unusually committed to 1944—the movie’s erudition, I mean, can’t help but convey a certain respect for the movie’s historical materials—and yet at the same time 1944 is constantly slipping from sight.

So … that’s the second question. My third question is easier to explain, though it’s probably also the most important one. It all comes down to this image and to the scene that contains it:

We have to be clear about what’s going on here. I can imagine a person being keyed up enough at the sweet sight of all those Nazis getting killed to overlook the second thing that’s going on in the movie’s climactic scenes—not a second event, but a second, equally plausible way of describing that one event: The movie is showing a Jewish woman wreaking vengeance upon Germans, but it is also showing a filmmaker killing her own audience. That’s amazing; and serious thinking about the movie has got to start there. We need to think hard about the conditions under which some of us saw this movie. If you were lucky enough to see Inglourious Basterds during its original run—and so not on DVD—then you sat in a movie theater and watched people in a movie theater get wiped out. You might have been rooting for Shosanna or the Basterds—I know I was—but the people getting offed were, at the moment of their death, unmistakably like you. The aspect of the movie that most leaps out, I think, is its extraordinary hostility towards the audience. So my third question is: Why does Quentin Tarantino hate us so much?

So those are my three questions: 1) Why take the triumphalist American history of WWII and make it even more triumphalist? 2) Why channel our perceptions of the 1940s via the 1970s? 3) And why commit mass murder upon the audience? I will next attempt some answers.


…MORE TO COME…