
To the Political Ontologists

The political ontologists have their work cut out for them. Let’s say you believe that the entire world is made out of fire: Your elms and alders are fed by the sky’s titanic cinder; your belly is a metabolic furnace; your lungs draw in the pyric aether; the air that hugs the earth is a slow flame—a blanket of chafing-dish Sterno—shirring exposed bumpers and cast iron fences; water itself is a mingling of fire air with burning air. The cosmos is ablaze. The question is: How are you going to derive a political program from this insight, and in what sense could that program be a politics of fire? How, that is, are you going to get from your ontology to your political proposals? For if fire is not just a political good, but is in fact the very stuff of existence, the world’s primal and universal substance, then it need be neither produced nor safeguarded. No merely human arrangement—no parliament, no international treaty, no tax policy—could dislodge it from its primacy. It will no longer make sense to describe yourself as a partisan of fire, since you cannot be said to defend something that was never in danger, and you cannot be said to promote something that is everywhere already present. Your ontology, in other words, has already precluded the possibility that fire is a choice or that it is available only in certain political frameworks. This is the fate of all political ontologies: The philosophy of all-being ends up canceling the politics to which it is only superficially attached. The –ology swallows its adjective.

The task, then, when reading the radical ontologists—the Spinozists, the Left Heideggerians, the speculative realists—is to figure out how they think they can get politics back into their systems; to determine by which particular awkwardness they will make room for politics amidst the spissitudes of being. In its structure, this problem repeats an old theological question, which the political ontologists have merely dressed in lay clothes—the question, that is, of whether we are needed by God or the gods. If you have given in to the pressure to subscribe to an ontology, then this is the first question you should ask: Whatever is at the center of your ontology—does it need you? Does Becoming need you? Is Being incomplete without you? Has the cosmic fire deputized you? And if you decide that, no, the fire does not need you—if, that is, you resist the temptation to appoint yourself that astounding entity upon which even the Absolute depends—then you will have yourself already concluded that there is nothing exactly to be gained from getting your ontology right, and you will be free to think about other and more interesting things.

If, on the other hand, you are determined to ontologize, and determined additionally that your ontology yield a politics, there are, roughly speaking, three ways you can make this happen.

First, you could determine that even though fire is the primal stuff of the universe, it is nonetheless unevenly distributed across it; or that the cosmos’s seemingly discrete objects embody fire to greater and lesser degrees. The heavy-gauge universalism of your ontology will prevent you from saying outright that water isn’t fire, but you might conclude all the same that it isn’t very good fire. This, in turn, would allow you to start drawing up league tables, the way that eighteenth-century vitalists, convinced that the whole world was alive, nonetheless distinguished between vita maxima and vita minima. And if you possess ontological rankings of this kind, you should be able to set some political priorities on their basis, finding ways to reward the objects (and people? and groups?) that carry their fiery qualities close to the surface, corona-like, and, equally, to punish those objects and people who burn but slowly and in secret. You might even decide that it is your vocation to help the world’s minimally fiery things—trout ponds, shale—become more like its maximally fiery things—volcanoes, oil-drum barbecue pits. The pyro-Hegelian takes it upon himself to convert the world to fire one timber-framed building at a time.

Alternately—and herewith a second possibility—you can proclaim that the cosmos is made of fire, but then attribute to humanity an appalling power not to know this. “Power” is the important word here, since the worry would have to be that human ignorance on this point could become so profound that it would damage or dampen the world-flame itself. Perhaps you have concluded that fire is not like an ordinary object. We know in some approximate and unconsidered way what it is; we are around it every day, walking in its noontide light, enlisting it to pop our corn, conjuring it from our very pockets with a roll of the thumb or knuckly pivot. And yet we don’t really understand the blaze; we certainly do not grasp its primacy or fathom the ways we are called upon to be its Tenders. You might even have discovered that we are the only beings, the only guttering flames in a universe of flame, capable of defying the fire, proofing the world against it, rebuilding the burning earth in gypsum and asbestos, perversely retarding what we have been given to accelerate. This argument expresses clear misgivings about humanity; it doesn’t trust us to keep the fire stoked; and to that extent it partakes of the anti-humanism that is all but obligatory among political ontologists. And yet it shares with humanism the latter’s sense that human beings are singular, a species apart, the only beings in existence capable of living at odds with the cosmos, capable, that is, of some fundamental ontological misalignment, and this to a degree that could actually abrogate an ontology’s most basic guarantees. From a rigorously anti-humanist perspective, this position could easily seem like a lapse—the residue of the very anthropocentrism that one is pledged to overcome—but it is in fact the most obvious opening for an anti-humanist politics (as opposed, say, to an anti-humanist credo), since you really only get a politics once the creedal guarantees have been lifted. 
If human beings are capable of forgetting the fire, someone will have to call to remind them. Someone, indeed, will have to ward off the ontological catastrophe—the impossible-but-somehow-still-really-happening nihilation of the fire—the Dousing.

That said, a non-catastrophic version of this last position is also possible, though its politics will be accordingly duller. Maybe duller is even a good thing. Such, at any rate, is the third pathway to a political ontology: You might consider arguments about being politically germane even if you don’t think that humanity’s metaphysical obtuseness can rend the very tissue of existence. You don’t have to say that we are damaging the cosmic fire; it will be enough to say that we are damaging ourselves, though having said that, you are going to have to stop trying to out-anti-humanize your peers. Your position will now be that not knowing the truth about the fire-world deforms our policies; that if we mistake the cosmos for something other than flame, we are likely to attempt impossible feats—its cooling; its petrification—and will then grow resentful when these inevitably fail. You might, in the same vein, determine that there are entire institutions dedicated to broadcasting the false ontologies that underwrite such doomed projects, doctrines of air and doxologies of stone, and you might think it best if such institutions were dismantled. If it’s politics we’re talking about, you might even have plans for their dismantling. Even so, you will have concluded by this point that the problem is in its essentials one of belief—the problem is simply that some people believe in water—in which case, ontology isn’t actually at issue, since nothing can happen ontologically; the fire will crackle on regardless of what we think of it, indifferent to our denials and our elemental philandering. You have thus gotten the politics you asked for, but only having in a certain sense bracketed the ontology or placed it beyond political review. And your political program will accordingly be rather modest: a new framework of conviction—a clarification—an illumination.

Still, even a modest politics sometimes shows its teeth. William Connolly, in a book published in 2011, says that the world-fire is burning hotter than it has ever burnt; the problem is, though, that some “territories … resist” the flame. What we don’t want to miss is the basically militarized language of that claim: “resisting territories” suggests backwaters full of ontological rednecks; Protestant Austrian provinces; the Pyrenees under Napoleon; Anbar. Connolly’s notion is that these districts will need to be enlightened and perhaps even pacified, whereupon political ontology outs itself as just another program of philosophical modernization, a mopping-up operation, the People of the Fire’s concluding offensive against the People of the Ice. Don’t fight it, Connolly, in this way, too, an irenicist, instructs the existentially retrograde. Let it burn.

The all-important point, then, is that there is absolutely no reason to get hung up on the word “fire,” in the sense that there is no more sophisticated concept you can put in its place that will make these problems go away: not Being, not Becoming, not Contingency, not Life, not Matter, not Living Matter. Go ahead: Choose your ontological term or totem and mad-lib it back into the last six paragraphs. Nothing else about them will change.

• • •

Anyone wanting to read Connolly’s A World of Becoming, or Jane Bennett’s Vibrant Matter, its companion piece from 2010, now has some questions they can ask. The two books share a program:

-to survey theories of chaos, complexity; to repeat the pronouncements of Belgian chemists who declare the end of determinism; and then to resurrect under the cover of this new science a much older intellectual program—a variously Aristotelian, Paracelsian, and hermetic strain in early modern natural philosophy, which once posited and will now posit again a living cosmos a-go-go with active forces, a universe whose intricate assemblages of self-organizing systems will frustrate any attempt to reduce them back to a few teachable formulas;

-or, indeed, to trade in “science” altogether in favor of what used to be called “natural history,” the very name of which strips nature of its pretense to permanence and pattern and nameable laws and finds instead a universe existing wholly in time, as fully exposed to contingency, mutation, and the event as any human invention, with alligators and river valleys and planets now occupying the same ontological horizon as two-field crop rotation and the Lombard Leagues;

-to recklessly anthropomorphize this historical cosmos, to the point where that entirely humanist device, which everywhere it looks sees only persons, tips over into its opposite, as humanity begins divesting itself of its specialness, giving away its privileges and distinguishing features one by one, and so produces a cosmos full of more or less human things, active, volatile, underway—a universe enlivened and maybe even cartoonish, precisely animated, staffed by singing toasters and jitterbugging hedge clippers.

I wouldn’t blame anyone for finding this last idea rather winning, though one problem should be noted right away, which is that Connolly, in particular, despite getting a lot of credit for bringing the findings of the natural sciences into political theory—and despite repeating in A World of Becoming his earlier admonition to radical philosophers for failing to keep up with neurobiology and chemistry and such—really only quotes science when it repeats the platitudes of the old humanities. The biologist Stuart Kauffman has, Connolly notes, “identified real creativity” in the history of the cosmos or of nature. Other research has identified “degrees of real agency” in a “variety of natural-social processes.” The last generation of neuroscience has helped specify the “complexity of experience,” the lethal and Leavisite vagueness of which phrase should be enough to put us on our guard. It turns out that the people who will save the world are still the old aesthetes; it’s just that their banalities can now borrow the authority of Nobel Laureates (always, in Connolly, named as such). Of one scientific finding Connolly notes: “Mystics have known this for centuries, but the neuroscience evidence is nice to have too.” That will tell you pretty much everything you need to know about the role of science in the new vitalism, which is that it gets adduced only to ratify already held positions. This is interdisciplinarity as narcissistic mirror.

But we can grant Connolly his fake science—or rather, his fake deployment of real science. The position he and Bennett share—that the cosmos is full of living matter in a constant state of becoming—isn’t wrong just because it’s warmed over Ovid. What really needs explaining is just which problems the political philosophers think this neuro-metamorphism is going to solve. More to the point, one wonders which problems a vitalist considers still unsolved. If Bennett and Connolly are right, then is there anything left for politics to do? Has Becoming bequeathed us any tasks? Won’t Living Matter get by just fine without us? And if there is no political business yet to be undertaken, then in what conceivable sense is this a political philosophy and not an anti-political one?

The real dilemma is this: There are those three options for getting a politics back into ontology—you can devise an ontological hierarchy; you can combat ontological Vergessenheit; or you can promote ontological enlightenment. Bennett and Connolly don’t like two of these, and the third one—the one they opt for—ends up canceling the ontology they mean to advocate. I’ll explain.

Option #1: Hierarchy could work. Bennett and Connolly could try to distinguish between more and less dynamic patches of the universe—or between more and less animate versions of matter—but they don’t want to do that. The entire point of their philosophical program is a metaphysical leveling; witness that defense of anthropomorphism. Bennett, indeed, uses the word “hierarchical” only as an insult, the way that liberals and anarchists and post-structuralists have long been accustomed to doing. Having only just worked out that all of matter has the characteristics of life, she is not about to proclaim that some life forms are more important than others. Her thinking discloses a problem here, if only because it reminds one of how difficult it has been for the neo-vitalists to figure out when to propose hierarchies and when to level them, since each seems to come with political consequences that most readers will find unpalatable. Bennett herself worries that a philosophy of life might remove certain protections historically afforded humans and thus expose them to “unnecessary suffering.” She positions herself as another trans- or post-humanist, but she doesn’t want to give up on Kant and the never really enforced guarantees of a Kantian humanism; she thinks she can go over to Spinoza and Nietzsche and still arrive at a roughly Left-Kantian endpoint. “Vital materialism would … set up a kind of safety net for those humans who are now … routinely made to suffer.” That idea—which sounds rather like the Heidegger of the “Letter on Humanism”—is, of course, wrong. Bennett is right to fret. A vitalist anti-humanism is indeed rather cavalier about persons, as her immediate predecessors and philosophical mentors make amply clear. The hierarchies it erects are the old ones: Michael Hardt and Toni Negri think it is a good thing that entire populations of peasants and tribals were wiped out because their extermination increased the vital energies of the system as a whole.
And if vitalism’s hierarchies produce “unnecessary suffering,” well, then so do its levelings: Deleuze and Guattari think that French-occupied Africa was an “open social field” where black people showed how sexually liberated they were by fantasizing about “being beaten by a white man.”

Option #2: They could follow the Heideggerian path, which would require them to show that humanity is a species with weird powers—that humans (and humans alone) can fundamentally distort the universe’s most basic feature or hypokeimenon. That would certainly do the political trick. Vitalism would doubtless take on an urgency if it could make the case that human beings were capable of dematerializing vibrant matter—or of making it less vibrant—or of pouring sugar into the gas tank of Becoming. But Bennett and Connolly are not going to follow this path either, for the simple reason that they don’t believe anything of the sort. Their books are designed in large part to attest the opposite—that humanity has no superpowers, no special role to play nor even to refuse to play. Early on, Bennett praises Spinoza for “rejecting the idea that man ‘disturbs rather than follows Nature’s order.’” We’ll want to note that Spinoza’s claim has no normative force; it’s a statement of fact. We don’t need to be talked out of disturbing nature’s order, because we already don’t. The same grammatical mood obtains when Bennett quotes a modern student of Spinoza: “human beings do not form a separate imperium unto themselves.” We “do not”—the claim in its ontological form means could not—stand apart and so await no homecoming or reunion.

Those sentences sound entirely settled, but there are other passages in Vibrant Matter when you can watch in real time as such claims visibly neutralize the political programs they are being called upon to motivate. Here’s Bennett: “My hunch is that the image of dead or thoroughly instrumentalized matter feeds human hubris and our earth-destroying fantasies of conquest and consumption.” On a quick read you might think that this is nothing more than a little junk Heideggerianism—that techno-thinking turns the world into a lumberyard, &c. But on closer inspection, the sentence sounds nothing like Heidegger and is, indeed, entirely puzzling. For if it is “hubris” to think that human beings could “conquer and consume” the world—not hubris to do it, but hubris only to think it, hubris only in the form of “fantasy”—then in what danger is the earth of actually being destroyed? How could mere imagination have world-negating effects and still remain imagination? Bennett’s position seems to be that I have to recognize that consuming the world is impossible, because if I don’t, I might end up consuming the world. Her argument only gains political traction by crediting the fantasy that she is putatively out to dispel. Or there’s this: Bennett doesn’t like it when a philosopher, in this instance Hannah Arendt, “positions human intentionality as the most important of all agential factors, the bearer of an exceptional kind of power.” Her book’s great unanswered question, in this light, is whether she can account for ecological calamity, which is perhaps her central preoccupation, without some notion of human agency as potent and malign, if only in the sense that human beings have the capacity to destroy entire ecosystems and striped bass don’t.
The incoherence that underlies the new vitalism can thus be telegraphed in two complementary questions: If human beings don’t actually possess exceptional power, then why is it important to convince them to adopt a language that attributes to them less of it? But if they do possess such power, then on what grounds do I tell them that their language is wrong?

Option #3: Enlightenment it is, then. What remains, I mean, for both Connolly and Bennett, is the simple idea that most people subscribe to a false ontology and are accordingly in need of re-education. Connolly describes himself and his fellow vitalists as “seers”—he also calls them “those exquisitely sensitive to the world”—and he more than once quotes Nietzsche referring to everyone else, the non-seers, the foggy-eyed, as “apes.” I don’t much like being called an orangutan and know others who will like it even less, but at least this rendering of Bennett/Connolly has the possible merit of making the object-world genuinely autonomous and so getting the cosmos out from under the coercions of thought. Our thinking might affect us, but it cannot affect the universe. But there is a difficulty even here—the most injurious of political ontology’s several problems, I think—which is that via this observation philosophy returns magnetically to its proper object—or non-object—which is thought, and we realize with a start that the only thing that is actually up for grabs in these new realist philosophies of the object is in fact our thinking personhood. This is really quite remarkable. Bennett says that the task facing contemporary philosophy is to “shift from epistemology to ontology,” but she herself undertakes the dead opposite. She has precisely misnamed her procedure: “We are vital materiality,” she writes, “and we are surrounded by it, though we do not always see it that way. The ethical task at hand here is to cultivate the ability to discern nonhuman vitality, to become perceptually open to it.” There is nothing about her ontology that Bennett feels she needs to work out; it is entirely given.
The philosopher’s commission is instead to devise the moralized epistemology that will vindicate this ontology, and which will, in its students, produce “dispositions” or “moods” or, as Connolly has it, a “working upon the self” or the “cultivation of a capacity” or a “sensibility” or maybe even just another intellectual “stance.” Connolly and Bennett have lots of language for describing mindsets and almost no language for describing objects. Their arguments take shape almost entirely on the terrain of Geist. They really just want to get the subjectivity right.

There are various ways one might bring this betrayal of the object into view, in addition to quoting Bennett and Connolly’s plain statements on the matter. Among the great self-defeating deficiencies of these books are the fully pragmatist argumentative procedures adopted by their authors, who adduce no arguments in favor of their chosen ontology. Bennett points out that her position is really just an “experiment” with different ways of “narrating”; an “experiment with an idea”; a “thought experiment,” Connolly says. “What would happen to our thinking about nature if…” The post-structuralism that both philosophers think they’ve put behind them thus survives intact. But such play with discourse is, of course, entirely inconsistent with a robust philosophy of objects, premised as it is on the idea that the object exerts no pressure on the language we use to describe it, which indeed we elect at will. The mind, as convinced of its freedom as it ever was, chooses a philosophical idiom just to see what it can do.

This problem—the problem, I mean, of an object-philosophy that can’t stop talking about the subject—then redoubles itself in two ways:

– The problem is redoubled, first, in the blank epiphanies of Bennett’s prose style, and especially when she makes like Novalis on the streets of Baltimore, putting in front of readers an assemblage of objects the author encountered beneath a highway underpass so that we can imagine ourselves beside her watching them pulsate. The problem is that she literally tells us nothing about these items except that she heard them chime. One wants to say that she chose four particular objects—a glove, pollen, a dead rat, and a bottle cap—except that formulation is already misleading, since lacking further description, these four objects really aren’t particular at all. They are sham specificities, for which any other four objects could have served just as well. She could have changed any or all of them—could have improvised any Borgesian quartet—and she would have written that page in exactly the same manner. You can suggest your own, like this:

-a sock, some leaves, a lame squirrel, and a soda can

-a castoff T-shirt, a fallen tree limb, a hungry kitten, and an empty Cheetos bag

-a bowler hat, a beehive, a grimy parasol, and Idi Amin

These aren’t objects; these are slots; and Bennett’s procedure is to that extent entirely abstract. This is what it means to say that materialism, too, is just another philosophy of the subject. It does no more or less than any other intellectual system, maintaining the word “object” only as a vacancy onto which to project its good intentions.

– The problem is redoubled, second, in the nakedly religious idiom in which these two books solemnize their arguments. That idiom, indeed, is really just pragmatism in cassock and cope. The final page of Bennett’s book prints a “Nicene Creed for would-be vital materialists.” Connolly’s book begins by offering its readers “glad tidings.” Nor does the latter build arguments or gather evidence; he “confesses” a “philosophy/faith,” which is also a “faith/conviction,” which is also a “philosophy/creed.” Bennett and Connolly hold vespers for the teeming world. Eager young materialists, turning to these books to help round out their still developing views, must be at least somewhat alarmed to discover that our relationship to matter is actually one of “faith” or “conviction.” A philosophical account of the object is replaced by a pledge—a deferral—a promise, by definition tentative, offered in a mood of expectancy, to take the object on trust. Nor is this in any way a gotcha point. Connolly is completely open about his (Deleuzian) aim “to restore belief in the world.” It’s just that no sooner is this aim uttered than the world undergoes the fate of anything in which we believe, since if you name your belief as belief, then you are conceding that your position is optional and to some considerable degree unfounded and that you do not, in that sense, believe it at all.

It’s not difficult, at any rate, to show that Connolly for one does not believe in his own book. The stated purpose of A World of Becoming is to show us how to “affirm” that condition. That’s really all that’s left for us to do, once one has determined that Becoming will go on becoming even without our help and even if we work against it. Connolly’s writing, it should be said, is generally short on case studies or named examples of emergent conjunctures, leaving readers to guess what exactly they are being asked to affirm. For many chapters on end, one gets the impression that the only important way in which the world is currently becoming is that more people from Somalia are moving to the Netherlands, and that the phrase “people who resist Becoming” is really just Connolly’s idiosyncratically metaphysical synonym for “racists.” But near the end of the book, three concrete examples do appear, all at once—three Acts of Becoming—two completed, one still in train: the 2003 invasion of Iraq; the 2008 financial collapse; and global warming. All three, if regarded from the middle distance, seem to confirm the vitalist position in that they have been transformative and destabilizing and will for the foreseeable future produce unpredictable and ramifying consequences. What is surprising—but then really, no, finally not the least bit surprising—is that Connolly uses a word in regard to these three cases that a Nietzschean committed to boundless affirmation shouldn’t be able to so much as write: “warning.” Melting icecaps are not to be affirmed—that’s Connolly’s own view of the matter. Mass foreclosure is not to be affirmed. Quite the contrary: If you know that the cosmos is capable of shifting suddenly, then you might be able to get the word out. The responsibility borne by philosophers shifts from affirmation to its opposite: Vitalists must caution others about what rushes on. 
The philosopher of Becoming thus asks us to celebrate transformation only until he runs up against the first change he doesn’t like.

This is tough to take in. Lots of things are missing from political ontology: politics, objects, an intelligible metaphilosophy. But surely one had the right to expect from a theorist of systemic and irreversible change, one with politics on his mind, some reminder of the possibility of revolution, some evocation, since evocations remain needful, of the joy of that mutation, the elation reserved for those moments when Event overtakes Circumstance. But in Connolly, where one might have glimpsed the grinning disbelief of experience unaccounted for, one finds only the bombed-out cafés of Diyala, hence fear, hence the old determination to fight the future. The philosopher of fire grabs the extinguisher. The philosopher of water walks in with a mop.

Thanks to Jason Josephson and everyone in the critical theory group at Williams College.

Illegals, Part 2

PART ONE IS HERE. 

ALLEGORICAL COMPLEXITY #1—Super 8, eventually:

You can think of this as a tip for reading: When you are trying to make sense of an allegory, it is not enough to list the resemblances between the allegorical construct and its real-world referent, between the spaceman and the Jewish fugitive; you’ll need to catalogue their divergences, as well. For excess is the permanent condition of allegory. An invented creature never fully disappears into its literal equivalents; the alien is not exhausted by the designation “Jewish.” The reader’s task, then, is not to vaporize a given movie’s specificities, not to absorb them into some higher meaning that, once decrypted, would render the movie itself superfluous. Part of the task is to account, rather, for the allegory’s remainders, the scraps of significance that are left over even once the allegorical identification has been successfully announced. These unattached features are the mark of a contradiction that is internal to allegory; they disclose desires that the world’s already existing names cannot satisfy.

An alien invasion movie of a different kind, then, before we get to Super 8, just to make clear that this point is specific to no one film. The allegory in James Cameron’s Avatar, from 2009, is open-and-shut and, one might object, mostly shut—entirely too neat—elementary and plodding. The movie’s aliens are Indigenous People, a blue-skinned cross between the Chinook and the Zulu, called the Na’vi, which sounds like Navajo + Hopi. But the very obviousness of the allegory ends up producing some interesting effects of its own, for Avatar is so unoverlookably anti-imperialist—anti-imperialist in such a thorough-going way—that no one who cares about such a politics can afford to skip it or write it off too quickly. Its story is certainly familiar; it’s just the twice-told tale about a white guy crossing sides, going native, turning Turk. But a comparative approach would show that the movie actually blows clean past the hedges and outs that typically blight such narratives, and especially the famous recent ones: Dances with Wolves, say, or The Last Samurai. Those movies are easy to hate. The really foul thing about Dances is that Kevin Costner falls in love with an Indian woman, except she isn’t really Indian—she’s the only other white person in the tribe—and you know this because she wears her hair differently, as though the Sioux kept on staff a special whites-only beautician. This only nominally pro-Indian movie goes to completely absurd lengths to prevent inter-racial sex. It is in this sense that the people who insisted that Avatar was nothing more than a live-action replay of FernGully or Disney’s Pocahontas weren’t paying attention. Sure, Avatar borrows from other movies, and yet it distinguishes itself even so by its open-throttle commitment to indigenism and racial treason. Quick—list for me all the other Hollywood movies you’ve seen that end with a vision of white people getting sent back to Europe for good.
The movie baptizes everyone who watches it into the end of the American empire.

It does more than that. One of Avatar’s first-order complexities is that the opposing forces on the two sides of its central conflict—the human invaders and the indigenous aliens—have been borrowed from very different periods in the history of empire. The Na’vi call to mind the precolonial Kikuyu or the Algonquin before Columbus, but the movie’s humans are neither Puritan nor pith-helmeted; they are new-model conquistadors, Halliburton-types, the corporate mercenaries of the War on Terror. Avatar asks us to imagine how it would look if the current US army were invading North America or Africa for the first time—What if the Massachusetts Bay Company had employed Blackwater?—which means that, at the level of the image, the movie manages to insert the Iraq War into some much longer histories, folding Bush-era adventurism into an overarching account of European colonization. To that extent, James Cameron is actually rather smarter about empire than the run-of-the-mill American liberals who talk as though 2003 were some kind of shocking deviation from the fundamental patterns of US history, a freedom-loving nation’s unprecedented lapse into expansion and conquest. And in a similar vein, the movie is willing to dwell, to a quite unusual degree for a blockbuster, on images of imperial atrocity—familiar images, doubtless, if you know that history, but replayed for a global audience with immediacy and renewed grief: The Smurf-Seminoles walk the Trail of Tears.

I also think the movie’s length, about which those prone to headaches might rightfully complain, turns out to be its great asset. And the best thing about those 160 minutes is this: Avatar is a utopia hiding in an action movie. The movie is so indulgent that it can afford to give us a protracted utopian sequence, itself almost as long as an ordinary feature film, when, in fact, there is no genre that commercial film avoids more studiously than utopia. My friends who study the form will get huffy at this point: So yes, absolutely, the utopia in Avatar is badly underspecified; it is not much interested in how the Na’vi feed or govern themselves. It approaches the better society almost only through the natives’ theology. But in some respects, this is actually where the movie is at its most ingenious. Cameron, who as I write is crawling on his hands and knees around the Mariana Trench, has found a way to put his pricey 3D technology in the service of utopia—or at least of a certain pantheism, which in this case is almost the same thing. As a sensory experience, the movie obviously feels new and exhilarating, and I want to say that in some almost Ruskinite way, the film is determined to revitalize your sensorium, to create a constant sense of wonder at the simple fact that we all live in a three-dimensional world. The movie obviously makes a big deal of the characters being connected, being able to interface with nature, to plug into it, in a way that is both technological and shamanistic, and I think the movie thereby provides its own gloss on its technological ambitions: It’s as though Cameron thinks he can use the most advanced technology that a director has ever commanded to approximate in the viewer a basically vitalist and world-adoring attitude.

But then it is precisely here that instability takes over. It is here, I mean, that we have to shift from naming the ways in which the Na’vi are most like Amazonians to naming where they are least so. Avatar is not only putting in front of us an indigenism; it is putting in front of us a technologized indigenism, and there is something about this latter that is odd and finally unsatisfying. That point comes in a specific form and a general one. Here’s the specific one. The biggest innovation in twentieth-century warfare was air power: the bi-plane, the bomber, firebombing, the atomic bomb, napalm, no-fly zones, shock and awe, assassin drones, death from above. Air power is what has permanently shifted the global balance of power to the hyper-technological nations. And the movie’s trick—ingenious in a sense, but also silly—is to give the indigenous a Luftwaffe: Dragons! The flying monsters, in other words, are the equalizer that makes the movie’s political allegory work, but they are themselves entirely non-allegorizable, which means that the entire system of correspondences actually starts coming unglued around them.

In other words, the movie’s politics are at heart fake, because it is trying to imagine a people who live in harmony with nature, who get by without advanced technology, but it has to give them the equivalent of helicopters, because if they didn’t have the equivalent of helicopters, they would get wiped out by the Helicopter People of Earth. But then the movie is ducking the really hard political question, which is: How might a non-technological people actually survive? How could they defend themselves against the cyborg nations who would steal their land and resources? Avatar dodges those questions, and so ends up being just another impotent historical fantasia.

The broader version of that point, meanwhile, is this: It’s well known that the sci-fi movies that most distrust technology are the ones that rely on it most extensively, but Avatar radicalizes that paradox in both directions. It was upon release the most technologically advanced movie ever made, and yet it is utterly, committedly elfin and eco- in its ideology. But then in another sense, that very antithesis is breached, because the movie devises ways to comprehensively sneak technology back into nature itself. The forest paths light up, as though electrically, when the Na’vi tread on them. The aborigines plug their ponytails into animals and trees as into Ethernet ports or wall sockets. Their manes have slim, wavy organic tendrils, which however also look like fibers or cables. And the Sigourney Weaver character at one point openly compares all this to a computer: the natives are jacking into the planet and downloading information from it. On the one hand, this is itself just allegory for what we take to be “the tribal worldview”—being in touch with nature or what have you—and if we accept the entirely plausible idea that indigenous and stateless peoples have been extraordinarily attentive to ecologies—that they were really good at reading landscapes, &c—then this could merely serve as science-fiction shorthand for that skill. What’s remarkable, though, is that Cameron has translated this into a technological image. That’s the other hand. The non-technological understanding of the world gets its technological allegory. So this is what it means to say that allegory yields contradiction. Is the image of plugging into nature technological or not? It is and it isn’t—and this speaks volumes about the movie’s bad faith. A global viewership sides with a pre-technological people only when it emerges that they have the newest gadgets. 
Avatar reassures its audience that they could go back to the land and actually give up on nothing—that they could go off the grid and still have the grid—that they could move to the Gallatin Range and keep their every last iPhone.

PART THREE IS HERE…

Special thanks to Crystal Bartolovich, who convinced me to take the role of technology in Avatar much more seriously than I was initially inclined to and who has much more to say on the topic in her forthcoming Natural History of the Common. For a preview of her argument, see also this interview.

Illegals, Part 1

 

I’ve been thinking a lot about alien invasion movies, and especially about the ones that feature human children, boy-explorers or pre-teen ambassadors to the talking bugs. I suppose it would just be easier to say that I’ve been thinking about ET and its recent imitators: Super 8, Attack the Block. But even this would be a way of sidestepping the truth, which is that mostly I’ve been thinking about ALF. I have, in fact, been thinking about ALF for a very long time. In the very late ‘80s, as a teenager, I spent a year in Frankfurt, and there was nothing that bothered me more in that period of my life than the centrality of ALF to modern German culture. I had gone to the Rhine to learn about Günter Grass and anarchism and was still under the impression that I could outrun network television. I suppose I was mildly surprised that the Germans had, like, vacuum cleaners. ALF was at that point a pretty fair summation of everything I thought I was leaving safely back home in New England. But that show was way more popular in Germany than it ever had been in Massachusetts: Ninja-Turtle-early-Bart-Simpson-eat-my-shorts popular. It seemed like it was always running in the background in every house I visited. The stalls at small-town German street fairs were crowded with long-snooted, rusty yellow puppets, in all the places that a visitor might have expected to see hand-made Christmas decorations or tankards in the shape of castle towers. I should point out that it wasn’t just the Federal Republic; a Eurail pass revealed to me that the series had a pan-continental following. But only in Germany did the puppet’s voice actor spend three months in the pop charts, with a single called “Hallo ALF – hier ist Rhonda.” And the thing is, when I went back to Germany for a year after college—to Berlin in the mid-90s—ALF, having been off the air in the US for half a decade, was still around, still on T-shirts and decals and school folders.
The Germans left stranded by the show’s American cancellation had taken to producing ALF radio plays. Project ALF—a one-off TV movie that ran on NBC in 1996—got a theatrical release and a big rollout in Germany: ALF—Der Film. It played in Berlin’s showcase theaters. Garfield-reimagined-as-warthog looked down from on high upon the Kurfürstendamm.

So the question that posed itself ever more insistently was: Why were the Germans so hung up on this show? And one night in Berlin, an American buddy and I drank our way to clarity. ALF, of course, is a Holocaust story—you knew that already; you’re irritated I didn’t see it sooner—a sitcom about a family hiding someone in its attic, someone the government wants to seize, a permanent exile with no homeland to which he can return. Those oversized ALF dolls turned out to be the only way that a young German could take a Jewish proxy home and fantasmatically keep him safe in a wardrobe or nighttime embrace. They belonged at one remove to the history of extravagantly racialized children’s toys — plastic figurines of Native American braves, Black rag dolls. They were the stuffed animals of genocide comedy. The original NBC production hadn’t gone to any lengths to disguise this: those bushy eyebrows; that schnozz; that gruff, Catskills shtick. The show’s lone and improbable joke was that if the fascists ever took power in America, someone would have to agree to shelter Don Rickles. And with this insight in mind, I made a special trip to the university library in Berlin to chase down a hunch, and it was right: Anne Frank was not the girl’s real name, or at least not her full name. Her name was Annelies Frank: A … L … F.

The show, which premiered in 1986, was also directly derived from—or a Muppet-y riff upon—ET, released in 1982. And in that case, most of what we have to say about ALF can simply be repeated about the movie. Spielberg did not wait until the 1990s to start making films about the Holocaust. When ET came out, he had already just made one—Raiders of the Lost Ark, which ends when the insulted might of ancient Israel obliterates a small army’s worth of Nazis. Light flashes and German flesh renders like tallow: Raiders presents an alternate history in which the Jews possessed a small A-bomb of their own, a game-changer and plague of radioactive locusts for the European war. ET, then, was itself just an extrapolation from a Dutch Holocaust diary and perhaps the first narrative in which suburban Americans were invited to imagine keeping Jews as pets.

Something about this argument we will want to generalize, since alien invasion movies are always going to be, to some degree or another, racial allegories. That can’t come as a surprise to anyone who speaks English, a language in which the word “alien” means both “squid creature from another solar system” and “non-citizen.” But then I should say, too, that lots of serious readers think that allegories—or allegorical habits of interpretation—are conceptually pretty low-rent, the literary equivalent of rebuses. They’re wrong. If you really and truly give up on allegorical reading, you’re going to miss too much of importance—too much of what makes storytelling compelling to us—which means that most literary critics don’t, in fact, give up on it. They just waste a lot of time reinventing it piecemeal under other names. Nor is allegory as straightforward as the sophisticates claim; it generates its own forms of complexity and its own revelatory instabilities. But then this last point partially vindicates the people who don’t like allegory. Naming the allegory is the easy part; it’s really just the beginning. Allegories tell us one thing when they work, but they tell us something else—something arguably more valuable—when they don’t. And allegories never work perfectly. They can’t work perfectly. An impeccably rendered allegorical Jew would no longer be recognizable as allegory. He would just be a Jew. Like a dying werewolf shriveling back into its naked human form, he would revert to literalness, from extraterrestrial to Ashkenazi. Distortion and mismatch are the preconditions of allegory, the dysfunctions that make it function. If you are reading allegorically, you can never just whip out the decoder ring.

So I want to look over the next few days at those recent homages to ET—one from the US, one from the UK—and I want to name their allegories, but I want to underscore from the outset that these are most interesting where least steady.

PART TWO IS HERE.

Outward Bound: On Quentin Meillassoux’s After Finitude

 

 

Il n’y a pas de hors-texte. If post-structuralism has had a motto—a proverb and quotable provocation—then surely it is this, from Derrida’s Of Grammatology. Text has no outside. There is nothing outside the text. It is tempting to put a conventionally Kantian construction on these words—to see them, I mean, as bumping up against an old epistemological barrier: Our thinking is intrinsically verbal—in that sense, textual—and it is therefore impossible for our minds to get past themselves, to leave themselves behind, to shed words and in that shedding to encounter objects as they really are, in their own skins, even when we’re not thinking them, plastering them with language, generating little mind-texts about them. But this is not, in fact, what the sentence says. Derrida’s claim would seem to be rather stronger than that: not There are unknowable objects outside of text, but There are outside of text no objects for us to know. So we reach for another gloss—There is only text, ain’t nothing but text—except the sentence isn’t really saying that either, since to say that there is nothing outside text points to the possibility that there is, in a manner yet to be explained, something inside text, and this something would not itself have to be text, any more than caramels in a carrying bag have to be made out of cellophane.

So we look for another way into the sentence. An alternate angle of approach would be to consider the claim’s implications in institutional or disciplinary terms. The text has no outside is the sentence via which English professors get to tell everyone else in the university how righteously important they are. No academic discipline can just dispense with language. Sooner or later, archives and labs and deserts will all have to be exited. The historians will have to write up their findings; so will the anthropologists; so will the biochemists. And if that’s true, then it will be in everyone’s interest to have around colleagues who are capable of reflecting on writing—literary critics, philosophers of language, the people we used to call rhetoricians—not just to proofread the manuscripts of their fellows and supply these with their missing commas, but to think hard about whether the language typically adopted by a given discipline can actually do what the discipline needs it to do. If the text has no outside, then literature professors will always have jobs; the idea is itself a kind of tenure, since it means that writerly types can never safely be removed from the interdisciplinary mix. The idea might even establish—or seek to establish—the institutional primacy of literature programs. Il n’y a pas de hors-texte. There is nothing outside the English department, since every other department is itself engaged in a more or less literary endeavor, just one more attempt to make the world intelligible in language.

Such, then, is the interest of Quentin Meillassoux’s After Finitude, first published in French in 2006. It is the book that, more than any other of its generation, means to tell the literature professors that their jobs are not, in fact, safe. Against Derrida it banners a counter-slogan of its own: “it could be that contemporary philosophers have lost the great outdoors, the absolute outside.” It is Meillassoux’s task to restore to us what he is careful not to call nature, to lead post-structuralists out into the open country, to make sure that we are all getting enough fresh air. Meillassoux means, in other words, to wean us from text, and for anyone beginning to experience a certain eye-strain, a certain cramp of the thigh from not having moved all day out of his favorite chair, this is bound to be an appealing prospect, though if you end up unconvinced by its arguments—and there are good reasons for doubt, as the book amounts to a tissue of misunderstanding and turns, finally, on one genuinely arbitrary prohibition—then it’s all going to end up sounding like a bullying father enrolling his pansy son in the Boy Scouts against his will: Get your head out of that book! Why don’t you go in the yard and play?!

• • •

Of course, Meillassoux’s way of getting the post-structuralists to go hiking with him is by telling them which books to read first. If you start scanning After Finitude’s bibliography, what will immediately stand out is its programmatic borrowing from seventeenth- and early eighteenth-century philosophers. Meillassoux regularly cites Descartes and poses anew the question that once led to the cogito, but will here lead someplace else: What is the one thing I as a thinking person cannot disbelieve even from the stance of radical doubt? He christens one chapter after Hume and proposes, as a knowing radicalization of the latter’s arguments, that we think of the cosmos as “acausal.” In the final pages, Galileo steps forward as modern philosophy’s forgotten hero. His followers are given to saying that Meillassoux’s thinking marks out a totally new direction in the history of philosophy, but I don’t think anyone gets to make that kind of claim until they have first drawn up an exhaustive inventory of debts. At one point, he praises a philosopher publishing in the 1980s for having “written with a concision worthy of the philosophers of the seventeenth century.” That’s one way to get a bead on this book—that it resurrects the Grand Siècle as a term of praise. The movement now coalescing around Meillassoux—the one calling itself speculative realism—is a bid to get past post-structuralism by resurrecting an ante-Kantian, more or less baroque ontology, on the understanding that nearly all of European philosophy since the first Critique can be denounced as one long prelude to Derrida. There never was a “structuralism,” but only “pre-post-structuralism.”

Meillassoux, in sum, is trying to recover the Scientific Revolution and early Enlightenment, which wouldn’t be all that unusual, except he is trying to do this on radical philosophy’s behalf—trying, that is, to get intellectuals of the Left to make their peace with science again, as the better path to some of post-structuralism’s signature positions. His argument’s reliance on early science is to that extent instructive. One of the most appealing features of Meillassoux’s writing is that it restages something of the madness of natural philosophy before the age of positivism and the research grant; it retrieves, paragraph-wise, the sublimity and wonder of an immoderate knowledge. In 1712, Richard Blackmore published an epic called Creation, which you’ve almost certainly never heard of but which remained popular in Britain for several decades. That poem tells the story of the world’s awful making, before humanity’s arrival, and if you read even just its opening lines, you’ll see that this conception is premised on a rather pungent refusal of Virgil and hence on a wholesale refurbishing of the epic as genre: “No more of arms I sing.” Blackmore reclassifies what poets had only just recently been calling “heroic verse” as “vulgar”; the epic, it would seem, has degenerated into bellowing stage plays and popular romances and will have to learn from the astrophysicists if it is to regain its loft and dignity. Poets will have to accompany the natural philosophers as they set out “to see the full extent of nature” and to tally “unnumbered worlds.” The point is that there was lots of writing like this in the eighteenth century, and that it was aligned for the most part with the period’s republicans and pseudo-republicans and whatever else England had in those years instead of a Left. 
This means that the cosmic epic was to some extent a mutation of an early Puritan culture, a way of carrying into the eighteenth century earlier trends in radical Protestant writing, and especially the latter’s Judaizing or philo-Semitic strains. The idea here was that Hebrew poetry provided an alternative model to Greek and Roman poetry: a sublime, direct poetry of high emotion, of inspiration, ecstasy, and astonishment. The Creation is one of the things you could read if you wanted to figure out how ordinary people ever came to care about science—how science was made into something that could turn a person on—and what you’ll find in its pages is a then new aesthetic that is equal parts Longinus and Milton, or rather Longinus plus Moses plus Milton plus Newton, and not a Weberian or Purito-rationalist Newton, but a Newton supernal and thunder-charged, in which the Principia is made to yield science fiction. It is, finally, this writing that Meillassoux is channeling when he asks us—routinely—to contemplate the planet’s earliest, not-yet-human eons; when, like a boy-intellectual collecting philosophical trilobites, he demands that our minds be arrested by the fossil record or that all of modern European philosophy reconfigure itself to accommodate the dinosaurs. And it is the eighteenth-century epic’s penchant for firebolt apocalyptic that echoes in his descriptions of a cosmos beyond law:

Everything could actually collapse: from trees to stars, from stars to laws, from physical laws to logical laws; and this not by virtue of some superior law whereby everything is destined to perish, but by virtue of the absence of any superior law capable of preserving anything, no matter what, from perishing.

Meillassoux’s followers call this an idea that no-one has ever had before. The epic poets once called it Strife.

That so many readers have discovered new political energies in Meillassoux’s argument is perhaps hard to see, since the book contains absolutely nothing that would count, in any of the ordinary senses, as political thought. There are, it’s true, a few passages in which Meillassoux lets you know he thinks of himself as a committed intellectual: a (badly underdeveloped) account of ideology critique; the faint chiming, in one sentence, of The Communist Manifesto; a few pages in tribute to Badiou. With a little effort, though, the political openings can be teased out, and they are basically twofold: 1) Meillassoux says that thought’s most pressing task is to do justice to the possibility—or, indeed, to the archaic historical reality—of a planet stripped of its humans. On at least one occasion, he even uses, in English translation, the phrase “world without us.” For anyone looking to devise a deep ecology by non-Heideggerian means—and there are permanent incentives to reach positions with as little Heidegger as possible—Meillassoux’s thinking is bound to be attractive. The book is an entry, among many other such, in the competition to design the most attractive anti-humanism. 2) The antinomian language in the sentence last quoted—laws could collapse; there is no superior law—or, indeed, the very notion of a cosmos structured only by unnecessary laws—is no doubt what has drawn to this book those who would otherwise be reading Deleuze, since Meillassoux, like Deleuze, has designed an ontology to anarchist specifications, though he has done so, rather surprisingly, without Spinoza. Another world is possible wasn’t Marx’s slogan—it was Leibniz’s—except at this level, it has to be said, the book’s politics remain for all intents and purposes allegorical. Meillassoux’s argument operates at most as a peculiar, quasi-theological reassurance that if we set out to change the political and legal order of our nation-states, the universe will like it.

Maybe this is already enough information for us to see that After Finitude’s relationship to post-structuralism is actually quite complicated. Any brief description of the book is going to have to say that it is out to demolish German Idealism and post-structuralism and any other philosophy of discourse or mind. But if we take a second pass over After Finitude, we will have to conclude that far from flattening these latter, its chosen task is precisely to shore them up, to move anti-foundationalism itself onto sturdy ontological foundations. Meillassoux’s niftiest trick, the one that having mastered he compulsively performs, is the translating of post-structuralism’s over-familiar epistemological claims into fresh-sounding ontological ones. What readers of Foucault and Lyotard took to be claims about knowledge turn out to have been claims about Being all along, and it is through this device that Meillassoux will preserve what he finds most valuable in the radical philosophy of his parents’ generation: its anti-Hegelianism, its hard-Left anti-totalitarianism, its attack on doctrines of necessity, its counter-doctrine of contingency, its capacity for ideology critique.

Adorno was arguing as early as the mid-‘60s that thought needed to figure out some impossible way to think its other, which is the unthought, “objects open and naked,” the world out of our clutches. “The concept takes as its most pressing business everything it cannot reach.” Is it possible to devise “cognition on behalf of the non-conceptual”? This is the sense in which Meillassoux, far from breaking with post-structuralism and its cousins, is simply answering one of its central questions. It’s just that he does so in a way that any convinced Adornian or Left Heideggerian is going to find baffling. Cognition on behalf of the non-conceptual turns out to have been right in front of us all along—it is called science and math. Celestial mechanics has always been the better anti-humanism. A philosophical anarchism that has thrown its lot in with the geologists and not with the Situationists—that is the possibility for thought that After Finitude opens up. The book, indeed, sometimes seems to be borrowing some of Heidegger’s idiom of cosmic awe, but it separates this from the latter’s critique of science—such that biology and chemistry and physics can henceforth function as vehicles of ontological wonder, astonishment at the world made manifest. And with that idea there comes to an end almost a century’s worth of radical struggle against domination-through-knowledge, against bureaucracy, rule by experts, the New Class, technocracy, instrumental reason, and epistemological regimes. On the back cover of After Finitude, Bruno Latour says that Meillassoux promises to “liberate us from discourse,” but that’s not exactly right and may be exactly wrong. He wants rather to free us from having to think of discourse as a problem—precisely not to rally us against it, in the manner of Adorno and Foucault—but to license us to make our peace with, and so sink back into, it.

• • •

Lots of people will find good reasons to take this book seriously. It is, nonetheless, unconvincing on five or six fronts at once.

It is philosophically conniving. There are almost no empirical constraints placed on the argumentative enterprise of ontology. Nothing in everyday experience is ever going to suggest that one generalized account of all Being is right and another wrong, and this situation will inevitably grant the philosopher latitude. Ontologies will always be tailored to extra-philosophical considerations, any one of them elected only because a given thinker wants something to be true about the cosmos. Explanations of existence are all speculative and in that sense opportunistic. It is this opportunism we sense when we discover Meillassoux baldly massaging his sources. Here he is on p. 38: “Kant maintains that we can only describe the a priori forms of knowledge…, whereas Hegel insists that it is possible to deduce them.” Kant, we are being told, doesn’t think the categories are deducible. And then here’s Meillassoux on pp. 88 and 89: “the third type of response to Hume’s problem is Kant’s … objective deduction of the categories as elaborated in the Critique of Pure Reason.”

The leap from epistemology to ontology sometimes falls short. At one point, Meillassoux thinks he can get the better of post-structuralists like so: Imagine, he says, that an anti-foundationalist is talking to a Christian (about the afterlife, say). The Christian says: “After we die, the righteous among us will sit at the right hand of the Lord.” And the anti-foundationalist responds the way anti-foundationalists always respond: “Well, you could be right, but it could also be different.” For Meillassoux, that last clause is the ontologist’s opening. His task is now to convince the skeptic that “it could also be different” is not just a skeptical claim about what we can’t know—it is not an ignorance, but rather already an ontological position in its own right. What we know about the real cosmos, existing apart from thought, is that everything in it could also be different. And now suppose that the anti-foundationalist responds to the ontologist by just repeating the same sentence—again, because it’s really all the skeptic knows how to say: “Well, you could be right, but it could also be different.” Meillassoux at this point begins his end-zone dance. He has just claimed that Everything could be different, and the skeptic obviously can’t disagree with this by objecting that Everything could be different. The skeptic has been maneuvered round to agreeing with the ontologist’s position. But Meillassoux doesn’t yet have good reasons to triumph, because, quite simply, he is using “could be different” in two contrary senses, and he rather bafflingly thinks that their shared phrasing is enough to render them identical. He has simply routed his argument through a rigged formulation, one in which ontological claims and epistemological claims seem briefly to coincide. 
The skeptical, epistemological version of that sentence says: “Everything could be different from how I am thinking it.” And the ontological version says: “Everything could be different from how it really is now.” There may, in fact, occur real-world instances in which skeptics string words into ambiguous sentences that could mean either, and yet this will never indicate that they unwittingly or via logical compulsion mean the latter.

Meillassoux’s theory of language is lunatic. Another way of getting a bead on After Finitude would be to say that it is trying to shut down science studies; it wants to stop literary (and anthropological) types from reading the complicated utterances produced by science as writing (or discourse or culture). Meillassoux is bugged by anyone who reads scientific papers and gets interested in what is least scientific in them—anyone, that is, who attributes to astronomy or kinetics a political unconscious, as when one examines the great new systems devised during the seventeenth century and realizes that they all turned on new ways of understanding “laws” and “forces” and “powers.” Meillassoux’s own philosophy requires, as he puts it, “the belief that the realist meaning of [any utterance about the early history of the planet] is its ultimate meaning—that there is no other regime of meaning capable of deepening our understanding of it.” The problem is, of course, that it’s really easy to show that science writing does, in fact, contain an ideological-conceptual surcharge; that, like any other verbally intricate undertaking, it can’t help but borrow from several linguistic registers at once; and that there is always going to be some other “regime of meaning” at play in statements about strontium or the Mesozoic. Science studies, after all, possesses lots of evidence of a more or less empirical kind, and Meillassoux’s response is to object that this evidence concerns nothing “ultimate.” But then what would it mean for a sentence to have an “ultimate meaning” anyway? A meaning that outlasts its rivals? Or that defeats them in televised battle? What, then, is the time that governs meanings, such that some count as final even while the others are still around? And at what point do secondary meanings just disappear? What are the periods of a meaning’s rise and fall?
Meillassoux doesn’t possess the resources to answer any of those questions; nor, as best as I can tell, does he mean to try. The phrase “ultimate meaning” is not philosophically serious. It does little more than commit us to a blatant reductionism, commanding us to disregard any complexities and ambiguities that a linguistically attentive person would, upon reading Galileo, discover. We can even watch Meillassoux’s own language drift, such that “ultimate meaning” becomes, over the course of three pages, exclusive meaning. “Either [a scientific] statement has a realist sense, and only a realist sense, or it has no sense at all.” It exasperates Meillassoux that an unscientific language would so regularly worm its way into science writing; and it exasperates him, further, that English professors would take the trouble to point this language out. His response is to install a prohibition, the wholly unscientific injunction to treat scientific language as simpler than it is even when the data show otherwise. It is perhaps a special problem for Meillassoux that the ideological character of science writing is especially pronounced in the very period to which he is looking for intellectual salvation—the generations on either side of Newton, which were crammed with ontologies explicitly modeled on the political theology of the late Middle Ages—new scientific cosmologies, I mean, whose political dimensions were quite overt. And it is definitely a problem for Meillassoux that he has himself written a political ontology of roughly this kind—a cosmology made-to-order for the punks and the Bakuninites—since one of his opening moves is to disallow the very idea of such ontologies. After Finitude only has the implications its anarchist readership takes it to have if its language means more than it literally says, and Meillassoux himself insists that it can have no such meaning.

He poses as secular but is actually a kind of theologian. It is not just that Meillassoux is secular. He is pugnaciously secular or, if you prefer, actively anti-religious. He casually links Levinas with fanaticism and Muslim terror. He sticks up for what Adorno once called the totalitarianism of enlightenment, marveling at philosophy’s now vanished willingness to tell religious people that they’re stupid or at its determination to make even non-philosophers fight on its terms. And against our accustomed sense that liberalism is the spontaneous ideology of secular modernity, Meillassoux sees freedom of opinion instead as an outgrowth of the Counter-Reformation and Counter-Enlightenment. Liberalism, in other words, is how religion gets readmitted to the public sphere even once everyone involved has been forced to concede that it’s bunk. And yet for all that, Meillassoux has entirely underestimated how hard it is going to be to craft a consequent anti-humanism without having recourse to religious language. At the heart of After Finitude is a simple restatement of the religious mystic’s ecstatic demand that we “get out of ourselves” and thereby learn to “grasp the in-itself”; the book aches for an “outside which thought could explore with the legitimate feeling of being on foreign territory—of being entirely elsewhere.” In the place of God, Meillassoux has installed a principle he calls “hyper-Chaos,” to which, however, he then attaches all manner of conventional theological language, right down to the capital-C-of-adoration. Hyper-Chaos is an entity…

…for which nothing is or would seem to be impossible … capable of destroying both things and worlds, of bringing forth monstrous absurdities, yet also of never doing anything, of realizing every dream, but also every nightmare, of engendering random and frenetic transformations, or conversely, of producing a universe that remains motionless down to its ultimate recess, like a cloud bearing the fiercest storms, then the eeriest bright spells.

No-one reading that passage—even casually, even for the first time—is going to miss the predictable omnipotence language with which it begins: Chaos is the God of Might. Meillassoux himself acknowledges as much. What may be less apparent, though, is that this entire line of argument simply extends into the present the late medieval debate over whether God was constrained to create this particular universe, or whether he could have, at will, created another, and Meillassoux’s position in this sense resembles nothing so much as the orthodox Christian defense of miracles, theorizing a power that can, in defiance of its own quotidian regularities, “bring forth absurdities, engender transformations, cast bright spells.” There have been many different theories of contingency over the last generation, especially among philosophers of history. As a philosopheme, it has, in fact, become rather commonplace. Meillassoux is unusual in this regard only in that he has elevated contingency to the position of demiurge and so returned a full portion of metaphysics to a position that had until now been trying to get by without it. Such is the penalty after all for going back behind Kant, that you’ll have to stop your ears again against the singing of angels. Two generations before the three Critiques there stood Christian Wolff, whom Meillassoux does not name, but on whose system his metaphysics is modeled and who wrote, in the 1720s and ‘30s, that philosophy was “the study of the possible as possible.” Philosophy, in other words, is the one all-important branch of knowledge that does not study actuality. Each more circumscribed intellectual endeavor—biology, history, philology—studies what-now-is, but philosophy studies events and objects in our world only as a subset of the much vaster category of what-could-be. 
It tries, like some kind of interplanetary structuralism, to work out the entire system of possibilities—every hypothetical aggregate of objects or particles or substances that could combine without contradiction—and thereby reclassifies the universe we currently inhabit as just one unfolding outcome among many unseen others. Meillassoux, in this same spirit, asks us to imagine a cosmos of “open possibility, wherein no eventuality has any more reason to be realized than any other.” And this way of approaching actuality is what Wolff calls theology, which in this instance means not knowledge of God but God’s knowledge. Philosophy, for Wolff—as, by extension, for Meillassoux—is a way of transcending human knowledge in the direction of divine knowledge, when the latter is the science not just of our world but of all things that could ever be, what Hegel called “the thoughts had by God before the Creation”—sheer could-ness, vast and indistinct.

He misdescribes recent European philosophy and is thus unclear about his own place in it. Maybe this point is better made with reference to his supporters than to Meillassoux himself. Here’s how one of his closest allies explains his contribution:

With his term ‘correlationism,’ Meillassoux has already made a permanent contribution to the philosophical lexicon. The rapid adoption of this word, to the point that an intellectual movement has already assembled to combat the menace it describes suggests that ‘correlationism’ describes a pre-existent reality that was badly in need of a name. Whenever disputes arise in philosophy concerning realism and idealism, we immediately note the appearance of a third personage who dismisses both of these alternatives as solutions to a pseudo-problem. This figure is the correlationist, who holds that we can never think of the world without humans nor of humans without the world, but only of a primal correlation or rapport between the two.

As intellectual history, this is almost illiterate. We weren’t in need of a name, because the people who argue in terms of the-rapport-between-humans-and-world or subject-and-object were already called “Hegelians,” and the movement opposing them hasn’t just “sprung up,” because philosophers have been battling the Hegelians as long as there have been Hegelians to fight. Worse still is the notion, projected by Meillassoux himself, that all of European philosophy since Kant must be opposed for leading inexorably, shunt-like, to post-structuralism. This is just the melodrama to which radical philosophy is congenitally prone; the entire history of Western thought has to become a single, uninterrupted exercise in the one perhaps quite local error you would like to correct, the cost of which, in this instance, is that Meillassoux and Company have to turn every major European thinker into a second-rate idealist or vulgar Derridean and so end up glossing Wittgenstein and Heidegger and Sartre and various Marxists in ways that are tendentious to the point of unrecognizability. There are central components of Meillassoux’s project that philosophers have been attempting since the 1790s, and he occasionally gives the impression of not knowing that European philosophy has been trying for generations to get past dialectics or humanism or the philosophy of the subject or whatever else it is for which “correlationism” is simply a new term. Perhaps Meillassoux thinks that his contribution has been to show that Wittgenstein and Heidegger were more Hegelian than they themselves realized. But then this, too, seems more like a repetition than a new direction, since European philosophy has always had a propensity for auto-critique of precisely this kind. Auto-critique is in lots of ways its most fundamental move: One anti-humanist philosopher accuses another of having snuck in some humanist premise or another. 
One philosopher-against-the-subject accuses another of being secretly attached to theories of subjectivity. And so on. For Meillassoux to come around now and say that there are residues of Kant and Hegel all over the place in contemporary thought—well, sure: That’s just the sort of thing that European philosophers are always saying.

He is wrong about German idealism. Kant, Meillassoux says, is the one who deprived us all of the Great Outdoors, which accusation seems plausible … until you remember that bit about “the starry sky above me.” This is one more indication that Meillassoux is punching air, though the point matters more with reference to Hegel than to Kant. Hegel’s philosophy, after all, turns on a particular way of relating the history of the world: At first, human beings were just pinpricks of consciousness in a world not of their own making, mobile smudges of mind on an alien planet. But human activity gradually remade the world—it refashioned every glade and river valley—worked all the materials—to the point where there now remains nothing in the world that hasn’t to some degree been made subject to human desire and planning. The world has, in this sense, been all but comprehensively humanized; it is saturated with mind. What are we to say, then, when Meillassoux claims that no modern philosopher since Kant can even begin to deal with the existence of the world before humans; that they can’t even take up the question; that they have to duck it; that it is what will blow holes in their systems? Hegel not only has no trouble speaking of the pre-human planet; his historical philosophy downright presupposes it. The world didn’t use to be human; it is now thoroughgoingly so; the task of philosophy is to account for that change. And it is the great failing of Meillassoux’s book that, having elevated paleontology to the paradigmatic science, he can’t even begin to explain the transformation. You might ask yourself again whether Meillassoux’s account of science is more plausible than a Hegelian one. What, after all, happened when Europeans began devising modern science? What did science actually start doing? Was it or wasn’t it a rather important part of the ongoing process by which human beings subjected the non-human world to mind?
Meillassoux urges us to think of science as the philosophy of the non-human, positing as it does a world separable from thought, a planet independent of humanity, laws that don’t require our enforcing. But does science, in fact, bring that world about? Meillassoux hasn’t even begun to respond to those philosophers, like Adorno and Heidegger, who wanted to pry philosophy away from science, not because they were complacently encased in the thought-bubbles of discourse and subjectivity, but more nearly the opposite—because they thought science was the philosophy of the subject, or one important version of it, the very techno-thinking by which human being secures its final dominion over the non-human. Meillassoux, in this sense, is trying to theorize, not the science that actually entered into the world in the seventeenth century, but something else, an alternate modernity, one in which aletheia and science went hand in hand, a fully non-human science or science that humans didn’t control: gelassene Wissenschaft. But the genuinely materialist position is always going to be the one that takes seriously the effects of thought and discourse upon the world; the one that knows science itself to be a practice; the one that faces up to the realization that the concept of  “the non-human” can only ever be a device by which human beings do things to themselves and their surroundings. There is nothing real about a realism that offers itself only as a utopian counter-science, a communication from the pluriverse, a knowledge that presumes our non-existence and so requires, as bearer, some alternate cosmic intelligence that it would be simplest to call divinity.

(Thanks to Jason Adams, Chris Pye, and Anita Sokolsky. My understanding of Christian Wolff I take from Werner Schneiders’s “Deus est philosophus absolute summus: Über Christian Wolffs Philosophie und Philosophiebegriff.” The ally of Meillassoux’s that I quote is Graham Harman.)

 

Staying Alive, Part 2.3

 

 

Three Theses on Fright Night

 

THE LONG INTRO IS HERE.

THESIS #1 IS HERE.

THESIS #2 IS HERE.

 

•THESIS #3: John Travolta must die.

There are three bits of evidence we need to line up. First, the vampire in Fright Night is played by Chris Sarandon, given name Sarondonethes, which means he’s Greek, the darker side of white, not easily confused with Robert Redford or Owen Wilson. Second, the vampire ensnares the hero’s young girlfriend on the main floor of a throbbing disco, wading into the crowd to dance his gorgon’s boogaloo. Third, he is almost always wearing a man’s dress scarf, which generically marks him out as a swell and specifically, in 1985, seemed to insinuate the ultra-wide collars that had just gone out of style: an amplitude of color spreading out from the neck.

More precisely, it was the combination of scarf and popped collar that approximated the polyester wingspan of a few years back. And approximation is very much the point, since Chris Sarandon was plainly cast in Fright Night because he made a passable surrogate for John Travolta. One of the names for the demon-seducer who engrosses to himself all the women is “father,” but his other is “Tony Manero.” And you can, if you like, think of this figure—the Travolta vampire-dad—in terms of a precise historical moment: The entire movie takes shape in the headspace of a child of the late ‘70s and early ‘80s, someone who has grown up under the strains of “You Should Be Dancing” and “If I Can’t Have You” and who has therefore latched onto Vinnie Barbarino and Danny Zuko as the standard of the masculinity that he will never meet. All of Fright Night is premised on a bowel-shaking fear of John Travolta, the dreadful realization that no American man will ever have sex again until Travolta is destroyed. The struggle that Fright Night stages is in this sense something more than Oedipal; it isn’t just a conflict between an under-ripe masculinity and a fully adult one, since its junk Freudianism has been given such an obvious ethnic overlay: a whitebread masculinity squares off against sheerest Ionian potency. The movie’s adolescent fear of older men is intensified by a worry that a preppy, suburban kid—a 15-year-old in a tweed jacket!?—is never going to be able to compete with Travolta’s goombah swank. And this obviously brings us back to Valentino and the Lugosi Dracula. Something we said earlier we’ll want to repeat now as a general point: Not just that Lugosi tapped into a fear of Valentino, but that vampire movies as a genre periodically inculcate a fear of Italian actors.
And with this in mind, we can return to the clip from Ken Russell’s Valentino and gawp again at its unlikeliness: Nureyev is playing Valentino as Dracula, but Travolta is the scene’s third term, or, if you like, he is the proximate double to its devil-sheikh. Lugosi gives us Dracula + Valentino, and Chris Sarandon Dracula + Travolta, but only Nureyev delivers Dracula + Valentino + Travolta in one. The Russell biopic came out in October of 1977, Saturday Night Fever two months later. And Fright Night, at eight years’ remove, is Disco Demolition Night restaged as a vampire story: A Mediterranean fop dies so that his WASP neighbors will sleep better. A crate of records explodes on a baseball field.

Staying Alive, Part One

 

What I have to explain this time round is a little strange, and the road we’ll have to walk to get there is, I think, even stranger. I should note first that I’ve been thinking a lot about vampire movies, about which we might, after rooting around, be able to say something that no-one else has ever said. And if you are to understand this New Thing About Vampire Movies—except it’s not a New Thing; it’s an Old and Secret Thing—then you are going to need to watch a short clip from a movie you’ve almost certainly never heard of, and when you watch it, you’re not going to think that it could possibly hold the key to anything. The movie is so obscure that I could only find the relevant scene dubbed into Russian, and even that sentence, once written, requires two intensifying corrections: I didn’t find the clip so much as fluke upon it while chasing down some other hunch. And the movie isn’t exactly dubbed into anything. It features some Russian language-school dropout—one guy; alone; an unaided Petersburg grumble—spot-translating all the dialogue, with the original soundtrack still running audibly in the background, such that he has to shout. Running this clip will be like trying to watch television in the company of a mean drunk. Plus it’s not even a vampire movie, which is what you were just promised. This is all pretty discouraging, I realize, but you’ll see: The clip does weirdly speak.

The film is Ken Russell’s Valentino, as in Rudy, as in hair anointed with jelly and liniment. It’s a biopic released in 1977, and starring Rudolph Nureyev as Rudolph V. At issue is a short scene in which Nureyev takes Carol Kane out onto a ballroom floor to dance the tango. Give it sixty seconds, and you’ll have seen everything important:

A spare cinematic minute—and yet the clip demands our attention by putting on display three things at once, three things that are intertwined even outside of this movie but whose intertwining is here oddly visible, as though lifted up for our examination. I’ll just count them off.

#1) The first thing you’ll want to bear in mind is who Valentino was. The basic facts will do: that he was Hollywood’s first superstar; that he was considered the prettiest man of his generation; and that he wasn’t American—he was born in Italy. The important point is that nothing in this thumbnail is wholly innocuous. A lot of people were unnerved by Valentino. Each of those bare data could and did yield something uncanny. That he struck so many American women as desirable was unusual precisely because he was Italian. He was the first non-Anglo man, after the big wave of southern and eastern European immigration, that large numbers of Americans deigned to think of as beautiful. People remarked on that a lot; the term “Latin lover” was apparently coined for him, even though, given the racial ductility of early Hollywood, he was most famous for playing an Arab. And there was if anything even more handwringing about Valentino the lover than there was about Valentino the Latin. Lots of male commentators said he wasn’t manly enough to represent their kind: that he was a dandy; that he was too polished; that he looked too soft; that he was a screen David sculpted out of talcum and pomade—and this, not as compared to John Wayne or Clint Eastwood—but as compared to Douglas Fairbanks, who agreed not to wear tights only when offered pantaloons.

But then the resentment of the nation’s swashbucklers did nothing to dent Valentino’s popularity. We’ve become accustomed, I guess, to how overtly libidinal the culture of female fandom is; we don’t much pause to remark on the orgiastic qualities of Justin Bieber’s every public appearance, their improbable pre-teen staging of the Dionysian Mysteries, but it might help to pretend that you’ve never seen archival footage of the Beatles and are thus having to face the squalling girl-crowds for the first time. When Valentino died unexpectedly in 1926—he was 31—there were riots in the streets of New York City. Lady fans started smashing windows and battling the hundred or so cops who were called out to restore order. Reports went out that women were killing themselves. That someone also ordered four actors to dress up as Italian blackshirts and tromp around the Upper East Side, to make it seem as though Mussolini himself had personally sent over an honor guard in Valentino’s memory, begins to sound like one of the day’s more pedestrian details.

#2) This should all help explain what anybody who’s just watched the clip will already have noticed, which is that Ken Russell has plainly instructed Nureyev to play Valentino as though he were Dracula: He silences the band just by raising his magical, mesmeric hand, tearing the sound from the very air…

…he activates what seem to be laser eyes; he leads a transfixed woman away from her circle of helpless male guardians and onto the dance floor, where he strut-hunches over her, arcing his shoulders into an insinuated cape…

…he mimes various attacks upon her neck.

A complicated series of observations follows on from this: We’ll want to say that the figure of Valentino has been filtered back through Dracula, and we can feel the force of that revision if we point out that Valentino was actually half-French and generically Continental-looking—you would not pause if someone told you he was German—and seems to have been typecast in Moorish roles only on account of a Mediterranean accent that no silent-moviegoer would ever hear anyway. Nureyev, on the other hand, is sweltering and Slavic and basically looks way more vampiric than the man he’s playing ever did. This could all easily seem like Ken Russell’s inspiration—to recreate, for audiences in the 1970s, the lost effect of Valentino’s magnetism by wrapping it in the easily read conventions of the vampire movie, with which, after all, it was roughly contemporaneous. You make one icon of early Hollywood intelligible by translating him into a second. It would be like deciding to make a movie about Greta Garbo, but then scripting her as Steamboat Willie.

There’s clearly something to this. But if we adhere tenaciously to that line, what are we going to say about the following images?

There is no mistaking the issue. Tod Browning’s Dracula came out in 1931, just five years after the Sheikh’s passing, and the stage versions that the movie was based on were running throughout the 1920s, when the oversized head of Valentino was first smoldering greyly down upon the bodies of American women. We can say that Nureyev was, in 1977, playing Valentino as Dracula, but we have to set against this the observation that Lugosi was already, in 1931, playing Dracula as Valentino. This is itself strong evidence that people were once scared of Valentino, but then we already knew that people—some people—were scared of Valentino, because he flaunted that off-white and insufficiently rugged form of masculinity, and because American women were really into it—or they weren’t just into it—they seemed hypnotized and made freaky by it. So the 1977 movie makes Valentino look more like a vampire than the real man actually did, but that’s because someone involved in the production intuited that Valentino had been one of the inspirations for the screen vampire to begin with. Heartthrob could be the name of a horror movie.

This all matters, because it helps us specify the contribution of Lugosi’s Dracula to the vampire mythos. This isn’t as easy as it sounds. Nearly everything that makes the 1931 movie tick was taken over directly from Stoker’s 1897 novel, and for most purposes, you would be better off bypassing the movie and going straight to the source. The most efficient, if not perhaps the most perspicuous, way of naming Stoker’s achievement would be to say that he turned the vampire story into an ongoing referendum on the philosophy of Friedrich Nietzsche. For real: Nearly every vampire movie that has ever been made is in one way or another a meditation on Nietzscheanism, deliberating on the idea that some people, the rare ones, might yet overcome morality and thereby form a new caste—or race or even species—a breed that never even pauses to consider what ordinary people think of as right and wrong.  Here’s all the Nietzsche you need:

•The great epochs of our lives come when we gather the courage to reconceive our evils as what is best in us.

•Every exquisite person strives instinctively for a castle and a secrecy where he is rescued from the crowds, the many, the vast majority; where, as the exception, he can forget the norm called “human.”

•We think that harshness, violence, slavery, danger in the streets and in the heart, concealment, Stoicism, the art of seduction and experiment, and devilry of every sort; that everything evil, terrible, tyrannical, predatory, and snakelike in humanity serves just as well as its opposite to enhance the species of “man.”

Enhanced and predatory un-humans living in castles, exquisite people who have turned wickedness into a virtue or an accomplishment—if you’re in an intro philosophy class, and you’re trying to make sense of The Genealogy of Morals for the first time, the easiest way to get a handle on Nietzsche will be to realize that he wants to turn you into a vampire, which is superman’s nearest synonym, another word for Übermensch. Or the other way around now: Modern vampire stories work by mulishly literalizing Nietzsche’s language, making you stare the superman in the face on the expectation that you will be sent running by his anaconda grin.

This should all become clearer if we break Stoker’s Dracula back into his component parts. What are the several things that the classic vampire story wants you to be scared of?

•Stoker’s novel wants you to be scared of aristocracy. This is perhaps the most glaring point—that vampire stories are the one horror genre driven by naked class animus. The novel makes Dracula seem wiggy even before he starts doing anything supernatural, and it does this simply by making him lord of the manor. His comportment is excessively formal. He is, the first-time reader is surprised to note, seldom referred to as Dracula; the novel almost only ever calls him “the Count,” as though the key to understanding the character lay in his title. It is the very existence of the old-fashioned nobleman that has come to seem unnatural, which no doubt has something to do with his literally feeding upon the blood of the poor, peasant children stuffed into sacks. The movie updates all this, in some pleasingly goofy way, by putting the vampire in ’20s-era evening wear, the lost joke being that he never wears anything else, that he sports white tie everywhere—a tail-coat to play softball in, an opera cloak for when he’s bathing the dog—as though tuxedos were the only threads he owned. Dracula is the character who, having once put on the Ritz, can never again remove it. The vampire, we are licensed to conclude, is our most enduring image of aristocratic tyranny, generated by a paradigmatically liberal and middle-class fever-dream about the character of the old peerage, and anchored in the simple idea that it isn’t even safe to be in the same room as an aristocrat, so driven are such people to dominate others, so unwilling to tolerate a partner or co-equal. “Come here!”: A duke is the name for the kind of person who barks orders at free men as though they were his subordinates. That’s a routine observation, and it’s what ties Dracula back to the early Gothic novel or even to Richardson’s Pamela. 
But what’s peculiar all the same about Stoker’s novel is its timing, since by the 1890s, the traditional aristocracy in England was, if not exactly obsolete, then at least much weakened. The novel actually registers this historical turn, since the vampire famously lives not in a castle, but in the ruins of a castle, in the rubble of a superannuated class hierarchy, and—this really is an inspired flourish—he has no servants: he drives his own coach, carries his own bags. The Count is what they used to call come-down gentry, accustomed to apologizing to guests for serving them dinner on chipped porcelain. And the threat he poses is therefore not the menace of one who actually possesses power—this is how he is unlike Richardson’s Mr B or William Godwin’s Falkland—but of one who might yet regain it, the name for which regaining would be “reaction” or “counter-revolution.” Stoker’s Dracula is the greatest of right-wing horror stories, scared of foreigners and queer people and women and sex in general, but it nonetheless harbors a certain curdled Jacobinism, the exasperated sense that the European aristocracy should be dead but aren’t, and that the French Revolution is going to have to be staged over and over again.

So much for aristocracy. About those others…

•Stoker’s novel wants you to be scared of foreigners. This goes back to a simple plot point: Dracula sneaks into England from abroad—hides on a ship—slips past customs officers and curious locals. The vampire, in other words, is an illegal immigrant. You might object that this last is a late twentieth-century category, illicitly projected back onto the 1890s, and that’s true—but “stowaway” isn’t an anachronism, and neither is “smuggling.” What’s more, Stoker expressly aligns vampires, via their bats, with colonies and the Third World. Such creatures come from the “islands of the Western seas” or from South America. One character is pretty sure that this is no English bat! It “may be some wild specimen from the South of a more malignant species.” Perhaps most important, the screen Dracula is the figure who has single-handedly made life miserable for generations of Eastern European immigrants, who have had to endure endless rounds of “I vant … to sahk … your bludd!” in roughly the same way that teenaged Asian-American girls have been, since 1987, routinely subjected to obnoxious white boys quoting “Me so horny.”

•Stoker’s novel wants you to be scared of sex in general, though we can also make the point via the film: The first time we see Dracula attack a woman, all he really does is lean in for a kiss, though the street is dim and London-ish, and his victim is a flower-girl-for-which-read-prostitute, and these details inevitably summon overtones of Jack the Ripper, especially if you think Jack was a gentleman or the Prince of Wales.

The point is extended when, later in the film, one weeping survivor uses rape language to describe her evening with the Count:

Survivor: After what’s happened, I can’t…

Fiancé: What’s happened? What’s happened?!

Survivor: I can’t bear to tell you. I can’t.

At this point we need to make a careful distinction. Those scenes both trigger images of sexual violence. And yet one of the vampire story’s more remarkable features is that it communicates a fear of sex even when that violence is largely removed. Indeed, an encompassing fear of sex—and not just of rape—is coded into some of the genre’s most basic conventions. Nothing in the entire history of the horror film is more iconic than the vampire bite, which, if you pause to think about it, is entirely peculiar: Imagine that vampire stories didn’t already exist … and now imagine trying to convince a Hollywood executive to greenlight your new movie about a creature who kills people by giving them hickeys, an honest-to-Christ Cuddle Monster, but scary, you promise him, enemy of scarves and turtlenecks. Or ask yourself for once why so many movies allow vampires to be repelled by garlic. That’s a simple extrapolation from the idea that if you eat too much spicy food—if you go to bed fetid, the reek of sofrito still on your ungargled breath—no-one will want to sleep with you.

But there’s more…

•Stoker’s novel wants you to be scared of sexual women in particular. There’s an underlying point here that is worth reviewing first: Most viewers think that vampires are foxy, which makes them really unlike other classic monsters. If that point is the least bit unclear to you, you might take a moment now to close your eyes and pretend briefly that you are making out with a zombie. But the most clarifying difference is the one we can draw between the vampire and the werewolf, both of whom are canonically shown perpetrating savage violence upon the bodies of women. What I’d like to bring into view is that both werewolf movies and vampire movies deviate from what is perhaps the most routine scenario in a horror movie—a rampaging monster lumbering after a panicked victim—but they deviate in opposite directions. Werewolf stories are the one horror genre that has a certain reluctance or regret or stop-me-before-I-kill-again shame built right into them. Slashers, who otherwise resemble werewolves, never wake up the next morning hating themselves for what they’ve done. No-one casts a chainsaw to one side in self-loathing. But in a werewolf movie, not even the monster is wholly willing. In a vampire movie, then, the point just gets flipped, in that not even the victim is wholly unwilling. Vampire victims collaborate in their own destruction, for the simple reason that men in capes have game. This means that certain types of utterly common horror sequences are largely excluded from the vampire film: People almost never flee from vampires, which means that the vampire flick is the horror subgenre least likely to borrow from action movies; most likely, in other words, to commit to a languid pacing—no chase scenes!—or rather, if a vampire movie does for once break out into a chase scene, you can be pretty sure it’s the vamp and not the victim who is on the run.

What we can now say is that this little myth about willing victims is most often told, in the vampire classics themselves, about women. The form's conviction that highborn men are predators is counterbalanced by its confidence that this is exactly what many women want—to be preyed upon. The he-vamp awakens the woman to sexual rapaciousness, and the audience is expected to find this creepy. The survivor does sob and say "I can't bear to tell you what happened," but she has also just said: "I feel wonderful. I've never felt better in my life." In Stoker, the woman who proves most susceptible to Dracula's advances is the one who has already asked, even before the vampire has made his move: "Why can't they let a girl marry three men, or as many as want her?" More important, the novel makes it clear that becoming a vampire is one good way of getting that wish granted. Once she turns, the sexual woman does indeed get all the men—every major male character in the novel willingly opens his veins to give her blood transfusions—she becomes a kind of sponge, allegorically loose, soaking up all this male donation: a "polyandrist," one of the men calls her. When the men, bearing whale-oil candles, go to visit her in her crypt, they "drop sperm in white patches" across the floor, like pornographic bread crumbs. They finally put her to rest by assaulting her as a group, standing in a circle while one of their number "drives deeper and deeper" into the "dint in [her] white flesh." In the novel's opening sections, three women stand over a young Englishman in the Carpathians: "He is young and strong. There are kisses for us all."

•Stoker’s novel wants you to be scared of deviant sex above all. One point can be made without qualification: All the vampires in the original Dracula are gender-benders. That this is true of those kiss-hungry Transylvaniennes should be immediately apparent, since it will be true of nearly any she-vamp—these lady-penetrators busting the jugular cherries of straight men.

The vampiress is how the very possibility of a certain rather sweeping gender reversal comes out into the open—becomes visible in everyday life, available for the contemplation of suburbanites and middle schoolers. She and her male victims are pop culture's only iconic image of pegging. In Stoker, the man "waits in languorous ecstasy" while he assesses for the first time the feeling of "hard dents" against his "super sensitive skin." The point will seem accordingly less clear with regard to Dracula himself, since a man-vamp sinking into a crumpled woman preserves orthodox sexual roles. That Dracula's manhood is nonetheless unstable discloses the intensity of the novel's preoccupation with sexual confusion: In one of the book's more striking scenes, its several heroes bust into the bedroom of a woman they've been guarding and find Dracula clasping her head to his naked breast, which he has just gashed open so that she can lap at his blood. The image is not only a riff on oral rape—though it is that, too: a forced blow job. It is also—and rather more literally—a breast feeding, a demonic nursing, with the vampire willing to set aside all his usual male roles in order to take up the position of the monstrous mother, with a chest that runs red and a child at his bosom struggling to be reborn.

So that’s a dense set of associations—aristocracy, foreigners, sex, women, and queer people—and the film does a reasonably good job of preserving this tissue of meaning, a much better job than, say, Whale’s Frankenstein does at protecting the many-sided allegory that had originally been built up around its monster. But the movie isn’t just a translation, because to those established associations it adds one of its own. The screen Dracula isn’t just an aristocratic holdover. The vampire is the movie star himself, and in all the famous images of Lugosi we see early film beginning to meditate on itself and on its own eerie power. Or perhaps it would be more accurate to say, not that Browning’s Dracula has simply added a new association to Stoker’s list, but that it has found an innovative way of encapsulating that list’s concerns. The Valentino vampire isn’t just a supplement to or replacement for the queer and foreign aristocrat; he is the queer and foreign aristocrat, issued in a new format. What we see in Dracula is film recoiling from its new modes of supercharged male charisma, and you can begin to make sense of Lugosi’s performance if you think of it in terms of any film set’s hierarchy of actors: Van Helsing kills Dracula; Edward Van Sloan, who you’ve never heard of, kills Bela Lugosi; a character actor kills the leading man on behalf of the drab, male masses for the overriding reason that the women who’ve come to the theater with them find him too dishy.

#3) So those are two of the things that the Nureyev clip intertwines: Valentino and vampires. The third thing has everything to do with Carol Kane’s hair.

There’s a real problem here. The movie has been careful to give Nureyev a tallowy comb-back; he would hardly be credible as Valentino without it. But what’s striking about his partner’s tresses is that they are so obviously of the 1970s. The movie, after all, is set in the 1920s, whose iconic hairstyles for women were all short—bobs and Dutch boys and such—but Carol Kane’s hair has been frizzed and teased into fiberglass—it is simultaneously long and fro-like, a headdress of cotton candy. For comparison…

Valentino with Natacha Rambova

The biopic dancer’s most unflapperish do, in other words, breaks the movie’s historical frame, anchoring the production in its own present of 1977 and allowing that decade to worm back into the Coolidge era. More precisely, it tends to transform the ballroom into a disco and the tango into a proto-Hustle. Look again at that shot of Carol Kane and especially at the lighting: One doesn’t typically think of the 1920s as spangly. What we can say now is that Nureyev isn’t just playing Valentino as a vampire—that idea, at least, we’ve been able to explain; he is playing Valentino as a disco vampire, and this is going to reopen the puzzle of the clip. We know that some people really hated disco, but was anybody actually scared of it? This brings us to another movie—the movie we actually need to be thinking about—which is 1985’s Fright Night. Disco, they once said, sucks.

PART 2 BEGINS HERE…

 

The New Way Forward in the Middle West

 

A few quick observations about Zowie Bowie’s Source Code, from earlier this year.

But first, the plot: A terrorist has just blown up a commuter train on the outskirts of Chicago, killing hundreds, and is headed downtown to hit Play on a dirty bomb, which will kill thousands more. Government scientists send a US soldier back in time—onto the train, ante-boom—and instruct him to identify the bomber. The soldier, however, is operating under two major constraints: First, he hasn’t exactly been teleported onto the train. He is, in fact, already dead; portions of his brain are being kept alive; and it’s only his consciousness that has been lobbed backwards into the day’s bad start. In order to conduct his investigation, therefore, he will have to occupy the body of some civilian already on the train; he will have to take as his avatar one of the attack’s imminent victims. Second, the government’s time-travel technology can only project him back eight minutes before the event, which interval he will have to relive over and over again until he can give the government a name: eight minutes—whoosh!—mass death—almost had it—and again, please—a fresh eight minutes are on the clock, like injury time….

 

•OBSERVATION #1:

The movie is set almost entirely in Chicago, and yet its plot is closely modeled on the invasions of Afghanistan and, especially, Iraq. That the detective-soldier is actually an Air Force helicopter pilot recently shot down by the Taliban is enough to establish that the movie has the war on terror on its mind. But it’s the soldier’s character arc—the transformation he has to undergo in the course of the film—that most powerfully channels the history of the past decade. What’s notable about Source Code—what makes it rather unlike an ordinary action movie—is that its hero keeps failing; he keeps letting the train blow up. The movie thinks it can provide an explanation for this, that it can make clear why an American soldier might be rather bad at stopping terrorists. Or rather, it thinks it can teach you—by teaching him—the difference between anti-terrorism and hapless, counterproductive bullying. At first, the soldier panics; he starts yelling at people; he engages in a little racial profiling; he throws a few punches and before long has drawn a gun on the other passengers. One onlooker asks: “You’re military? You spend a lot of time beating up civilians?” The turning point comes when the living officer running the mission from a government super-computer tells our undead hero: “This time try to get to know the other people on the train.” And from that point on, he just keeps ratcheting it down; stops confronting people; gets in nobody’s face; begins coolly collecting information; and finally, in one last triumphant replay of those endlessly fatal eight minutes, slips handcuffs onto the terrorist before anyone else on the train even knows they’re living amidst emergency. The movie, in other words, thinks it knows the right way to prevent a terrorist attack, and in this regard it simply mirrors David Petraeus, whose film this is. 
The soldier only succeeds, in other words, because halfway through he is given a new counterinsurgency manual, and the difference between hero-at-beginning-of-movie and hero-at-end-of-movie is meant to communicate the difference between Iraq in 2004 and Iraq in 2008. Source Code is, in sum, a Surge movie—it is, to my knowledge, the only Surge movie—with the New Way Forward staging itself in Illinois instead of Anbar, and with science-fiction conventions serving to communicate the panic and steep learning curve of the early occupation. The film’s hyper-repetitive structure is quite peculiar here. It could—and perhaps for a few minutes in the movie’s middle depths even does—convey the infernal quality of the war on terror, the way in which the “vigilance” to which we are enjoined is already a doom: One gets up every morning required again to avert Armageddon. But that’s not really Source Code’s vibe. Repetition in this movie soon stops seeming demonic and becomes instead the medium for learning and self-improvement—this is more somber Groundhog Day than it is trashy Sisyphus—and the film’s understanding of recurrence as basically harmless gets at the first of its interlinked fantasies, which is that the US should be able, at no cost, to keep trying to round up the terrorists until it gets it right. The movie to that extent signs on to the central myth of the Surge, which is that it was empire’s magic do-over in Iraq, a geopolitical mulligan.

 

•OBSERVATION #2:

That first point requires that we read Chicago as Baghdad in disguise, but if we instead take the movie’s North American setting at face value, then the movie’s politics become somewhat harder to parse. This difficulty goes back to the military-civilian mish-mash that is at the story’s core: The US soldier has requisitioned the body of some suburban schoolteacher—deputized the dead schmo—drafted his virtual corpse into war without end. Like any such in-between or crossbred figure, this character can be described in two contradictory ways at once, such that Source Code is simultaneously a story about a military guy becoming less militarized and a story about a civilian conscripted into special ops without his even knowing it. At the end of the movie, the soldier, having just arrested the madman and saved morning drive-time, gets to stay in his host body; he just skips off into the city with a pretty girl. At that level, the movie is an innocuous fairy tale about undoing some of the damage the US government is inflicting on a generation—not just giving a soldier his discharge papers and sending him honorably back into street life—but unkilling him, making stupid amends. But the equal-and-opposite story of the civilian who can suddenly break up terror plots sponsors a rather different fantasy, bespeaking the desire for a less obtrusive war on terror, a war less punishing to the Iraqis and the Afghans, and kinder to Americans, as well—a war on terror without full body scanners at airports or the kind of heavy police presence that makes even white people nervous. In this sense, the movie gets us to wish that the war on terror were even more covert than it already is—that it were all undercover—its representative figure the plainclothes air marshal, the old-fashioned name for whom is Secret Police.
Let me repeat a sentence I’ve already written: At the end of the movie, the soldier gets to stay in his host body, which means that the schoolteacher never gets his person back, and Source Code’s happy ending requires not that civilian life be rescued, but that it be negated.

 

•OBSERVATION #3:

Even by the low standards of Hollywood sci-fi, the movie’s fake science is notably addled and underexplained. Worse, having already committed to bushwa in its first act, it just ups and changes the rules in the last ten minutes, which I generally imagine is the one thing that a science-fiction screenwriter has got to promise you he’s not going to do. The audience has been told throughout the movie that the hero cannot change history; he is not really in the past; he has been inserted, rather, into a simulation built up from the memories of dead people; he can therefore only retrieve information; he will never actually save the train. But then in the last ten minutes we discover that each simulation has created an alternate universe after all, and the viewer has had the good fortune to arrive at last in the lone scenario in which every American gets to work on time. That’s feeble, to be sure, and irritating, but there’s something remarkable about it all the same. The single most striking thing about Source Code is that it brings to bear all the dopey arcana of cut-rate science fiction—the full arsenal of time-travel pataphysics and pop Leibniz—in order to generate … the world we already live in. It has maneuvered American normalcy—the AM commute, a commonplace Tuesday, just another trek to the office—into the position of the bizarro world or utopia you might otherwise have expected. The movie’s happy ending feels entirely rote, yeah, until, that is, you realize that it exists only in ontological brackets. By the time Source Code finishes, the Midwestern everyday—the one in which trains don’t blow into the sky—has become thinkable only as a science-fiction scenario, a bit of extravagant speculation. It has shriveled down to the implausible thing that a genre movie must scramble unconvincingly to achieve.

Tarantino, Nazis, and Movies That Can Kill You – Part 2

PART 1 IS HERE

Again, if you want to make sense of Inglourious Basterds, the questions are three: 1) Why take the triumphalist American history of WWII and make it even more triumphalist? 2) Why channel our perceptions of the 1940s via the 1970s? 3) And why commit mass murder upon the audience?

Here are some answers.

Tarantino is on record as saying that Inglourious Basterds is his “bunch-of-guys-on-a-mission film”—which would mean that it’s a version of The Dirty Dozen or The Guns of Navarone. Like almost everything else that Tarantino says in interviews, I think that sentence is a lie or a trick, which should become clear if you pause to consider how uninterested the movie is in the Basterds as Nazi hunters; we see them fighting Nazis almost not at all. In fact the Shosanna plot is entirely separate from the Basterds plot and commands our attention every bit as intently. I’d like to say this isn’t really a men-on-a-mission movie; this is first and foremost a revenge movie; and you might say Why can’t it be both?—and yeah, sure, it’s both, but Tarantino has also decided to make nearly all the Basterds Jewish, which means that the revenge framework actually spills over from the Shosanna plot and colonizes the mission plot, too. It’s like the revenge movie is sucking the war movie into its field of gravity. Revenge is the common term that unites the two separate plots. Plus we know that Tarantino is deeply engaged with revenge movies, which were a staple of the ‘70s grindhouse circuit: Last House on the Left, Death Wish, Thriller: En Grym Film, I Spit On Your Grave, movies like that. Tarantino, in fact, has already made an epic revenge movie—that’s Kill Bill—so we can’t be all that surprised to see him returning to the form here.

OK—but if it’s a revenge movie, it’s an unusual one, because it has that oddly doubled narrative—not just one, but two revenge plots, unspooling side by side, and eventually converging, though without either revenge-party ever knowing about the other. And what you think is at stake in the revenge plot will depend in large part on whether you decide to emphasize the Basterds or Shosanna. So ask yourself which agent of revenge your heart favors.

If you emphasize the Basterds, then what really jumps out in the movie is the image of the tough-guy Jew. There’s a word that is common in Hebrew slang—and that Hebrew has bequeathed to Israeli English—and that’s frier, which means something like “pushover” or “sucker”—and it’s become one of the most distinctive Israeli insults. Nobody in Israel wants to be a frier; nobody wants to be a pushover. My Israeli friends boast proudly that the country has the world’s highest incidence of fatal car crashes—and I don’t know if that’s true—but I do know that my friends brag about it, which tells me all I need to know—and the explanation they always give is that no Israeli in a car will ever back down, as in: yield the right of way. So all I want to say is that testosterone has become a very big deal in some corners of modern Jewish culture, for reasons that are not hard to reconstruct, and you could think of Inglourious Basterds as playing into this, by projecting an IDF-style masculinity back into the 1940s. And this curious notion obviously goes back to one of the classic, nagging questions in the historiography of the Second World War: Why didn’t European Jews resist the fascists in larger numbers? If Inglourious Basterds generates a compensatory fantasy, it is surely here; it’s not fantasizing about Americans winning the war; it’s fantasizing about Jews winning the war; and this is a fantasy it shares, roughly, with other tough-Jew movies, like Defiance, which features Daniel Craig as the Bärenjude. Those movies ask the question: What if the Warsaw Ghetto Uprising had spread? Or: What if there had already been a Mossad to counteract the SS?

Here’s the thing: If we focus instead on Shosanna, the movie will look rather different. Shosanna of course is also Jewish and also tough, so we can to some extent just fold her into that last point. But only to some extent. Why? Because the image of Eli Roth one-handing a baseball bat is obviously an image of Jewish machismo, but the image of a burning movie theater is not.

What I mean is that Shosanna’s method of taking revenge is so different from the Basterds’ that it raises some new issues for us to think about. The blazing screen does not trigger the same set of real-world associations. Shosanna gets her revenge through film: She makes a movie passing judgment on the fascists, whom she then immolates in the flames of burning nitrate reels. So it’s not just that we see a filmmaker killing Nazis; it’s as though film itself were able to strike fascists dead. There are, I think, two different ways of clarifying what Tarantino is up to here.

1) One way to understand the film Shosanna makes and that we eventually see is as Tarantino’s homage to postwar French cinema—and to the kind of anti-fascist film that people like Buñuel were making even before the war. She makes a guerilla film, on the cheap: a technically rough, experimental, low-budget and anti-fascist film. It’s as though Tarantino were trying to engineer a history in which Buñuel never left for Mexico, or trying to backdate Godard by about fifteen years. The movie literally stages a showdown between fascist film and the anti-fascist film of the postwar Left. And this alone licenses us to say that Tarantino is deeply invested in the possibility of anti-fascist film. He has just given us, as hero, an anti-fascist director. Now would be the moment to point out that he and his associates often seem to think that trash cinema is the continuation of anti-fascist film. If you’ve seen Robert Rodriguez’s Machete—or even just the fake trailer for the non-existent ‘70s drive-in movie that was the movie’s original incarnation—the point will not be lost on you: An army of illegal immigrants rises up against white bosses and politicians by repurposing as weapons the garden tools of a day laborer.

There’s plenty of precedent for this: One of the key blaxploitation movies is this film from 1976 called Brotherhood of Death, which is about a group of black Vietnam vets who return to the US and start using what the army taught them to fight the Klan. So we know that Tarantino and Rodriguez are fixated on grindhouse, but what they’re too cool to say out loud is that they basically think of grindhouse as a people’s cinema—crude and insurgent—a precious collection of movies about black people taking out the Klan and women turning the knife back against the men who attack them and kung fu masters sticking up for Native Americans.

2) What I’m saying, basically, is that Quentin Tarantino is our Woody Guthrie; he is the Woody Guthrie of mondo and the midnight movie. That is not a joke. The most famous picture of Woody Guthrie gives the viewer a clear look at the folk-singer’s guitar, across which is scrawled: “This machine kills fascists.”

We need to think hard about the fantasy that is communicated by that sentence—because we’re trying to make sense of this image—

—and that sentence provides the second important clarification. Woody Guthrie didn’t just want to sing about justice; he didn’t just want to “inspire his listeners” or get them to raise their voices in the spirit of peace or whatever it is that we usually think folk singers do; he was trying to imagine a music so powerful that it would actually bring justice into the world; he wanted to strum justice into existence; wanted an art that wouldn’t just be in the service of revolution, but that would itself be the completed revolutionary act. And that’s exactly what Tarantino gives us at the end of the movie: “This movie screen kills fascists.” That fantasy—the fantasy of a fully revolutionary art—turns out to be very old. As early as the 1590s, some English poets were trying to write plays that not only depicted revenge, but actually achieved it; they were trying to imagine plays that could actually kill corrupt courtiers and oppressive princes, as though blank verse could actually draw blood. Or if we flash-forward to 1969, we will find Amiri Baraka writing these lines, in a poem called “Black Arts”:

 

We want ‘poems that kill.’

Assassin poems, Poems that shoot

guns. Poems that wrestle cops into alleys

and take their weapons leaving them dead.

 

What we can say now is that Tarantino is paying homage to the history of anti-fascist film; and he is also trying to imagine a movie that could not only describe justice but actually achieve it. And of course, we need to put those points together and say that he is trying to imagine the perfect anti-fascist film—a film so righteously anti-fascist that it literally levels any fascist who wanders into its projected light; a film that fascists cannot watch; a film that turns fascists to dust. So maybe now we can begin to explain why Tarantino has rewritten the history of 1944. Inglourious Basterds wants to give credit for the victory in World War II to someone other than the US and Soviet armies; to nominate, as the virtual heroes of some secret history, badass Jews and cinema itself. It’s an extraordinary idea.

…except I think that’s all wrong. None of what I’ve just written actually works; or rather, the movie does in fact put in play the two fantasies I’ve been describing—the fantasy of a muscular Judaism and the fantasy of the perfect anti-fascist film—but then it takes them back—or at least makes them harder to occupy. First it gets us to share those fantasies and then it starts calling the fantasies into question. There are two good reasons to think this.

The first I will mention only briefly and ask you to think about on your own time. One of the plain ways we have to describe who Shosanna is and what she does in this movie is to say that she is a suicide bomber. If you want to get fancy, you will say that she is a twentieth-century Samson, pulling the roof down on the heads of the Jews’ celebrating enemies, but if you go back and read the Samson story, you’ll be forced to conclude before long that he, too, was a suicide bomber, so it’s really the same point anyway. At that point we will recall that there was a bomb attack on a movie theater in northern India in 2007; another in Mumbai during the wave of coordinated attacks in 2008; an especially bad movie theater bombing in Algeria in 1998; and so on. The movie undoubtedly produces an image of a heroic Judaism, but only at the cost of letting it mutate visibly into one of its putative opposites, which is the Muslim terrorist.

That’s one of the big surprises hidden away in the movie’s fantasies. The second is easiest to communicate through a series of paired images:

1)

2)

“You know something, Utivich? I think this might just be my masterpiece.”

3)

Here’s my gloss on that sequence. 1) We see a Nazi soldier, shot from below, mowing down an improbable number of the gathered enemy. Then we see an American soldier doing the same thing—and in a similar shot. 2) We see an American soldier mutilating an enemy officer and calling it his masterpiece; and we see Hitler telling Goebbels that he has made his masterpiece. 3) We see a fascist turn to the camera in black-and-white and address the audience directly, speaking English for the first time. And then we see the anti-fascist turn to the camera in black-and-white and address the audience directly, speaking English for the first time. We can see what this adds up to. Tarantino has built in unmistakable visual rhymes between the fascist movie and its putatively anti-fascist alternatives. Just to be clear: There are three movies in play here—the movie we are watching, Tarantino’s movie; the fascist movie; and Shosanna’s anti-fascist movie. So two anti-fascist movies and a fascist movie. And the point is that each of the two anti-fascist movies plainly, demonstrably resembles the fascist movie. Everything in the movie starts bleeding into fascism. Two more pairings, to coax over the disbelieving:

4)

An American soldier carves a swastika with a Bowie knife.

A German soldier carves a swastika with a Bowie knife.

5)

“Our battle plan will be that of an Apache resistance.”

But of course what’s true in miniature is also true globally: The fascists are watching a patriotic war movie about the grotesquely exaggerated exploits of a national hero. And you can’t even get that sentence out of your mouth without realizing that, yes, we too have been watching a patriotic war movie about the grotesquely exaggerated exploits of our national heroes. The anti-fascist movie we thought we were watching outs itself as fascism’s secret twin. There’s a lot to say here, but the short version is that I think we are in the presence of a filmmaker losing his confidence in grindhouse as a people’s cinema and trying to find a way to make trash cinema yield a critique of itself instead. This all comes down to the audience: What I find most striking about the shots of the audience in this movie is how attentive they are to the immediate effects of screen violence upon a group of viewers. Let me put it this way: I saw the movie twice in a theater, and each time I saw it, when the movie screen went up in flames, someone in the room clapped—not a full-palmed ovation, just three fingers of one hand in the heel of the other, the quick little rat-a-tat of a person overcome by excitement. But then of course Inglourious Basterds, in four or five different shots, shows a movie audience of fascists whoop-whooping to a blood orgy. Let me come at it from another angle. In the movie, we see one audience member laughing. I’m guessing many people were laughing when you saw the movie; you might have laughed yourself. This gets at something important, because as long as Tarantino has been making movies, high-minded critics have fretted that he makes violence entirely too pleasurable: Michael Madsen slices off a man’s ear, and the audience are bopping in their seats because “Stuck in the Middle With You” is chiming on the soundtrack. You grin as Bruce Willis trades up from hammer to baseball bat to chainsaw to samurai sword. 
The only movie I have ever walked out on because of the audience was the Coen brothers’ Blood Simple—close cousin to Reservoir Dogs or Pulp Fiction—and I left it because the rest of my row was cracking up while Dan Hedaya was getting buried alive, shrieking keen through mouthfuls of dirt. So how dare anyone make death funny? You have to imagine that Tarantino has always shrugged off that accusation; you can call up YouTube videos of him shrugging it off in interviews—except now he has conceded it. And we know he has conceded it because here’s the one person we see laughing at the violence:

There is only one person laughing, and it is mother-loving Hitler. That is the sight of a filmmaker profoundly alienated from his own fans, wigging out at the ability of the movies he most loves to produce in us a quasi-fascist joy in violence. So why does Tarantino hate us so much? He hates us for liking his movies the way we do; he hates us because he can so easily bring us round to enjoying the sight of people being gathered into a closed space so that they can be exterminated. He hates you for how easily you can be pushed into the Nazi position, as long as the people getting killed are themselves Nazis. He hates you because you are the fascist and you don’t even know it. And he proposes the self-consuming grindhouse solution to this grindhouse dilemma, which is that people like you have to die. You will uphold your death sentence with your applause.


Tarantino, Nazis, and Movies That Can Kill You – Part 1

I think I can show that Inglourious Basterds is not really a revenge movie, which, if you've seen the movie — well, you're not going to believe me. It's an implausible point, hard to make stick — and I'd rather start easy. So maybe I'll just ask a few questions about the film, and then try to answer them, though the questions may really be the hard part: it will be harder, I think, to get the questions right than to get the answers right. Basterds is so diabolically entertaining that a person could easily overlook how complicated a thing it really is. So I'm thinking that if we can just name the movie's complications—if we can lift out its puzzles—the answers might start taking care of themselves.

My questions are three.

First question: Is Inglourious Basterds a historical movie? Is it a period piece? …or not? In some sense, yes, plainly, of course it is. It takes place at a specified moment in history—1944; the story unfolds against the backdrop of a major world event—World War II; it transforms real historical personages into minor fictional characters—Hitler, Goebbels, and the like—and it freely intermixes these “real people” with characters of its own invention. Those are the hallmarks of historical fiction in the mode of Walter Scott or Tolstoy. Scott’s Waverley features the real Scottish prince who, in the middle of the C18, tried to seize the throne of England and Scotland. War and Peace, in turn, actually has Napoleon as a character—a fairly central character, even, at least for part of the novel.

But there’s an obvious problem with this comparison, which is that Tarantino’s movie completely rewrites the history it has chosen to recount. And I can already hear the English professors amidst whom I work murmuring: But wait, historical fiction always, in myriad subtle ways, rewrites the history that it recounts. And they’re right. But Inglourious Basterds is not subtle about this; it does not even pretend to historical insight. It gleefully concocts an alternate history, in a manner that is impossible to overlook. In case anyone has forgotten: American Jews did not storm the Nazi high command and gun Hitler down in an act of heroic retribution. This is not a historical fiction in the usual sense, but rather a kind of fantasia or historical reverie—and the movie makes no effort to hide this. Not even in Tolstoy does Napoleon keep hold of Moscow.

But then this is where things really get strange. So the movie is a flight of fancy on a historical subject. OK; I think I can take that on board, because I’ve seen it before. In science-fiction circles, alternate histories have become a genre in their own right: What would England look like in the C20 if it had stayed Catholic—if, that is, there had never been a Protestant Church of England? What would the world look like today if Europeans had been wiped out in the fourteenth century by the Black Death?—a world without white people; I’ve always rather liked that one. Or closest to the day’s concerns: What would the US look like now if Hitler had never been defeated? Those books all exist and lots more like them: Historical novels about histories that never happened. But then we need to think about which event the movie has chosen to rescript: It doctors the end of World War II, and if we’re going to think about that, then let us call to mind another obvious thing: America actually defeated the Germans in World War II; or rather the Allies did. And Americans defeat the Nazis in the movie, too, with some help from French resisters. It’s worth pausing to register how odd that is. I mean, it’s not like the movie has taken a tale of American failure or hesitation and turned it into an American triumph. If you try to imagine Inglourious Basterds as a Vietnam movie, you’ll begin to see what I mean. There was a period in the mid-‘80s when Hollywood started churning out movies—like Delta Force or the second Rambo joint—in which the US Army was granted some kind of magic do-over in South-East Asia. In Rambo, Sylvester Stallone actually speaks the question: “Do we get to win this time?” And his commanding officer responds: “Yes, Rambo. You get to win this time.” What’s going on there isn’t especially hard to grasp. 
The historical record—or, if you prefer, popular historical pseudo-memory—contains, in reference to Vietnam, all sorts of ambivalence: feelings of failure, complicity, shame, and so on—and those feelings are a breeding ground for compensatory fantasies. But Tarantino has scripted an alternative to D-Day, of all things, which means he has replaced the most heroic moment in twentieth-century US history—a history that is already fully triumphalist, entirely devoid of ambivalence—with something even more triumphalist, but weirdly, ferociously so. He has scripted a fictional way of winning a war that the US won anyway. So what's going on? That's the first question.

I have a second question that also involves the ways this is not a straightforward historical movie. I want to be careful here: Historical fictions are always complicated, because they always require you to think at the same time about two different historical moments; if you're reading a historical novel, you need to think about when the book was set, but you also need to think about when the book was written. So take Toni Morrison's Beloved, which is the one recent historical novel you can count on someone having read. That book is set in the 1870s, but it was written in the 1980s. And a person might ask: What's the difference between a book written in the 1870s, like Thomas Hardy's Far From the Madding Crowd, and one set in the 1870s? That second book, Beloved, has a historical shadow dimension that the first book doesn't. Historical novels belong, as it were, to two historical moments at once. They are always implicitly putting two historical moments in front of you and asking you what connects them or what they share. So Beloved is a novel about America in the nineteenth century—it's about the aftermath of slavery—but it is also a novel of the 1980s. The 1870s and the 1980s get held up next to each other. If you want to understand Beloved, you have to understand both what Toni Morrison is saying about the past and what she is saying to her contemporaries. It's Reconstruction; and it's the Reagan era; and they're side by side. Same deal with Inglourious Basterds. Tarantino was talking about this movie as early as 2001; he wrote different versions of the screenplay across the last decade; two or three times, he announced he was going into production only to change his mind; and then he finally began filming in October 2008—a month before the Obama-McCain election, if you want to think of it that way. So this movie is about 1944, but we can also think of it as pretty much the last movie of the Bush administration.
And it’s a war movie—and we mustn’t lose sight of this—which recasts WWII as a settling of scores. And few viewers will have overlooked that it’s also a Western. The opening scene has a French farmer living in what you could mistake for the timber shack of a Montana frontiersman; there’s a shootout in a saloon where desperadoes are drinking whiskey; and so on. So who thinks about war as a Western? Six days after 9/11, George Bush stood up in front of the press corps and said: “I want justice. And there’s an old poster out West, I recall, that said: ‘Wanted, Dead or Alive.’”

We seem to be making headway. But the point I’m after is that Inglourious Basterds is actually more complicated than this. Historical fictions are always complicated, and this movie is more complicated still, not least because it is so obviously stitched together out of parts from other movies. Now we know that this is what Tarantino likes to do; he’s got a mash-up aesthetic. So that opening scene?—it’s borrowed from John Ford; and the scene where the French Jewish beauty and the young Nazi hero kill each other?—that’s ripped from a John Woo movie. Now again, movies and novels are always borrowing from other movies and novels, so maybe you’re thinking Big deal. But most movies and novels take some pains to cover their tracks; they don’t want you to spot their borrowings; they invite you to sink into the story, so that you can trick yourself into thinking that you are watching the past unfold organically before you. And Tarantino simply will not let you sink into the story. He does not hide his sources. The most obvious example is the moment when the movie introduces Hugo Stiglitz for the first time; suddenly the movie has a narrator, and the narrator is Sam Jackson, in voiceover, and with an underlay of boom chicka wawa, and every time you hear those pimped-out cadences, you get airlifted briefly out of 1944 and deposited in the mid-‘70s instead—so Sam Jackson, but Sam Jackson in his incarnation as latter-day soul brother.

That’s the single most intrusive moment in the movie; the visible incursion of another film genre into the World War II movie; but it’s hardly the only one. There’s the spaghetti Western soundtrack, which provides an ongoing temporal counterpoint to the action. Or there’s the title. I dutifully went and watched the 1978 Italian movie from which the title Inglourious Basterds has been filched only to discover that it bears absolutely no resemblance to the movie Tarantino made. The later film is in no way a remake of the earlier one. But then knowing that should help us see how programmatic Tarantino’s retro aesthetic is: He wants you to think his movie is a remake even when it isn’t a remake. In the event, the title is something like an all-purpose footnote; it doesn’t do much more than point you, broadly, to the entire body of late ‘60s and ‘70s-era trash movies that we all know Tarantino loves; and the music does the same thing; and so does Sam Jackson. Someone out there was disappointed to discover that Richard Roundtree wasn’t playing Hitler. So the movie doesn’t just whisk us back to 1944; and it doesn’t even really whisk us back to its alternate-reality 1944. Rather, it forces us to contemplate 1944 through a scrim of other movies, and I want us to think of this as an almost geological act of historical layering. This is how Inglourious Basterds is different from an ordinary historical fiction: There aren’t just two historical moments in play, there are at least three. Hence my second question: Why, in 2009, make a ‘70s-style movie about 1944?

One quick point to make, in passing, because it will be important to some people's experience of the movie: This might be a trash movie; and it might rewrite history in preposterous ways; but its use of historical detail is nonetheless meticulous. The movie's evident precision begins with its attention to language. It's a tri-lingual movie, and the German in the movie is impeccable—entirely unlike the Halt!-und-Schnell! that you get in Schindler's List and other graduates from the Hogan's Heroes School of War Cinema. And beyond that, the movie is full of historical references that aren't in the least offhand—references, I mean, that are knowing and apt. Tarantino works in references to early twentieth-century German children's literature; he briefly introduces, as a character, a cat named Emil Jannings, who was 1) a real German actor of the period; 2) the first person ever to win an Oscar; and 3) a prominent Nazi. And on and on. Now if you're in a position to appreciate these details—which basically means if you're German—the experience of the movie has got to be all the more bewildering. The puzzles I've been describing intensify, because in lots of ways the movie seems unusually committed to 1944—the movie's erudition, I mean, can't help but convey a certain respect for the movie's historical materials—and yet at the same time 1944 is constantly slipping from sight.

So that's the second question. My third question is easier to explain, though it's probably also the most important one. It all comes down to this image and to the scene that contains it:

We have to be clear about what's going on here. I can imagine a person being keyed up enough at the sweet sight of all those Nazis getting killed to overlook the second thing that's going on in the movie's climactic scenes—not a second event, but a second, equally plausible way of describing that one event: The movie is showing a Jewish woman wreaking vengeance upon Germans, but it is also showing a filmmaker killing her own audience. That's amazing; and serious thinking about the movie has got to start there. We need to think hard about the conditions under which some of us saw this movie. If you were lucky enough to see Inglourious Basterds during its original run—and so not on DVD—then you sat in a movie theater and watched people in a movie theater get wiped out. You might have been rooting for Shosanna or the Basterds—I know I was—but the people getting offed were, at the moment of their death, unmistakably like you. The aspect of the movie that most leaps out, I think, is its extraordinary hostility towards the audience. So my third question is: Why does Quentin Tarantino hate us so much?

So those are my three questions: 1) Why take the triumphalist American history of WWII and make it even more triumphalist? 2) Why channel our perceptions of the 1940s via the 1970s? 3) And why commit mass murder upon the audience? I will next attempt some answers.


…MORE TO COME…

Postmodernism Is Maybe After All A Historicism, Part 3

PART ONE IS HERE.

PART TWO IS HERE.

You’re going to understand De Palma’s Body Double better if you understand why Theodor Adorno liked Mahler. Somebody might have told you once that Adorno championed difficult art in general and atonal music in particular: string quartets made to skirl; the mathematically precise caterwaul of that half-stepping dozen, the series chromatic and uncanny. This isn’t exactly wrong, and it is the regular stuff of encyclopedia entries and intro classes, but it’s not exactly right either. For Adorno did not want an art entirely without subjectivity, which is what serial music sometimes suggests, a pure and as it were automatic music that would never suggest to anyone listening a link back to human utterance or expressiveness; that would never once yield a tune that someone, at least, would want to sing; a music, in fine, that was all system. What he was seeking, rather, was an art organized around antitheses, in which the conflict between subject and system would become audible; and he worried there were different ways an artwork could instead obliterate any sense we had of a living person struggling to come to speech within it, and he didn’t like any of these. Traditionalism was the obvious problem: the expert mimicry of older styles, the striking of already petrified poses, the chanting of sentences already spoken. Adorno said of Stravinsky that he was a U2 tribute band. But then a radical aesthetic can beat its own experimental path to the same deadly place, one he identified in the fully developed versions of twelve-tone music, in Webern, that is, and the late modernists of the ‘60s: serial music become oppressive because now wholly itself, without any concession to its historical rivals or predecessors, routinized and ascetic, sealed off inside its own rigors and formulae.

It is this rejection of Webern that should clarify Adorno’s championing of both Alban Berg and Gustav Mahler, which is to say both a composer conventionally classified as atonal and one typically reckoned not, the point being that each of these two absorbed into his music the opposition that musical history tries to construct only between them. Mahler and Berg can be conceptualized together as the Composers of the Break, neither tonal nor atonal, but first-one-and-then-the-other, by turns and in shifting ratios or proportions. If it’s misleading to say that Adorno was one of the great theorists of serial music, then that’s because it was this music-at-the-cusp—and not the purity of The Twelve—that he meant to recommend. At issue were compositions in which the conflict between entire aesthetic periods or modes of cultural production was openly theatricalized, and from this perspective, a composer’s starting point was irrelevant. You could fill your music with tunes, but let them curdle on occasion into noise; or, alternately, you could plunge your listeners into noise, but remind them occasionally of what tunes used to sound like. Either way, you would be staging a face-off between the entire history of human songfulness and some other, radically new aesthetic mode in which art no longer takes our pleasure as its aim and limit. And here, perhaps, is the most curious point: These last are scenarios in which either term, tonality or atonality, can count as subject and either as structure. You can say that the fine old tunes sustain us as subjects and that the mere math of the twelve-tone series recreates for us in the concert hall the experience of structure and rationalization. But you can just as plausibly say that those tunes are sedimented and mindless convention, at which point we might welcome dissonance as the opening out of the composer’s idiom—or simply as the afflicted yowl of anyone who wishes the radio would for once play something different.

We can’t make listeners choose between Mahler and Berg, because it is really easy to find Mahler in Berg. If we want to get back to Body Double, all we need to do, then, is generalize Adorno’s argument in a direction he probably wouldn’t have; to insist that antithesis, far from being the special achievement of these two Austrians, is the inevitable condition of most artworks, nearly all of which absorb into themselves piecewise the styles and conventions of various historical periods, social classes, and political tendencies. You can call this “liminal art” if you want, as long as you are prepared to add that threshold never becomes room. The struggles that a Gramscian reader thinks go on between artworks are usually reproduced one by one within those same works, which, if patiently read, will generate maps of the broader cultural fields of which they are also a part. What we can say now of postmodern art is that it is almost never wholly itself, that in order even to be recognized as postmodern, it will have to announce its own distinctiveness, marking itself off from its modernist counterparts, which it will have to after a fashion name and in naming preserve. The sentences regularly encountered in Jameson in which x artist is declared to be a postmodern revision of y modernist are thus oddly self-defeating. How often do you find yourself wanting to remind Jameson of how the dialectic works?—stammering, in this case, that one cannot name a break between two terms without simultaneously positing their continuity. If you want to lift out what was new in the movie Body Heat, having first spotted that it was, as Jameson has it, a “remake of James M. Cain’s Double Indemnity,” then you have yourself already conceded that the one was really, actually, finally a lot like the other. 
When we designate a work as “postmodern,” the superseded and modernist version thereof will persist, as its not-really negated shadow, and this shadow will, in turn, vitiate our sense of postmodernism as ahistorical. You can say that Body Double is a movie about other movies, but that very reliance on other films—prior films—will be a prompt to historical thinking. Postmodern Body Double preserves within itself the memory of movies that weren’t yet postmodern. But then this or something like it is going to be true of most really existing postmodernism, which we now have to reconceive as the arena of a certain fight—the showdown between the various modernisms and a postmodernism available only as ideal type.

This point is available, first, at the level of genre. There's a remarkable moment about an hour into Body Double when we witness our hero decide to take matters into his own hands, make his own inquiries about the murder, get to the bottom of things. The spectator-actor prepares himself to assume the detective functions of classic crime narrative. And at just that moment, when the movie seems ready at last to lead us back behind the spectacle—to, you know, strike the set—it instead amplifies the pageantry by launching into a full-fledged music video—for Frankie Goes To Hollywood's "Relax," complete with shots of lip-synching lead-singer Holly Johnson. What makes the sequence even more compelling is that the music video stands in for hardcore porn; it's the point in the movie when the hero is trying to infiltrate a porn set by pretending to be a hired stud, and De Palma is letting FGTH's lubricious, post-disco electro-march substitute for the obscenities he cannot show. The movie thereby directs our attention neither to porn nor to MTV, but to whatever it is rather that the two share—and thus to an entire set of new or newly prevalent video genres, characteristic of the last few decades and defined by their collective willingness to abandon narrative or at least scale it back to some barely-more-than-sequential minimum. From our own vantage, we would want to add, above and beyond the raunch and the Duran Duran, YouTube shorts, initially capped at ten minutes and now majestically extended to fifteen, and new-model movie trailers, which, following Jameson, deserve to be considered as a form in their own right, with their own conventions and feature-usurping pleasures.

This is what it would mean to talk about Body Double not as postmodern but as a conflict-ridden composite of postmodernism and the pop modernism of the detective story, which still thinks of itself as a device for disclosing hidden truths. The competing genres are entirely visible within the movie. And then the all-important point to be made in this regard is that the detective story more or less wins out, and not only because the movie ends with a literal unmasking, latex pulled from a face. The movie does indeed document the spectator’s inability to act, though even here its procedure is basically satirical, in a manner that depends on our memory of other heroes having once done something, a memory counterposed to which postmodernity will register not as a schizoid intensity but only as a vacuity. Check your Jameson: The movie’s parody isn’t all that blank, because its very genre provides a set of expectations against which its innovations will be judged. But even beyond this, Body Double seems dedicated to the idea that certain forms of agency remain available even in the society of the spectacle. The movie’s hero doubles himself—he is both spectator and actor—and then this pairing is itself in some sense doubled, because spectator and actor both come in a second version that we could call juridical or epistemological, and not just inactive or image-consuming. There has after all always been an affinity between the spectator and the detective, with the latter now understood as the-one-who-watches, the one who arrives on the crime scene like an apparition, pledged to leave no mark, to pollute no object, to minimize the observer effect by leaving the murder bed unmade. To this we need merely append the observation that performer-cops are also a familiar species, called “narcs” or “undercover agents,” and that acting, too, can be a form of information gathering. 
Body Double does to this extent grant its cipher a certain limited effectivity, within the bounds of acting and spectating, as gumshoe and mole. The once corrosive insight that the detective is like a voyeur is thus replaced by its opposite, a reminder that the detective functions might in fact survive, that epistemological and moral purpose can still be roused from within the position of the spectator.

This last is a point to be made at the level of genre as a whole. But we can make a few similar observations if we start calling out the titles of specific movies, or at least of one specific movie. For Body Double's relationship back to Rear Window also contains its own historical argument. De Palma updates his Hitchcock in one absolutely crucial way: In the later movie, the spectator-hero is meant to see the murder, which is to say that his spectatorship has been factored in in advance. We can think of the matter this way: Rear Window was still easily explained within the usual Enlightenment paradigm of truth and knowledge, the magical version of which is the usual stuff of crime stories, in which once the solution is announced and the murderer identified, everything automatically sets itself to right: culprits march themselves off to jail, widows and fatherless children return to their business suddenly unbereaved, &c. Hitchcock had some good questions to put to that paradigm, epistemological questions, for one—about whether one really knows what one thinks one knows—and also psychoanalytic questions—about the relationship between the knower and the peeper and hence about the sneaky way in which desire rides in on knowledge's back. De Palma, however, radicalizes this scenario by inventing a murderer who wants to be seen, a murderer, in other words, whose plan depends on the existence of a manipulated witness. The shift from Hitchcock to De Palma thus secretes a certain periodization, marking out the difference between a society in which the media exercise independent oversight functions over the government and other major actors, like corporations, and a society in which government and corporations have already reckoned the cameras into all their calculations and so incessantly stage themselves for the public, which means that watchdogs are called upon only to play an already scripted role.
Body Double is really and truly a meditation on that condition, but within the narrow parameters of the thriller.

This brings us to the big point: There was always something unresolved in Jameson’s postmodernism argument, and especially in his claim that postmodern culture tends to jettison historical thinking. It’s not just that narrative forms are never going to be able to revert back to some zero degree of history-less-ness, though that’s also true. The issue is rather that Jameson was making two claims that are finally rather hard to square with one another: that under names like “retro” and “vintage,” postmodernism revived the copycat historicism of the nineteenth-century art academy … and also that it wasn’t a historicism. The best chance you’ve got of making this argument work is by making it accusatory, because you have to be able to say that postmodern historicism isn’t really historical, that it is fake history, history reduced back to image or consumer good, just so many styles for the donning, as when the ‘50s mean Formica and the ‘70s Fiestaware. Sometimes that blow is going to land. But if you’re doing anything other than designing your kitchen—if you’re making a movie or writing a novel or metering out a poem—the citations you introduce will often be, not an aping farrago, but their own path to chronology, an exercise in temporal counterpoint or Ungleichzeitigkeit, a dozen arrows pointing us outside the present, and so a request that we resume the project of historical thinking only just terminated.