How the feminist internet utopia failed, and we ended up with speculative realism
A/S/L? With the quintessential chat room question, you’re asking: where are you, and what body are you in? It doesn’t matter if the answer is true—any answer lets you envision a person on the other end of the line. You just need to imagine that person in a body to believe you’re not talking to a computer.
In Alan Turing’s original Imitation Game, a computer tries to convince a person that it’s human. The computer sits behind a curtain and answers questions coming from a person on the other side through written notes. The test-taker doesn’t even know he or she is differentiating between a computer and a person—the questioner is trying to figure out whether he or she is talking to a woman or a man. If the computer can convince the questioner it’s a woman, it’s convincingly human, and therefore it can be called artificially intelligent. In this case it’s not the A or the L, but the S that proves the computer has a body.
The era of the text-based Internet in the ’80s and ’90s was a unique period of Turing-style, behind-the-curtain interaction.1 Rather than proving to each other that they were humans, users of the developing system had fantasies of transcending their bodies altogether—something akin to the out-of-body mind-travel in Neuromancer. Passing written notes back and forth through the curtain of the screen, they could invent and reinvent their A/S/Ls.
Through this cyber-body freedom, cyberfeminists anticipated finally moving beyond gender. Multiple User Domains (MUDs), Bulletin Board Systems (BBSs), Role Playing Games (RPGs) and various other interactive internet portals were to be populated by polymorphous men, women, transgendered animals, three-headed aliens, medieval warriors … identities with genders and sexualities of infinite type.
As the infrastructure of the net evolved into the ’90s, these virtual promises began to seem tangible. This was the short decade of posthumanism and its cyber-hopes. Anonymity and the ability to inhabit several invented selves online were going to un- de- or re- identify us—at least those of us with Internet access.
Minorities, especially women, were supposedly going to get a big cut of cyberspace. British philosopher Sadie Plant synopsized the mood in her 1993 essay “On the Matrix”:
“After decades of ambivalence towards technology, many feminists are now finding a wealth of new opportunities, spaces, and lines of thought amidst the new complexities of the ‘telecoms revolution’. The Internet promises women a network of lines on which to chatter, natter, work and play; virtuality brings a fluidity to identities which had once been fixed, and multimedia provides a new tactile environment in which women artists can find their space.”
Not only gender but race, hair color, disability—all hallmarks of social hierarchy and sexual appeal—could become, if not irrelevant, then distorted, upturned, obscured. Beginning in 1990, the popular MUD, LambdaMOO, became a forum for debate, discourse, and experimentation in the politics of virtual embodiment. It was a community of believers whose population reached 10,000.2 One player, Legba, expressed the communal optimism in 1994: “We exist in a world of pure communication, where looks don’t matter and only the best writers get laid.”
Multiple personalities and virtual rape
In 1995, transgender theorist Allucquère Rosanne Stone declared Multiple Personality Disorder the new contemporary condition. Usurping or compounding the schizophrenia of the early ’80s—characterized by temporal discontinuity, intense isolation, and disconnection—MPD was the condition of multiple, disconnected selves housed in one body. It had only recently been officially DSM-sanctioned as a mental disorder, in 1980. Stone and others thought the condition of fractured, online, networked identity had manifested as a physical disorder. Stone did not see the condition as a necessarily negative one.
Like any visitor in a chat room could, someone with MPD could have more than one (gendered) identity at once. The line between the virtual, imagined, constructed self and the embodied self was disintegrating—and exposing how tenuous the link between body and constructed identity has always been. The human behind the curtain could now be man, woman, and computer all at once. We had become posthuman.
The Turing Test of the ’90s was not a measure of artificial intelligence; it was the test of how real a virtual body can be in relation to the body sitting at the keyboard. Ultimate proof of embodiment apparently lies in bodily violation, and so the updated litmus test became: can a virtual self, one of multiple personalities, be physically violated like the one-body-one-self kind? More literally: can the posthuman body be raped? This question arose in many instances at the pivotal moment of the transition from the text- to image-based Internet, the time when bodily representation gained a new dimension online.
First, in 1990, a 27-year-old woman with Multiple Personality Disorder accused a man of raping her. The case went to trial in Oshkosh, Wisconsin. However, only the personality present when raped—a naive young girl named Franny—remembered the violation or could testify to it on the stand. Beyond a superficial legal debate about testimony from the mentally ill, the underlying issue for the jury was the acceptance or rejection of the concept of a body with many subjectivities. In this case, the rapist was convicted. A win for the posthuman.3
Second, in the early ’90s, a disabled, elderly woman named Julie who had appeared in chat rooms for years—making friends and admirers with her “thoughtful, caring” personality—was exposed as the invention of a middle-aged male psychiatrist. Upon discovering the true identity of “Julie,” one woman who had shared intimate secrets with her wrote: “I felt raped.” In this case the rape was one of identity deceit. Passing off fiction as reality was perceived as fraudulent and malicious rather than inventive; self had to be authentic; singular persona and body had to be one. The fact that the liar was a man and a psychiatrist didn’t help. Truth-telling is apparently more important when certain identities are involved. A loss for the posthuman.4
Third, in 1993, Julian Dibbell wrote “A Rape in Cyberspace,” in which he described a rape controversy in the LambdaMOO community. Through a “voodoo doll” subprogram, a character named Mr_Bungle was able to attribute “actions to other characters that their users did not actually write.” For example, Moondreamer was forced to type: “As if against her will, Moondreamer jabs a steak knife up her ass, causing immense joy. You hear Mr_Bungle laughing evilly in the distance.” The woman in Seattle behind one character who was possessed by Mr_Bungle’s voodoo doll claimed to be physically traumatized by the rape of her character.
In the case of Mr_Bungle and his “sadistic fantasies,” there was no attempt to separate fact from fiction; the rape was taken at face value by the LambdaMOO community as a literal fact, as the existence of the community itself was predicated on fiction. However, members were divided as to Mr_Bungle’s punishment. After heated discussion, it was decided that he be “eliminated,” or “toaded,” that is, not allowed to re-enter the conversation. This decision had high political stakes; given the opportunity to develop a new virtual world, does the need for virtual safety trump the need for free expression? Is virtual rape real enough to merit virtual capital punishment? Posthuman verdict unclear.
These cases show that the multiple-self/anonymity scenario has its drawbacks. If virtual bodies are to be taken seriously, real-world politics will enter into the equation. But of course, cyberculture had real-world political intent all along.
Dibbell’s Mr_Bungle article is tolerant of weirdness, but his optimism about radical online possibilities is cautious. He’s realistic. He doesn’t have any illusions about a utopian cyber-era to come. “The Bungle Affair raises questions that … demand a clear-eyed, sober, and unmystified consideration,” he writes. “It asks us to shut our ears for the time being to techno-utopian ecstasies and look without illusion upon the present possibilities for building, in the on-line spaces of this world, societies more decent and free than those mapped onto dirt and concrete and capital.” This isn’t naïveté—it’s optimism.
Lucid and realist perspectives like Dibbell’s are integral to a nuanced understanding of ’90s cyberculture, which was not entirely characterized by science fictions, escapist fantasies, or pseudo-rationalist phenomenology, as it often looks from our contemporary vantage point. The mistaken interpretation of cyberculture as voodoo mysticism accounts for much of its dismissal or disregard in both philosophical and pop cultural discourse today.
I’m not arguing for the continued political validity of posthumanism or cyberfeminism. They were inextricably tied to the technologies through which they developed, and they are decidedly over. Without nostalgia or sentimentality, I want to question that era’s optimism and predictive capacity, attend to its (failed?) legacy, and ask where it left us—where it left me—in my body and life online.
There was a reason posthumanism developed in tandem with the information superhighway. Virtuality still offered the possibility for new realities, not a replication of the real world with all its top-down management, predictable flows of capital, and prescribed social roles. Why does this optimism sound utopian or naïve today? It’s partially because the historical moment of posthumanism has been characterized as irrational, unrealistic, mystical, bodily, sentimental, weakly scientific. In short, the posthuman era became a girl.
In contrast, the philosophy of the 2010s is increasingly inclined towards objectivity and absolutism. The undercurrent of much contemporary theory is the belief, or desire for the belief, in objective reality: the notion that reality exists in and of itself without human interaction. It’s no longer created or constructed from scratch by disembodied, invented selves floating around in online space.
Though varied in approach and opinion, a lot of philosophers share this preoccupation with objective reality. The most well-known of the movements involved in this debate are famous because they have been labeled: Speculative Realism and Object-Oriented Ontology. There are rifts within the groups because in being labeled their ideas are reduced to a singularity; in the interest of avoiding such reduction I’ll use the fuzzy but catchall term “new realism” to refer to the broadening theoretical and cultural obsession with objectivity. The attitude I’m referring to is pervasive, and it’s shared by many who don’t frame it in philosophical terms.
At the heart of New Realism is the rejection of human existence as privileged over nonhuman objects. This model rejects human embodiment or body-based subjectivity—what speculative realist Quentin Meillassoux, for example, calls “correlationism,” or the human-centric view of the world. Multiple identity discourse and its non-rational qualities are Over. Post-Cartesian conceptions of space are dismissed. Subjectivity, a male (post-Internet) artist once told me, is for art students.5
Objectivity/subjectivity has been a central philosophical tension throughout history. The definitions of these dichotomized concepts have been constructed and deconstructed and mashed together in all possible ways. Explicit shifts in philosophy are expected and fine; no moral judgment on progress. But the pendulum often swings too far in one direction, obscuring its own implicit aims, and obscuring the very attitude it is swinging away from. Before plunging waist-deep back into the Real, we should probably recall that certain kinds of realism are historically male.
New realism’s rejection of subjectivity as such has the potential to be extremely … regressive. Its hypotheses lean on a lot of old realism—back to Heidegger, Kant, and Descartes—while largely skipping over theorists from a certain very recent yet inconvenient era. While debates about cyberculture were relatively explicit about the effects of philosophy on lived bodily experience and the politics at stake, new realism is not. Objectivity must have a political agenda, like subjectivity does—but by calling itself objective this agenda is obscured.
Has the human body become irrelevant, or is the topic generally avoided because it might expose new realism’s uncomfortable political side effects? Is it really incidental that mostly men are writing this stuff? If I were to believe that, I’d have to believe a priori that subjectivity is irrelevant. I’d also have to believe that cyberculture had succeeded in creating a post-gender world.
Architects and users
Comscore, Inc.’s 2010 report “Women on the Web” summarizes its findings under these headings: “Women ARE the digital mainstream”; “Social networking is central to women’s Internet experience”; “Divas drive the dollars”; “Boys will be boys, but on the web just like in the offline world, gender stereotypes only go so far”; “Women tweet like Venus, Men like Mars”; “The global is regional, the regional is local, and culture prevails.”
The way we use the Internet today generally reinforces normative gender roles (with minor yet notable exceptions). Despite the web’s potential as a platform for political organization and knowledge exchange, the numbers are clear in Comscore’s report: women command enormous online purchasing power, as per their traditional role as household consumers, and besides buying cosmetics, clothes, and coupons, we women spend more time than men on community-building websites trading social information. These activities are easily equated to the affective labor of reproducing subjectivities and family-based networks that second-wave feminists (in particular socialist feminists) were so intent on re-defining.
Cyberculture and second-wave feminism died around the same time. Their mutual demise was no coincidence. Second-wave feminism’s end can be attributed to any number of factors: neoliberal philosophy positing an equality based on free-market tendencies; its relegation to the discipline of gender studies; its reduction to simplistic identity politics; reaction against second-wave-feminist mothers; and the self-subverting political correctness problem. It was ultimately flattened by the drastic economic shift, begun in the ’70s, that was accelerated and finalized by Web 2.0. This was the development of the information economy, which bulldozed user-generated cyberculture by corporatizing the Internet, in turn converting participants into revenue-generators and nailing 1:1 identities back into place via social media profiles.
Of course, if we weren’t classified as “divas driving the dollars,” we wouldn’t be driving so many. N+1’s Issue 15 editorial paraphrases the tone of online journalism geared towards Women Everywhere, like The Atlantic’s articles “Marry Him!” or “Why Women Still Can’t Have it All”: “Listen up ladies, these articles say. We’re here to talk to you in a way that’s limited and denigrating.” N+1 points out that such a patronizing, paternal tenor brings in a lot of cash for print-publications-gone-online like The Atlantic. To the savvy marketing mind, women online are a wriggling mass of spenders whose anxieties are ready to be both stoked and placated by the people churning out news media. “Gently, like a good friend, The Atlantic tells women they can stop pretending to be feminists now.”
Many women and men alike are using, not building, the web. In a 2012 piece called “Turing Complete User” for the website Contemporary Home Computing, Olia Lialina argues for preserving the word “user” when talking about hardware and interfaces. “Computers are getting invisible,” she says, and we forget that we’re using a system largely designed for us, not by us. Marketing terminology focusing on user-friendly design prefers to call us “people”—and heaven forbid “consumers”—equal partners or associates in the experience, which we are not. Yet the fact that we are users does not mean all users are equal. The quintessential user envisioned and targeted by corporations is a woman.
The information economy is not synonymous with but is supportive of the “experience economy”—I’m not a user of products but a subject having immersive, easy, fluid, entertaining experiences. The awkward term “prosumer” that tries to re-infuse a sense of agency in the user may accurately describe the way products are geared in real-time towards overt consumer desires rather than prescribed for us in advance, but this in no way makes us equal partners in the production/consumption process. User input matters, but the consumer-as-producer model is a limited one, restricted and managed by the corporate platforms within which the produced material is disseminated.
Making YouTube videos is reproductive labor. It reproduces the cultural viewpoints instilled by primary producers, those who build platforms like YouTube and the marketing systems upon which it depends. Rather than focus on who produces cultural artifacts through online avenues I want to focus on who builds the infrastructure as a whole. Not who designs apps, but who constructs the programming language and hardware they run on.
Someone builds the architecture and infrastructure of the Internet, the vast shopping malls and living rooms and churches online that we (women) are shopping and chatting and praying in. These material laborers of the immaterial era are web designers and programmers. Most designers and programmers are men. To give a rough outline of the predictable numbers: a 2011 survey by A List Apart, a website for design professionals, found that 81.6 percent of web designers were men. In 2010-2011, the Computing Research Association reported that 87.3 percent of the total bachelor’s degrees in computer science, computer engineering, and information sciences awarded in the U.S. went to men. The traditional material/immaterial, male/female labor divide remains.
The environments we live in, whether virtual or actual, generate lived, bodily experiences. In homage to the ’90s, I propose a return to considering the virtual body in a locative sense, circuitously approaching subjective body-experience through architecture or place. A/S/L?
It’s only in the last decade that the construction of the Internet has become comparable to a top-down architectural model. Though computer research has always been financed by someone at the top (most early computing was supported by military money during wartime), mega-corporations have not always owned so much of the virtual floor space.
The early Internet was mapped out by a lot of non-professionalized users. As Lialina writes in the introduction to Digital Folklore, an overview of early net aesthetics and cultures: “Consider the way early amateur websites were made. As clumsy as they might appear to trained professionals, in terms of spreading the Internet’s architecture and culture, they were of huge importance … Users could easily write the code for their own web pages and were, by building their pages, literally building the Internet.” Most of us don’t have the tools to do this anymore. Successful front-end design obscures its processes and mechanisms so we can forget the device altogether and focus on texting or buying shoes or calling a taxi. This is not devious; it’s convenient.6
Forgetting the device makes it possible to forget that your online identity does not directly correlate with your physical one. Sure, you still “create” or curate yourself online, but now that the Internet is a visual arena with real-time access, your identity is no longer as amorphous or abstracted from reality as it once was. You represent yourself online using a variety of multimedia material: a complete, sensory, dynamic simulacrum of yourself that cannot instantly change or disappear according to a line of text on the screen.
Contemporary mass culture equates anonymity with secrecy or downright negative intent, not harmless experimentation. Who lies about who they are online? Pedophiles, scammers, hackers, bullies, Wikileaks. Anonymity has turned from thrilling to terrifying. 1:1 self-to-body ratio is a moral mandate. It’s no wonder that nailing down objective reality seems so attractive.
In retrospect, the cyberfeminist utopia was over before it began. User-generated MUDs and RPGs were mostly built and populated by men, and the roles these men played were often predictable stereotypes.7 Sociological research into early text-based systems suggests that developing online cultures were often male-dominated and heteronormative.8 From a structural point of view, a gender dichotomy is not surprising—the Internet itself is based on a system of binaries. Dualism, mutual exclusion, and absolutes are inherent in its structure. If architecture engenders subjectivities, this is no small fact. Plus: why wouldn’t the Internet directly reflect the culture that produced it?
In the 1980s, through the cyborg figure—a semi-ironic conglomeration of natural body and techno-prosthetics—Donna Haraway proposed radical acceptance of technological innovation in the face of the feminized technophobia that perpetuated the alignment of cultural advancement, innovation, technology, and militarization with masculinity. This was a strategy to prevent the reinforcement of traditional (dichotomous) gendered relationships before they were re-entrenched by emerging technologies. These relationships were 1:1. Science: Technology: Cultural advancement: Military: Objectivity: Realism: Man.
Science and rationalism are not inherently male or regressive. Yet the fact that the prominent new realist philosophers—as well as today’s web designers and programmers—happen to be white/male/western, along with the fact that body-based subjectivity has been thrown out with the cyberculture bathwater, seem to perpetuate this classic alignment.
New Realism focused on object-based absolutes is admittedly provoked by and continues to provoke the postmodern concern (fear) that nothing is real anymore, the very concept that cybertheorists reveled in. As post-Internet lifestyle publication DIS magazine puts it, we are living “in a world in which there is no ‘alternative’.” No alternative lifestyle, no alter-identity, no end to capitalism. No argument there. We operate within, not against. This lack of absolute alternatives can feel like a trap. Maybe we hate—and fetishize—cyberculture because it was the last definable counterculture.
Red Light District
Re-hashed cyberpunk aesthetics are all over the post-Internet era. It’s not irony, cynicism, jealousy, or nostalgia—it’s probably closest to pastiche. But military aesthetics and super-streamlined advertising tactics are taking over too, in a different way. The confluence of these attitudes demonstrates that subjectivity and realism aren’t mutually exclusive—never have been—but it’s important to keep tabs on the way they interact.
Theoretical philosophy aside, a red light should blink on when art claims to do no more than reproduce absolute reality, 1:1, locking down singular interpretations and denying its own subjectivity and agency. I’m wary of contemporary cultural production that relies heavily on military and advertising aesthetics and straightforwardly reproduces pre-Haraway gender ideals without a trace of criticality. It’s necessary and obvious that the militarization and corporatization of society will appear in artistic representation, but the way they are represented is not pre-determined. I like a ratio more like 1:1.0000001.
In the spirit of transparency, I’ll offer two bits of “high culture” I think are in the red light district. One is Thomas Hirschhorn’s “The Incommensurable Banner,” a giant collage of images of random conflict scenes, printed out from his browser and glued to a protest banner. Two is Kenneth Goldsmith’s book Seven American Deaths and Disasters, direct transcripts of broadcast coverage of the assassinations of John F. Kennedy, Robert F. Kennedy, and John Lennon, as well as the Challenger disaster, the Columbine shootings, 9/11, and the death of Michael Jackson.
Both of these works may be complex and meaningful on their own, but they are defined and limited by the rhetoric attached to them by their creators. Hirschhorn insists that “The Incommensurable Banner” is “not a political work.” He has said that his job as an artist is to reflect our world back at us without any interpretation or judgment.9 Goldsmith’s similar-sounding credo is his Uncreative Writing mantra. The gist of his argument is that new writing today should re-hash/re-arrange existing material, not invent “creative” new content. You should mush stuff together and then say yes, it is literally mush, I did not have anything to do with it. The mush is all around us; there is no use inventing new mush.10
These works seem to say that reality should be dealt with only through the most straightforward representative mechanisms. Yet when reality is straight-up copied, mainstream attitudes—including ones about gender—are perpetuated. Fear of being mistaken for a naïve counter-culture that believes in its own agency can suppress any hint of criticality, pounding it down until it’s unrecognizable.
An example of a project that takes responsibility for its own agenda might be the recently-developed DIS Images database, a collection of stock photographs by artists that are on sale for anyone to use. “Dedicated to manipulating the codes and trends in stock photography, DIS Images invites artists to create alternative scenarios and new stereotypes, thus broadening the spectrum of lifestyle portrayal.” Theoretically buyable by any advertiser or artist, the images don’t purport a completely balanced ratio to the real—that is, the existing or the actual. Apparently, even in a world with no alternative, it’s possible to represent “alternative scenarios.” Representation that’s a shade off from the Real admits its own desire to influence reality.
A 1:1 relationship of representation to reality is not what cyberculture predicted of the Internet. New technologies always arrive with ridiculous, unfulfillable promises. Actual outcomes are unpredictable. If there’s one idea from the text-based Internet era I’d hold onto, it wouldn’t be the possibility of cyberutopia. It would be the acceptance of my subjectivity as the basis of my agency. Whether I’m standing behind the curtain or in front of it makes all the difference.
Much of the research for this article was prepared for a paper written with Liz Feder called Entrance Strategies: Towards a new conversation about labor, gender, and the built environment. It was presented in January 2013 at the Relocating Borders Conference at Humboldt University, Berlin. Full list of references available here.
1. I’m roughly defining the text-based Internet era as the years between the release of the first widely available chat service, CB Simulator, in 1980, and the first widely available graphical web browser, Mosaic, in 1993. For a disambiguation of the terms “web” and “internet,” see this talk by Olia Lialina.
2. MUD originally stood for Multiple User Dungeon, then Dimension, then Domain. A MOO is an Object-Oriented MUD. LambdaMOO still exists today, though its players are largely inactive.
3. This case is described in Stone’s “Identity in Oshkosh.”
4. This case is described in Stone’s “Will the Real Body Please Stand Up?”
5. Quentin Meillassoux is probably the most hardcore of the Speculative Realists. He seems to believe in pure empiricism, mathematical certainty, and a straightforward dichotomous object/subject divide. Importantly, Graham Harman, who invented the term Object-Oriented Ontology and popularized the term Speculative Realism (originally coined by Ray Brassier), does not make such absolutist claims.
There are also new definitions of subjectivity emerging, such as the Anthropocene Project, an ongoing presentation series at Berlin’s Haus der Kulturen der Welt that proposes we are entering a new human-centric era. The sheer number of contradictory attitudes surrounding the question of objectivity effectively demonstrates its high-stakes position in philosophy, politics, and art today.
6. Writer J.R. Carpenter describes the shift from text- to image-based internet and its effects on the growing number of women using the internet: “One day we were minding our own business writing shell scripts on the command line when a bright spec appeared on the horizon. It was a pixel. It was a mass of pixels. The pixels joined forces. Soon they formed a thumbnail, and then a whole jpeg. An image! The next thing we knew no one knew who was issuing commands anymore. We were all clicking away on icons. What we saw was what we got. One thing linking to another, faster and faster, around and around we went.”
7. Discussed in Stone’s “Will the Real Body Please Stand Up?”
8. In particular, N.A.J.M. van Doorn’s field work in Internet Relay Chat (IRC), weblogs, MySpace, and YouPorn.
9. Hirschhorn contradicts himself often on the topic of his politics. This summary is what I concluded from listening to him lecture and speaking to him once. For an argument for open criticism in art and an excellent critique of Hirschhorn’s work, see Peter Geimer’s “The Necessity of Critique and the Love of Art.”
10. This credo is itself of course recycled from the ’60s, the last era of objectivity. Take, for example, Douglas Huebler’s famous 1969 quote: “The world is full of objects, more or less interesting; I do not wish to add any more.”