Floating Signifiers

You Are Not a Gadget: A Manifesto BY Jaron Lanier. Knopf. Hardcover, 224 pages. $24.
Always On: Language in an Online and Mobile World BY Naomi S. Baron. Oxford University Press, USA. Hardcover, 304 pages. $45.
Txtng: The Gr8 Db8 BY David Crystal. Oxford University Press, USA. Paperback, 256 pages. $12.


In the late 1870s, the advent of the telephone created a curious social question: What was the proper way to greet someone at the beginning of a call? The first telephones were always “on” and connected pairwise to each other, so you didn’t need to dial a number to attract the attention of the person on the other end; you just picked up the handset and shouted something into it. But what? Alexander Graham Bell argued that “Ahoy!” was best, since it had traditionally been used for hailing ships. But Thomas Edison, who was creating a competing telephone system for Western Union, proposed a different greeting: “Hello!,” a variation on “Halloo!,” a holler historically used to summon hounds during a hunt. As we know, Edison—aided by the hefty marketing budget of Western Union—won that battle, and hello became the routine way to begin a phone conversation.

Yet here’s the thing: For decades, hello was enormously controversial. That’s because prephone guardians of correct usage regarded it (and halloo) as vulgar. These late-nineteenth-century Emily Posts urged people not to use the word, and the dispute carried on until the 1940s. By the ’60s and ’70s, though, hello was fully domesticated, and people moved on to even more scandalously casual phrasings like hi and hey. Today, hello can actually sound slightly formal.

Outright panic, prophecies of doom, furious debate, and eventually a puzzled “Wasn’t it always this way?” shrug: We’ve seen this cycle many times before, for every innovation in communications technology from the printing press to the phonograph to the television. But in the past decade, the mainstreaming of the Internet has created possibly the most frantic incarnation of the conflict yet, if only because the welter of new forms of communication is so dizzying. Two decades ago, the only ways to communicate remotely were the mail, the phone, the fax, and the telegram. Now we have SMS messages, e-mail, tweets, Facebook status updates, cameraphones that can upload straight to Flickr, discussion boards, YouTube, instant messaging, blogs, and “geo-aware” apps that report our location down to the city block.

While there’s an increasing amount of video in our daily communications diet, the majority is still text. This has given our own whither-the-language tech debate a particularly literary undertone. Opponents of hello worried that it would coarsen human relations, but opponents of Internet culture worry that it’s eroding the fabric of language itself. In virtually every op-ed bemoaning the kudzulike spread of Twitter or texting, the critic mocks the loopy short forms, misspellings, and grammatical barbarisms. “Texting is penmanship for illiterates,” John Sutherland complained in the Sunday Telegraph. “The changes we see taking place today in the language will be a prelude to the dying use of good English,” a London Sun journalist moaned. Teachers complain that teenagers gormlessly employ instant-messaging truncations—like r for “are,” LOL for “laugh out loud,” and, more risibly still, emoticons—in academic essays, unaware that this isn’t good usage. Above all this hovers the fear that tools like Facebook are making us preeningly narcissistic. “Who really cares what I am doing, every hour of the day? Even I don’t care,” Alex Beam, a Boston Globe columnist, wrote of Twitter.

You could dismiss this carping as the reflexive posturing of curmudgeons. As an enthusiastic user of Twitter (and texting and Facebook), I often do. But I admit that I, too, have occasionally worried about the state of language. As a writer, I’m sealed in a bubble along with other people who are pretty handy with prose, so I don’t always have an objective sense of the Internet’s effect on communication. After digesting reams of op-ed bloviation, I was happy to consult a crop of actually well-researched books that tackle the question, Are the critics merely alarmist—or r they write?

• • • • • 

Naomi Baron’s answer gives language purists grounds for cautious optimism. Baron is a professor of linguistics at American University in Washington, DC, and her book Always On, recently released in paperback, offers a mostly—and refreshingly—noncranky view of language in the digital age. She’s well aware that the way we write and talk has changed throughout history, especially when propelled by technology. (My tale above of hello comes mostly from her book.) And she’s even more aware that Chicken Little naysayers have been bemoaning the imminent death of literacy for a millennium now. “Distinguishing between language change and language decline,” she writes dryly, “is a very tricky business.” Baron is also one of a small group of linguists and sociologists who in recent years have collected actual data on how writing by young people—the Patients Zero in our Ebola-like outbreak of Netspeak—is changing.

Or isn’t, as it turns out. In one of her studies, Baron collected the transcripts of twenty-three instant-messaging discussions and 191 text-messaging communications between her students. These pithy vehicles for chat are usually regarded as the chief culprits in linguistic decline, being the most chronic abettors of short forms and cryptic code.

Yet Baron found that the bulk of the students’ IM prose wasn’t much worse grammatically than their traditional, “formal” writing. For example, out of nearly twelve thousand words, Baron found a mere ninety LOL-like initialisms and only thirty-one short forms, like cya for “See you.” Straightforward misspellings occurred once every 12.8 words; this rate of error kind of shocked me, but Baron regards it as “not bad” given that instant messaging is an inherently rapid-fire, live form of writing. (“My students’ essays sometimes show poorer spelling,” she adds.) But it was also clear that IMing and texting were different from formal writing in a few central ways: The students used far less punctuation, and the utterances were shorter. She found that similar distinctions held when she looked at other types of online writing, ranging from blog entries to “away” messages.

The upshot, Baron concludes, is that online writing—particularly messaging—is a mongrel form: It is like formal writing that’s been pulled in the direction of speech. When we craft messages online, we’re more grammatically and stylistically proper than we are when we talk, by far; but we’re not quite as careful as when we’re writing traditional stuff, like articles or papers or letters. A neatly revealing comparison point that Baron discovered is contractions: In spoken conversation, we usually contract about 95 percent of the words that are contractable, using can’t instead of cannot. But in instant messaging, the rate was only 65 percent.

So if you like your glass half full, this is the way to look at it: The Internet hasn’t wrecked the craft of writing yet. Indeed, in one sense online communication has helped revive the written word in an otherwise rampantly postprint culture. Because young people conduct a huge chunk of their socializing and self-expression online, they are generating far more prose than any generation before. But because most of the writing is social, it takes on conversational qualities: not quite a transcript of speech, but not quite a pithy op-ed either. “The idea that everyone under the age of twenty-five knows an entire new language is simply poppycock,” Baron writes.

Still, she worries about a subtler form of linguistic decline. As we continue to produce a torrent of words online each day—quickly, on the fly, and conversationally—we might start redefining “what is ephemeral and what is durable.” And if we read and produce so much nonformal, conversational prose, it might become harder to appreciate the value of truly formal writing. For writer and reader alike, elegant, formal prose is both a catalyst and a medium for certain types of thinking. If the Internet encourages us to write primarily in “informal spoken language,” Baron fears it is bound to affect our thought as well. She dislikes Wikipedia for precisely this reason. Prose written by a collection of writers might be factually strong, but it’s stylistically dull—and dull writing is neither memorable nor engaging. Baron is also concerned that other online tools corrode our mental abilities, ranging from the crutch of the spell-checker to Google’s always-on-tap trivia stream. Why bother to remember a fact if it’s a quick search away?

Speed, ultimately, is Baron’s deep concern. If you pick up the pace at which it’s possible to write and read—and at which it’s considered normal to write and read—do you inherently damage the processes? Has hastening the rate at which writers create text undermined the attributes of written culture, especially the cognitive depth that writing and reading can bestow? She fears so.

• • • • • 

Many of Baron’s concerns are reasonable, and I agree with most of them. But here again, history is instructive. Baron’s worries about Google are weirdly similar to those that cropped up in Gutenberg’s day. If everyone could read books and write stuff down, detractors of mass printing worried, wouldn’t they stop committing everything to memory? Worse, if everyone owned a Bible, wouldn’t the priestly class lose power? Well, sure, those things did happen. But it turns out many people were considerably happier under the new regime, and books proved staggeringly useful as memory aids. Yet because books are now so familiar as our fallback idea-delivery system, we forget what a shock it must have been when they were the hot up-and-coming technology.

You could argue that the online arena has given birth to some quite nifty literary and cognitive styles. Blog for a day or two and you might wonder what the point is, but blog for five years and you wind up with a map of your obsessions that can be revealing even to yourself. As for Twitter, critics regularly mock the brutal 140-character limit—what can be usefully said in such a short space? But as anyone who’s written a sonnet or a haiku knows, limitations are a source of creativity. (I’ve suggested that Twitter is training millions in the compressive skill of newspaper-headline writing—ironically enough, at precisely the moment when actual newspaper-headline writing seems on the point of vanishing.) And consider the odd literary function of the link: a tool that lets you compose a status update that’s suggestive and intriguing but incomprehensible unless you follow the link itself, at which point the sentence’s meaning is revealed.

This conceit—that new tools for writing can promote, and not erode, fresh literary style—is at the core of David Crystal’s book Txtng: The Gr8 Db8, now in paperback. Crystal, also a linguist who has long written about the fate of written English, focuses on mobile-phone messaging, and he is straightforwardly enthusiastic. Like Baron, he discovered that when you actually study the prose used in texting, it’s far more normal than aberrant: His research found that only a minority of texted words—from 20 percent to as low as 6 percent, depending on the study—were WTF-like abbreviations.

If anything, Crystal appears to wish that people would use even more acronyms and truncations, because he regards them as a playful and clever evolution of everyday language. He argues that short forms such as ur (“you are”) and gr8 (“great”) are similar to rebuses, in which the puzzler must decode strings of letters that at first glance are little more than gibberish. (“YY UR YY UB,” which translates as “Too wise you are, too wise you be,” is a classic of the genre.) Rebuses go back centuries—the word comes from the Latin phrase non verbis sed rebus, “not with words but with things”—and Leonardo da Vinci used to doodle them. But until texting came along, nobody had ever used rebuses in everyday language. Why would they? Rebuses require extra mental effort to decode.

Which is exactly Crystal’s point: Shorthand in texting and online isn’t lazy or sloppy at all. It takes an often ridiculous amount of work to generate the short forms, and texters appear to be keenly aware that these forms violate normal diction. (Some of the more abstruse acronyms are generated, Crystal discovers, when two chat partners egg each other on to produce ever-longer strings of text; this is probably how LOL—“laughing out loud”—gave birth to ROTFLMAO, “rolling on the floor laughing my ass off.”)

Crystal sounds only a few mild notes of concern. He mentions, for example, a study that found that descriptions written by young people who text were shorter than those by their nontexting counterparts. But overall, he’s pretty doubtful that the Internet has been around for long enough to corrode our language culture. And for her part, Baron believes that if students today aren’t as careful with their grammar and style as previous generations (a view that she herself subscribes to; she’s noticed the linguistic competence of her students degrade over the years), there are plenty of culprits to blame other than the Internet. Specifically, schools: Baron writes that by adopting the otherwise laudable goals of encouraging cultural diversity and self-expression, schools have necessarily abandoned the idea that there is one proper, formal way to write. Or to put things another way, both technophobes and technophiles might be overstating how much power computers and mobile phones have in shaping culture.

• • • • • 

In this context, longtime computer maven Jaron Lanier is kind of a heretic. In his new book, You Are Not a Gadget, Lanier argues that we are being mentally enslaved by the online tools we use each day. Twitter, Facebook, and the like aren’t merely destroying language: They are training us in an anti-human way of thinking.

Lanier notes that most contemporary online accessories force us into incredibly restrictive boxes. Whereas the first crop of hand-built, do-it-yourself websites in the ’90s was genuinely creative and expressive of identity (if also insufferably ugly), today’s glossy, clean-cut pages on social-networking sites reduce your personality to a set of bullet points. “I fear that we are beginning to design ourselves to suit digital models of us, and I worry about a leaching of empathy and humanity in that process,” he writes. Meanwhile, projects like Wikipedia and Google celebrate the “hive mind,” denigrating the importance of individual vision; and the ability to post anonymously has turned online conversation into a septic cocktail of snarky, angry drive-by attacks. The best lack all conviction, and the worst are out there talking all manner of smack.

At the heart of Lanier’s concern is the notion that software designers often make shortsighted or profit-motivated decisions when they conceptualize their work. We might later realize that the tools are limiting our ability to express ourselves—but by then it’s too late, because those design decisions are locked in, and other software is designed around them.

As a key illustration, Lanier describes how MIDI emerged in the early ’80s as a format for letting people compose and play music on computers. At first, MIDI excited him; Lanier is, in addition to being an accomplished programmer and scientist (he helped invent the concept of virtual reality), an enormously talented musician. He recounts how he helped build a MIDI program for early Apple computers, and he and his peers marveled at how easy it was to create and alter songs on the fly, harnessing oodles of virtual instruments. But MIDI’s many limitations soon became clear. MIDI represents notes in the black-and-white, half-step grid of a piano keyboard, so it’s not good at replicating the messy, in-between sounds of, say, a note bent on a guitar neck or through the reeds of a harmonica. Plus, MIDI is robotically precise and therefore discourages tempo changes, as well as the sort of irregularity that traditionally forms part of a musician’s personal style. But because MIDI was so useful, it didn’t matter. As more and more people employed it, it became, Lanier notes, something of an industry standard—and pretty soon, producers and musicians began adapting their music to its rigidities. It became the “lock-in” mode for creating songs; as Lanier sees it, MIDI has stripped personality and weirdness out of music for the past twenty years.
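
To make that grid concrete, here is a minimal sketch of my own (nothing from Lanier’s book) of a raw MIDI note-on message in Python: pitch is packed as a single integer, one number per piano key, and nothing exists between adjacent numbers at the note level.

    # Illustrative only: a MIDI note-on event encodes pitch as an integer
    # from 0 to 127, one value per piano-key semitone.
    def note_on(channel: int, note: int, velocity: int) -> bytes:
        assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
        return bytes([0x90 | channel, note, velocity])

    print(note_on(0, 60, 100).hex())  # middle C         -> '903c64'
    print(note_on(0, 61, 100).hex())  # the next key, C# -> '903d64'

    # A guitarist bending a string sweeps through every pitch between those
    # two keys, but there is no note number for "60.4"; the format offers
    # only a separate, channel-wide pitch-bend message as a workaround.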

Plenty of people have made this complaint before, of course, but Lanier has actually conducted informal and clever research to demonstrate it. He plays people songs from the past sixty years and asks them to identify the decade they came from; while they can easily identify music of the 1940s, ’50s, ’60s, ’70s, and ’80s, they can’t quite place music from the 1990s or 2000s. For Lanier, this is proof that the creative, organic spark has been drummed out: Musicians now merely recycle the sounds of previous ages. It’s a neatly delivered and convincing argument.

Alas, it is pretty much Lanier’s only such argument. The rest of his manifesto is maddeningly vague and nearly incoherent. Lanier maintains again and again that the design decisions behind most Internet tools—the facilitation of anonymity, the database style of social-networking sites—reduce us to ciphers. But he almost never offers a scrap of evidence for his assertions, which makes them hard to assess. He contends that online discussion has devolved to “collective ritual hatred” yet spends barely a page buttressing this point by rehashing a few already overanalyzed cases of online harassment. He sighs over the “endless stress” that young Facebookers face, having to constantly and anxiously monitor their online reputations—yet if he actually talked to any of these supposedly ground-down youngsters, he doesn’t quote them. He writes that online, “insincerity is rewarded, while sincerity carries a lifelong taint.” What in God’s name is he talking about? I can’t tell you, because again he fails to give any empirical support for his sweeping claim. (And many other pundits, of course, argue completely the opposite: that young people are too confessional online, and not guarded enough.)

But there’s a bigger philosophical problem here. Lanier’s critique of online life has a strong whiff of the “false consciousness” dicta that gained currency in the aftermath of the New Left. Lanier assumes people are essentially imprisoned by the software around them and are so witless that they aren’t aware of how impoverished their lot has become—Facebook as the high-tech iteration of Plato’s cave. Now, it’s certainly true that software can influence our behavior; Lawrence Lessig nailed this point down a decade ago when he coined the phrase “Code is law.” Plenty of people have observed how, for example, the different designs of Twitter and Facebook lead to different behaviors. Because most Twitter accounts are “open” for anyone to read while Facebook accounts are private by default (or were, anyway), social networkers tend to use Twitter to burnish their professional reputations while leaving their Smurfette cosplay photo sets on Facebook.

But it’s also true that users aren’t so easily controlled. Indeed, the history of technology is full of people using software in ways the designers never intended or even imagined. Texting is a good example. As Crystal notes, twelve-button mobile phones were never designed for alphabetic communication. On the contrary, they’re ergonomic nightmares. Typing a letter as common as s requires four pushes of the 7 button, and n and o are on the same key. What’s more, the carriers’ wireless networks weren’t designed for texting; carriers originally just set up a little back channel of data so they could ping a phone and make sure it was operating. So when they decided to allow messaging on the channel, it was only capable of relaying 160 characters at a time. But it turned out that consumers liked the cheapness and convenience of texting (since you can do it surreptitiously, unlike a phone call), and mobile-phone companies really liked milking new revenue out of this previously unused channel. And because this format—and Twitter’s still-shorter variation on it—proved so incredibly restrictive, people suddenly found themselves harnessing ancient rebus formats in daily conversation. It’s certainly true that ill-considered design decisions from thirty years ago affect how people communicate today on Twitter and SMS. But they haven’t exactly locked in a single style of expression or of thought.
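
To get a feel for just how awkward that is, here is a tiny sketch of my own (using the standard twelve-button layout described above, not any code from Crystal) that counts the multi-tap presses a word costs:

    # Illustrative only: multi-tap text entry on a standard phone keypad.
    KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
              "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

    def presses(word: str) -> int:
        """Count the key presses needed to spell `word` by multi-tapping."""
        total = 0
        for ch in word.lower():
            for letters in KEYPAD.values():
                if ch in letters:
                    total += letters.index(ch) + 1  # 's' is fourth on the 7 key
                    break
        return total

    print(presses("s"))   # 4: four pushes of the 7 key
    print(presses("no"))  # 5: n and o share the 6 key, which also forces a pause
    print(presses("hi"))  # 5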

You can pity the poor language purists. As the long-ago case of the telephone proves, they’ve been fighting an uphill battle for ages and probably always will be. Perhaps fifty years from now—with ESP chips implanted in our brains, involuntarily live-streaming our chattering thoughts to one another every millisecond—we’ll be pining for the slow pace and stylistic elegance of a well-turned tweet. Hello, indeed.

Clive Thompson is a contributing writer for the New York Times Magazine and Wired.