By ALEXANDER NAZARYAN | The New Yorker, May 2, 2013
On August 6, 2010, a computer scientist named Vinay Deolalikar published a paper with a name as concise as it was audacious: “P ≠ NP.” If Deolalikar was right, he had cut one of mathematics’ most tightly tied Gordian knots. In 2000, the P = NP problem was designated by the Clay Mathematics Institute as one of seven Millennium Problems—“important classic questions that have resisted solution for many years”—only one of which has been solved since. (The Poincaré Conjecture was vanquished in 2003 by the reclusive Russian mathematician Grigory Perelman, who refused the attached million-dollar prize.)
A few of the Clay problems are long-standing head-scratchers. The Riemann hypothesis, for example, made its debut in 1859. By contrast, P versus NP is relatively young, having been introduced by the University of Toronto mathematical theorist Stephen Cook in 1971, in a paper titled “The complexity of theorem-proving procedures,” though it had been touched upon a decade and a half earlier, in a 1956 letter by Kurt Gödel, whom David Foster Wallace branded “modern math’s absolute Prince of Darkness.” The question inherent in those three letters is a devilish one: Does P (problems that we can easily solve) equal NP (problems that we can easily check)?
Take your e-mail password as an analogy. Its veracity is checked within a nanosecond of your hitting the return key. But for someone to solve your password would probably be a fruitless pursuit, involving a near-infinite number of letter-number permutations—a trial and error lasting centuries upon centuries. Deolalikar was saying, in essence, that there will always be some problems for which we can recognize an answer without being able to quickly find one—intractable problems that lie beyond the grasp of even our most powerful microprocessors, that consign us to a world that will never be quite as easy as some futurists would have us believe. There always will be problems unsolved, answers unknown.
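To see the asymmetry in miniature, consider a small Python sketch (my own illustration, using a made-up four-character password, not anything from the article): checking a candidate against a stored hash takes a single step, while recovering an unknown password by brute force means enumerating a space that grows exponentially with every added character.

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits  # 36 possible symbols per position

def check(candidate, stored_hash):
    # Verification: hash the guess once and compare. Fast regardless of how the guess was found.
    return hashlib.sha256(candidate.encode()).hexdigest() == stored_hash

def brute_force(stored_hash, max_length):
    # Search: try every string up to max_length; the candidate count is 36 + 36^2 + ... + 36^max_length.
    for length in range(1, max_length + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            guess = "".join(combo)
            if check(guess, stored_hash):
                return guess
    return None

stored = hashlib.sha256(b"cab1").hexdigest()
print(check("cab1", stored))    # True, essentially instantaneous
print(brute_force(stored, 4))   # 'cab1' -- workable at four characters, hopeless at sixteen
```

At four lowercase-plus-digit characters there are fewer than two million candidates to try; at sixteen there are roughly 8 × 10²⁴, which is the “centuries upon centuries” of the analogy.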
If Deolalikar’s audacious proof were to hold, he could not only quit his day job as a researcher for Hewlett-Packard but rightly expect to enter the pantheon as one of the day’s great mathematicians. But such glory was not forthcoming. Computer scientists and mathematicians went at Deolalikar’s proof—which runs to dozens of pages of fixed-point logic and k-SAT structures and other such goodies—with the ferocity of sharks in the presence of blood. The M.I.T. computational theorist Scott Aaronson (with whom I consulted on this essay’s factual assertions) wrote on his blog, “If Vinay Deolalikar is awarded the $1,000,000 Clay Millennium Prize for his proof of P ≠ NP, then I, Scott Aaronson, will personally supplement his prize by the amount of $200,000.” It wasn’t long before Deolalikar’s paper was thoroughly discredited, with Dr. Moshe Vardi, a computer-science professor at Rice University, telling the Times, “I think Deolalikar got his 15 minutes of fame.”
As Lance Fortnow describes in his new book, “The Golden Ticket: P, NP and the Search for the Impossible,” P versus NP is “one of the great open problems in all of mathematics” not only because it is extremely difficult to solve but because it has such obvious practical applications. It is the dream of total ease, of the confidence that there is an efficient way to calculate nearly everything, “from cures to deadly diseases to the nature of the universe,” even “an algorithmic process to recognize greatness.” So while a solution for the Birch and Swinnerton-Dyer conjecture, another of the Clay Millennium Prize problems, would be an impressive feat, it would have less practical application than definitive proof that anything we are able to quickly check (NP), we can also quickly solve (P).
Fortnow’s book—which, yes, takes its name from “Willy Wonka & the Chocolate Factory”—bills itself as a primer for the general reader, though you will likely regret not having paid slightly more attention during calculus class. Reading “The Golden Ticket” is sort of like watching a movie in a foreign language without captions. You will miss some things, but not everything. There is some math humor, which is at once amusing, cheesy, and endearing exactly in the way that you think a mathematician’s humor might be amusing, cheesy, and endearing.
What Fortnow calls “P” stands for polynomial time, meaning the size of the input raised to a fixed number like two or three. Conversely, exponential time is some number raised to the size of the input. Though polynomial time can be long (say, 50²), it is nothing compared to its exponential opposite (2⁵⁰). If the first is the Adirondacks, the second is the Himalayas. When solving things, we want to keep them in polynomial time if we still want to have time for lunch.
“NP” (nondeterministic polynomial time) is a set of problems we want to solve, of varying degrees of difficulty. Many everyday activities rely on NP problems: modern computer encryption, for example, which involves the prime factors of extremely large numbers. Some forty years ago, Richard Karp, the Berkeley theoretician, first identified twenty-one problems as being “NP-complete,” meaning that they are at least as hard as any other NP problem. The NP-complete problems are a sort of inner sanctum of computational difficulty; solve one and you’ve solved them all, not to mention all the lesser NP problems lurking in the rear. Karp’s foreboding bunch of problems have names like “directed Hamiltonian cycle” and “vertex cover.” Though they are extremely hard to solve, solutions are easy to check. A human may be able to solve a variation of one of these problems through what Soviet mathematicians called “perebor,” which Fortnow translates as “brute-force search.” The question of P versus NP is whether a much faster way exists.
So far, the answer is no. Take one of these NP-complete problems, called “k-clique,” which Fortnow explains as follows: “What is the largest clique on Facebook [such that] all of [them] are friends with each other?” Obviously, the more users there are on Facebook, the more difficult it is to find the biggest self-enclosed clique. And thus far, no algorithm to efficiently solve the clique problem has been discovered. Or, for that matter, to solve any of its NP-complete siblings, which is why most people do think that P ≠ NP.
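The clique problem shows the same lopsidedness, and a few lines of Python (my sketch, using an invented five-person toy network rather than anything of Fortnow’s) make it concrete: confirming that a proposed group is a clique means one check per pair, a polynomial amount of work, while the obvious way to find the largest clique is to try every possible group, a number that doubles with each new user.

```python
from itertools import combinations

# A toy social network: each pair listed here is a mutual friendship.
FRIENDS = {("ann", "bob"), ("ann", "cho"), ("bob", "cho"), ("cho", "dev"), ("dev", "eli")}

def are_friends(a, b):
    return (a, b) in FRIENDS or (b, a) in FRIENDS

def is_clique(group):
    # Checking a proposed answer: one test per pair, polynomial in the size of the group.
    return all(are_friends(a, b) for a, b in combinations(group, 2))

def largest_clique(users):
    # Finding the answer by brute force: in the worst case, every one of the 2^n possible groups.
    for size in range(len(users), 0, -1):
        for group in combinations(users, size):
            if is_clique(group):
                return group

users = ["ann", "bob", "cho", "dev", "eli"]
print(is_clique(("ann", "bob", "cho")))  # True -- quick to confirm
print(largest_clique(users))             # ('ann', 'bob', 'cho') -- found only by exhaustive search
```

Five users mean thirty-one possible groups to examine; a network the size of Facebook means a number of groups with hundreds of millions of digits, which is why nobody expects perebor to scale.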
There are considerations here, too, beyond math. Aaronson, the M.I.T. scientist, wrote a blog post about why he thinks P ≠ NP, providing ten reasons for why this is so. The ninth of these he called “the philosophical argument.” It runs, in part, as follows: “If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in ‘creative leaps,’ no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett.”
We already check novels for literary qualities; most critics could easily enough put together a list of categories that make a novel great. Imagine, now, if you could write an algorithm to efficiently create verifiably great fiction. It isn’t quite as outlandish as you think: back in 2008, the Russian writer Alexander Prokopovich “wrote” the novel “True Love” by taking seventeen classics that were recombined via computer in seventy-two hours into an entirely new work. As Prokopovich told the St. Petersburg Times, “Today publishing houses use different methods of the fastest possible book creation in this or that style meant for this or that readers’ audience. Our program can help with that work.” He then added a note of caution: “However, the program can never become an author, like Photoshop can never be Raphael.” But if P = NP, then it could only be a matter of time before someone figured out how to create verifiably “great” novels and paintings with mathematical efficiency.
Much of Fortnow’s book is spent depicting a world in which P is proven to equal NP, a world of easily computed bliss. He imagines, for example, an oncologist no longer having to struggle with the trial and error of chemotherapy because “we can now examine a person’s DNA as well as the mutated DNA of the cancer cells and develop proteins that will fold in just the right way to effectively starve the cancer cells without causing any problems for the normal cells.” He also whips up a political scandal in which a campaign manager “hired a computer programmer, who downloaded tens of thousands of well-received speeches throughout the decades. The programmer then used [an] algorithm to develop a new speech based on current events”—one that the unwitting public predictably loves.
To postulate that P ≠ NP, as Fortnow does, is to allow for a world of mystery, difficulty, and frustration—but also of discovery and inquiry, of pleasures pleasingly delayed. Fortnow concedes the possibility that “it will forever remain one of the true great mysteries of mathematics and science.” Yet Vinay Deolalikar is unlikely to be the last to attempt a proof, for all of mathematics rests on a fundamental hubris, a belief that we can order what Wallace Stevens calls “a slovenly wilderness.” It is a necessary confidence, yet we are not always rewarded for it.
Today, depending on your favoured futurist prophet, a kind of digital Elysium awaits us all. Over millennia, we have managed to unshackle ourselves from the burdens of time and space — from heat, cold, hunger, thirst, physical distance, mechanical effort — along a trajectory seemingly aimed at abstraction. Humanity’s collective consciousness is to be uploaded into the super-Matrix of the near future — or augmented into cyborg immortality, or out-evolved by self-aware machine minds. Whatever happens, the very meat of our physical being is to be left behind.
Except, of course, so far we remain thoroughly embodied. Flesh and blood. There is just us, slumped in our chairs, at our desks, inside our cars, stroking our smartphones and tablets. Peel back the layers of illusion, and what remains is not a brain in a jar — however much we might fear or hunger for this — but a brain within a body, as remorselessly obedient to that body’s urges and limitations as any paleolithic hunter-gatherer.
It’s a point that has been emphasised by much recent research into thought and behaviour. To quote from Thinking, Fast and Slow (2011) by Nobel laureate Daniel Kahneman, ‘cognition is embodied; you think with your body, not only with your brain’. Yet when it comes to culture’s cutting edge, there remains an overwhelming tendency to treat embodiment not as a central condition of being human that our tools ought to serve, but rather as an inconvenience to be eliminated.
One of my favourite accounts of our genius for unreality is a passage from the David Foster Wallace essay ‘E Unibus Pluram: Television and US Fiction’ (1990), in which he describes, with escalating incredulity, the layers of illusion involved in watching television.
First comes the artifice of performance. ‘Illusion (1) is that we’re voyeurs here at all,’ he writes, ‘the “voyees” behind the screen’s glass are only pretending ignorance. They know perfectly well we’re out there.’ Then there’s the capturing of these performances, ‘the second layer of glass, the lenses and monitors via which technicians and arrangers apply ingenuity to hurl the visible images at us’. And then there are the nestled layers of artificiality involved in scripting, devising and selling the scenarios to be filmed, which aren’t ‘people in real situations that do or even could go on without consciousness of Audience’.
After this comes the actual screen that we’re looking at: not what it appears to show, but its physical reality in ‘analog waves and ionised streams and rear-screen chemical reactions throwing off phosphenes in grids of dots not much more lifelike than Seurat’s own impressionist “statements” on perceptual illusion’.
But even this is only the warm-up. Because — ‘Good lord,’ he exclaims in climax — ‘the dots are coming out of our furniture, all we’re really spying on is our furniture; and our very own chairs and lamps and bookspines sit visible but unseen at our gaze’s frame…’
There’s a certain awe at our capacity for self-deception, here — if ‘deception’ is the right word for the chosen, crafted unrealities in play. But Foster Wallace’s ‘good lord’ is also a cry of awakening into uncomfortable truth.
It reminds me of the scene in the film The Matrix (1999) in which Neo has to decide between taking the blue pill that will preserve his illusions, and the red pill that will reveal what his world actually looks like. He swallows the red pill, gulps a glass of water, and is led into another room. Nothing happens, until he reaches out to touch a mirror. Its surface shivers, sticks to his hand, then begins to flow over his skin like liquid cement, rising along his arm and down his throat. Choking, he screams — and wakes up somewhere else, naked, bald, gasping for air inside a cocoon filled with fluid.
It’s the perfect contemporary depiction of an atavistic fear: that the world around us is a lie. However, The Matrix is also a suitably ambivalent fable for modern times — because its lies aren’t supernatural tricks, but the apotheosis of human ingenuity. And the problem isn’t so much illusion itself as who’s in charge. The baddies here are the evil machines. But so long as we’re the ones running the show, it’s sunglasses, guns, and anti-gravity kung fu all the way, which is an infinitely more enticing destiny than unenhanced actuality.
What the red pill promises isn’t actually the real world at all. It’s the Matrix as it ought to be, knowingly bent to serve our desires: a dream of omnipotence through disembodiment.
In a 2012 essay, under the delightful title ‘Arsebestos’, the American science fiction author Neal Stephenson rails against one particular aspect of contemporary contempt for the body: laziness. ‘Ergonomic swivel chairs,’ the essay argues, ‘are the next asbestos’. That is, our sedentary screen-staring habits are as great a lurking hazard for the 21st century as asbestos was for the 20th. The point, for Stephenson, is simple — ‘the reaper comes first for those who sit’ — as is the path we took there. ‘First, we all bought in to the idea that a normal job involved sitting in a chair, and then we found ourselves imprisoned by our own furniture…’
Once again, furniture is the foe. Equipped with increasingly smart digital systems, we now perform an entirely on-screen, virtual version of many hundreds of daily acts that used to take us out of our chairs and around the house, office or neighbourhood:

It used to be that reading the mail required walking to the mailbox, slicing open envelopes, and other small but real physical exertions. Now we do it by twitching our fingers. Similar remarks could be made about talking on the phone (now replaced by Skype), filing or throwing away documents (now a matter of dragging icons around or, if that’s too strenuous, using command-key combinations), watching television (YouTube), and meeting with co-workers (videoconferencing).
Stephenson, who today does most of his work strolling at a steady pace at a treadmill desk, is making a point about the act of sitting itself: that too much of it is harmful and that, in an age of ever-more-nimble computing, it’s absurd for us to sit around all day staring at screens. Leaving aside the irony of an author known for his pioneering depictions of virtual worlds acting as a light-exercise guru, it’s sensible advice. For me, though, this is also a point about how we conceive of the relationship between ourselves and our tools.
We think, feel and work better when we’re at least a little mobile; we have better blood chemistry and concentration; we’re more creative and energetic, not to mention less prone to all manner of malaise. Why, then, is sedentary ease quite so attractive — even addictive? The answer lies in the vast, interlocked systems and assumptions of which our furniture is but the visible tip.
At the start of the 1990s, screens — whether televisions or computers, deployed for work or leisure — were bulky, static objects. For those on the move and lucky enough to employ a weight-lifting personal assistant, a Macintosh ‘portable’ cost $6,500 and weighed 7.2 kilos (close to 16 lbs). For everyone else, computing was a crude, solitary domain, inaccessible to anyone other than aficionados.
Today, just two decades on from Foster Wallace’s ‘E Unibus Pluram’, we inhabit an age of extraordinary intimacy with screen-based technologies. As well as our home and office computers, and the 40-inch-plus glories of our living room screens, few of us are now without the tactile, constant presence of at least one smart device in our pocket or bag.
These are tools that can feel more like extensions of ourselves than separate devices: the first thing we touch when we wake up in the morning, the last thing we touch before going to bed at night. Yet what they offer is a curious kind of intimacy — and the ‘us’ to which all this is addressed doesn’t often look or feel much like a living, breathing human being.
Instead, we are metaphorically dismembered by our tools: regarded by the sites and services we visit as ‘eyeballs’, as tapping and touching fingertips on keyboards and screens, as attention spans to be harnessed and data-rich profiles to be harvested. So far as most screens are concerned, we exist only in order to be transfixed by their gaze.
It’s as if we’ve mistaken a particular, contingent set of historical circumstances — that screens used to be extremely heavy, and the only way to use them was to sit down for an extended period of time — for a truth about human nature. Most of us work at desks in offices that wouldn’t look too strange to 18th-century clerks, and spend our leisure gazing at vast wall-mounted monitors while cradling second screens in the palms of our hands.
And it would be amusing if it weren’t so insidious: in public places, at work in a room full of colleagues, in our homes, our favourite activity remains hanging out with furniture.
There are, of course, those who seem to be trying to set us free from these shackles that make us, or at least encourage us to be, so indolent. Take one of the most futuristic pieces of kit to hit headlines in recent years: ‘Google Glass’, which contains a camera, microphone, internet connection, head-up display and touchpad — all housed within a miraculously sleek pair of spectacles. The launch event last year was a frenzy of hyper-kinetic bodily endeavour, with skydivers, abseilers and stunt BMX riders streaming the evidence of this awesomeness live from their own faces.
The very idea of the screen, here, has shifted from something you look at to something you look through — a digital veil overlaid on the world like a kind of auxiliary consciousness. This is the cyborg dream at its most imminently available: Google Glass (essentially digital eyewear) might be on sale by the end of this year. Could it mark an escape from the tyranny of furniture into a future of strolling productivity?
Yet it’s also a hyper-reality that isn’t half as human-centric as it might appear at first. Consider Google’s cheery demo video of what wearable computing might be able to do for me. Accompanied by an aspirational soft-rock soundtrack, I stretch my arms, yawn, and browse a plethora of icons corresponding to online services in the middle of my field of vision. I make myself some coffee, check the time and my diary via another few icons, then float the weather forecast into view while looking outside the window. A friend asks, via another pop-up, whether I fancy meeting up; I dictate a reply and head out. Handily, as I approach the subway, my glasses tell me the service is suspended and plot out a walking route instead, complete with real-time map and sequential directions.
And so on. There’s a great deal of emphasis on how my information-poor perceptions might be enhanced by integration with the internet — and how all manner of errors and inefficiencies will be ironed out along the way. Yet there’s little sense of how my ability to think my own thoughts, explore my own feelings or enjoy my own space will be similarly served, enhanced or encouraged. What’s on offer is, effectively, a smartphone strapped to my face.
This is all very well if my aim is to become a more effective operator of technological systems. However, if computing itself isn’t the primary objective — if I’m more interested in fomenting ideas and memories than in broadcasting a video of my daily exploits — the notion of wearable computing suddenly starts to seem, in this incarnation at least, not so much an escape from the desk and the sofa as an intensification of all that they represent.
In fact, there’s a surprising amount of common ground between the visions of progress represented by ergonomic office chairs and by Google Glass. In each case, the focus is not on people as such, but ‘people’ as incarnated within certain kinds of digital system: data points within a vast grid whose every need can be anticipated and answered by more precisely targeted information.
Distance, difference, fleshy frailties: all these are to be erased, while actuality itself is useful only as grist to the mill of content-generation and sharing (video, photos, audio, status updates!). Similarly, rather than you — your whole, embodied being — what the world really cares about is ‘you’ as represented by your avatar, profile, inbox, image, account, uploads, shares, likes, dislikes, group memberships, search history, purchases, orders and subscriptions.
This is the deal. No matter where you are, whom you’re with, or what you’re doing, it only counts if the system itself is counting.
I was born in 1980, meaning I missed out on many of the opportunities afforded to subsequent generations of shy, tech-savvy teens. Compelled to rely on a parental landline and face-to-face awkwardness for communication with the opposite sex, my first attempt at asking someone out for a date ended sufficiently badly for me to spend the years 1994 to 1996, inclusive, in a near-monastic state. I would have given a great deal for the opportunity to type my way into others’ affections, or simply to browse the social world from a safe distance.
What I longed for was something that I could understand. Other people were messy, strange creatures, who played games (with rules that they didn’t bother to explain). This is one reason why social media have proved stupendously successful: they provide an enviable and historically unprecedented sense of control over friendships, relationships, interests, and ambitions. It’s all there to be browsed and selected, to be liked and commented upon.
The defining illusion of television is escape — the belief that burning hour after hour in front of the TV screen offers a refuge from the mundane world, even while it ever-more-deeply embeds us in the embrace of our sofas. But the defining illusion of interactive screens is agency. Suffused with feedback, an entire universe of data at our fingertips, we’re inclined to confuse knowledge with control, and information with comprehension. And, like my hypothetical teenage self, we’re grateful to be given the chance.
In a sense, it all comes back to what Foster Wallace labelled ‘Audience’, with a capital ‘A’: the transforming force of others’ simulated presence, and our presence simulated right back at them. Online, we are simultaneously author and audience, not to mention our own full-time publicist and agent. And we are lavishly talented at playing these roles. We are — don’t get me wrong — extremely lucky to be blessed by this apotheosis of human imagining and ingenuity.
Yet it’s also a heavy burden to heft — and all the more so for the infinite, weightless capacities of the medium within which we do so. If there’s only one lesson we should take from Kahneman et al, it is that every human illusion from consciousness up takes effort to maintain — and too much performance in one area can leave the rest of us stretched thin.
Consider the grand performance of incarnating ourselves online. It takes place courtesy of screens, wires, radio waves, incandescent dots and colours, together with the apparatus of content creation itself, from keyboards and cameras to website templates. Yet for it all to hang together, we must privilege these illusions over the merely real world surrounding us: the rooms, shelves, sofas, streets and people who uniquely share our time and place. We play, we pretend — quite brilliantly — and in return we are gifted mastery, barely sensing the embrace of other assumptions.
Perhaps that’s why the American technology journalist and author Paul Miller decided to live ‘off-line’ for a year. In the essay ‘Project Glass and the Epic History of Wearable Computers’ for The Verge magazine, he argued that ‘much of what passes for innovation these days is enclosed inside a very small space: a better way to check-in, or upload a photo, or manage your friend list’. This is the narrow zone within which every vision of progress is a further step towards data-led disembodiment: more content, more connection, faster and more ubiquitous computing, brimming the screens in our pockets and the overlays in front of our eyes. It’s an intoxicating offering. But it’s also a steady constriction of what it means to be us.
Is there another way? I would argue that there is, and that much of it lies apart from the maelstrom of ‘Audience’. If it means anything, intimacy is surely about what we are not willing to share; those things closest to us, both literally and metaphorically, through which we uniquely define ourselves.
Indeed, there are forms of enhancement that are about thickening our presence in a particular place at a particular moment in time, not turning our back on reality, and that help us to give a certain quality of time or attention to those around us, and ourselves. Similarly, there are ways of wearing our own tools more lightly and of using them to turn us more passionately towards reality — not to mention the intractable physicality of these self-same tools, which are neither massless nor placeless, no matter how many claims they may make to the contrary.
Ultimately, there is a symmetry between treating ourselves as disembodied and seeing our machines as a weightless other world. In each case, chains of true cause and effect are replaced by a kind of magical thinking, and the gifts of human illusion cross over into delusion.
‘Any sufficiently advanced technology,’ Arthur C Clarke wrote in 1973, ‘is indistinguishable from magic.’ It’s one of science fiction’s most famous maxims — and I’ve always hated it. Assuming that there’s no such thing as ‘real’ magic, and that what we mean when we talk about magic is someone being fooled by someone else, what is he actually saying: that, past a certain point, all we can do is gawp and applaud at the end of the show?
This won’t do. All the magic, after all, belongs not to these tools, but to us: in the stories we tell, the illusions we share. It’s ours, and we can withhold it if we see fit — refuse to clap, peek behind the curtain, tell the performers that we know there’s a trapdoor somewhere onstage. You don’t have to believe in magic to love it.
Quite the reverse, in fact. Just like belonging to any ‘Audience’, it isn’t proper fun unless everyone has tacitly agreed the rules. If only one side knows what’s going on, it’s no longer entertainment: it’s a con trick, and a price is being extracted.
This is our future. We’re playing better, brighter games than ever — and bringing them ever closer to the place where we hold ourselves. It’s terrific, and I’m thrilled to be on board for the ride. More than ever, though, we cannot afford to believe in magic, or to overlook the effortful divide between us as we actually are and ‘us’ as we appear on screen. Because the screen is only the beginning — and it will be a sad thing indeed if our best model for humanity’s self-invention remains a chunk of furniture.
The original ThinkPad design is the product of a collaboration between IBM and Germany’s ‘other industrial designer’, Richard Sapper. Sapper suggested a design inspired by the traditional black-lacquered Japanese bento box (Shōkadō bentō): a refined object concealing well-thought-out insides (the lunch beautifully and neatly arranged in compartments) that would “reveal its nature only when you open it.” Sapper had previously developed a similar idea with the Cubo transistor radio (1965) and the ST201 TV set (1969) for the Italian electronics company Brionvega, and with the Microsplit 520 stopwatch (1974) for Heuer: three variations on the “black box” theme that still look surprisingly modern despite their age.
Since the introduction of the ThinkPad 20 years ago, the fundamental design has remained almost unchanged, a phenomenon unheard of in the laptop industry. And if the ThinkPad appears dated today, its introduction in 1992 didn’t go unnoticed. The all-black chassis went against industry standards (the German DIN standard prohibited the use of any color other than off-white for office products for fear it might cause eye strain; the disclaimer “Not for Office Use” was slapped onto all German-sold models). Similarly, the ThinkPad’s signature red TrackPoint was originally refused by IBM (the color red was strictly reserved for emergency power-off switches), which pushed for black instead. Sapper, however, saw the use of red as critical to call attention to the pointing device sitting in the middle of an all-black keyboard; the presence of bright red details was also a discreet signature that he had incorporated into several of his previous designs. As a workaround, the color was changed to purple for the first year; Sapper reintroduced the red TrackPoint the following year.
When we talk about “searching” these days, we’re almost always talking about using Google to find something online. That’s quite a twist for a word that has long carried existential connotations, that has been bound up in our sense of what it means to be conscious and alive. We don’t just search for car keys or missing socks. We search for truth and meaning, for love, for transcendence, for peace, for ourselves. To be human is to be a searcher.
In its highest form, a search has no well-defined object. It’s open-ended, an act of exploration that takes us out into the world, beyond the self, in order to know the world, and the self, more fully. T. S. Eliot expressed this sense of searching in his famously eloquent lines from “Little Gidding”:
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.
Google searches have always been more cut and dried, keyed as they are to particular words or phrases. But in its original conception, the Google search engine did transport us into a messy and confusing world—the world of the web—with the intent of helping us make some sense of it. It pushed us outward, away from ourselves. It was a means of exploration. That’s much less the case now. Google’s conception of searching has changed markedly since those early days, and that means our own idea of what it means to search is changing as well.
Google’s goal is no longer to read the web. It’s to read us. Ray Kurzweil, the inventor and AI speculator, recently joined the company as its director of engineering. His general focus will be on machine learning and natural language processing. But his particular concern, as he said in a recent interview, will entail reconfiguring the company’s search engine to focus not outwardly on the world but inwardly on the user:
“I envision some years from now that the majority of search queries will be answered without you actually asking. It’ll just know this is something that you’re going to want to see.” While it may take some years to develop this technology, Kurzweil added that he personally thinks it will be embedded into what Google offers currently, rather than as a stand-alone product necessarily.
This has actually been Google’s great aspiration for a while now. We’ve already begun to see its consequences in the customized search results the company serves up by tracking and analyzing our behavior. But such “personalization” is only the start. Back in 2006, Eric Schmidt, then the company’s CEO, said that Google’s “ultimate product” would be a service that would “tell me what I should be typing.” It would give you an answer before you asked a question, obviating the need for searching entirely. This service is beginning to take shape, at least embryonically, in the form of Google Now, which delivers useful information, through your smartphone, before you ask for it. Kurzweil’s brief is to accelerate the development of personalized, preemptive information delivery: search without searching.
In its new design, Google’s search engine doesn’t push us outward; it turns us inward. It gives us information that fits the behavior and needs and biases we have displayed in the past, as meticulously interpreted by Google’s algorithms. Because it reinforces the existing state of the self rather than challenging it, it subverts the act of searching. We find out little about anything, least of all ourselves, through self-absorption.
A few more lines of poetry seem in order. These are from the start of Robert Frost’s poem “The Most of It”:
He thought he kept the universe alone;
For all the voice in answer he could wake
Was but the mocking echo of his own
From some tree-hidden cliff across the lake.
Some morning from the boulder-broken beach
He would cry out on life, that what it wants
Is not its own love back in copy speech,
But counter-love, original response.
I’m far from understanding the mysteries of this poem. As with all of Frost’s greatest lyrics, there is no bottom to it. To read it is to be humbled. But one thing it’s about is the attitude we take toward the world. To be turned inward, to listen to speech that is only a copy, or reflection, of our own speech, is to keep the universe alone. To free ourselves from that prison — the prison we now call personalization — we need to voyage outward to discover “counter-love,” to hear “original response.” As Frost understood, a true search is as dangerous as it is essential. It’s about breaking the shackles of the self, not tightening them.
There was a time, back when Larry Page and Sergey Brin were young and naive and idealistic, that Google spoke to us with the voice of original response. Now, what Google seeks to give us is copy speech, our own voice returned to us. It’s a great tragedy.
(Some will say this is not the time. I disagree. This is the time when every mixed emotion needs to find voice.)
Since his arrest in the early morning of January 11, 2011 — two years to the day before Aaron Swartz ended his life — I have known more about the events that began this spiral than I have wanted to know. Aaron consulted me as a friend and lawyer that morning. He shared with me what went down and why, and I worked with him to get help. When my obligations to Harvard created a conflict that made it impossible for me to continue as a lawyer, I continued as a friend. Not a good enough friend, no doubt, but nothing was going to draw that friendship into doubt.
The billions of snippets of sadness and bewilderment spinning across the Net confirm who this amazing boy was to all of us. But as I’ve read these aches, there’s one strain I wish we could resist:
Please don’t pathologize this story.
No doubt it is a certain crazy that brings a person as loved as Aaron was loved (and he was surrounded in NY by people who loved him) to do what Aaron did. It angers me that he did what he did. But if we’re going to learn from this, we can’t let slide what brought him here.
First, of course, Aaron brought Aaron here. As I said when I wrote about the case (when obligations required I say something publicly), if what the government alleged was true — and I say “if” because I am not revealing what Aaron said to me then — then what he did was wrong. And if not legally wrong, then at least morally wrong. The causes that Aaron fought for are my causes too. But as much as I respect those who disagree with me about this, these means are not mine.
But all this shows is that if the government proved its case, some punishment was appropriate. So what was that appropriate punishment? Was Aaron a terrorist? Or a cracker trying to profit from stolen goods? Or was this something completely different?
Early on, and to its great credit, JSTOR figured “appropriate” out: They declined to pursue their own action against Aaron, and they asked the government to drop its. MIT, to its great shame, was not as clear, and so the prosecutor had the excuse he needed to continue his war against the “criminal” who we who loved him knew as Aaron.
Here is where we need a better sense of justice, and shame. For the outrageousness in this story is not just Aaron. It is also the absurdity of the prosecutor’s behavior. From the beginning, the government worked as hard as it could to characterize what Aaron did in the most extreme and absurd way. The “property” Aaron had “stolen,” we were told, was worth “millions of dollars” — with the hint, and then the suggestion, that his aim must have been to profit from his crime. But anyone who says that there is money to be made in a stash of ACADEMIC ARTICLES is either an idiot or a liar. It was clear what this was not, yet our government continued to push as if it had caught the 9/11 terrorists red-handed.
Aaron had literally done nothing in his life “to make money.” He was fortunate Reddit turned out as it did, but from his work building the RSS standard, to his work architecting Creative Commons, to his work liberating public records, to his work building a free public library, to his work supporting Change Congress/FixCongressFirst/Rootstrikers, and then Demand Progress, Aaron was always and only working for (at least his conception of) the public good. He was brilliant, and funny. A kid genius. A soul, a conscience, the source of a question I have asked myself a million times: What would Aaron think? That person is gone today, driven to the edge by what a decent society would only call bullying. I get wrong. But I also get proportionality. And if you don’t get both, you don’t deserve to have the power of the United States government behind you.
For remember, we live in a world where the architects of the financial crisis regularly dine at the White House — and where even those brought to “justice” never even have to admit any wrongdoing, let alone be labeled “felons.”
In that world, the question this government needs to answer is why it was so necessary that Aaron Swartz be labeled a “felon.” For in the 18 months of negotiations, that was what he was not willing to accept, and so that was the reason he was facing a million dollar trial in April — his wealth bled dry, yet unable to appeal openly to us for the financial help he needed to fund his defense, at least without risking the ire of a district court judge. And so as wrong and misguided and fucking sad as this is, I get how the prospect of this fight, defenseless, made it make sense to this brilliant but troubled boy to end it.
Fifty years in jail, charges our government. Somehow, we need to get beyond the “I’m right so I’m right to nuke you” ethics that dominates our time. That begins with one word: Shame.
My father died almost twenty years ago, after an illness spanning decades. My parents had enjoyed a very affectionate, happy marriage; prepared though we had been for long years before the end came, my mom was utterly shocked and devastated when he died. A protracted gloom overwhelmed her naturally sunny demeanor. But one day, maybe a year or so after my father’s death, I had a phone call from her, and she was laughing. Laughing quite hard, really.
“What is it?” I said, laughing too, just contagiously.
“Oh — oh, your father was called to jury duty, and I sent the form back saying ‘deceased’…”
“And they wrote him back.”
“It says, ‘Your excuse has been accepted.’”
The machinery of human affairs churns blindly on and on, no matter what, in a manner absurd enough to send even deeply grieving people into gales of uproarious laughter (years later, the phrase, “your excuse,” etc., still has the power to reduce both my mom and me to helpless guffaws.) The system, the bureaucracy, the forms to fill out. The alarm clock rings, appointments to keep. The crazy futility of it all is a little bit sad, too, the way perhaps all truly hilarious things have to be.
That Kafkaesque sensation of tragicomic futility has now acquired a new and larger dimension of weirdness, because the seeming permanence of the Internet is so crisply, coldly digital, and therefore so entirely at odds with the messiness of real life. You might say that human beings are analog creatures with certain digital tendencies, and that the digital and analog parts of our nature are inevitably at war with one another.
It’s long been evident that death is liable to create all sorts of snafus online. It can be difficult to prove or even to determine on the Internet whether or not someone has really died. In the case of celebrities, TMZ and the like will be leapfrogging over one another on Twitter to be the first to announce a death; reports may turn out to be true, false and then true again. There’s a continual stream of hoax reports of celebrity deaths online: Jeff Goldblum, Natalie Portman, Tom Cruise, and Tom Hanks all fell off the same hoax cliff in New Zealand, or celebrities can hoax-die of being stabbed in a bar brawl, as Daniel Radcliffe did.
Even for those who are not hounded by the media there is still plenty of opportunity for confusion. For instance, I have a Google Alert on my own name, so that I can keep track of any blog posts or reviews of my stuff that I might want to see; one morning last year I had an email from Google containing my own obituary, or rather, what turned out to be the obituary of another Maria (G.) Bustillos. Not that I was confused about whether or not I am alive! (Though after having seen The Sixth Sense or what have you, who can be entirely sure?)
Strategies for verification of an actual death online vary a great deal, creating more and more potential for chaos. Money can be trapped in the deceased’s Paypal accounts, horrified friends and relations meet with Facebook recommendations to “friend” the dead (“People You May Know”) and so on.
In 2004, Yahoo refused to provide the father of a Marine killed in Fallujah access to his son’s email. It was quite sobering for me to read about this; nobody knows my passwords for Paypal, Gmail, my cell phone account — probably a dozen or more accounts that would need closing if I were to be done in by the zombies tomorrow. I thought, maybe it wouldn’t be a bad idea to include a page with all those passwords and whatnot with your will, when you’re making one. Of course, then you won’t even die, and you will go and change all your accounts instead, rendering all these preparations useless!! It’s such a mess.
Efforts are underway to identify the issues surrounding death online in order to make better policy. Thanatosensitivity is a term formally coined by researchers at the University of Toronto, in a paper presented at the 2009 ACM Conference on Human Factors in Computing Systems. It means, “a humanistically-grounded approach to human-computer interaction (HCI) research and design that recognizes and engages with the conceptual and practical issues surrounding death in the creation of interactive systems.” Sounds simultaneously dry and far-out, but authors Massimi and Charise have some solid ideas:
One compelling example […] is the recent suggestion by American and British ambulatory care units to program into one’s mobile phone a contact named “ICE” (“in case of emergency”) so that rescuers can easily identify and call an emergency contact when the phone’s owner is possibly dying. The need for this type of preparation crystallizes how difficult it has become to unravel the data stored in highly personalized devices.
Broad implementation of standards like these will certainly be miles ahead of the private efforts I’ve seen, such as the “electronic safe deposit boxes” on offer at assetlock.net (formerly the unfortunately-named and no-confidence-inspiring “youdeparted.com”), where you can store sensitive information to be released to designated parties in the event of your demise. Top-tier access costs $79.95 per year (or $239.95 for a “Lifetime Membership” [?!]) for “unlimited entries and up to 5GB storage.” Or you could invest in a piece of paper and print out a list for your executors! Just sayin’.
Meatspace, as it is sometimes called — the analog, temporary, fleshly arena of the world — is inextricably linked with, or more like suffused with, the passage of time. We’re accustomed to think of “real life” as taking place there, though for many of us, the online world and the real one have begun increasingly to blur into one another. For those who have been known to fall asleep holding a smartphone, really, which world is the “real” one?
Meatspace equals entropy. Impermanence. The fading of anger or passion is analogous to the fading of a photograph, the yellowing of old newspaper, as we’ve seen in a thousand movies. Through time we mend, heal, alter our convictions, learn; what burned cools, and what froze melts; both grief and delight are fated to end, sometimes abruptly, yes, but more often gradually, even imperceptibly. Entropy is our enemy, but also our friend; it defines that part of us that is changing, coming into bloom and then, because we are mortal, fading.
The contrast between the magical perfection of recordings of the past, and that past’s ultimate irretrievability, is in itself nothing very new. It’s something like seeing Greta Garbo or James Stewart in old films, so vividly real, their particularities so peculiarly manifest; they breathe, talk, move, their gleaming eyes and moist lips parting to speak or laugh in an inimitably beautiful way. To rage, marvel or sigh. Though their clothes and manners might strike us as strangely old-fashioned, they might still be standing right beside us. But it’s a trick, and we know it; we know that in reality the remains of James Stewart (Wee Kirk o’ the Heather churchyard, Forest Lawn) and Greta Garbo (Skogskyrkogården, in southern Stockholm) are just that: dust, still, quiet, moldering for many years in the cold ground, and yet, something of them yet lives.
When someone dies nowadays, we are liable to return to find that person’s digital self — his blog, say, or his Flickr, tumblr or Facebook — entirely unchanged. An online persona will date, but agelessly, without wrinkling or acquiring dust, and unless someone removes each separate element there it will stay, to remind us of that person’s favorite song, of all his minutest concerns, exactly as if he’d typed them in yesterday. Facebook doesn’t fade. It just stays cyanotically fresh and crisp forever.
By JULIAN DIBBELL | Scientific American, March 2012
Just after midnight on January 28, 2011, the government of Egypt, rocked by three straight days of massive antiregime protests organized in part through Facebook and other online social networks, did something unprecedented in the history of 21st-century telecommunications: it turned off the Internet. Exactly how it did this remains unclear, but the evidence suggests that five well-placed phone calls—one to each of the country’s biggest Internet service providers (ISPs)—may have been all it took. At 12:12 a.m. Cairo time, network routing records show, the leading ISP, Telecom Egypt, began shutting down its customers’ connections to the rest of the Internet, and in the course of the next 13 minutes, four other providers followed suit. By 12:40 a.m. the operation was complete. An estimated 93 percent of the Egyptian Internet was now unreachable. When the sun rose the next morning, the protesters made their way to Tahrir Square in almost total digital darkness.
Both strategically and tactically, the Internet blackout accomplished little—the crowds that day were the biggest yet, and in the end, the demonstrators prevailed. But as an object lesson in the Internet’s vulnerability to top-down control, the shutdown was alarmingly instructive and perhaps long overdue.
Much has been made of the Internet’s ability to resist such control. The network’s technological origins, we are sometimes told, lie in the cold war–era quest for a communications infrastructure so robust that even a nuclear attack could not shut it down. Although that is only partly true, it conveys something of the strength inherent in the Internet’s elegantly decentralized design. With its multiple, redundant pathways between any two network nodes and its ability to accommodate new nodes on the fly, the TCP/IP protocol that defines the Internet should ensure that it can keep on carrying data no matter how many nodes are blocked and whether it’s an atom bomb or a repressive regime that does it. As digital-rights activist John Gilmore once famously said, “The Internet interprets censorship as damage and routes around it.”
That is what it was designed to do anyway. And yet if five phone calls can cut off the Internet access of 80 million Egyptians, things have not worked quite that way in practice. The Egyptian cutoff was only the starkest of a growing list of examples that demonstrate how susceptible the Internet can be to top-down control. During the Tunisian revolution the month before, authorities had taken a more targeted approach, blocking only some sites from the national Internet. In the Iranian postelection protests of 2009, Iran’s government slowed nationwide Internet traffic rather than stopping it altogether. And for years China’s “great firewall” has given the government the ability to block whatever sites it chooses. In Western democracies, consolidation of Internet service providers has put a shrinking number of corporate entities in control of growing shares of Internet traffic, giving companies such as Comcast and AT&T both the incentive and the power to speed traffic served by their own media partners at the expense of competitors.
What happened, and can it be fixed? Can an Internet as dynamically resilient as the one Gilmore idealized—an Internet that structurally resists government and corporate throttles and kill switches—be recovered? A small but dedicated community of digital activists are working on it. Here is what it might look like.
It’s a dazzling summer afternoon at the Wien-Semmering power plant in Vienna, Austria. Aaron Kaplan has spent the past seven minutes caged inside a dark, cramped utility elevator headed for the top of the plant’s 200-meter-high exhaust stack, the tallest structure in the city. When Kaplan finally steps out onto the platform at its summit, the surrounding view is a panorama that takes in Alpine foothills to the west, green Slovakian borderlands in the east and the glittering Danube straight below. But Kaplan did not come here for the view. He walks straight to the platform’s edge to look instead at four small, weatherized Wi-Fi routers bolted to the guardrail.
These routers form one node in a nonprofit community network called FunkFeuer, of which Kaplan is a co-founder and lead developer. The signals that the routers beam and pick up link them, directly or indirectly, to some 200 similar nodes on rooftops all over greater Vienna, each one owned and maintained by the user who installed it and each contributing its bandwidth to a communal, high-speed Internet connection shared almost as far and wide as Kaplan, from the top of the smokestack, can see.
FunkFeuer is what is known as a wireless mesh network. No fees are charged for connecting to it; all you need is a $150 hardware setup (“a Linksys router in a Tupperware box, basically,” Kaplan says), a roof to put your equipment on and a line-of-sight connection to at least one other node. Direct radio contact with more than a few other nodes isn’t necessary, because each node relies on its immediate neighbors to pass along any data meant for nodes it cannot directly reach. In the network’s early months, soon after Kaplan and his friend Michael Bauer started it in 2003, the total number of nodes was only about a dozen, and this bucket brigade transmission scheme was a sometimes spotty affair: if even one node went down, there was a good chance the remainder could be cut off from one another or, crucially, from the network’s uplink, the one node connecting it to the Internet at large. Keeping the network viable around the clock back then “was a battle,” Kaplan recalls. He and Bauer made frequent house calls to help fix ailing user nodes, including one 2 a.m. rooftop session in the middle of a –15 degree Celsius snowstorm, made bearable only by the mugs of hot wine ferried over by Kaplan’s wife.
As the local do-it-yourself tech scene learned what FunkFeuer offered, however, the network grew. At somewhere between 30 and 40 nodes, it became self-sustaining. The network’s topology was rich enough that if any one node dropped out, any others that had been relying on it could always find a new path. The network had reached that critical density at which, as Kaplan puts it, “the magic of mesh networking kicks in.”
Mesh networking is a relatively young technology, but the “magic” Kaplan talks about is nothing new: it is the same principle that has long underpinned the Internet’s reputation for infrastructural resilience. Packet-switched store-and-forward routing—in which every computer connected to the network is capable not just of sending and receiving information but of relaying it on behalf of other connected computers—has been a defining architectural feature of the Internet since its conception. It is what creates the profusion of available transmission routes that lets the network simply “route around damage.” It is what makes the Internet, theoretically at least, so hard to kill.
If the reality of the Internet today more closely matched the theory, mesh networks would be superfluous. But in the two decades since the Internet outgrew its academic origins and started becoming the ubiquitous commercial service it is now, the store-and-forward principle has come to play a steadily less meaningful role. The vast majority of new nodes added to the network in this period have been the home and business computers brought online by Internet service providers. And in the ISP’s connection model, the customer’s machine is never a relay point; it’s an end point, a terminal node, configured only to send and receive and only to do so via machines owned by the ISP. The Internet’s explosive growth, in other words, has not added new routes to the network map so much as it has added cul-de-sacs, turning ISPs and other traffic aggregators into focal points of control over the hundreds of millions of nodes they serve. For those nodes there is no routing around the damage if their ISP goes down or shuts them off. Far from keeping the Internet tough to kill, the ISP, in effect, becomes the kill switch.
What mesh networks do, on the other hand, is precisely what an ISP does not: they let the end user’s machine act as a data relay. In less technical terms, they let users stop being merely Internet consumers and start being their own Internet providers. If you want a better sense of what that means, consider how things might have happened on January 28 if Egypt’s citizens had communicated not through a few ISPs but by way of mesh networks. At the very least, it would have taken a lot more than five phone calls to shut that network down. Because each user of a mesh network owns and controls his or her own small piece of the network infrastructure, it might have taken as many phone calls as there were users—and much more persuading, for most of those users, than the ISPs’ executives needed.
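A toy simulation captures the difference. In the Python sketch below (a schematic illustration, not a model of FunkFeuer or of Egypt’s actual networks), the same handful of homes is wired up twice: once as customers hanging off a single provider, and once as a mesh in which each node relays traffic for its neighbours. Knock out the provider and the star network goes dark; knock out any single relay in the mesh and the remaining nodes can still find one another.

```python
from collections import deque

def reachable(edges, start, removed=frozenset()):
    # Breadth-first search: which nodes can still exchange data with `start`
    # once the nodes in `removed` have been switched off?
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour not in seen and neighbour not in removed:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen - {start}

homes = ["h1", "h2", "h3", "h4"]

# ISP model: every home is a cul-de-sac behind one provider.
star = [("isp", h) for h in homes]

# Mesh model: homes relay for one another in a ring, and h1 also holds the uplink to the wider Internet.
mesh = [("h1", "h2"), ("h2", "h3"), ("h3", "h4"), ("h4", "h1"), ("h1", "uplink")]

print(reachable(star, "h2", removed={"isp"}))  # set() -- the kill switch works
print(reachable(mesh, "h2", removed={"h3"}))   # {'h1', 'h4', 'uplink'} -- traffic routes around the damage
print(reachable(mesh, "h2", removed={"h1"}))   # {'h3', 'h4'} -- the uplink is lost, but the local mesh survives
```

The last line also hints at the caveat taken up below: a mesh keeps its members talking to one another, but its connection to the wider Internet still depends on however many uplinks it happens to contain.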
At 37 years old, Sascha Meinrath has been a key player in the community mesh-networking scene for about as long as there has been a scene. As a graduate student at the University of Illinois, he helped to start the Champaign-Urbana Community Wireless Network (CUWiN), one of the first such networks in the U.S. Later, he co-organized a post-Katrina volunteer response team that set up an ad hoc mesh network that spanned 60 kilometers of the disaster area, restoring telecommunications in the first weeks after the hurricane. Along the way, he moved to Washington, D.C., intent on starting a community wireless business but instead ended up being “headhunted,” as he puts it, by the New America Foundation, a high-powered think tank that hired Meinrath to generate and oversee technology initiatives. It was there, early last year, that he launched the Commotion wireless project, an open-source wireless mesh-networking venture backed by a $2-million grant from the U.S. State Department.
The near-term goal of the project is to develop technology that “circumvents any kill switch and any sort of central surveillance,” Meinrath says. To illustrate the idea, he and other core Commotion developers put together what has been called a prototype “Internet in a suitcase”: a small, integrated package of wireless communications hardware, suitable for smuggling into a repressive government’s territory. From there, dissidents and activists could provide unblockable Internet coverage. The suitcase system is really just a rough-and-ready assemblage of technologies already well known to mesh-networking enthusiasts. Any sufficiently motivated geek could set one up and keep it working.
The long-term question for Meinrath and his colleagues is, “How do you make it so easy to configure that the other 99.9 percent of nongeek humanity can do it?” Because the more people use a mesh network, the harder it is to kill.
In one way, this is numerically self-evident: a mesh network of 100 nodes takes less effort to shut down, node by node, than a mesh of 1,000 nodes. Perhaps more important, a larger mesh network will tend to contain more links to the broader Internet. These uplinks—the sparsely distributed portal nodes standing as choke points between the mesh and the rest of the Internet—become less of a vulnerability as the mesh gets bigger. With more uplinks safely inside the local mesh, fewer everyday communications face disruption should any one link to the global network get cut. And because any node in the mesh could in principle become an uplink using any external Internet connection it can find (dial-up ISP, tethered mobile phone), more mesh nodes also mean a greater likelihood of quickly restoring contact with the outside world.
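A back-of-the-envelope calculation shows how quickly that advantage compounds. The numbers below are hypothetical—a 30 percent survival chance per uplink is an arbitrary figure chosen for illustration, not a measurement from any real mesh: if each uplink independently has some chance of surviving a shutdown attempt, the chance that the mesh keeps at least one path to the outside world rises rapidly as uplinks are added.

# Illustrative arithmetic only; the per-uplink survival probability is invented.
def chance_still_connected(uplinks, p_survive=0.3):
    # Probability that at least one of the independent uplinks stays up.
    return 1 - (1 - p_survive) ** uplinks

for uplinks in (1, 3, 10, 30):
    print(f"{uplinks:>2} uplinks -> {chance_still_connected(uplinks):.0%} chance of outside contact")
# 1 uplink -> 30%; 3 -> 66%; 10 -> 97%; 30 -> essentially 100%

On those assumptions, a mesh with thirty independent uplinks remains all but certain to stay connected even if seven out of every ten of them are knocked out.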
Size matters, in a word. Thus, in mesh-networking circles, the open question of mesh networks’ scalability—of just what size they can grow to—has tended to be a pressing one. Whether it is even theoretically possible for mesh networks to absorb significant numbers of nodes without significantly bogging down remains controversial, depending on what kind of numbers count as significant. Just a few years ago some network engineers were arguing that mesh sizes could never grow past the low hundreds of nodes. Yet currently the largest pure-mesh networks have node counts in the low four digits, and dozens of community networks thrive, with the biggest of them using hybrid mesh-and-backbone infrastructures to reach node counts as high as 5,000 (like the Athens Wireless Metropolitan Network in Greece) and even 15,000 (like Guifi.net in and around Barcelona). The doubt that lingers is whether it is humanly possible for mesh networks to grow much bigger, given how most humans feel about dealing with technologies as finicky and complicated as mesh networks.
Unlike most open-source projects, which tend to downplay the importance of a user-friendly interface, the mesh movement is beginning to realize how critical it is for its equipment to be simple. But if Commotion is not alone in seeking to make mesh networks simpler to use, the key simplification it proposes is a uniquely radical one: instead of making it easier to install and run mesh-node equipment in the user’s home or business, Commotion aims to make it unnecessary. “The notion is that you can repurpose cell phones, laptops, existing wireless routers, et cetera,” Meinrath explains, “and build a network out of what’s already in people’s pockets and book bags.” He calls it a “device as infrastructure” network, and in the version he envisions, adding one more node to the mesh would require all the effort of flipping a switch. “So in essence, on your iPhone or your Android phone, you would push a button and say, yes, join this network,” he says. “It needs to be that level of ease.”
Imagine a world, then, in which mesh networks have finally reached that level—finally cleared the hurdle of mass usability to become, more or less, just another app running in the background. What happens next? Does the low cost of do-it-yourself Internet service squeeze the commercial options out of the market until the last of the ISPs’ hub-and-spoke fiefdoms gives way to a single, world-blanketing mesh?
Even the most committed supporters of network decentralization aren’t betting on it. “This type of system, I think, will always be a poor man’s Internet,” says Jonathan Zittrain, a Harvard Law School professor and author of The Future of the Internet: And How to Stop It. Zittrain would be happy to see the mesh approach succeed, but he recognizes it may never match some of the efficiencies of more centrally controlled networks. “There are real benefits to centralization,” he says, “including ease of use.” Ramon Roca, founder of Guifi.net, likewise doubts mesh networks will ever put the ISPs out of business—and for that matter, doubts such networks will ever take much more than 15 percent of the market from them. Even at that low a rate of penetration, however, mesh networks can serve to “sanitize the market,” Roca argues, opening up the Internet to lower-income households that otherwise could not afford it and spurring the dominant ISPs to bring down prices for everybody else.
As welcome as those economic effects might be, the far more important civic effects—mesh networking’s built-in resistance to censorship and surveillance—need a lot more than a 15 percent market share to thrive. And if it is clear that market forces alone are not going to get that number up much higher, then the question is, What will?
One of the really tough questions to answer in relation to any technology is: When do you make something easy and when do you make it hard? This problem is perhaps most obvious in the realm of game design, since people get bored by games that are too easy and get frustrated by games that are too hard. So game-makers have to learn to split the difference, which in practice means alternating between the easy and the hard. You allow gamers to get some momentum and confidence by completing easy tasks, which helps them to push through the annoyance and even anger that can arise when a nearly intractable challenge comes their way.
But this problem occurs in other technological arenas too. Consider typography, of all things. In his recent book Thinking, Fast and Slow — which is fascinating in more ways than I can tell you right now — Daniel Kahneman explains research that has been done on the cognitive burdens placed on us by various type designs. A well-designed text, with a highly legible typeface and appropriate spacing, places a considerably lighter cognitive burden on us than a badly designed page. It works in conjunction with other factors, of course — but it matters:
A sentence that is printed in a clear font, or has been repeated, or has been primed, will be fluently processed with cognitive ease. Hearing a speaker when you are in a good mood, or even when you have a pencil stuck crosswise in your mouth to make you “smile,” also induces cognitive ease. Conversely, you experience cognitive strain when you read instructions in a poor font, or in faint colors, or worded in complicated language, or when you are in a bad mood, and even when you frown.
Reading a page done right is like sliding on the ice: we just flow right along. Take a look at this smart post by Dan Cohen on how much we value cognitive ease when reading, and how many recent tools provide it for us.
However, as Kahneman also points out, flowing right along isn’t always the best recipe for understanding:
Experimenters recruited 40 Princeton students to take the CRT [Shane Frederick’s Cognitive Reflection Test]. Half of them saw the puzzles in a small font in washed-out gray print. The puzzles were legible, but the font induced cognitive strain. The results tell a clear story: 90% of the students who saw the CRT in normal font made at least one mistake in the test, but the proportion dropped to 35% when the font was barely legible. You read this correctly: performance was better with the bad font. Cognitive strain, whatever its source, mobilizes System 2 [slow, conscious, laborious thinking], which is more likely to reject the intuitive answer suggested by System 1 [the immediate, unreflective thinking by which we make most of our minute-to-minute judgments].
I think about the value of cognitive strain, or as I sometimes call it cognitive friction, when I’m annotating texts. As many people have noted, today’s e-ink readers allow annotation — highlighting and commenting — but in a pretty kludgy fashion. It can take a good many clicks to get a simple job of highlighting done. By contrast, touch-sensitive tablets like the iPad and the Kindle Fire make highlighting very easy: you just draw your finger across the text you want to highlight, and there: you’re done.
Nice. But I prefer the kludge. Why? Because I remember what I’m reading better if the process of highlighting is a tad slow. It may also help that when I highlight on a tablet my hand tends to cover much of the text I’m highlighting, whereas on an e-ink reader my hand is off to one side and I can focus my attention on the text even as I click to draw lines under it. (It’s not relevant to this particular post, but on e-ink Kindles you can highlight across page breaks, which currently cannot be done on the touchscreen devices; on those I sometimes have to shrink the typeface so that a whole passage fits on one page before I can finish a highlight. Very annoying.)
For the very same reason, I prefer underlining in codex books with a pencil rather than a highlighter: the highlighter is just too smooth, whereas I have to take some care to underline accurately when I’m using a pencil; there’s a degree of manual strain that accompanies and encourages the cognitive strain.
E-books are in their infancy now: there’s little textual design to speak of, typography is often terrible, illustrations are limited, errors are shockingly frequent. They’ll get much better. But it would be cool if, when they improve, readers were given means of introducing a bit of cognitive friction when that would make the reading experience a stronger one. Sort of like cranking up the speed and increasing the incline on an elliptical trainer.