By YUDHIJIT BHATTACHARJEE | New York Times Magazine, April 23, 2013
One summer night in 2011, a tall, 40-something professor named Diederik Stapel stepped out of his elegant brick house in the Dutch city of Tilburg to visit a friend around the corner. It was close to midnight, but his colleague Marcel Zeelenberg had called and texted Stapel that evening to say that he wanted to see him about an urgent matter. The two had known each other since the early ’90s, when they were Ph.D. students at the University of Amsterdam; now both were psychologists at Tilburg University. In 2010, Stapel became dean of the university’s School of Social and Behavioral Sciences and Zeelenberg head of the social psychology department. Stapel and his wife, Marcelle, had supported Zeelenberg through a difficult divorce a few years earlier. As he approached Zeelenberg’s door, Stapel wondered if his colleague was having problems with his new girlfriend.
Zeelenberg, a stocky man with a shaved head, led Stapel into his living room. “What’s up?” Stapel asked, settling onto a couch. Two graduate students had made an accusation, Zeelenberg explained. His eyes began to fill with tears. “They suspect you have been committing research fraud.”
Stapel was an academic star in the Netherlands and abroad, the author of several well-regarded studies on human attitudes and behavior. That spring, he published a widely publicized study in Science about an experiment done at the Utrecht train station showing that a trash-filled environment tended to bring out racist tendencies in individuals. And just days earlier, he received more media attention for a study indicating that eating meat made people selfish and less social.
His enemies were targeting him because of changes he initiated as dean, Stapel replied, quoting a Dutch proverb about high trees catching a lot of wind. When Zeelenberg challenged him with specifics — to explain why certain facts and figures he reported in different studies appeared to be identical — Stapel promised to be more careful in the future. As Zeelenberg pressed him, Stapel grew increasingly agitated.
Finally, Zeelenberg said: “I have to ask you if you’re faking data.”
That weekend, Zeelenberg relayed the allegations to the university rector, a law professor named Philip Eijlander, who often played tennis with Stapel. After a brief meeting on Sunday, Eijlander invited Stapel to come by his house on Tuesday morning. Sitting in Eijlander’s living room, Stapel mounted what Eijlander described to me as a spirited defense, highlighting his work as dean and characterizing his research methods as unusual. The conversation lasted about five hours. Then Eijlander politely escorted Stapel to the door but made it plain that he was not convinced of Stapel’s innocence.
That same day, Stapel drove to the University of Groningen, nearly three hours away, where he was a professor from 2000 to 2006. The campus there was one of the places where he claimed to have collected experimental data for several of his studies; to defend himself, he would need details from the place. But when he arrived that afternoon, the school looked very different from the way he remembered it being five years earlier. Stapel started to despair when he realized that he didn’t know what buildings had been around at the time of his study. Then he saw a structure that he recognized, a computer center. “That’s where it happened,” he said to himself; that’s where he did his experiments with undergraduate volunteers. “This is going to work.”
On his return trip to Tilburg, Stapel stopped at the train station in Utrecht. This was the site of his study linking racism to environmental untidiness, supposedly conducted during a strike by sanitation workers. In the experiment described in the Science paper, white volunteers were invited to fill out a questionnaire in a seat among a row of six chairs; the row was empty except for the first chair, which was taken by a black occupant or a white one. Stapel and his co-author claimed that white volunteers tended to sit farther away from the black person when the surrounding area was strewn with garbage. Now, looking around during rush hour, as people streamed on and off the platforms, Stapel could not find a location that matched the conditions described in his experiment.
“No, Diederik, this is ridiculous,” he told himself at last. “You really need to give it up.”
After he got home that night, he confessed to his wife. A week later, the university suspended him from his job and held a news conference to announce his fraud. It became the lead story in the Netherlands and would dominate headlines for months. Overnight, Stapel went from being a respected professor to perhaps the biggest con man in academic science.
Today, depending on your favoured futurist prophet, a kind of digital Elysium awaits us all. Over millennia, we have managed to unshackle ourselves from the burdens of time and space — from heat, cold, hunger, thirst, physical distance, mechanical effort — along a trajectory seemingly aimed at abstraction. Humanity’s collective consciousness is to be uploaded into the super-Matrix of the near future — or augmented into cyborg immortality, or out-evolved by self-aware machine minds. Whatever happens, the very meat of our physical being is to be left behind.
Except, of course, so far we remain thoroughly embodied. Flesh and blood. There is just us, slumped in our chairs, at our desks, inside our cars, stroking our smartphones and tablets. Peel back the layers of illusion, and what remains is not a brain in a jar — however much we might fear or hunger for this — but a brain within a body, as remorselessly obedient to that body’s urges and limitations as any paleolithic hunter-gatherer.
It’s a point that has been emphasised by much recent research into thought and behaviour. To quote from Thinking, Fast and Slow (2011) by Nobel laureate Daniel Kahneman, ‘cognition is embodied; you think with your body, not only with your brain’. Yet when it comes to culture’s cutting edge, there remains an overwhelming tendency to treat embodiment not as a central condition of being human that our tools ought to serve, but rather as an inconvenience to be eliminated.
One of my favourite accounts of our genius for unreality is a passage from the David Foster Wallace essay ‘E Unibus Pluram: Television and US Fiction’ (1990), in which he describes, with escalating incredulity, the layers of illusion involved in watching television.
First comes the artifice of performance. ‘Illusion (1) is that we’re voyeurs here at all,’ he writes, ‘the “voyees” behind the screen’s glass are only pretending ignorance. They know perfectly well we’re out there.’ Then there’s the capturing of these performances, ‘the second layer of glass, the lenses and monitors via which technicians and arrangers apply ingenuity to hurl the visible images at us’. And then there are the nestled layers of artificiality involved in scripting, devising and selling the scenarios to be filmed, which aren’t ‘people in real situations that do or even could go on without consciousness of Audience’.
After this comes the actual screen that we’re looking at: not what it appears to show, but its physical reality in ‘analog waves and ionised streams and rear-screen chemical reactions throwing off phosphenes in grids of dots not much more lifelike than Seurat’s own impressionist “statements” on perceptual illusion’.
But even this is only the warm-up. Because — ‘Good lord,’ he exclaims in climax — ‘the dots are coming out of our furniture, all we’re really spying on is our furniture; and our very own chairs and lamps and bookspines sit visible but unseen at our gaze’s frame…’
There’s a certain awe at our capacity for self-deception, here — if ‘deception’ is the right word for the chosen, crafted unrealities in play. But Foster Wallace’s ‘good lord’ is also a cry of awakening into uncomfortable truth.
It reminds me of the scene in the film The Matrix (1999) in which Neo has to decide between taking the blue pill that will preserve his illusions, and the red pill that will reveal what his world actually looks like. He swallows the red pill, gulps a glass of water, and is led into another room. Nothing happens, until he reaches out to touch a mirror. Its surface shivers, sticks to his hand, then begins to flow over his skin like liquid cement, rising along his arm and down his throat. Choking, he screams — and wakes up somewhere else, naked, bald, gasping for air inside a cocoon filled with fluid.
It’s the perfect contemporary depiction of an atavistic fear: that the world around us is a lie. However, The Matrix is also a suitably ambivalent fable for modern times — because its lies aren’t supernatural tricks, but the apotheosis of human ingenuity. And the problem isn’t so much illusion itself as who’s in charge. The baddies here are the evil machines. But so long as we’re the ones running the show, it’s sunglasses, guns, and anti-gravity kung fu all the way, which is an infinitely more enticing destiny than unenhanced actuality.
What the red pill promises isn’t actually the real world at all. It’s the Matrix as it ought to be, knowingly bent to serve our desires: a dream of omnipotence through disembodiment.
In a 2012 essay, under the delightful title ‘Arsebestos’, the American science fiction author Neal Stephenson rails against one particular aspect of contemporary contempt for the body: laziness. ‘Ergonomic swivel chairs,’ the essay argues, ‘are the next asbestos’. That is, our sedentary screen-staring habits are as great a lurking hazard for the 21st century as asbestos was for the 20th. The point, for Stephenson, is simple — ‘the reaper comes first for those who sit’ — as is the path we took there. ‘First, we all bought in to the idea that a normal job involved sitting in a chair, and then we found ourselves imprisoned by our own furniture…’
Once again, furniture is the foe. Equipped with increasingly smart digital systems, we now perform an entirely on-screen, virtual version of many hundreds of daily acts that used to take us out of our chairs and around the house, office or neighbourhood: It used to be that reading the mail required walking to the mailbox, slicing open envelopes, and other small but real physical exertions. Now we do it by twitching our fingers. Similar remarks could be made about talking on the phone (now replaced by Skype), filing or throwing away documents (now a matter of dragging icons around or, if that’s too strenuous, using command-key combinations), watching television (YouTube), and meeting with co-workers (videoconferencing).
Stephenson, who today does most of his work strolling at a steady pace on a treadmill desk, is making a point about the act of sitting itself: that too much of it is harmful and that, in an age of ever-more-nimble computing, it’s absurd for us to sit around all day staring at screens. Leaving aside the irony of an author known for his pioneering depictions of virtual worlds acting as a light-exercise guru, it’s sensible advice. For me, though, this is also a point about how we conceive of the relationship between ourselves and our tools.
We think, feel and work better when we’re at least a little mobile; we have better blood chemistry and concentration; we’re more creative and energetic, not to mention less prone to all manner of malaise. Why, then, is sedentary ease quite so attractive — even addictive? The answer lies in the vast, interlocking systems and assumptions of which our furniture is but the visible tip.
At the start of the 1990s, screens — whether televisions or computers, deployed for work or leisure — were bulky, static objects. For those on the move and lucky enough to employ a weight-lifting personal assistant, a Macintosh ‘portable’ cost $6,500 and weighed 7.2 kilos (close to 16 lbs). For everyone else, computing was a crude, solitary domain, inaccessible to anyone other than aficionados.
Today, just two decades on from Foster Wallace’s ‘E Unibus Pluram’, we inhabit an age of extraordinary intimacy with screen-based technologies. As well as our home and office computers, and the 40-inch-plus glories of our living room screens, few of us are now without the tactile, constant presence of at least one smart device in our pocket or bag.
These are tools that can feel more like extensions of ourselves than separate devices: the first thing we touch when we wake up in the morning, the last thing we touch before going to bed at night. Yet what they offer is a curious kind of intimacy — and the ‘us’ to which all this is addressed doesn’t often look or feel much like a living, breathing human being.
Instead, we are metaphorically dismembered by our tools: regarded by the sites and services we visit as ‘eyeballs’, as tapping and touching fingertips on keyboards and screens, as attention spans to be harnessed and data-rich profiles to be harvested. So far as most screens are concerned, we exist only in order to be transfixed by their gaze.
It’s as if we’ve mistaken a particular, contingent set of historical circumstances — that screens used to be extremely heavy, and the only way to use them was to sit down for an extended period of time — for a truth about human nature. Most of us work at desks in offices that wouldn’t look too strange to 18th-century clerks, and spend our leisure gazing at vast wall-mounted monitors while cradling second screens in the palms of our hands.
And it would be amusing if it weren’t so insidious: in public places, at work in a room full of colleagues, in our homes, our favourite activity remains hanging out with furniture.
There are, of course, those who seem to be trying to set us free from the shackles that make us, or at least encourage us to be, so indolent. Take one of the most futuristic pieces of kit to hit headlines in recent years: ‘Google Glass’, which contains a camera, microphone, internet connection, head-up display and touchpad — all housed within a miraculously sleek pair of spectacles. The launch event last year was a frenzy of hyper-kinetic bodily endeavour, with skydivers, abseilers and stunt BMX riders streaming the evidence of this awesomeness live from their own faces.
The very idea of the screen has shifted, here, from something you look at into something you look through — a digital veil overlaid on the world like a kind of auxiliary consciousness. This is the cyborg dream at its most imminently available: Google Glass (essentially digital eyewear) might be on sale by the end of this year. Could it mark an escape from the tyranny of furniture into a future of strolling productivity?
Yet it’s also a hyper-reality that isn’t half as human-centric as it might appear at first. Consider Google’s cheery demo video of what wearable computing might be able to do for me. Accompanied by an aspirational soft-rock soundtrack, I stretch my arms, yawn, and browse a plethora of icons corresponding to online services in the middle of my field of vision. I make myself some coffee, check the time and my diary via another few icons, then float the weather forecast into view while looking out of the window. Via another pop-up, a friend asks if I fancy meeting up; I dictate a reply and head out. Handily, as I approach the subway, my glasses tell me it isn’t running and plot out a walking route instead, complete with real-time map and sequential directions.
And so on. There’s a great deal of emphasis on how my information-poor perceptions might be enhanced by integration with the internet — and how all manner of errors and inefficiencies will be ironed out along the way. Yet there’s little sense of how my ability to think my own thoughts, explore my own feelings or enjoy my own space will be similarly served, enhanced or encouraged. What’s on offer is, effectively, a smartphone strapped to my face.
This is all very well if my aim is to become a more effective operator of technological systems. However, if computing itself isn’t the primary objective — if I’m more interested in fomenting ideas and memories than in broadcasting a video of my daily exploits — the notion of wearable computing suddenly starts to seem, in this incarnation at least, not so much an escape from the desk and the sofa as an intensification of all that they represent.
In fact, there’s a surprising amount of common ground between the visions of progress represented by ergonomic office chairs and by Google Glass. In each case, the focus is not on people as such, but ‘people’ as incarnated within certain kinds of digital system: data points within a vast grid whose every need can be anticipated and answered by more precisely targeted information.
Distance, difference, fleshy frailties: all these are to be erased, while actuality itself is useful only as grist to the mill of content-generation and sharing (video, photos, audio, status updates!). Similarly, rather than you — your whole, embodied being — what the world really cares about is ‘you’ as represented by your avatar, profile, inbox, image, account, uploads, shares, likes, dislikes, group memberships, search history, purchases, orders and subscriptions.
This is the deal. No matter where you are, whom you’re with, or what you’re doing, it only counts if the system itself is counting.
I was born in 1980, meaning I missed out on many of the opportunities afforded to subsequent generations of shy, tech-savvy teens. Compelled to rely on a parental landline and face-to-face awkwardness for communication with the opposite sex, my first attempt at asking someone out for a date ended sufficiently badly for me to spend the years 1994 to 1996, inclusive, in a near-monastic state. I would have given a great deal for the opportunity to type my way into others’ affections, or simply to browse the social world from a safe distance.
What I longed for was something that I could understand. Other people were messy, strange creatures who played games (with rules that they didn’t bother to explain). This is one reason social media have proved so stupendously successful: they provide an enviable and historically unprecedented sense of control over friendships, relationships, interests, and ambitions. It’s all there to be browsed and selected, to be liked and commented upon.
The defining illusion of television is escape — the belief that burning hour after hour in front of the TV screen offers a refuge from the mundane world, even while it ever-more-deeply embeds us in the embrace of our sofas. But the defining illusion of interactive screens is agency. Suffused with feedback, an entire universe of data at our fingertips, we’re inclined to confuse knowledge with control, and information with comprehension. And, like my hypothetical teenage self, we’re grateful to be given the chance.
In a sense, it all comes back to what Foster Wallace labelled ‘Audience’, with a capital ‘A’: the transforming force of others’ simulated presence, and our presence simulated right back at them. Online, we are simultaneously author and audience, not to mention our own full-time publicist and agent. And we are lavishly talented at playing these roles. We are — don’t get me wrong — extremely lucky to be blessed by this apotheosis of human imagining and ingenuity.
Yet it’s also a heavy burden to heft — and all the more so for the infinite, weightless capacities of the medium within which we do so. If there’s only one lesson we should take from Kahneman et al., it is that every human illusion, from consciousness up, takes effort to maintain — and too much performance in one area can leave the rest of ourselves stretched thin.
Consider the grand performance of incarnating ourselves online. It takes place courtesy of screens, wires, radio waves, incandescent dots and colours, together with the apparatus of content creation itself, from keyboards and cameras to website templates. Yet for it all to hang together, we must privilege these illusions over the merely real world surrounding us: the rooms, shelves, sofas, streets and people who uniquely share our time and place. We play, we pretend — quite brilliantly — and in return we are gifted mastery, barely sensing the embrace of other assumptions.
Perhaps that’s why the American technology journalist and author Paul Miller decided to live ‘off-line’ for a year. In the essay ‘Project Glass and the Epic History of Wearable Computers’ for The Verge magazine, he argued that ‘much of what passes for innovation these days is enclosed inside a very small space: a better way to check-in, or upload a photo, or manage your friend list’. This is the narrow zone within which every vision of progress is a further step towards data-led disembodiment: more content, more connection, faster and more ubiquitous computing, brimming the screens in our pockets and the overlays in front of our eyes. It’s an intoxicating offering. But it’s also a steady constriction of what it means to be us.
Is there another way? I would argue that there is, and that much of it lies apart from the maelstrom of ‘Audience’. If it means anything, intimacy is surely about what we are not willing to share: those things closest to us, both literally and metaphorically, through which we uniquely define ourselves.
Indeed, there are forms of enhancement that are about thickening our presence in a particular place at a particular moment in time, not turning our backs on reality, and that help us to give a certain quality of time or attention to those around us, and to ourselves. Similarly, there are ways of wearing our own tools more lightly and of using them to turn us more passionately towards reality — not to mention the intractable physicality of these self-same tools, which are neither massless nor placeless, no matter how many claims they may make to the contrary.
Ultimately, there is a symmetry between treating ourselves as disembodied and seeing our machines as a weightless other world. In each case, chains of true cause and effect are replaced by a kind of magical thinking, and the gifts of human illusion cross over into delusion.
‘Any sufficiently advanced technology,’ Arthur C Clarke wrote in 1973, ‘is indistinguishable from magic.’ It’s one of science fiction’s most famous maxims — and I’ve always hated it. Assuming that there’s no such thing as ‘real’ magic, and that what we mean when we talk about magic is someone being fooled by someone else, what is he actually saying: that, past a certain point, all we can do is gawp and applaud at the end of the show?
This won’t do. All the magic, after all, belongs not to these tools, but to us: in the stories we tell, the illusions we share. It’s ours, and we can withhold it if we see fit — refuse to clap, peek behind the curtain, tell the performers that we know there’s a trapdoor somewhere onstage. You don’t have to believe in magic to love it.
Quite the reverse, in fact. Just like belonging to any ‘Audience’, it isn’t proper fun unless everyone has tacitly agreed the rules. If only one side knows what’s going on, it’s no longer entertainment: it’s a con trick, and a price is being extracted.
This is our future. We’re playing better, brighter games than ever — and bringing them ever closer to the place where we hold ourselves. It’s terrific, and I’m thrilled to be on board for the ride. More than ever, though, we cannot afford to believe in magic, or to overlook the effortful divide between us as we actually are and ‘us’ as we appear on screen. Because the screen is only the beginning — and it will be a sad thing indeed if our best model for humanity’s self-invention remains a chunk of furniture.
By MAGGIE KOERTH-BAKER | New York Times, January 25, 2013
In 1999, government workers in Mexico took their last officially sanctioned siesta. Until then, it was normal for clerks and bureaucrats to take two- or three-hour breaks in the middle of the workday. Many of them went home for lunch, took a nap, then returned to their offices, working into the evening to make up for lost time. The siesta used to be commonplace in Spanish-speaking countries, but the tradition was already waning as Latin America’s economies developed throughout the ’80s and ’90s. As companies and governments modernized, they adopted the same schedules as their counterparts in other countries. Mexico failed to properly anticipate the effect this would have on energy consumption.
By shifting work from the sweltering afternoon into cooler evening hours, the siesta provided a kind of de facto air-conditioning, says Elizabeth Shove, a professor of sociology at Lancaster University in England. Getting rid of siestas makes people more dependent, during the hottest part of the day, on energy-intensive forms of cooling. Air-conditioning use in Mexico has skyrocketed since the siesta ban. In 1995, 10 percent of Mexican homes had A.C. By 2011, that figure had grown to 80 percent.
Shove studies the cultural and historical factors underlying sustainable living. Historically, she says, societies developed methods of dealing with their local climates, and those tools and behaviors became ingrained cultural customs. As the world becomes more interconnected, these customs are changing, and so is the definition of something as elemental as comfort.
That’s right: there is no universal definition of comfort, especially as it relates to temperature. Both Shove and Susan Mazur-Stommen, of the American Council for an Energy-Efficient Economy, told me two decades’ worth of research data clearly demonstrate that different people experience the same temperature differently. People report being comfortable all over the thermostat, from 43 degrees Fahrenheit all the way up to 86.
“What people count as comfortable is what they get used to,” Shove says, and this becomes obvious when you examine different societies side by side. In 1996, Harold Wilhite, director at the University of Oslo’s Center for Development and Environment, published a paper comparing energy-use cultural norms in Oslo, Norway, and Fukuoka, Japan. The two cities are similar in population size, level of industrial development, spending power and average home size. But southern Japan is warmer than southern Norway, and Japanese culture is very different from Norwegian culture.
Wilhite found that Norwegians placed emphasis on something they call koselighet — which roughly translates as “coziness,” but with certain social connotations. Part of koselighet is making your home a place other people want to visit and spend time in. In Oslo, that means making sure nobody thinks your house is cold. Ever. Half the households Wilhite sampled didn’t turn the thermostat down before bed. Nearly 30 percent kept it turned up even when they weren’t home. In Fukuoka, where winters are comparatively mild, there wasn’t a cultural objection to entering cold rooms. In fact, homes in southern Japan usually didn’t have central heating at all. On chilly nights, families gathered on heated rugs, or around a kotatsu — a table with a built-in heating element.
Koselighet also concerns the quality of light. The Norwegians that Wilhite interviewed told him that ceiling lights felt cold. Not one subject used them in the living room, where instead they had incandescent table and floor lamps to create little golden pools throughout the room. On average, Oslo living rooms had 9.6 light bulbs. Meanwhile, in Fukuoka, the living rooms had an average of only 2.5 light bulbs, mostly more energy-efficient fluorescents fitted into the ceiling. There, people prized visibility, and the color of the fluorescent light had no temperature connotation at all.
But Wilhite also noted that cultural understandings of comfort are changing. Even back in 1996, he reported that people in Fukuoka were buying more space heaters, allowing family members to warm up by their lonesome. And they were buying air-conditioners, something that hadn’t been normal, even in a city with hot summers. Although many of Wilhite’s Japanese subjects believed A.C. units to be unhealthful and unpleasant, they were starting to expect their presence in any prosperous, modern home — a byproduct of globalization, according to Shove and other researchers.
Along with air-conditioning, globalization has also helped popularize something called Ashrae 55: a building standard created by the American Society of Heating, Refrigerating and Air-Conditioning Engineers to determine the ideal temperature for large buildings. The standard, which has set thermostats across the globe, is hardly culture-free. It’s based on Fanger’s Comfort Equation, a mathematical model developed in Denmark and the United States in the 1960s and ’70s, which seeks to make a very specific worker comfortable: a man wearing a full business suit.
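To see where that very specific worker enters the math, it helps to sketch the model’s usual top-level form. The equation below is the standard predicted-mean-vote (PMV) formulation as it is commonly stated in the comfort-engineering literature, offered only as a rough sketch rather than anything quoted in the research described above:

\[
\mathrm{PMV} = \bigl(0.303\, e^{-0.036\, M} + 0.028\bigr)\, L\bigl(M,\ I_{cl},\ t_a,\ \bar{t}_r,\ v_a,\ p_a\bigr)
\]

Here \(M\) is the occupant’s metabolic rate, \(I_{cl}\) the insulation of the occupant’s clothing, and \(L\) the net thermal load on the body given the air temperature \(t_a\), the mean radiant temperature \(\bar{t}_r\), the air speed \(v_a\) and the humidity \(p_a\). The man in the full business suit is baked in through \(I_{cl}\): a suit is rated at roughly 1.0 clo, versus about 0.5 clo for light summer clothing, so the “comfortable” set-point the standard produces assumes everyone in the building is dressed that warmly.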
Consider the impact on office workers in hotter countries, where a thobe or a dashiki might be perfectly acceptable business attire. They might start dressing differently, which makes them less comfortable outside and at home, which in turn makes them more likely to seek out air-conditioning. It also affects women. “In spring, it’s socially expected that women will wear thinner blouses, skirts, open-toed shoes,” Mazur-Stommen says. “But the building temperature is set for men, who are assumed to be wearing long-sleeved shirts and closed-toed shoes year-round. If everyone just dressed appropriately for the weather, we wouldn’t have to heat or cool the building as much.”
Fortunately, the same forces that drive people to consume more can also goad them toward sustainability. Wesley Schultz, a professor of psychology at California State University, San Marcos, has spent the last decade studying why people choose to be more energy-efficient — turning off lights when they aren’t in the room, for instance, or buying Energy Star appliances. Over and over, he has found that the most powerful force for positive change is to tell people how much energy their neighbors are using, and to make sure people know that those neighbors value energy efficiency. People in the United States don’t think this form of peer pressure works, Schultz told me. “But when we actually study them,” he said, “we see they’re wrong.”
Not all of us ache to ride a rocket or sail the infinite sea. Yet as a species we’re curious enough, and intrigued enough by the prospect, to help pay for the trip and cheer at the voyagers’ return. Yes, we explore to find a better place to live or acquire a larger territory or make a fortune. But we also explore simply to discover what’s there.
“No other mammal moves around like we do,” says Svante Pääbo, a director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, where he uses genetics to study human origins. “We jump borders. We push into new territory even when we have resources where we are. Other animals don’t do this. Other humans either. Neanderthals were around hundreds of thousands of years, but they never spread around the world. In just 50,000 years we covered everything. There’s a kind of madness to it. Sailing out into the ocean, you have no idea what’s on the other side. And now we go to Mars. We never stop. Why?”
Why indeed? Pääbo and other scientists pondering this question are themselves explorers, walking new ground. They know that they might have to backtrack and regroup at any time. They know that any notion about why we explore might soon face revision as their young disciplines—anthropology, genetics, developmental neuropsychology—turn up new fundamentals. Yet for those trying to figure out what makes humans tick, our urge to explore is irresistible terrain. What gives rise to this “madness” to explore? What drove us out from Africa and on to the moon and beyond?
If an urge to explore rises in us innately, perhaps its foundation lies within our genome. In fact there is a mutation that pops up frequently in such discussions: a variant of a gene called DRD4, which helps control dopamine, a chemical brain messenger important in learning and reward. Researchers have repeatedly tied the variant, known as DRD4-7R and carried by roughly 20 percent of all humans, to curiosity and restlessness. Dozens of human studies have found that 7R makes people more likely to take risks; explore new places, ideas, foods, relationships, drugs, or sexual opportunities; and generally embrace movement, change, and adventure. Studies in animals simulating 7R’s actions suggest it increases their taste for both movement and novelty. (Not incidentally, it is also closely associated with ADHD.)
Most provocatively, several studies tie 7R to human migration. The first large genetic study to do so, led by Chuansheng Chen of the University of California, Irvine in 1999, found 7R more common in present-day migratory cultures than in settled ones. A larger, more statistically rigorous 2011 study supported this, finding that 7R, along with another variant named 2R, tends to be found more frequently than you would expect by chance in populations whose ancestors migrated longer distances after they moved out of Africa. Neither study necessarily means that the 7R form of the gene actually made those ancestors especially restless; you’d have to have been around back then to test that premise with certainty. But both studies support the idea that a nomadic lifestyle selects for the 7R variant.
Another recent study backs this up. Among Ariaal tribesmen in Africa, those who carry 7R tend to be stronger and better fed than their non-7R peers if they live in nomadic tribes, possibly reflecting better fitness for a nomadic life and perhaps higher status as well. However, 7R carriers tend to be less well nourished if they live as settled villagers. The variant’s value, then, like that of many genes and traits, may depend on the surroundings. A restless person may thrive in a changeable environment but wither in a stable one; likewise with any genes that help produce the restlessness.
So is 7R the explorer’s gene or adventure gene, as some call it? Yale University evolutionary and population geneticist Kenneth Kidd thinks that overstates its role. Kidd speaks with special authority here, as he was part of the team that discovered the 7R variant 20 years ago. Like other skeptics, he thinks that many of the studies linking 7R to exploratory traits suffer from mushy methods or math. He notes too that the pile of studies supporting 7R’s link with these traits is countered by another stack contradicting it.
“You just can’t reduce something as complex as human exploration to a single gene,” he says, laughing. “Genetics doesn’t work that way.”
Better, Kidd suggests, to consider how groups of genes might lay a foundation for such behavior. On this he and most 7R advocates agree: Whatever we ultimately conclude about 7R’s role in driving restlessness, no one gene or set of genes can hardwire us for exploration. More likely, different groups of genes contribute to multiple traits, some allowing us to explore, and others, 7R quite possibly among them, pressing us to do so. It helps, in short, to think not just of the urge to explore but of the ability, not just the motivation but the means. Before you can act on the urge, you need the tools or traits that make exploration possible.
Fortunately for me, I had to wander only a floor down from Kidd’s office to find someone who studies such tools: developmental and evolutionary geneticist Jim Noonan. His research focuses on the genes that build two key systems: our limbs and our brains. “So I’m biased,” he says, when I press him about what makes us explorers. “But if you want to boil this down, I’d say our ability to explore comes from those two systems.”
The genes that build our human limbs and brains, Noonan says, are pretty much the same as those that build the same parts of other hominids and apes. Each species’ limbs and brains end up different largely because the construction projects directed by these developmental genes start and stop at different times. In humans the result is legs and hips that let us walk long distances; clever, clever hands; and an even cleverer brain that grows far more slowly but much larger than other ape brains. This triad separates us from other apes and, in small but vital developmental details, from other hominids.
Together, says Noonan, these differences compose a set of traits uniquely suited for creating explorers. We have great mobility, extraordinary dexterity, “and, the big one, brains that can think imaginatively.” And each amplifies the others: Our conceptual imagination greatly magnifies the effect of our mobility and dexterity, which in turn stirs our imaginations further.
“Think of a tool,” says Noonan. “If you can use it well and have imagination, you think of more applications for it.” As you think of more ways to use the tool, you imagine more goals it can help you accomplish.
This feedback loop, Noonan points out, helped empower the great Anglo-Irish explorer Ernest Shackleton—and saved him when he and his crew were stranded on Elephant Island in 1916. After polar ice crushed their ship, Shackleton, 800 miles from anywhere with 27 exhausted men, little food, and three small open boats, conceived an insanely ambitious sea voyage. Using a handful of basic tools to modify a 22-foot lifeboat, the James Caird (another tool), for a task absurdly beyond its original design, he gathered his navigational instruments and five of his men and executed a trip that few would dare imagine. He reached South Georgia, then returned to Elephant Island to rescue the rest of the crew.
Shackleton’s adventure shows starkly, says Noonan, a dynamic that has driven human progress and exploration from the start: As we leverage dexterity with imagination, we create advantages “that select for both traits.”
Noonan makes a good case that our big brain and clever hands build a capacity for imagination. Alison Gopnik, a child-development psychologist at the University of California, Berkeley, says humans also possess another, less obvious advantage that fosters that imaginative capacity: a long childhood in which we can exercise our urge to explore while we’re still dependent on our parents. We stop nursing roughly a year and a half sooner than gorillas and chimps, and then take a far slower path to puberty—about a decade, compared with the three to five years typical for gorillas and chimps. Dental evidence from Neanderthals suggests they too grew up faster than we do. As a result, we have an unmatched period of protected “play” in which to learn exploration’s rewards.
Many animals play, says Gopnik. Yet while other animals play mainly by practicing basic skills such as fighting and hunting, human children play by creating hypothetical scenarios with artificial rules that test hypotheses. Can I build a tower of blocks as tall as I am? What’ll happen if we make the bike ramp go even higher? How will this schoolhouse game change if I’m the teacher and my big brother is the student? Such play effectively makes children explorers of landscapes filled with competing possibilities.
By ALEX STONE | The New York Times Sunday Review, Aug. 18, 2012
Some years ago, executives at a Houston airport faced a troubling customer-relations issue. Passengers were lodging an inordinate number of complaints about the long waits at baggage claim. In response, the executives increased the number of baggage handlers working that shift. The plan worked: the average wait fell to eight minutes, well within industry benchmarks. But the complaints persisted.
Puzzled, the airport executives undertook a more careful, on-site analysis. They found that it took passengers a minute to walk from their arrival gates to baggage claim and seven more minutes to get their bags. Roughly 88 percent of their time, in other words, was spent standing around waiting for their bags.
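That 88 percent is simply the time at the carousel expressed as a share of the total time after leaving the gate:

\[
\frac{7\ \text{minutes waiting}}{1\ \text{minute walking} + 7\ \text{minutes waiting}} = \frac{7}{8} \approx 88\%
\]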
So the airport decided on a new approach: instead of reducing wait times, it moved the arrival gates away from the main terminal and routed bags to the outermost carousel. Passengers now had to walk six times longer to get their bags. Complaints dropped to near zero.
This story hints at a general principle: the experience of waiting, whether for luggage or groceries, is defined only partly by the objective length of the wait. “Often the psychology of queuing is more important than the statistics of the wait itself,” notes the M.I.T. operations researcher Richard Larson, widely considered to be the world’s foremost expert on lines. Occupied time (walking to baggage claim) feels shorter than unoccupied time (standing at the carousel). Research on queuing has shown that, on average, people overestimate how long they’ve waited in a line by about 36 percent.
This is also why one finds mirrors next to elevators. The idea was born during the post-World War II boom, when the spread of high-rises led to complaints about elevator delays. The rationale behind the mirrors was similar to the one used at the Houston airport: give people something to occupy their time, and the wait will feel shorter. With the mirrors, people could check their hair or slyly ogle other passengers. And it worked: almost overnight, the complaints ceased.
The drudgery of unoccupied time also accounts in large measure for the popularity of impulse-buy items, which earn supermarkets about $5.5 billion annually. The tabloids and packs of gum offer relief from the agony of waiting.
Our expectations further affect how we feel about lines. Uncertainty magnifies the stress of waiting, while feedback in the form of expected wait times and explanations for delays improves the tenor of the experience.
And beating expectations buoys our mood. All else being equal, people who wait less than they anticipated leave happier than those who wait longer than expected. This is why Disney, the universally acknowledged master of applied queuing psychology, overestimates wait times for rides, so that its guests — never customers, always guests — are pleasantly surprised when they ascend Space Mountain ahead of schedule.
This is a powerful ploy because our memories of a queuing experience, to use an industry term, are strongly influenced by the final moments, according to research conducted by Ziv Carmon, a professor of marketing at the business school Insead, and the behavioral economist Daniel Kahneman. When a long wait ends on a happy note — the line speeds up, say — we tend to look back on it positively, even if we were miserable much of the time. Conversely, if negative emotions dominate in the final minutes, our retrospective audit of the process will skew toward cynicism, even if the experience as a whole was relatively painless.
Professors Carmon and Kahneman have also found that we are more concerned with how long a line is than how fast it’s moving. Given a choice between a slow-moving short line and a fast-moving long one, we will often opt for the former, even if the waits are identical. (This is why Disney hides the lengths of its lines by wrapping them around buildings and using serpentine queues.)
Perhaps the biggest influence on our feelings about lines, though, has to do with our perception of fairness. When it comes to lines, the universally acknowledged standard is first come first served: any deviation is, to most, a mark of iniquity and can lead to violent queue rage. Last month a man was stabbed at a Maryland post office by a fellow customer who mistakenly thought he’d cut in line. Professor Larson calls these unwelcome intrusions “slips” and “skips.”
The demand for fairness extends beyond mere self-interest. Like any social system, lines are governed by an implicit set of norms that transcend the individual. A study of fans in line for U2 tickets found that people are just as upset by slips and skips that occur behind them, and thus don’t lengthen their wait, as they are by those in front of them.
Surveys show that many people will wait twice as long for fast food, provided the establishment uses a first-come-first-served, single-queue ordering system as opposed to a multi-queue setup. Anyone who’s ever had to choose a line at a grocery store knows how unfair multiple queues can seem; invariably, you wind up kicking yourself for not choosing the line next to you moving twice as fast.
But there’s a curious cognitive asymmetry at work here. While losing to the line at our left drives us to despair, winning the race against the one to our right does little to lift our spirits. Indeed, in a system of multiple queues, customers almost always fixate on the line they’re losing to and rarely the one they’re beating.
Fairness also dictates that the length of a line should be commensurate with the value of the product or service for which we’re waiting. The more valuable it is, the longer one is willing to wait for it. Hence the supermarket express line, a rare, socially sanctioned violation of first come first served, based on the assumption that no reasonable person thinks a child buying a candy bar should wait behind an old man stocking up on provisions for the Mayan apocalypse.
Americans spend roughly 37 billion hours each year waiting in line. The dominant cost of waiting is an emotional one: stress, boredom, that nagging sensation that one’s life is slipping away. The last thing we want to do with our dwindling leisure time is squander it in stasis. We’ll never eliminate lines altogether, but a better understanding of the psychology of waiting can help make those inevitable delays that inject themselves into our daily lives a touch more bearable. And when all else fails, bring a book.
By TIM PARKS | The New York Review of Books, July 20, 2012
Let’s talk about money. In his history of world art, E.H. Gombrich mentions a Renaissance artist whose uneven work was a puzzle, until art historians discovered some of his accounts and compared incomes with images: paid less, he worked carelessly; well remunerated, he excelled. So, given the decreasing income of writers over recent years—one thinks of the sharp drop in payments for freelance journalism and again in advances for most novelists, partly to do with a stagnant market for books, partly to do with the liveliness and piracy of the Internet—are we to expect a corresponding falling off in the quality of what we read? Can the connection really be that simple? On the other hand, can any craft possibly be immune from a relationship with money?
Asked to write blogs for other sites, some with much larger audiences, I chose to stay with the New York Review, partly out of an old loyalty and partly because they pay me better. Would I write worse if I wrote for a more popular site for less money? Or would I write better because I was excited by the larger number of people following the site? And would this larger public then lead to my making more money some other way, say, when I sold a book to an American publisher? And if that book did make more money further down the line, having used the blog as a loss leader, does that mean the next book would be better written? Or do I always write as well or as badly as I anyway do regardless of payment, so that these monetary transactions and the decisions that go with them affect my bank balance and anxiety levels, but not the quality of what I do?
Let’s try to get some sense of this. When they are starting out, writers rarely make anything at all for what they do. I wrote seven novels over a period of six years before one was accepted for publication. Rejected by some twenty publishers, that seventh eventually earned me an advance of £1,000 for world rights. Evidently, I wasn’t working for money. What then? Pleasure? I don’t think so; I remember I was on the point of giving up when that book was accepted. I’d had enough. However much I enjoyed trying to get the world into words, the rejections were disheartening; and the writing habit was keeping me from a “proper” career elsewhere.
I was writing, I think, in my early twenties, to prove to myself that I could write, that I could become part of the community of writers, and it seemed to me I could not myself be the final judge of that question. To prove I could write, that I could put together in words an interesting take on experience, I needed the confirmation of a publisher’s willingness to invest in me, and I needed readers, hopefully serious readers, and critics. For me, that is, a writer was not just someone who writes, but someone published, read and, yes, praised. Why I had set my heart on becoming that person remains unclear.
Today, of course, aspiring writers go to creative writing schools and so already have feedback from professionals. Many of them will self-publish short stories online and receive comments from unknown readers through the web. Yet I notice on the few occasions when I have taught creative writing courses that this encouragement, professional or otherwise, is never enough. Students are glad to hear you think they can write, but they need, as I did, the confirmation of a publishing contract, which involves money. Not that they’re calculating how much money, not at this point. They’re thinking of a token of recognition—they want to exist, as writers.
Yet as soon as one has left the starting line, money matters. Of course it’s partly a question of making ends meet; but there must be few novelists who believe they will live entirely from their writing as soon as a first novel is published. No, the money is important aside from a question of need because it indicates how much the publisher is planning to invest in you, how much recognition they will afford you, how much they will push your book, getting you that attention you crave, and of course the level of the advance will tell you where you stand in relation to other authors. If the self-esteem that comes with “being a writer” can only be conferred when a publisher is willing to invest, it follows that the more they invest the more self-esteem they afford.
Is this a healthy state of affairs? Clearly we are far away from the minor Renaissance painter who coolly calibrates his efforts in relation to price, unflustered by concerns about his self-image or reputation in centuries to come. In his masterpiece Jakob von Gunten, Robert Walser has his young alter ego commiserate with his artist brother and question how a person can ever be at ease if his or her mental well-being depends on the critical judgment of others.
Paradoxically, then, almost the worst thing that can happen to writers, at least if it’s the quality of their work we’re thinking about, is to receive, immediately, all the money and recognition they want. At this point all other work, all other sane and sensible economic relation to society, is rapidly dropped, and the said author is now absolutely reliant on the world’s response to his or her books, and at the same time most likely surrounded by people who will be building their own careers on his or her triumphant success, all eager to reinforce intimations of grandeur. An older person, long familiar with the utter capriciousness of the world’s response to art, might deal with such an enviable situation with aplomb. For most of us it would be hard not to grow presumptuous and self-satisfied, or alternatively (but perhaps simultaneously) over-anxious to satisfy the expectations implied by six-figure payments. An interesting project, if any academic has the stomach to face the flak, would be to analyze the quality of the work of those young literary authors paid extravagant advances in the 1980s and 1990s; did their writing and flair, so far as these things can be judged, fall off along with the cash? For how long did the critical world remain in denial that their new darling was not producing the goods? Celebrity almost always outlives performance.
But if too much money can be damaging, dribs and drabs are not going to get the best out of a writer either. Our persistent romantic desire that the author, or at least his or her work, be somehow detached from the practicalities of money, together with the piety that insists that novels and poems be analyzed quite separately from the lives of their creators, has meant that there have been very few studies of the relationship between a writer’s work and income. Randall Jarrell’s 1965 introduction to Christina Stead’s masterpiece, The Man Who Loved Children (1940), is a rare exception; seeking to recover Stead’s writing for a new generation, Jarrell suggested that the Australian writer’s failure to find a regular publisher—which he ascribed partly to her writing such wonderfully different novels, partly to her political position, and partly to her moving around so much from one country to another—eventually had a detrimental effect on her writing. Despite having written a dozen highly praised novels, she had no community of reference, no group of critics who felt obliged to track her development from one work to the next, and, as a result, poor sales, to the point that she was eventually obliged to take in typing work to survive. Her profound sense of frustration and disillusionment began to color the writing itself, making it shriller and more self-indulgent, something Jarrell feels would not have happened had her publishing circumstances been different.
The key idea here it seems to me is that of a community of reference. Writers can deal with a modest income if they feel they are writing toward a body of readers who are aware of their work and buy enough of it to keep the publisher happy. But the nature of contemporary globalization, with its tendency to unify markets for literature, is such that local literary communities are beginning to weaken, while the divide between those selling vast quantities of books worldwide and those selling very few and mainly on home territory is growing all the time.
It would be intriguing here to run a comparison of the incomes and work of writers like U. R. Ananthamurthy, an Indian who has continued to write in his native Kannada language and whose translated fiction, when you can get hold of it, has all the difficulty and rewards of the genuinely exotic, and the far more familiar Indians writing in English (Salman Rushdie, Vikram Seth, and others) who have used their energy and imagination to present a version of India to the West where exoticism is at once emphasized and made easy. Ananthamurthy, in his eighties, has worked steadily for decades, presumably on a fairly modest income; those more celebrated names, working in the glamor of huge advances and writing to the whole world rather than any particular community, find themselves constantly obliged to risk burnout in novels whose towering ambition might somehow justify their global reputation.
But for every Ananthamurthy there will be scores of local writers who did not find sufficient income to continue; for every Rushdie there will be hundreds whose reputation never reached that giddy orbit where a certain kind of literature can survive without the sustenance of a particular community of readers.
For decades, the U.S. government banned medical studies of the effects of LSD. But for one longtime, elite researcher, the promise of mind-blowing revelations was just too tempting.
At 9:30 in the morning, an architect and three senior scientists—two from Stanford, the other from Hewlett-Packard—donned eyeshades and earphones, sank into comfy couches, and waited for their government-approved dose of LSD to kick in. From across the suite and with no small amount of anticipation, Dr. James Fadiman spun the knobs of an impeccable sound system and unleashed Beethoven’s “Symphony No. 6 in F Major, Op. 68.” Then he stood by, ready to ease any concerns or discomfort.
For this particular experiment, the couched volunteers had each brought along three highly technical problems from their respective fields that they’d been unable to solve for at least several months. In approximately two hours, when the LSD became fully active, they were going to remove the eyeshades and earphones, and attempt to find some solutions. Fadiman and his team would monitor their efforts, insights, and output to determine if a relatively low dose of acid—100 micrograms to be exact—enhanced their creativity.
It was the summer of ’66. And the morning was beginning like many others at the International Foundation for Advanced Study, an inconspicuously named, privately funded facility dedicated to psychedelic drug research, which was located, even less conspicuously, on the second floor of a shopping plaza in Menlo Park, Calif. However, this particular morning wasn’t going to go like so many others had during the preceding five years, when researchers at IFAS (pronounced “if-as”) had legally dispensed LSD. Though Fadiman can’t recall the exact date, this was the day, for him at least, that the music died. Or, perhaps more accurately for all parties involved in his creativity study, it was the day before.
At approximately 10 a.m., a courier delivered an express letter to the receptionist, who in turn quickly relayed it to Fadiman and the other researchers. They were to stop administering LSD, by order of the U.S. Food and Drug Administration. Effective immediately. Dozens of other private and university-affiliated institutions had received similar letters that day.
That research centers once were permitted to explore the further frontiers of consciousness seems surprising to those of us who came of age when a strongly enforced psychedelic prohibition was the norm. Those centers seem not unlike the last generation of children’s playgrounds, mostly eradicated during the ’90s, that were higher and riskier than today’s soft-plastic labyrinths. (Interestingly, a growing number of child psychologists now defend these playgrounds, saying they provided kids with both thrills and profound life lessons that simply can’t be had close to the ground.)
When the FDA’s edict arrived, Fadiman was 27 years old, IFAS’s youngest researcher. He’d been a true believer in the gospel of psychedelics since 1961, when his old Harvard professor Richard Alpert (now Ram Dass) dosed him with psilocybin, the magic in the mushroom, at a Paris café. That day, his narrow, self-absorbed thinking had fallen away like old skin. People would live more harmoniously, he’d thought, if they could access this cosmic consciousness. Then and there he’d decided his calling would be to provide such access to others. He migrated to California (naturally) and teamed up with psychiatrists and seekers to explore how and if psychedelics in general—and LSD in particular—could safely augment psychotherapy, addiction treatment, creative endeavors, and spiritual growth. At Stanford University, he investigated this subject at length through a dissertation—which, of course, the government ban had just dead-ended.
Couldn’t they comprehend what was at stake? Fadiman was devastated and more than a little indignant. However, even if he’d wanted to resist the FDA’s moratorium on ideological grounds, practical matters made compliance impossible: Four people who’d never been on acid before were about to peak.
“I think we opened this tomorrow,” he said to his colleagues.
And so one orchestra after the next wove increasingly visual melodies around the men on the couch. Then shortly before noon, as arranged, they emerged from their cocoons and got to work.
Over the course of the preceding year, IFAS researchers had dosed a total of 22 other men for the creativity study, including a theoretical mathematician, an electronics engineer, a furniture designer, and a commercial artist. By including only those whose jobs involved the hard sciences (the lack of a single female participant says much about mid-century career options for women), they sought to examine the effects of LSD on both visionary and analytical thinking. Such a group offered an additional bonus: Anything they produced during the study would be subsequently scrutinized by departmental chairs, zoning boards, review panels, corporate clients, and the like, thus providing a real-world, unbiased yardstick for their results.
In surveys administered shortly after their LSD-enhanced creativity sessions, the study volunteers, some of the best and brightest in their fields, sounded like tripped-out neopagans at a backwoods gathering. Their minds, they said, had blossomed and contracted with the universe. They’d beheld irregular but clean geometrical patterns glistening into infinity, felt a rightness before solutions manifested, and even shapeshifted into relevant formulas, concepts, and raw materials.
But here’s the clincher. After their 5HT2A neural receptors simmered down, they remained firm: LSD absolutely had helped them solve their complex, seemingly intractable problems. And the establishment agreed. The 26 men unleashed a slew of widely embraced innovations shortly after their LSD experiences, including a mathematical theorem for NOR gate circuits, a conceptual model of a photon, a linear electron accelerator beam-steering device, a new design for the vibratory microtome, a technical improvement of the magnetic tape recorder, blueprints for a private residence and an arts-and-crafts shopping plaza, and a space probe experiment designed to measure solar properties. Fadiman and his colleagues published these jaw-dropping results and closed up shop.
At a congressional subcommittee hearing that year, Sen. Robert F. Kennedy grilled FDA regulators about their ban on LSD studies: “Why, if they were worthwhile six months ago, why aren’t they worthwhile now?” For him, the ban was personal, too: His wife, Ethel, had received LSD-augmented therapy in Vancouver. “Perhaps to some extent we have lost sight of the fact that it”—Sen. Kennedy was referring specifically to LSD here—“can be very, very helpful in our society if used properly.”
His objection did nothing to slow the panic that surged through halls of government. The state of California outlawed LSD in the fall of 1966, and was followed in quick succession by numerous other states and then the federal government. In 1970, the federal Controlled Substances Act sorted commonly known drugs into categories, or schedules. “Schedule 1” drugs, which included LSD and psilocybin, were deemed to have a “significant potential for abuse” and “no recognized medicinal value.” Because Schedule 1 drugs were seen as the most dangerous of the bunch, those who used, manufactured, bought, possessed, or distributed them faced the harshest penalties.
By waging war on psychedelics and their aficionados, the U.S. government not only halted promising studies but also effectively shoved open discourse about these substances to the countercultural margins. And so conventional wisdom continues to hold that psychedelics lead to one of a few outcomes: a psychotic break, a glimpse of God, or a visually stunning but fairly mindless journey. But no way would they help with practical, results-based thinking. (That’s what Ritalin is for; just ask any Ivy League undergrad.)
Still, intriguing hints suggest that, despite stigma and risk of incarceration, some of our better innovators continued to feed their heads—and society as a whole reaped the benefits. Francis Crick confessed that he was tripping the first time he envisioned the double helix. Steve Jobs called LSD “one of the two or three most important things” he’d experienced. And Bill Wilson claimed it helped to facilitate breakthroughs of a more soulful variety: Decades after co-founding Alcoholics Anonymous, he tried LSD, said it tuned him in to the same spiritual awareness that made sobriety possible, and pitched its therapeutic use—unsuccessfully—to the AA board. So perhaps the music never really died. Perhaps it’s more accurate to say instead that the music got much softer. And the ones who were still listening had to pretend they couldn’t hear anything at all.
By JOHN MONTEROSSO and BARRY SCHWARTZ l The New York Times Sunday Review July 27, 2012
ARE you responsible for your behavior if your brain “made you do it”?
Often we think not. For example, research now suggests that the brain’s frontal lobes, which are crucial for self-control, are not yet mature in adolescents. This finding has helped shape attitudes about whether young people are fully responsible for their actions. In 2005, when the Supreme Court ruled that the death penalty for juveniles was unconstitutional, its decision explicitly took into consideration that “parts of the brain involved in behavior control continue to mature through late adolescence.”
Similar reasoning is often applied to behavior arising from chemical imbalances in the brain. It is possible, when the facts emerge, that the case of James E. Holmes, the suspect in the Colorado shootings, will spark debate about neurotransmitters and culpability.
Whatever the merit of such cases, it’s worth stressing an important point: as a general matter, it is always true that our brains “made us do it.” Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t.
Unfortunately, our research shows that clear thinking on this issue doesn’t come naturally to people. Several years ago, with the psychologist Edward B. Royzman, we published a study in the journal Ethics & Behavior that demonstrated the power of neuroscientific explanations to free people from blame.
In our experiment, we asked participants to consider various situations involving an individual who behaved in ways that caused harm, including committing acts of violence. We included information about the protagonist that might help make sense of the action in question: in some cases, that information was about a history of psychologically horrific events that the individual had experienced (e.g., suffering abuse as a child), and in some cases it was about biological characteristics or anomalies in the individual’s brain (e.g., an imbalance in neurotransmitters). In the different situations, we also varied how strong the connection was between those factors and the behavior (e.g., whether most people who are abused as a child act violently, or only a few).
The pattern of results was striking. A brain characteristic that was even weakly associated with violence led people to exonerate the protagonist more than a psychological factor that was strongly associated with violent acts. Moreover, the participants in our study were much more likely, given a protagonist with a brain characteristic, to view the behavior as “automatic” rather than “motivated,” and to view the behavior as unrelated to the protagonist’s character. The participants described the protagonists with brain characteristics in ways that suggested that the “true” person was not at the helm of himself. The behavior was caused, not intended.
In contrast, while psychologically damaging experiences like childhood abuse often elicited sympathy for the protagonist and sometimes even prompted considerable mitigation of blame, the participants still saw the protagonist’s behavior as intentional. The protagonist himself was twisted by his history of trauma; it wasn’t just his brain. Most participants felt that in such cases, personal character remained relevant in determining how the protagonist went on to act.
We labeled this pattern of responses “naïve dualism.” This is the belief that acts are brought about either by intentions or by the physical laws that govern our brains and that those two types of causes — psychological and biological — are categorically distinct. People are responsible for actions resulting from one but not the other. (In citing neuroscience, the Supreme Court may have been guilty of naïve dualism: did it really need brain evidence to conclude that adolescents are immature?)
Naïve dualism is misguided. “Was the cause psychological or biological?” is the wrong question when assigning responsibility for an action. All psychological states are also biological ones.
A better question is “how strong was the relation between the cause (whatever it happened to be) and the effect?” If, hypothetically, only 1 percent of people with a brain malfunction (or a history of being abused) commit violence, ordinary considerations about blame would still seem relevant. But if 99 percent of them do, you might start to wonder how responsible they really are.
It is crucial that as a society, we learn how to think more clearly about causes and personal responsibility — not only for extraordinary actions like crime but also for ordinary ones, like maintaining exercise regimens, eating sensibly and saving for retirement. As science advances, there will be more and more “causal” alternatives to intentional explanations, and we will be faced with more decisions about when to hold people responsible for their behavior. It’s important that we don’t succumb to the allure of neuroscientific explanations and let everyone off the hook.
John Monterosso is an associate professor of psychology and neuroscience at the University of Southern California. Barry Schwartz, a co-author of “Practical Wisdom,” is a professor of psychology at Swarthmore College.
“Faith begins as an experiment and ends as an experience.” That quotation from the Anglican priest William Ralph Inge, which begins the documentary “Kumaré: The True Story of a False Prophet,” evokes the film’s ambiguous exploration of religion, teaching and spiritual leadership.
When Vikram Gandhi — the movie’s New Jersey-born director, protagonist and narrator — grows a beard and flowing hair and dons Indian robes to make a film in which he poses as a swami, you anticipate a cruel, “Borat”-like stunt. Cynics will expect a nasty chortle when this glib charlatan finally pulls the rug out from under his credulous followers.
But the outcome is much more complicated.
Disturbed by the yoga craze in the United States, Mr. Gandhi, a self-described first-generation immigrant from a Hindu background, travels to India and discovers that the swamis desperately trying to “outguru” one another are, he says, “just as phony as those I met in America.”
After returning to the United States, he transforms himself into Sri Kumaré and travels to Phoenix, where he gathers a circle of disciples. Imitating his grandmother’s voice, he imparts mystical truisms in halting, broken English. With his soulful brown eyes and soft, androgynous voice, he is a very convincing wise man.
Initially, Mr. Gandhi recalls, “I wanted to see how far I could push it.” He is shown presiding at one gathering with a picture of himself between portraits of Barack Obama and Osama bin Laden. But his earnest followers, including a death-row lawyer, a recovering cocaine addict and a morbidly obese young woman, are sympathetic, highly stressed Americans who pour out their troubles.
As Mr. Gandhi warms to these people, who demonstrate an unalloyed faith in his wisdom, the film becomes a deeper, more problematic exploration of identity and the power of suggestion, and its initially sour taste turns to honey. The meditations, mantras and yoga moves he invents, however bogus, transform lives, as his followers discover their inner gurus and gain a measure of self-mastery.
For all his deceptiveness, Mr. Gandhi is not an egomaniacal prankster but a benign teacher whose “mirror” philosophy involves uniting the everyday self with the ideal self. A goal of this practical program of discipline and reflection is to cultivate an inner guru so that you don’t need someone like Kumaré.
“Kumaré” builds up to the big reveal, in which Mr. Gandhi, with great trepidation, presents himself to his flock as himself, without mystical trappings and speaking in his regular voice.
The film’s message lies in a paradox expressed early on: the impersonation was the biggest lie Mr. Gandhi had ever told and the greatest truth he had ever experienced. It is a thought worth pondering.
PROBLEM: To optimize creativity, how quiet or noisy should your workspace be?
METHODOLOGY: Researchers led by Ravi Mehta conducted five experiments to understand how ambient sounds affect creative cognition. In one key trial, they tested people’s creativity at different levels of background noise by asking participants to brainstorm ideas for a new type of mattress or enumerate uncommon uses for a common object.
RESULTS: Compared to a relatively quiet environment (50 decibels), a moderate level of ambient noise (70 dB) enhanced subjects’ performance on the creativity tasks, while a high level of noise (85 dB) hurt it. Modest background noise, the scientists explain, creates enough of a distraction to encourage people to think more imaginatively.
CONCLUSION: The next time you’re stumped on a creative challenge, head to a bustling coffee shop, not the library. As the researchers write in their paper, “[I]nstead of burying oneself in a quiet room trying to figure out a solution, walking out of one’s comfort zone and getting into a relatively noisy environment may trigger the brain to think abstractly, and thus generate creative ideas.”