##### The 9 kinds of physics seminar

By MATTHEW RAVE

As a public service, I hereby present my findings on physics seminars in convenient graph form. In each case, you will see the **Understanding of an Audience Member** (assumed to be a run-of-the-mill PhD physicist) graphed as a function of **Time Elapsed** during the seminar. All talks are normalized to be of length 1 hour, although this might not be the case in reality.

**The “Typical”** starts innocently enough: there are a few slides introducing the topic, and the speaker will talk clearly and generally about a field of physics you’re not really familiar with. Somewhere around the 15 minute mark, though, the wheels will come off the bus. Without you realizing it, the speaker will have crossed an invisible threshold and you will lose the thread entirely. Your understanding by the end of the talk will rarely ever recover past 10%.

**The “Ideal”** is what physicists strive for in a seminar talk. You have to start off easy, and only gradually ramp up the difficulty level. Never let any PhD in the audience fall below 50%. You do want their understanding to fall below 100%, though, since that makes you look smarter and justifies the work you’ve done. It’s always good to end with a few easy slides, bringing the audience up to 80%, say, since this tricks the audience into thinking they’ve learned something.

**The “Unprepared Theorist”** is a talk to avoid if you can. The theorist starts on slide 1 with a mass of jumbled equations, and the audience never climbs over 10% the entire time. There may very well be another theorist who understands the whole talk, but interestingly their understanding never climbs above 10% either because they’re not paying attention to the speaker’s mumbling.

**The “Unprepared Experimentalist”** is only superficially better. Baseline understanding is often a little higher (because it’s experimental physics) but still rarely exceeds 25%. Also, the standard deviation is much higher, and so (unlike the theorist) the experimentalist will quite often take you into 0% territory. The flip side is that there is often a slide or two that makes perfect sense, such as “Here’s a picture of our laboratory facilities in Tennessee.”

You have to root for undergraduates who are willing to give a seminar in front of the faculty and grad student sharks. That’s why the **“Well-meaning Undergrad”** isn’t a bad talk to attend. Because the material is so easy, a PhD physicist in the audience will stay near 100% for most of the talk. However, there is almost always a 10-20 minute stretch in the middle somewhere when the poor undergrad is in over his/her head. For example, their adviser may have told them to “briefly discuss renormalization group theory as it applies to your project” and gosh darn it, they try. This is a typical case of what Gary Larson referred to as “physics floundering”. In any case, if they’re a good student (and they usually are) they will press on and regain the thread before the end.

**The “Guest From Another Department”** is an unusual talk. Let’s say a mathematician from one building over decides to talk to the physics department about manifold theory. Invariably, an audience member will gradually lose understanding and, before reaching 0%, will start to daydream or doodle. Technically, the understanding variable **U** has entered the complex plane. Most of the time, the imaginary part of **U** goes back to zero right before the end and the guest speaker ends on a high note.

**The “Nobel Prize Winner”** is a talk to attend only for name-dropping purposes. For example, you might want to be able to say (as I do) that “I saw Hans Bethe give a talk a year before he died.” The talk itself is mostly forgettable; it starts off well but approaches 0% almost linearly. By the end you’ll wonder why you didn’t just go to the Aquarium instead.

**The “Poetry”** physics seminar is a rare beast. Only Feynman is known to have given such talks regularly. The talk starts off confusingly, and you may only understand 10% of what is being said, but gradually the light will come on in your head and you’ll “get it” more and more. By the end, you’ll understand everything, and you’ll get the sense that the speaker has solved a difficult Sudoku problem before your eyes. Good poetry often works this way; hence the name.

The less said about **“The Politician”**, the better. The hallmark of such a talk is that the relationship between understanding and time isn’t even a function. After the talk, no one will even agree about what the talk was about, or how good the talk was. Administrators specialize in this.

##### The Paradox of the Proof

By CAROLINE CHEN | *Project Wordsworth* May 9, 2013

On August 31, 2012, Japanese mathematician Shinichi Mochizuki posted four papers on the Internet.

The titles were inscrutable. The volume was daunting: 512 pages in total. The claim was audacious: he said he had proved the ABC Conjecture, a famed, beguilingly simple number theory problem that had stumped mathematicians for decades.

Then Mochizuki walked away. He did not send his work to the Annals of Mathematics. Nor did he leave a message on any of the online forums frequented by mathematicians around the world. He just posted the papers, and waited.

Two days later, Jordan Ellenberg, a math professor at the University of Wisconsin-Madison, received an email alert from Google Scholar, a service which scans the Internet looking for articles on topics he has specified. On September 2, Google Scholar sent him Mochizuki’s papers: *You might be interested in this.*

“I was like, ‘Yes, Google, I am kind of interested in that!’” Ellenberg recalls. “I posted it on Facebook and on my blog, saying, ‘By the way, it seems like Mochizuki solved the ABC Conjecture.’”

The Internet exploded. Within days, even the mainstream media had picked up on the story. “World’s Most Complex Mathematical Theory Cracked,” announced the Telegraph. “Possible Breakthrough in ABC Conjecture,” reported the New York Times, more demurely.

On MathOverflow, an online math forum, mathematicians around the world began to debate and discuss Mochizuki’s claim. The question which quickly bubbled to the top of the forum, encouraged by the community’s “upvotes,” was simple: “Can someone briefly explain the philosophy behind his work and comment on why it might be expected to shed light on questions like the ABC conjecture?” asked Andy Putman, assistant professor at Rice University. Or, in plainer words: I don’t get it. Does anyone?

The problem, as many mathematicians were discovering when they flocked to Mochizuki’s website, was that the proof was impossible to read. The first paper, entitled “Inter-universal Teichmuller Theory I: Construction of Hodge Theaters,” starts out by stating that the goal is “to establish an arithmetic version of Teichmuller theory for number fields equipped with an elliptic curve…by applying the theory of semi-graphs of anabelioids, Frobenioids, the etale theta function, and log-shells.”

This was not just gibberish to the average layman. It was gibberish to the math community as well.

“Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” wrote Ellenberg on his blog.

“It’s very, very weird,” says Columbia University professor Johan de Jong, who works in a related field of mathematics.

Mochizuki had created so many new mathematical tools and brought together so many disparate strands of mathematics that his paper was populated with vocabulary that nobody could understand. It was totally novel, and totally mystifying.

As Tufts professor Moon Duchin put it: “He’s really created his own world.”

It was going to take a while before anyone would be able to understand Mochizuki’s work, let alone judge whether or not his proof was right. In the ensuing months, the papers weighed like a rock in the math community. A handful of people approached them and began examining them. Others tried, then gave up. Some ignored them entirely, preferring to observe from a distance. As for the man himself, the man who had claimed to solve one of mathematics’ biggest problems, there was not a sound.

For centuries, mathematicians have strived towards a single goal: to understand how the universe works, and describe it. To this objective, math itself is only a tool — it is the language that mathematicians have invented to help them describe the known and query the unknown.

This history of mathematical inquiry is marked by milestones that come in the form of theorems and conjectures. Simply put, a theorem is an observation known to be true. The Pythagorean theorem, for example, makes the observation that for all right-angled triangles, the relationship between the lengths of the three sides, *a*, *b*, and *c*, is expressed in the equation a^2 + b^2 = c^2. Conjectures are predecessors to a theorem — they are proposals for theorems, observations that mathematicians believe to be true, but are yet to be confirmed. When a conjecture is proved, it becomes a theorem and when that happens, mathematicians rejoice, and add the new theorem to their tally of the understood universe.
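As a quick check (an illustration added here, not from the article), the relation holds for the classic integer right triangles:

```python
# Verify the Pythagorean relation a^2 + b^2 = c^2 for a few right triangles.
for a, b, c in [(3, 4, 5), (5, 12, 13), (8, 15, 17)]:
    print(f"{a}^2 + {b}^2 = {c}^2:", a**2 + b**2 == c**2)  # prints True each time
```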

“The point is not to prove the theorem,” explains Ellenberg. “The point is to understand how the universe works and what the hell is going on.”

Ellenberg is doing the dishes while talking to me over the phone, and I can hear the sound of a small infant somewhere in the background. Ellenberg is passionate about explaining mathematics to the world. He writes a math column for Slate magazine and is working on a book called *How Not To Be Wrong*, which is supposed to help laypeople apply math to their lives.

The sounds of the dishes pause as Ellenberg explains what motivates him and his fellow mathematicians. I imagine him gesturing in the air with soapy hands: “There’s a feeling that there’s a vast dark area of ignorance, but all of us are pushing together, taking steps together to pick at the boundaries.”

The ABC Conjecture probes deep into the darkness, reaching at the foundations of math itself. First proposed by mathematicians David Masser and Joseph Oesterle in the 1980s, it makes an observation about a fundamental relationship between addition and multiplication. Yet despite its deep implications, the ABC Conjecture is famous because, on the surface, it seems rather simple.

It starts with an easy equation: *a* + *b* = *c*.

The variables *a*, *b*, and *c*, which give the conjecture its name, have some restrictions. They need to be whole numbers, and *a* and *b* cannot share any common factors; that is, they cannot be divisible by the same prime number. So, for example, if *a* was 64, which equals 2^6, then *b* could not be any number that is a multiple of two. In this case, *b* could be 81, which is 3^4. Now *a* and *b* do not share any factors, and we get the equation 64 + 81 = 145.

It isn’t hard to come up with combinations of *a* and *b* that satisfy the conditions. You could come up with huge numbers, such as 3,072 + 390,625 = 393,697 (3,072 = 2^10 x 3 and 390,625 = 5^8, no overlapping factors there), or very small numbers, such as 3 + 125 = 128 (125 = 5 x 5 x 5).

What the ABC conjecture then says is that the properties of *a* and *b* affect the properties of *c*. To understand the observation, it first helps to rewrite these equations *a* + *b* = *c* into versions made up of their prime factors:

Our first equation, 64 + 81 = 145, is equivalent to 2^6 + 3^4 = 5 x 29.

Our second example, 3,072 + 390,625 = 393,697, is equivalent to 2^10 x 3 + 5^8 = 393,697 (which happens to be prime!).

Our last example, 3 + 125 = 128, is equivalent to 3 + 5^3 = 2^7.

The first two equations are not like the third, because in the first two equations, you have lots of prime factors on the left-hand side of the equation and very few on the right-hand side. The third example is the opposite — there are more primes on the right-hand side of the equation (seven) than on the left (only four). As it turns out, in all the possible combinations of *a*, *b*, and *c*, situation three is pretty rare. The ABC Conjecture essentially says that when there are lots of prime factors on the left-hand side of the equation then, usually, there will not be very many on the right-hand side.
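The prime counts above are easy to verify by machine. Here is a small Python sketch (an illustration, not part of the original article) that factors each side of the three examples and tallies the primes:

```python
from math import gcd

def prime_factors(n):
    """Return the prime factors of n, with multiplicity, by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for a, b in [(64, 81), (3072, 390625), (3, 125)]:
    c = a + b
    assert gcd(a, b) == 1  # a and b share no common prime factor
    left = prime_factors(a) + prime_factors(b)
    right = prime_factors(c)
    print(f"{a} + {b} = {c}: {len(left)} primes on the left, {len(right)} on the right")
```

For 3 + 125 = 128 this reports four primes on the left and seven on the right, matching the rare "situation three" described above.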

Of course, “lots of,” “not very many,” and “usually” are very vague words, and in a formal version of the ABC Conjecture, all these terms are spelled out in more precise math-speak. But even in this watered-down version, one can begin to appreciate the conjecture’s implications. The equation is based on addition, but the conjecture’s observation is more about multiplication.

“It really is about something very, very basic, about a tight constraint that relates multiplicative and additive properties of numbers,” says Minhyong Kim, professor at Oxford University. “If there’s something new to discover about that, you might expect it to be very influential.”

This is not intuitive. While mathematicians came up with addition and multiplication in the first place, based on their current knowledge of mathematics, there is no reason for them to presume that the additive properties of numbers would somehow influence or affect their multiplicative properties.

“There’s very little evidence for it,” says Peter Sarnak, professor at Princeton University, who is a self-described skeptic of the ABC conjecture. “I’ll only believe it when it’s proved.”

But if it were true? Mathematicians say that it would reveal a deep relationship between addition and multiplication that they never knew of before.

Even Sarnak, the skeptic, acknowledges this.

“If it’s true, then it will be the most powerful thing we have,” he says.

It would be so powerful, in fact, that it would automatically unlock many legendary math puzzles. One of these would be Fermat’s last theorem, an infamous math problem that was proposed in 1637, and solved only recently by Andrew Wiles in 1993. Wiles’ proof earned him more than 100,000 Deutsche marks in prize money (equivalent to about $50,000 in 1997), a reward that was offered almost a century before, in 1908. Wiles did not solve Fermat’s Last Theorem via the ABC conjecture — he took a different route — but if the ABC conjecture were to be true, then the proof for Fermat’s Last Theorem would be an easy consequence.

Because of its simplicity, the ABC Conjecture is well-known by all mathematicians. CUNY professor Lucien Szpiro says that “every professional has tried at least one night” to theorize about a proof. Yet few people have seriously attempted to crack it. Szpiro, whose eponymous conjecture is a precursor of the ABC Conjecture, presented a proof in 2007, but it was soon found to be problematic. Since then, nobody has dared to touch it, not until Mochizuki.

When Mochizuki posted his papers, the math community had much reason to be enthusiastic. They were excited not just because someone had claimed to prove an important conjecture, but because of who that someone was.

Mochizuki was known to be brilliant. Born in Tokyo, he moved to New York with his parents, Kiichi and Anne Mochizuki, when he was 5 years old. He left home for high school, attending Phillips Exeter Academy, a selective prep school in New Hampshire. There, he whipped through his academics with lightning speed, graduating after two years, at age 16, with advanced placements in mathematics, physics, American and European history, and Latin.

Then Mochizuki enrolled at Princeton University where, again, he finished ahead of his peers, earning his bachelor’s degree in mathematics in three years and moving quickly on to his Ph.D., which he received at age 23. After lecturing at Harvard University for two years, he returned to Japan, joining the Research Institute for Mathematical Sciences at Kyoto University. In 2002, he became a full professor at the unusually young age of 33. His early papers were widely acknowledged to be very good work.

Academic prowess is not the only characteristic that set Mochizuki apart from his peers. His friend, Oxford professor Minhyong Kim, says that Mochizuki’s most outstanding characteristic is his intense focus on work.

“Even among many mathematicians I’ve known, he seems to have an extremely high tolerance for just sitting and doing mathematics for long, long hours,” says Kim.

Mochizuki and Kim met in the early 1990s, when Mochizuki was still an undergraduate student at Princeton. Kim, on exchange from Yale University, recalls Mochizuki making his way through the works of French mathematician Alexander Grothendieck, whose books on algebraic and arithmetic geometry are a must-read for any mathematician in the field.

“Most of us gradually come to understand [Grothendieck’s works] over many years, after dipping into it here and there,” said Kim. “It adds up to thousands and thousands of pages.”

But not Mochizuki.

“Mochizuki…just read them from beginning to end sitting at his desk,” recalls Kim. “He started this process when he was still an undergraduate, and within a few years, he was just completely done.”

A few years after returning to Japan, Mochizuki turned his focus to the ABC Conjecture. Over the years, word got around that he believed he had cracked the puzzle, and Mochizuki himself said that he expected results by 2012. So when the papers appeared, the math community was waiting, and eager. But then the enthusiasm stalled.

“His other papers – they’re readable, I can understand them and they’re fantastic,” says de Jong, who works in a similar field. Pacing in his office at Columbia University, de Jong shook his head as he recalled his first impression of the new papers. They were different. They were unreadable. After working in isolation for more than a decade, Mochizuki had built up a structure of mathematical language that only he could understand. To even begin to parse the four papers posted in August 2012, one would have to read through hundreds, maybe even thousands, of pages of previous work, none of which had been vetted or peer-reviewed. It would take at least a year to read and understand everything. De Jong, who was about to go on sabbatical, briefly considered spending his year on Mochizuki’s papers, but when he saw the height of the mountain, he quailed.

“I decided, I can’t possibly work on this. It would drive me nuts,” he said.

Soon, frustration turned into anger. Few professors were willing to directly critique a fellow mathematician, but almost every person I interviewed was quick to point out that Mochizuki was not following community standards. Usually, they said, mathematicians discuss their findings with their colleagues. Normally, they publish pre-prints to widely respected online forums. Then they submit their papers to the Annals of Mathematics, where papers are refereed by eminent mathematicians before publication. Mochizuki was bucking the trend. He was, according to his peers, “unorthodox.”

But what roused their ire most was Mochizuki’s refusal to lecture. Usually, after publication, a mathematician lectures on his papers, travelling to various universities to explain his work and answer questions from his colleagues. Mochizuki has turned down multiple invitations.

“A very prominent research university has asked him, ‘Come explain your result,’ and he said, ‘I couldn’t possibly do that in one talk,’” says Cathy O’Neil, de Jong’s wife, a former math professor better known as the blogger “Mathbabe.”

“And so they said, ‘Well then, stay for a week,’ and he’s like, ‘I couldn’t do it in a week.’

“So they said, ‘Stay for a month. Stay as long as you want,’ and he still said no.

“The guy does not want to do it.”

Kim sympathizes with his frustrated colleagues, but suggests a different reason for the rancor. “It really is painful to read other people’s work,” he says. “That’s all it is… All of us are just too lazy to read them.”

Kim is also quick to defend his friend. He says Mochizuki’s reticence is due to being a “slightly shy character” as well as his assiduous work ethic. “He’s a very hard working guy and he just doesn’t want to spend time on airplanes and hotels and so on.”

O’Neil, however, holds Mochizuki accountable, saying that his refusal to cooperate places an unfair burden on his colleagues.

“You don’t get to say you’ve proved something if you haven’t explained it,” she says. “A proof is a social construct. If the community doesn’t understand it, you haven’t done your job.”

Today, the math community faces a conundrum: the proof of a very important conjecture hangs in the air, yet nobody will touch it. For a brief moment in October, heads turned when Yale graduate student Vesselin Dimitrov pointed out a potential contradiction in the proof, but Mochizuki quickly responded, saying he had accounted for the problem. Dimitrov retreated, and the flicker of activity subsided.

As the months pass, the silence has also begun to call into question a basic premise of mathematical academia. Duchin explains the mainstream view this way: “Proofs are right or wrong. The community passes verdict.”

This foundational stone is one that mathematicians are proud of. The community works together; they are not cut-throat or competitive. Colleagues check each other’s work, spending hours upon hours verifying that a peer got it right. This behavior is not just altruistic, but also necessary: unlike in medical science, where you know you’re right if the patient is cured, or in engineering, where the rocket either launches or it doesn’t, theoretical math, better known as “pure” math, has no physical, visible standard. It is entirely based on logic. To know you’re right means you need someone else, preferably many other people, to walk in your footsteps and confirm that every step was made on solid ground. A proof in a vacuum is no proof at all.

Even an incorrect proof is better than no proof, because if the ideas are novel, they may still be useful for other problems, or inspire another mathematician to figure out the right answer. So the most pressing question isn’t whether or not Mochizuki is right — the more important question is, will the math community fulfill their promise, step up to the plate and read the papers?

The prospects seem thin. Szpiro is among the few who have made attempts to understand short segments of the paper. He holds a weekly workshop with his post-doctoral students at CUNY to discuss the paper, but he says they are limited to “local” analysis and do not understand the big picture yet. The only other known candidate is Go Yamashita, a colleague of Mochizuki at Kyoto University. According to Kim, Mochizuki is holding a private seminar with Yamashita, and Kim hopes that Yamashita will then go on to share and explain the work. If Yamashita does not pull through, it is unclear who else might be up to the task.

For now, all the math community can do is wait. While they wait, they tell stories, and recall great moments in math — the year Wiles cracked Fermat’s Last Theorem; how Perelman proved the Poincaré Conjecture. Columbia professor Dorian Goldfeld tells the story of Kurt Heegner, a high school teacher in Berlin, who solved a classic problem proposed by Gauss. “Nobody believed it. All the famous mathematicians pooh-poohed it and said it was wrong.” Heegner’s paper gathered dust for more than a decade until finally, four years after his death, mathematicians realized that Heegner had been right all along. Kim recalls Yoichi Miyaoka’s proposed proof of Fermat’s Last Theorem in 1988, which garnered a lot of media attention before serious flaws were discovered. “He became very embarrassed,” says Kim.

As they tell these stories, Mochizuki and his proofs hang in the air. All these stories are possible outcomes. The only question is – which?

Kim is one of the few people who remains optimistic about the future of this proof. He is planning a conference at Oxford University this November, and hopes to invite Yamashita to come and share what he has learned from Mochizuki. Perhaps more will be made clear, then.

As for Mochizuki, who has refused all media requests, who seems so reluctant to promote even his own work, one has to wonder if he is even aware of the storm he has created.

On his website, one of the only photos of Mochizuki available on the Internet shows a middle-aged man with old-fashioned ’90s-style glasses, staring up and out, somewhere over our heads. A self-given title runs over his head. It is not “mathematician” but, rather, “Inter-universal Geometer.”

What does it mean? His website offers no clues. There are his papers, thousands of pages long, reams upon reams of dense mathematics. His resume is spare and formal. He reports his marital status as “Single (never married).” And then there is a page called *Thoughts of Shinichi Mochizuki*, which has only 17 entries. “I would like to report on my recent progress,” he writes, February 2009. “Let me report on my progress,” October 2009. “Let me report on my progress,” April 2010, June 2011, January 2012. Then follows math-speak. It is hard to tell if he is excited, daunted, frustrated, or enthralled.

Mochizuki has reported all this progress for years, but where is he going? This “inter-universal geometer,” this possible genius, may have found the key that would redefine number theory as we know it. He has, perhaps, charted a new path into the dark unknown of mathematics. But for now, his footsteps are untraceable. Wherever he is going, he seems to be travelling alone.

##### He Conceived the Mathematics of Roughness

By JIM HOLT | *The New York Review of Books* May 23, 2013

Benoit Mandelbrot, the brilliant Polish-French-American mathematician who died in 2010, had a poet’s taste for complexity and strangeness. His genius for noticing deep links among far-flung phenomena led him to create a new branch of geometry, one that has deepened our understanding of both natural forms and patterns of human behavior. The key to it is a simple yet elusive idea, that of self-similarity.

To see what self-similarity means, consider a homely example: the cauliflower. Take a head of this vegetable and observe its form—the way it is composed of florets. Pull off one of those florets. What does it look like? It looks like a little head of cauliflower, with its own subflorets. Now pull off one of those subflorets. What does *that* look like? A still tinier cauliflower. If you continue this process—and you may soon need a magnifying glass—you’ll find that the smaller and smaller pieces all resemble the head you started with. The cauliflower is thus said to be self-similar. Each of its parts echoes the whole.

Other self-similar phenomena, each with its distinctive form, include clouds, coastlines, bolts of lightning, clusters of galaxies, the network of blood vessels in our bodies, and, quite possibly, the pattern of ups and downs in financial markets. The closer you look at a coastline, the more you find it is jagged, not smooth, and each jagged segment contains smaller, similarly jagged segments that can be described by Mandelbrot’s methods. Because of the essential roughness of self-similar forms, classical mathematics is ill-equipped to deal with them. Its methods, from the Greeks on down to the last century, have been better suited to smooth forms, like circles. (Note that a circle is not self-similar: if you cut it up into smaller and smaller segments, those segments become nearly straight.)
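One standard way to make the coastline observation concrete is the Koch curve, a textbook self-similar shape: each refinement replaces every straight segment with four segments one-third as long, so the measured length grows by a factor of 4/3 at every level of magnification. A small illustrative sketch (an addition here, not from the essay):

```python
def koch_length(level, base=1.0):
    """Length of the Koch curve after `level` refinements: each step
    replaces every segment with 4 segments, each 1/3 as long."""
    return base * (4 / 3) ** level

# The closer you look (the higher the level), the longer the "coastline" gets.
for level in range(5):
    print(level, round(koch_length(level), 4))
```

Unlike a circle, whose measured circumference settles down as you refine the approximation, this length grows without bound, which is exactly why classical smooth-curve methods fail on such shapes.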

Only in the last few decades has a mathematics of roughness emerged, one that can get a grip on self-similarity and kindred matters like turbulence, noise, clustering, and chaos. And Mandelbrot was the prime mover behind it.

##### A Most Profound Math Problem

By ALEXANDER NAZARYAN | *The New Yorker* May 2, 2013

On August 6, 2010, a computer scientist named Vinay Deolalikar published a paper with a name as concise as it was audacious: “P ≠ NP.” If Deolalikar was right, he had cut one of mathematics’ most tightly tied Gordian knots. In 2000, the P = NP problem was designated by the Clay Mathematics Institute as one of seven Millennium Problems—“important classic questions that have resisted solution for many years”—only one of which has been solved since. (The Poincaré Conjecture was vanquished in 2003 by the reclusive Russian mathematician Grigory Perelman, who refused the attached million-dollar prize.)

A few of the Clay problems are long-standing head-scratchers. The Riemann hypothesis, for example, made its debut in 1859. By contrast, P versus NP is relatively young, having been introduced by the University of Toronto mathematical theorist Stephen Cook in 1971, in a paper titled “The complexity of theorem-proving procedures,” though it had been touched upon two decades earlier in a letter by Kurt Gödel, whom David Foster Wallace branded “modern math’s absolute Prince of Darkness.” The question inherent in those three letters is a devilish one: Does P (problems that we can easily solve) equal NP (problems that we can easily check)?

Take your e-mail password as an analogy. Its veracity is checked within a nanosecond of your hitting the return key. But for someone to *solve* your password would probably be a fruitless pursuit, involving a near-infinite number of letter-number permutations—a trial and error lasting centuries upon centuries. Deolalikar was saying, in essence, that there will always be some problems for which we can recognize an answer without being able to quickly find one—intractable problems that lie beyond the grasp of even our most powerful microprocessors, that consign us to a world that will never be quite as easy as some futurists would have us believe. There always will be problems unsolved, answers unknown.
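The password analogy can be sketched in a few lines of code (a toy illustration; the three-letter password and helper names are invented for the example). Checking a guess takes a single comparison, while finding the password means searching the whole space of combinations, which grows exponentially with password length:

```python
import itertools
import string

SECRET = "cat"  # a hypothetical, very weak 3-letter password

def check(guess):
    """Verification is instant: one string comparison (the 'NP' side)."""
    return guess == SECRET

def brute_force(length, alphabet=string.ascii_lowercase):
    """Solving means trying up to len(alphabet)**length guesses."""
    for attempt in itertools.product(alphabet, repeat=length):
        candidate = "".join(attempt)
        if check(candidate):
            return candidate
    return None

print(brute_force(3))  # searches up to 26**3 = 17,576 combinations
```

For a realistic password over a larger alphabet, the same loop would run for centuries, which is the asymmetry between checking and solving that the P versus NP question formalizes.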

If Deolalikar’s audacious proof were to hold, he could not only quit his day job as a researcher for Hewlett-Packard but rightly expect to enter the pantheon as one of the day’s great mathematicians. But such glory was not forthcoming. Computer scientists and mathematicians went at Deolalikar’s proof—which runs to dozens of pages of fixed-point logics and *k*-SAT structures and other such goodies—with the ferocity of sharks in the presence of blood. The M.I.T. computational theorist Scott Aaronson (with whom I consulted on this essay’s factual assertions) wrote on his blog, “If Vinay Deolalikar is awarded the $1,000,000 Clay Millennium Prize for his proof of P ≠ NP, then I, Scott Aaronson, will personally supplement his prize by the amount of $200,000.” It wasn’t long before Deolalikar’s paper was thoroughly discredited, with Dr. Moshe Vardi, a computer-science professor at Rice University, telling the *Times*, “I think Deolalikar got his 15 minutes of fame.”

As Lance Fortnow describes in his new book, “The Golden Ticket: P, NP and the Search for the Impossible,” P versus NP is “one of the great open problems in all of mathematics” not only because it is extremely difficult to solve but because it has such obvious practical applications. It is the dream of total ease, of the confidence that there is an efficient way to calculate nearly everything, “from cures to deadly diseases to the nature of the universe,” even “an algorithmic process to recognize greatness.” So while a solution for the Birch and Swinnerton-Dyer conjecture, another of the Clay Millennium Prize problems, would be an impressive feat, it would have less practical application than definitive proof that anything we are able to quickly check (NP), we can also quickly solve (P).

Fortnow’s book—which, yes, takes its name from “Willy Wonka & the Chocolate Factory”—bills itself as a primer for the general reader, though you will likely regret not having paid slightly more attention during calculus class. Reading “The Golden Ticket” is sort of like watching a movie in a foreign language without captions. You will miss some things, but not everything. There is some math humor, which is at once amusing, cheesy, and endearing exactly in the way that you think a mathematician’s humor might be amusing, cheesy, and endearing.

What Fortnow calls “P” stands for polynomial time, meaning the size of the input raised to a fixed number like two or three. Conversely, exponential time is some number raised to the size of the input. Though polynomial time can be long (say, 50²), it is nothing compared to its exponential opposite (2⁵⁰). If the first is the Adirondacks, the second is the Himalayas. When solving things, we want to keep them in polynomial time if we still want to have time for lunch.
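To make the scale of that gap concrete, here is a quick back-of-the-envelope check in Python (my illustration, not from Fortnow’s book), evaluating both running-time formulas at an input of size 50:

```python
# Polynomial vs. exponential running time at input size n = 50.
n = 50

polynomial = n ** 2   # "the size of the input raised to a fixed number"
exponential = 2 ** n  # "some number raised to the size of the input"

print(f"n^2 = {polynomial:,}")   # 2,500 steps: the Adirondacks
print(f"2^n = {exponential:,}")  # over a quadrillion steps: the Himalayas
```

At n = 50 the exponential algorithm already needs hundreds of billions of times more steps than the polynomial one, and the ratio only worsens as the input grows.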

“NP” (nondeterministic polynomial time) is a set of problems we want to solve, of varying degrees of difficulty. Many everyday activities rely on NP problems: modern computer encryption, for example, which involves the prime factors of extremely large numbers. Some forty years ago, Richard Karp, the Berkeley theoretician, first identified twenty-one problems as being “NP-complete,” meaning that they are at least as hard as any other NP problem. The NP-complete problems are a sort of inner sanctum of computational difficulty; solve one and you’ve solved them all, not to mention all the lesser NP problems lurking in the rear. Karp’s foreboding bunch of problems have names like “directed Hamiltonian cycle” and “vertex cover.” Though they are extremely hard to solve, solutions are easy to check. A human may be able to solve a variation of one of these problems through what Soviet mathematicians called “*perebor*,” which Fortnow translates as “brute-force search.” The question of P versus NP is whether a much faster way exists.

So far, the answer is no. Take one of these NP-complete problems, called “*k*-clique,” which Fortnow explains as follows: “What is the largest clique on Facebook [such that] all of [them] are friends with each other?” Obviously, the more users there are on Facebook, the more difficult it is to find the biggest self-enclosed clique. And thus far, no algorithm to efficiently solve the clique problem has been discovered. Or, for that matter, to solve any of its NP-complete siblings, which is why most people do think that P ≠ NP.
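The *perebor* approach to the clique problem can be sketched in a few lines of Python (the function and the toy network below are illustrative, not from the book): try every possible group, largest first, and keep the first one in which everybody is friends with everybody else.

```python
from itertools import combinations

def largest_clique(friends):
    """Brute-force ("perebor") search for the largest clique.

    `friends` maps each person to the set of people they are friends
    with; friendship is assumed to be mutual. Every subset of people
    may be examined, so the search takes exponential time -- but
    *checking* any one candidate group is fast.
    """
    people = list(friends)
    # Try group sizes from largest to smallest; return the first clique found.
    for k in range(len(people), 0, -1):
        for group in combinations(people, k):
            # The easy-to-check part: is every pair in the group friends?
            if all(b in friends[a] for a, b in combinations(group, 2)):
                return set(group)
    return set()

# A toy network: alice, bob, and carol form a triangle; dave knows only alice.
network = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave": {"alice"},
}
print(largest_clique(network))  # {'alice', 'bob', 'carol'}
```

Notice the asymmetry at the heart of P versus NP: the `all(...)` test that verifies a candidate clique runs in polynomial time, while the outer loops that search for one may have to examine exponentially many groups.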

There are considerations here, too, beyond math. Aaronson, the M.I.T. scientist, wrote a blog post about why he thinks P ≠ NP, providing ten reasons for why this is so. The ninth of these he called “the philosophical argument.” It runs, in part, as follows: “If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in ‘creative leaps,’ no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett.”

We already check novels for literary qualities; most critics could easily enough put together a list of categories that make a novel great. Imagine, now, if you could write an algorithm to efficiently create verifiably great fiction. It isn’t quite as outlandish as you think: back in 2008, the Russian writer Alexander Prokopovich “wrote” the novel “True Love” by taking seventeen classics that were recombined via computer in seventy-two hours into an entirely new work. As Prokopovich told the St. Petersburg *Times*, “Today publishing houses use different methods of the fastest possible book creation in this or that style meant for this or that readers’ audience. Our program can help with that work.” He then added a note of caution: “However, the program can never become an author, like Photoshop can never be Raphael.” But if P = NP, then it could only be a matter of time before someone figured out how to create verifiably “great” novels and paintings with mathematical efficiency.

Much of Fortnow’s book is spent depicting a world in which P is proven to equal NP, a world of easily computed bliss. He imagines, for example, an oncologist no longer having to struggle with the trial and error of chemotherapy because “we can now examine a person’s DNA as well as the mutated DNA of the cancer cells and develop proteins that will fold in just the right way to effectively starve the cancer cells without causing any problems for the normal cells.” He also whips up a political scandal in which a campaign manager “hired a computer programmer, who downloaded tens of thousands of well-received speeches throughout the decades. The programmer then used [an] algorithm to develop a new speech based on current events”—one that the unwitting public predictably loves.

To postulate that P ≠ NP, as Fortnow does, is to allow for a world of mystery, difficulty, and frustration—but also of discovery and inquiry, of pleasures pleasingly delayed. Fortnow concedes the possibility that “it will forever remain one of the true great mysteries of mathematics and science.” Yet Vinay Deolalikar is unlikely to be the last to attempt a proof, for all of mathematics rests on a fundamental hubris, a belief that we can order what Wallace Stevens calls “a slovenly wilderness.” It is a necessary confidence, yet we are not always rewarded for it.

##### Great Scientist ≠ Good at Math →

By E.O. WILSON | *The Wall Street Journal* April 5, 2013

For many young people who aspire to be scientists, the great bugbear is mathematics. Without advanced math, how can you do serious work in the sciences? Well, I have a professional secret to share: Many of the most successful scientists in the world today are mathematically no more than semiliterate.

During my decades of teaching biology at Harvard, I watched sadly as bright undergraduates turned away from the possibility of a scientific career, fearing that, without strong math skills, they would fail. This mistaken assumption has deprived science of an immeasurable amount of sorely needed talent. It has created a hemorrhage of brain power we need to stanch.

I speak as an authority on this subject because I myself am an extreme case. Having spent my precollege years in relatively poor Southern schools, I didn’t take algebra until my freshman year at the University of Alabama. I finally got around to calculus as a 32-year-old tenured professor at Harvard, where I sat uncomfortably in classes with undergraduate students only a bit more than half my age. A couple of them were students in a course on evolutionary biology I was teaching. I swallowed my pride and learned calculus.

I was never more than a C student while catching up, but I was reassured by the discovery that superior mathematical ability is similar to fluency in foreign languages. I might have become fluent with more effort and sessions talking with the natives, but being swept up with field and laboratory research, I advanced only by a small amount.

Fortunately, exceptional mathematical fluency is required in only a few disciplines, such as particle physics, astrophysics and information theory. Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition.

Everyone sometimes daydreams like a scientist. Ramped up and disciplined, fantasies are the fountainhead of all creative thinking. Newton dreamed, Darwin dreamed, you dream. The images evoked are at first vague. They may shift in form and fade in and out. They grow a bit firmer when sketched as diagrams on pads of paper, and they take on life as real examples are sought and found.

Pioneers in science only rarely make discoveries by extracting ideas from pure mathematics. Most of the stereotypical photographs of scientists studying rows of equations on a blackboard are instructors explaining discoveries already made. Real progress comes in the field writing notes, at the office amid a litter of doodled paper, in the hallway struggling to explain something to a friend, or eating lunch alone. Eureka moments require hard work. And focus.

Ideas in science emerge most readily when some part of the world is studied for its own sake. They follow from thorough, well-organized knowledge of all that is known or can be imagined of real entities and processes within that fragment of existence. When something new is encountered, the follow-up steps usually require mathematical and statistical methods to move the analysis forward. If that step proves too technically difficult for the person who made the discovery, a mathematician or statistician can be added as a collaborator.

In the late 1970s, I sat down with the mathematical theorist George Oster to work out the principles of caste and the division of labor in the social insects. I supplied the details of what had been discovered in nature and the lab, and he used theorems and hypotheses from his tool kit to capture these phenomena. Without such information, Mr. Oster might have developed a general theory, but he would not have had any way to deduce which of the possible permutations actually exist on earth.

Over the years, I have co-written many papers with mathematicians and statisticians, so I can offer the following principle with confidence. Call it Wilson’s Principle No. 1: It is far easier for scientists to acquire needed collaboration from mathematicians and statisticians than it is for mathematicians and statisticians to find scientists able to make use of their equations.

This imbalance is especially the case in biology, where factors in a real-life phenomenon are often misunderstood or never noticed in the first place. The annals of theoretical biology are clogged with mathematical models that either can be safely ignored or, when tested, fail. Possibly no more than 10% have any lasting value. Only those linked solidly to knowledge of real living systems have much chance of being used.

If your level of mathematical competence is low, plan to raise it, but meanwhile, know that you can do outstanding scientific work with what you have. Think twice, though, about specializing in fields that require a close alternation of experiment and quantitative analysis. These include most of physics and chemistry, as well as a few specialties in molecular biology.

Newton invented calculus in order to give substance to his imagination. Darwin had little or no mathematical ability, but with the masses of information he had accumulated, he was able to conceive a process to which mathematics was later applied.

For aspiring scientists, a key first step is to find a subject that interests them deeply and focus on it. In doing so, they should keep in mind Wilson’s Principle No. 2: For every scientist, there exists a discipline for which his or her level of mathematical competence is enough to achieve excellence.


##### The Heretic →

By TIM DOODY | *The Morning News* July 26, 2012

*For decades, the U.S. government banned medical studies of the effects of LSD. But for one longtime, elite researcher, the promise of mind-blowing revelations was just too tempting.*

At 9:30 in the morning, an architect and three senior scientists—two from Stanford, the other from Hewlett-Packard—donned eyeshades and earphones, sank into comfy couches, and waited for their government-approved dose of LSD to kick in. From across the suite and with no small amount of anticipation, Dr. James Fadiman spun the knobs of an impeccable sound system and unleashed Beethoven’s “Symphony No. 6 in F Major, Op. 68.” Then he stood by, ready to ease any concerns or discomfort.

For this particular experiment, the couched volunteers had each brought along three highly technical problems from their respective fields that they’d been unable to solve for at least several months. In approximately two hours, when the LSD became fully active, they were going to remove the eyeshades and earphones, and attempt to find some solutions. Fadiman and his team would monitor their efforts, insights, and output to determine if a relatively low dose of acid—100 micrograms to be exact—enhanced their creativity.

It was the summer of ’66. And the morning was beginning like many others at the International Foundation for Advanced Study, an inconspicuously named, privately funded facility dedicated to psychedelic drug research, which was located, even less conspicuously, on the second floor of a shopping plaza in Menlo Park, Calif. However, this particular morning wasn’t going to go like so many others had during the preceding five years, when researchers at IFAS (pronounced “if-as”) had legally dispensed LSD. Though Fadiman can’t recall the exact date, this was the day, for him at least, that the music died. Or, perhaps more accurately for all parties involved in his creativity study, it was the day before.

At approximately 10 a.m., a courier delivered an express letter to the receptionist, who in turn quickly relayed it to Fadiman and the other researchers. They were to stop administering LSD, by order of the U.S. Food and Drug Administration. Effective immediately. Dozens of other private and university-affiliated institutions had received similar letters that day.

That research centers once were permitted to explore the further frontiers of consciousness seems surprising to those of us who came of age when a strongly enforced psychedelic prohibition was the norm. They seem not unlike the last generation of children’s playgrounds, mostly eradicated during the ’90s, that were higher and riskier than today’s soft-plastic labyrinths. (Interestingly, a growing number of child psychologists now defend these playgrounds, saying they provided kids with both thrills and profound life lessons that simply can’t be had close to the ground.)

When the FDA’s edict arrived, Fadiman was 27 years old, IFAS’s youngest researcher. He’d been a true believer in the gospel of psychedelics since 1961, when his old Harvard professor Richard Alpert (now Ram Dass) dosed him with psilocybin, the magic in the mushroom, at a Paris café. That day, his narrow, self-absorbed thinking had fallen away like old skin. People would live more harmoniously, he’d thought, if they could access this cosmic consciousness. Then and there he’d decided his calling would be to provide such access to others. He migrated to California (naturally) and teamed up with psychiatrists and seekers to explore how and if psychedelics in general—and LSD in particular—could safely augment psychotherapy, addiction treatment, creative endeavors, and spiritual growth. At Stanford University, he investigated this subject at length through a dissertation—which, of course, the government ban had just dead-ended.

Couldn’t they comprehend what was at stake? Fadiman was devastated and more than a little indignant. However, even if he’d wanted to resist the FDA’s moratorium on ideological grounds, practical matters made compliance impossible: Four people who’d never been on acid before were about to peak.

“I think we opened this tomorrow,” he said to his colleagues.

And so one orchestra after the next wove increasingly visual melodies around the men on the couch. Then shortly before noon, as arranged, they emerged from their cocoons and got to work.

Over the course of the preceding year, IFAS researchers had dosed a total of 22 other men for the creativity study, including a theoretical mathematician, an electronics engineer, a furniture designer, and a commercial artist. By including only those whose jobs involved the hard sciences (the lack of a single female participant says much about mid-century career options for women), they sought to examine the effects of LSD on both visionary and analytical thinking. Such a group offered an additional bonus: Anything they produced during the study would be subsequently scrutinized by departmental chairs, zoning boards, review panels, corporate clients, and the like, thus providing a real-world, unbiased yardstick for their results.

In surveys administered shortly after their LSD-enhanced creativity sessions, the study volunteers, some of the best and brightest in their fields, sounded like tripped-out neopagans at a backwoods gathering. Their minds, they said, had blossomed and contracted with the universe. They’d beheld irregular but clean geometrical patterns glistening into infinity, felt a *rightness* before solutions manifested, and even shapeshifted into relevant formulas, concepts, and raw materials.

But here’s the clincher. After their 5HT2A neural receptors simmered down, they remained firm: LSD absolutely had helped them solve their complex, seemingly intractable problems. And the establishment agreed. The 26 men unleashed a slew of widely embraced innovations shortly after their LSD experiences, including a mathematical theorem for NOR gate circuits, a conceptual model of a photon, a linear electron accelerator beam-steering device, a new design for the vibratory microtome, a technical improvement of the magnetic tape recorder, blueprints for a private residency and an arts-and-crafts shopping plaza, and a space probe experiment designed to measure solar properties. Fadiman and his colleagues published these jaw-dropping results and closed shop.

At a congressional subcommittee hearing that year, Sen. Robert F. Kennedy grilled FDA regulators about their ban on LSD studies: “Why, if they were worthwhile six months ago, why aren’t they worthwhile now?” For him, the ban was personal, too: His wife, Ethel, had received LSD-augmented therapy in Vancouver. “Perhaps to some extent we have lost sight of the fact that it”—Sen. Kennedy was referring specifically to LSD here—“can be very, very helpful in our society if used properly.”

His objection did nothing to slow the panic that surged through halls of government. The state of California outlawed LSD in the fall of 1966, and was followed in quick succession by numerous other states and then the federal government. In 1970, the federal Controlled Substances Act sorted commonly known drugs into categories, or schedules. “Schedule 1” drugs, which included LSD and psilocybin, were declared to have a “significant potential for abuse” and “no recognized medicinal value.” Because Schedule 1 drugs were seen as the most dangerous of the bunch, those who used, manufactured, bought, possessed, or distributed them were thought to be deserving of the harshest penalties.

By waging war on psychedelics and their aficionados, the U.S. government not only halted promising studies but also effectively shoved open discourse of these substances to the countercultural margins. And so conventional wisdom continues to argue that psychedelics offer one of a few possibilities: a psychotic break, a glimpse of God, or a visually stunning but fairly mindless journey. But no way would they help with practical, results-based thinking. (That’s what Ritalin is for, just ask any Ivy League undergrad.)

Still, intriguing hints suggest that, despite stigma and risk of incarceration, some of our better innovators continued to feed their heads—and society as a whole reaped the benefits. Francis Crick confessed that he was tripping the first time he envisioned the double helix. Steve Jobs called LSD “one of the two or three most important things” he’d experienced. And Bill Wilson claimed it helped to facilitate breakthroughs of a more soulful variety: Decades after co-founding Alcoholics Anonymous, he tried LSD, said it tuned him in to the same spiritual awareness that made sobriety possible, and pitched its therapeutic use—unsuccessfully—to the AA board. So perhaps the music never really died. Perhaps it’s more accurate to say instead that the music got much softer. And the ones who were still listening had to pretend they couldn’t hear anything at all.

##### Did Your Brain Make You Do It? →

By JOHN MONTEROSSO and BARRY SCHWARTZ | *The New York Times Sunday Review* July 27, 2012

ARE you responsible for your behavior if your brain “made you do it”?

Often we think not. For example, research now suggests that the brain’s frontal lobes, which are crucial for self-control, are not yet mature in adolescents. This finding has helped shape attitudes about whether young people are fully responsible for their actions. In 2005, when the Supreme Court ruled that the death penalty for juveniles was unconstitutional, its decision explicitly took into consideration that “parts of the brain involved in behavior control continue to mature through late adolescence.”

Similar reasoning is often applied to behavior arising from chemical imbalances in the brain. It is possible, when the facts emerge, that the case of James E. Holmes, the suspect in the Colorado shootings, will spark debate about neurotransmitters and culpability.

Whatever the merit of such cases, it’s worth stressing an important point: as a general matter, it is *always* true that our brains “made us do it.” Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t.

Unfortunately, our research shows that clear thinking on this issue doesn’t come naturally to people. Several years ago, with the psychologist Edward B. Royzman, we published a study in the journal Ethics & Behavior that demonstrated the power of neuroscientific explanations to free people from blame.

In our experiment, we asked participants to consider various situations involving an individual who behaved in ways that caused harm, including committing acts of violence. We included information about the protagonist that might help make sense of the action in question: in some cases, that information was about a history of psychologically horrific events that the individual had experienced (e.g., suffering abuse as a child), and in some cases it was about biological characteristics or anomalies in the individual’s brain (e.g., an imbalance in neurotransmitters). In the different situations, we also varied how strong the connection was between those factors and the behavior (e.g., whether most people who are abused as a child act violently, or only a few).

The pattern of results was striking. A brain characteristic that was even weakly associated with violence led people to exonerate the protagonist more than a psychological factor that was strongly associated with violent acts. Moreover, the participants in our study were much more likely, given a protagonist with a brain characteristic, to view the behavior as “automatic” rather than “motivated,” and to view the behavior as unrelated to the protagonist’s character. The participants described the protagonists with brain characteristics in ways that suggested that the “true” person was not at the helm of himself. The behavior was *caused*, not intended.

In contrast, while psychologically damaging experiences like childhood abuse often elicited sympathy for the protagonist and sometimes even prompted considerable mitigation of blame, the participants still saw the protagonist’s behavior as intentional. The protagonist *himself* was twisted by his history of trauma; it wasn’t just his brain. Most participants felt that in such cases, personal character remained relevant in determining how the protagonist went on to act.

We labeled this pattern of responses “naïve dualism.” This is the belief that acts are brought about either by intentions or by the physical laws that govern our brains and that those two types of causes — psychological and biological — are categorically distinct. People are responsible for actions resulting from one but not the other. (In citing neuroscience, the Supreme Court may have been guilty of naïve dualism: did it really need brain evidence to conclude that adolescents are immature?)

Naïve dualism is misguided. “Was the cause psychological or biological?” is the wrong question when assigning responsibility for an action. All psychological states are also biological ones.

A better question is “how strong was the relation between the cause (whatever it happened to be) and the effect?” If, hypothetically, only 1 percent of people with a brain malfunction (or a history of being abused) commit violence, ordinary considerations about blame would still seem relevant. But if 99 percent of them do, you might start to wonder how responsible they really are.

It is crucial that as a society, we learn how to think more clearly about causes and personal responsibility — not only for extraordinary actions like crime but also for ordinary ones, like maintaining exercise regimens, eating sensibly and saving for retirement. As science advances, there will be more and more “causal” alternatives to intentional explanations, and we will be faced with more decisions about when to hold people responsible for their behavior. It’s important that we don’t succumb to the allure of neuroscientific explanations and let everyone off the hook.

##### Why Crowded Coffee Shops Fire Up Your Creativity →

By HANS VILLARICA l *The Atlantic* June 20, 2012

**PROBLEM**: To optimize creativity, how quiet or noisy should your workspace be?

**METHODOLOGY**: Researchers led by Ravi Mehta conducted five experiments to understand how ambient sounds affect creative cognition. In one key trial, they tested people’s creativity at different levels of background noise by asking participants to brainstorm ideas for a new type of mattress or enumerate uncommon uses for a common object.

**RESULTS**: Compared to a relatively quiet environment (50 decibels), a moderate level of ambient noise (70 dB) enhanced subjects’ performance on the creativity tasks, while a high level of noise (85 dB) hurt it. Modest background noise, the scientists explain, creates enough of a distraction to encourage people to think more imaginatively.

**CONCLUSION**: The next time you’re stumped on a creative challenge, head to a bustling coffee shop, not the library. As the researchers write in their paper, “[I]nstead of burying oneself in a quiet room trying to figure out a solution, walking out of one’s comfort zone and getting into a relatively noisy environment may trigger the brain to think abstractly, and thus generate creative ideas.”

**SOURCE**: The full study, “Is Noise Always Bad? Exploring the Effects of Ambient Noise on Creative Cognition,” is published in the *Journal of Consumer Research*.