The Nobel Prize in Literature recently awarded to Chinese writer Mo Yan has created such an uproar that the merits of his writing seem to have been lost in the commotion. Taking center stage are cries about the political implications of honoring a member of the Communist Party and questions about the party politics of the writer himself. Then there are the financial questions: How will China best cash in on Mo Yan? The mayor of Mo Yan’s hometown wants to create a “Mo Yan brand,” and there is talk of turning his hometown Gaomi into a theme park.
Seven years ago I interviewed the future Nobel winner, and I have an entirely different take on the current debate. It was September 2005, and I was writing for a magazine based in Hong Kong. Mo Yan’s brilliant epic Big Breasts and Wide Hips had just come out in English; I was certain that he was destined for greatness and must be featured. And while my magazine was more interested in articles on designer-clad, diamond-encrusted socialites than in culture, I continued to push for the story, paying for my own flight to Beijing, intent on meeting the author of that wild ride of a novel.
In person, Mo Yan had the well-fed look of someone who has seen too much starvation and famine to diet for fashion. He laughed easily, but his smiles were rare. There were smiles all around, though, on the faces of the staff vying to serve him coffee in the Beijing hotel lobby. Who would have guessed, in a country as vastly populated as China, that an ordinary-looking writer would be as recognizable as a pop star or actor?
Our conversation about his novel turned immediately to politics. It became clear that Mo Yan’s relationship with Communist Party policy is infinitely complex. He said that if he had written the same book 20 years ago he might have been shot, adding that he does not take political sides in his novel, but tries to “treat all as human. I want to show the real China and real life. It seems that [my book] is about a village, but it is actually about China’s history. In this book I want to cover every critical issue of the last century.” Speaking about his future works, his face darkened as he mentioned the unknown consequences he always fears they could provoke. “A writer without controversy is not a good one. A book without controversy is not a good one, either.”
After the interview, I visited a sun-filled Tiananmen Square. When the changing of the guards began, I was singled out by an official and loudly berated, a club waved in my face. Uncomprehending, I did not move until a girl beside me pushed me down and whispered that he had said I was too tall and blocked the view of people behind me. Forced to the ground in the shadow of Mao, I started to understand the immensity of the task Mo Yan has set for himself, which in his words is “to cover every critical issue of the last century.”
Now, however, many are denouncing Mo Yan’s win. Dissident writer Yu Jie says it is a victory for the Communist Party, and the American-educated artist Ai Weiwei paints Mo Yan as a sellout. Even the 2009 Nobel literature laureate Herta Müller calls it a “catastrophe.”
I disagree. That Mo Yan can write such compelling fiction about current government corruption, inhumane policies, and the country’s bloody history without being jailed, censored, or forced to leave his native villagers and country for citizenship abroad speaks to the deep artistry of his novels and his commitment to his adoring Chinese public. Moreover, the clout of his Nobel now permits him to voice opinions that have hitherto been possible only through the veil of his writing. This makes his pen name, translated as “don’t speak,” all the more ironic.
But, be assured that none of this current debate can really be affecting Mo Yan all that much, given his stance that controversy is the mark of good writing. By his own standards, he has proved himself a tour de force. I just worry where he will write his next novel once Gaomi is turned into a theme park.
Anna Schonberg ’95 has a master’s in East Asian studies from Stanford and currently lives in Los Angeles.
On November 1, 1941, a little over a month before Pearl Harbor and America’s entry into World War II, the U.S. Army opened a secret facility in an abandoned airplane hangar at San Francisco’s Golden Gate Park. The purpose of this enterprise was to create a cadre of experts who could speak Japanese. After the war, the new language training center, now known as the Defense Language Institute, moved to Minnesota and eventually found a permanent home at the historic Presidio of Monterey.
In times of war, we always seem to remember the need for people to talk to other people in a language they can truly understand—their own. Unfortunately, without the threat of war, Americans—like the former president of Harvard and former secretary of the treasury Larry Summers—seem to believe that foreign languages are a waste of time and resources because the rest of the world, if it wants to talk to us, can be expected to do so in English.
Yet even people who realize that the overwhelming majority of the world’s population does not speak English, and that even those who do speak English can often communicate in that language only on a very basic level of proficiency, add to the problem by joining the stampede for what I like to call “the critical language du jour.” The people who jump on these particular bandwagons seem to be unaware of the fact that their behavior is that of lemmings. In the 1960s and 1970s, following the Sputnik crisis of 1957, everybody was supposed to be learning Russian. In the 1990s there was a spike in Japanese (remember Michael Crichton’s Rising Sun and all those courses on Japanese business ethics?) and German (following the fall of the Berlin Wall, when people were afraid of the rise of a “Fourth Reich”!).
While the Arabic School at Middlebury was established in 1982, on a national scale Arabic remained one of the “less commonly taught languages” until 9/11, when it suddenly seemed as if every single college student in America wanted to study Arabic. The same is true of Chinese: Whereas Middlebury established its summer Chinese Language School as early as 1966, the rest of the nation did not catch up until the late 1990s, when it suddenly became obvious to everybody else that China was on its way to becoming a global powerhouse.
There is, of course, nothing wrong with people studying Arabic and Chinese. We desperately need proficient speakers of both languages. With fewer than 20 percent of Americans fluent in a second language (as compared to 50 percent in the European Union), we sorely need foreign language speakers to remain competitive in a global economy, for purposes of national security, and to participate in worldwide conversations about risks like climate change, global health and resources (food, water, energy), or migration.
The problem is that we need experts in all the most important world languages, not just the one or two “critical languages du jour.” Just as we found ourselves catastrophically short of Arabic speakers after 9/11 (and, more importantly before 9/11!), who is to say that, in the wake of a resurgent Russia, we will not someday wish we had had more Russianists?
Currently, many people in the federal government, and consequently many administrators of educational institutions, seem to think that some of our traditional languages (except for Spanish) no longer matter. This includes French, German, Italian, and Russian. (It also includes Japanese, which, as recently as the 1990s, was very “hot.”) There are about 110 million people in dozens of countries worldwide who speak French as their native language. About 100 million people in central Europe speak German, which is also the most widely spoken second language in Europe after English. Russian is spoken by some 160 million people—and, as The Economist noted some time ago, we neglect at our peril a country that remains one of the world’s superpowers. Japanese is spoken by 125 million people; in 2012, Japan, with a GDP of U.S. $6 trillion, was still the world’s third-largest economy, behind the United States and China and ahead of Germany. Yet in the headlong race to throw all of our (dwindling) resources at the language spoken by the people we most fear at any given point in time, we are sending a powerful message to students and the public at large that languages matter only if we are at war with the people who speak them.
What we need is a strategic language reserve, a place, or better yet, many places, where the 10 or 20 most important world languages will always be taught, reliably, year after year, with cutting-edge pedagogy and technology in a setting that is immersive, contextualized, interactive, and high-octane. There are only three or four places in the nation that do this, and among these, Middlebury has by far the longest tradition of excellence in immersion language education. As Middlebury’s Language Schools approach our centennial in 2015, we should remember that, except for the German School between 1917 (consider the date!) and 1931, Middlebury has never closed a Language School. This means that Middlebury is one place in the nation where, for a hundred years, students have been able to come and study a particular language in one summer, and then return to study some more one or two or many years later. We now teach 10 languages: Arabic, Chinese, French, German, Hebrew, Japanese, Italian, Portuguese, Russian, and Spanish. And we expect to teach these languages (and others we hope to add) a decade from now and, barring unforeseen disasters, many years into the future. If this country is to remain competitive, secure, and a leader on issues of global import, it will be critical for us to speak the world’s languages.
Michael Geisler is a professor of German and the vice president of the Language Schools, schools abroad, and graduate programs at Middlebury.
But What About English?
It is estimated that 375 million people around the world speak English as their first language; another 375 million, and possibly more, speak English as a second language. Beyond that, even more people speak English to some level of competence, as many as 25 percent of this planet’s seven billion people.
And the demand for the other three-quarters is increasing. Why? “Because English is the language of business and commerce,” says Renee Jourdenais, the dean of the Graduate School of Translation, Interpretation, and Language Education at the Monterey Institute. “If you are in China, and you want to do business with Russia or Japan or India, you need a common language, and English often serves as that language.”
English is also the official language for maritime and aeronautical communications, for the United Nations, and for the International Olympic Committee; it is the primary or official language of nearly 100 countries around the globe. Those who can’t speak English are at risk of being marginalized, a phenomenon taking place both far and near. Consider: An estimated one in four children in the United States are from immigrant families and live in households where a language other than English is spoken. As a result, in American schools there is a significant learning gap between English-language learners and native English speakers.
Being able to teach English to nonnative speakers is of critical importance. Under Jourdenais’s purview at MIIS are both the programs in intensive English and teaching English to speakers of other languages. (The former is for international students seeking to learn English; the latter trains people to teach English.) Here are some of Jourdenais’s thoughts on the learning and teaching of English:
On the need for understanding English
There’s the business and commerce equation, as I mentioned. English is increasingly seen as the lingua franca of the world. If you want to participate in the global economy, if you want to be globally literate, knowing how to speak and read English can maximize your possibilities. Likewise, if we look inwardly at our own country, the demographics of the United States are changing. The number of people who speak languages other than English is increasing. And English serves as a common language for U.S. residents as well. As such, there is a critical need in our country and our schools for teachers who can teach English to nonnative speakers—to help close a critical learning gap between those who come to school English-fluent and those who need to develop their English skills along with their academic knowledge.
On the teaching of English to nonnative speakers
Too often, people assume that if you can speak a language, if you are “fluent” in a language, then you can teach it. That’s not entirely true. Those who want to teach English to speakers of other languages need to know why people need the language and how they acquire it. These potential teachers need a sound linguistic foundation—they have to understand linguistic theory, the structures of language, and theories of how languages are learned. And then there is language pedagogy—how best to teach languages and engage students in their learning experience. These teachers also need to be prepared to teach students who come from different backgrounds with different ways of learning. All of this is so important—these teachers are giving their students a voice in the world.
At my school, all of the kids in the third grade were asked to read a children’s book to the first graders. This program instilled a very real sense of, I don’t know, superiority, I guess. The age difference between first and third grade isn’t great, but in third grade you can read; it was a differentiator. Reading was embedded into that sense of identity as a third grader; we were the “big kids,” and we were going to demonstrate it by doing something the first graders couldn’t.
Up until this point, I don’t think I had a full understanding that I couldn’t read like my classmates. I just knew that it was hard, and that was the extent of it. I thought it was like that for everybody. But when it came time for us to choose our books, I remember kids choosing these chapter books, the Magic Tree House series, to show off their reading chops; or maybe they were picking simpler books they had been able to read for a while, books that the first-graders were just learning to read.
So I went that route, picking The Cat in the Hat—except I couldn’t read it. I knew what the story was about because my parents always read to me at bedtime, and I had a pretty good visual memory of the book. I knew how many words there were on a page. The pictures somewhat corresponded with the words, and I could remember the pictures. So up until “reading day” I would have my parents read me that book, and I would try and memorize the story. I would try to remember the words that they were saying.
And then it came time to read the book aloud to the first-graders. And it was right then, when I was sweating, my hands shaking, fumbling for words . . . that’s when I knew. These kids were correcting me. They could read it. And I couldn’t.
That’s when it dawned on me that there was this structure, this hierarchy in the educational world—third-graders should be able to do things that first-graders couldn’t—and I didn’t have a place in it.
I was given the diagnosis in the fourth grade, and it came with such a profound sense of relief. Up until that point, I just felt that I wasn’t smart enough; I couldn’t do what the teachers felt I could do. So getting the diagnosis—that was the ultimate clarification that I was different, but that was good. Suddenly, there was a category that I fit into; I wasn’t alone.
Being diagnosed as dyslexic immediately gave me a sense of what my strengths were and what my weaknesses were. To get these laid out for me was so important because it told me that, OK, there are things I’m going to struggle with, but there are also things that I won’t struggle with. Before, I had no confidence; I just assumed everything would be a struggle.
I was so lucky that my mom was a teacher, because she never had the belief that there were “normal” kids and there were kids who didn’t fit that definition. She sees each kid as an individual learner. The concept that there’s a standard student and there’s a student who needs accommodations is ridiculous because there is no “standard” student. She inherently understood that. Up until my diagnosis, I might have felt alone at school, but never at home.
In high school, I loved studio art, and I think it was expected that because I was dyslexic and because I was good at art, I’d go to art school. But I saw this as a cop-out, as running away from my dyslexia, as conforming to others’ beliefs about what I could or couldn’t do. I had this deep drive to prove to people that I could do academics. I was going to go to a rigorous liberal arts school! And then I was going to be a history major!
When I got here, I felt like Middlebury had taken a risk with me; I was a risky investment. I mean, I knew what I could do, but how could they know for sure? I had bad SAT scores, and I probably spelled some stuff wrong on the application. So I put pressure on myself to prove that kids with learning disabilities, kids who don’t do well on the SATs, can contribute a lot to the community—they can be creators, innovators.
At first I thought that meant excelling in areas I wouldn’t normally excel in and limiting myself to one studio art course a semester—things like that. And I did well. But then I wondered, Why am I not doing what I really want to do? I remember being told that I was going to reach a point in my life when I’d be able to do the things that I wanted to do, that I wouldn’t always have to work so hard to overcome my learning difference.
But there’s no guy standing on the corner saying, “You know that point? It’s happening right now.” You have to come to that realization yourself, and I think this is especially difficult for people with learning differences. When do you shed that stuff that you have to do?
I think I’ve spent a long time feeling not so great about myself; there are self-esteem issues deeply embedded in working within other people’s expectations. And if you are not doing what you really want to do, not playing to your strengths, then the validation you receive is completely external, and you never feel satisfied.
I’m still working through it. But I’m a studio art major now, though I might minor in history.
Living with dyslexia . . . it’s hard. But from my experience, you have to own it. It’s who I am. It’s always going to be me. Understanding this is essential in order to be happy as a human being.
In the late 1980s, when Jane Swift arrived as a freshman at Trinity College in Hartford, Connecticut, after attending public schools in western Massachusetts, she says it didn’t take long before she noticed a “vibrant, Technicolor gap” between her precollegiate preparation and that of her peers who had attended private schools; it was most pronounced, she says, in the realm of language education.
“I have this distinct recollection of having a steeper learning curve,” says Swift, the former Massachusetts governor and current CEO of Middlebury Interactive Languages (MIL), the joint commercial venture between Middlebury College and K12, Inc. “It opened my eyes, and it later became my focus in public office and in the private sector: How can we better facilitate access to high-quality education in the United States? Technology and its innovative applications seemed to be this untapped area where we could vastly broaden our reach in an affordable way.”
And this access—or lack of it—has had a profound impact on language learning, says Middlebury President Ron Liebowitz. “There is a huge language gap in the United States, a crisis in terms of the number of people who are proficient in foreign languages,” he says. “We’re not adequately preparing our next generation; students typically need to wait until the age of 18 to begin the study of language in any serious way. That’s a problem.”
With education budgets being slashed across the country—according to a recent analysis by the non-partisan Center on Budget and Policy Priorities, 35 states are spending less per pupil than they were five years ago—there likely will be fewer language instructors in this country’s public schools in the years ahead, not more.
While this trend may be troubling, many agree with Swift that an innovative technological approach would not only lessen the impact but would also make language learning in our nation’s public schools more effective.
“A comprehensive online solution is everyone’s holy grail,” Phil Hubbard, a senior lecturer in linguistics at Stanford, told the Pacific Standard magazine’s Bonnie Tsui for a story titled, “What’s the Secret to Learning a Second Language?” “A lot of people developing these programs have a good idea, but no particular experience in language teaching,” he added. “They leverage one part of it, but don’t do the other parts well.”
It was with this in mind that Middlebury partnered with K12 to launch MIL in 2010. K12 is a leader in educational technology and would bring the digital expertise; Middlebury language instructors would design the curriculum, and, most important, would attempt to translate Middlebury’s century-old intensive immersion philosophy to the online realm.
“The drill-and-kill approach . . . doesn’t work,” Vice President of Language Schools Michael Geisler told Tsui for the Pacific Standard story. “Scripted dialogue and picture association . . . [are] not going to teach you the language.”
“Contextualized learning is the key,” Geisler told me in a conversation we had in December about the development of the programs. “We spent a lot of time talking about how to introduce this philosophy into an online curriculum,” he said.
By contextualization, Geisler means using clues that come from the context of the experience to acquire the information one needs to truly understand a language. He considers this to be one of four key principles to language learning. To attain contextualization online, MIL has developed video tutorials and virtual worlds using authentic material that will provide students with body-language clues, recognizable surroundings, and visual and verbal tone. “We’re trying to teach students to look for what they know (cognates, creative guesses),” Geisler said. “Not for what they don’t know.”
Geisler acknowledged that contextualization doesn’t come as easily online as it does face-to-face. In person, if you say something, you can see instantly how your message was received. (“As the German poet Heinrich Heine wrote, ‘Once the arrow has left the bow, it is no longer the archer’s,’” Geisler noted.) Facial recognition isn’t as intuitive in a virtual world, though Geisler added that by using an application such as Skype to communicate with an instructor or a peer, this disadvantage is greatly lessened.
This speaks to another of the four key language-learning principles, interaction with others. (The other two are using the language and using it for a purpose.) “But online, you can do it at your own pace, which is very useful for people with different learning styles,” Geisler explained.
“Think about the shy student, the student who needs more time. This person can ease into interaction online at their own pace, when they are more comfortable. They’re not under the same pressure they would be in a traditional classroom. Of course, when they are more comfortable, we do want them to seek out this personal interaction.”
I asked Geisler about the traditional classroom. Is there a concern that if this online model is as successful as they believe it will be, it will hasten attrition among foreign language instructors? That is, will machine replace man?
“Not if things go right,” he said. “We see online learning as providing more foreign language resources in a more cost-effective manner. Once school districts find out that they can deploy teachers more efficiently, to reach larger numbers of students, there will be an incentive for bringing back some languages that are currently threatened by tight budgets.”
MIL offers three delivery models—a stand-alone model, a supplemental model that a student may use at home to enhance his or her classroom instruction, and a hybrid approach in which the foreign language instructor incorporates online learning into his or her curriculum. Geisler and others believe that the hybrid approach is the most effective way to learn a language. But the supplemental and stand-alone models exist for a reason. The hybrid approach may be optimal, but if it’s not feasible within certain schools, providing students with other options is better than having no options at all.
“Think about it this way,” Jane Swift says. “Let’s say you have access to the very best teacher possible. Well, you can never replace that. But let’s say you don’t have that teacher as an option. Let’s say your school is going to cut Spanish. Or let’s say you want to learn Russian and your school doesn’t offer it. We can replicate that instruction in a fashion.”
She continues: “We can give you a quality learning experience—whenever you want it and at your own pace. It might not be the same as having that specific teacher in your classroom, but how many schools have that? Fewer and fewer. For those that don’t, we can help fill that void and close that gap. And for those that do, well, these programs will only make that instruction even better.”
What is the meaning of “meaning”? This apparently recondite question, posed by the philosopher Hilary Putnam in a seminal 1975 paper, actually lies at the core of the branch of linguistics known as semantics. How we answer it will have important implications for a variety of issues that are currently hotly debated in linguistics, such as whether some concepts are innate, whether different languages create different styles of thought or experience (linguistic determinism), how languages are learned, and so on. In the second half of the 20th century, the prevalent commonsense view of meaning faced a number of serious challenges, but none was as potentially revolutionary as that raised by Putnam and other similarly minded philosophers of language.
I always begin my Philosophy of Language course by asking students what they take to be “the meaning of ‘meaning,’” and the most common initial response is, in short, that meaning is something in the head. The meaning of a sentence like “It’s six pm in Denver now” is a thought in the mind of the speaker, presumably the thought that right now the time in Denver is six pm; the meaning of a word, for instance “cauliflower,” is the speaker’s concept of that thing. This view is a very commonsensical one for us today, and also one with a long historical pedigree.
The 17th century philosopher John Locke held that a man’s words “stand as marks for the ideas in his own mind, whereby they might be made known to others, and the thoughts of men’s minds be conveyed from one to another.”
But Putnam and other philosophers, such as Saul Kripke, raised deep-seated objections to the idea theory, objections whose implications philosophers and linguists are still trying to unravel. Putnam’s challenge takes the form of a thought experiment involving a make-believe planet called “Twin Earth.” Imagine, he says, that somewhere in the universe there is a planet that is, with one exception, molecule for molecule identical with Earth. On Twin Earth there are twin trees and twin rocks. There are even doppelgangers of you and me, who speak something that sounds just like English. The only difference between the two planets is that on Twin Earth, the lakes and rivers don’t contain H2O, but a substance with a different chemical formula we can abbreviate XYZ. XYZ is, to the naked eye, indistinguishable from H2O, and Twin Earthians drink it, cook with it, and even call it by the same sound we use, “water.”
But, Putnam asks, what does the Twin Earthian word “water” mean? Clearly, it does not mean water. After all, water is H2O, not XYZ; a substance with a different chemical formula would not be called water. But—and here’s the rub—this difference of meaning would exist even if Person A on Earth and Twin Person A on Twin Earth were exactly identical in terms of what’s “in their heads.” Suppose that it’s the year 1750 (Earth time), and no one on either Earth or Twin Earth has any understanding of chemical composition. Person A and Twin Person A will then share all the same beliefs about their respective liquids: that it’s clear, odorless, thirst-quenching on a summer’s day, and so on. But even so, the meaning of Twin Person A’s term “water” cannot be water, for this term refers to XYZ, not H2O. Person A’s and Twin Person A’s “concepts” of these substances are identical, and yet the meanings of their terms are different. So meanings cannot just be concepts. As Putnam puts it, “Cut the pie any way you like, ‘meaning’ just ain’t in the head!”
Or, at any rate, not wholly in the head. Putnam’s proposal is actually that the meaning of most words includes two components: one that is not in the head, the word’s extension, or the things to which it applies (in the case of water, H2O); and one that is in the head, the word’s “stereotype.” This may seem, to put it mildly, surprising. How could H2O itself be part of the meaning of “water” in 1750, before anyone knew that water was H2O? Putnam’s idea is that “water,” and indeed most words, are actually akin to indexical words like “this,” “that,” and “now,” whose meaning depends on context. What I mean when I say “that” depends on whether I’m pointing to my cat or my car, and if I’m pointing to my cat, what I mean is the cat itself. In a similar way, the meaning of “water” “reaches out” to encompass the actual stuff in the world to which the word refers, even if the speaker doesn’t fully know the nature of that stuff.
Putnam’s view of meaning has sparked a great deal of controversy since it was proposed, but it has had a tremendous influence. What are its implications? What it means for broader questions concerning, for instance, the innateness of language and linguistic determinism, is still very much a subject of debate. However these specific issues are decided, this new perspective has suggested to many a broad reorientation of our way of thinking about the relationship between the mind and the world. The idea theory of meaning, by picturing meaning as something wholly within the speaker’s head, in a sense separates the mind from the world. On Putnam’s view, the meanings we grasp with our minds encompass things outside the mind, which suggests we should think of the mind as fundamentally open to the world, rather than closed in on itself.
For those who accept Putnam’s argument, there is much work to be done in order to understand what exactly this means about the nature of human subjectivity and its relation to the world.
John Spackman is an associate professor of philosophy. He teaches a course at Middlebury titled “Philosophy of Language.”
Dwayne Nash ’99 was once part of the legal institution he now seeks to reform.
The morning was like any other. It was late February, and Dwayne Nash ’99 woke in a brownstone on Manhattan Avenue, in New York City’s Precinct 28, where Malcolm X once demanded custody of a black man the police beat nearly to death. That was before the riots, before crack hit hard and the War on Drugs took the dealers and doers to prison, and Harlem became a nice, historical neighborhood with tree-lined streets and rents so high that Nash, a former criminal prosecutor, could hardly afford his own apartment. This morning, like every morning, Nash lay in bed and scrolled through headlines on his iPhone. One caught his eye—a neighborhood watchman had shot and killed an unarmed black kid in Sanford, Florida. Trayvon Martin had looked “suspicious,” the watchman, George Zimmerman, said. Martin was on his way home with an iced tea and a bag of Skittles when Zimmerman called the police. By the time an officer arrived, the young man was dead.
Nash had known his share of murders, but this one particularly rattled him. Zimmerman had claimed he acted in self-defense, and the police let him go. “You have one person standing there with a gun, the other person dead. You have to give the body the benefit of the doubt,” said Nash. Why didn’t they? “I don’t think the police were incompetent. I think they saw no value in Trayvon, in investigating any further. His blackness made his body less important.”
Two weeks later, a reporter for the Chronicle of Higher Education met Nash in a coffee shop in Harlem. Nash, 35, is at first glance modish and circumspect; the reporter took note of his “Burberry tie” and “wing-tipped shoes.” She wanted to know what he thought of the incident. Nash, a doctoral candidate at Northwestern University’s black-studies program, was researching the history of stop-and-frisk, a police tactic popularized in the 1990s by former New York City mayor Rudy Giuliani. His research is part of a growing body of work that equates the criminalization of today’s minorities with the laws that once denied African Americans their basic rights. One-third of all black men in the United States are under the watch of the criminal justice system—in prison, on probation, on parole—the majority charged with drug possession and other nonviolent offenses. Statistically, a white person is more likely to use drugs than an African American. Scholars have known for a long time that the numbers don’t add up and trace the disparity to the 1980s and the War on Drugs, when police raided dense, urban neighborhoods. But Nash’s work traces the problem even further back, to the 1960s, when the Civil Rights Act passed and white Americans grasped at a new kind of racial control.
Nash chose his words carefully to the Chronicle reporter, at once gentle and emphatic: “Whether we are stopped, searched, arrested, or shot, it’s all the same. We’re being read as a threat, criminal, or suspicious at the very least. Instead of Trayvon Martin, it could have been me that was killed. I pray that a gun barrel is not pointed to my face for making an innocent gesture or for being in the wrong place at the wrong time because of my skin color. There was no right place for Trayvon. He was walking home in the rain, doing nothing wrong, and he was read as suspicious.”
This past October, when I met Nash in Chicago, I asked him to reflect again on the incident. George Zimmerman, the watchman who shot Martin, had since been arrested and charged with second-degree murder. Nash was dissatisfied. “There is a long history of viewing the black body with criminal suspicion,” he said. “That memory has been transmitted across generations and time—and across institutions, as well.” In this case, said Nash, the real problem was not Zimmerman, nor even the cops, but Florida’s stand-your-ground law, which gives the benefit of the doubt to anyone who claims they shot another in self-defense. “If you believe that Zimmerman was just one bad apple, just ‘that racist,’ then you miss the point,” he said. “Zimmerman knew that he could draw from the law to protect himself. He knew he had greater rights than Trayvon. He did something wrong, but the legal institution made that possible.”