Publicity news from the University of Chicago Press including news tips, press releases, reviews, and intelligent commentary.
As with the past few years, we are fortunate enough to have scholar Sandra M. Gustafson contribute a post following Barack Obama’s annual State of the Union address, positing the stakes for Obama’s rhetorical position in light of recent events in Ferguson, Missouri, and New York City (while pointing toward their more deeply embedded and disturbing legacies, respectively). Read Gustafson’s 2015 post in full after the jump below.
Lives that Matter: Reflections on the 2015 State of the Union Address
by Sandra M. Gustafson
In his sixth State of the Union address, President Barack Obama summarized the major achievements of his administration to date–bringing the American economy back from the Great Recession, passing and implementing the Affordable Care Act, advancing civil rights, and winding down wars in Iraq and Afghanistan, while shifting the emphasis of US foreign policy toward diplomacy and multilateralism – and presented a framework for new initiatives that he called “middle class economics,” including affordable child care, a higher minimum wage, and free community college. Commentators compared the president’s emphasis on the successes of his six years in office to an athlete taking a victory lap. Some considered that tone odd in light of Republican midterm victories, while others speculated about his aspirations to shape the 2016 presidential election. More and more, the president’s rhetoric and public actions inform an effort to shape his legacy, both in terms of the direction of his party and with regard to his historical reputation. The 2015 State of the Union address was a prime example of the narrative emerging from the White House.
The announcement earlier on the day of the address that the president will visit Selma, Alabama, to commemorate the fiftieth anniversary of Bloody Sunday and the movement to pass the Voting Rights Act was just one of many examples of how he has presented that legacy over the years: as an extension of the work of Martin Luther King, Jr. Community organizing, nonviolent protest, and political engagement are the central components of the route to social change that the president offered in The Audacity of Hope, his 2006 campaign autobiography. The need to nurture a commitment to progressive change anchored in an expanded electorate and an improved political system has been a regular theme of his time in office.
In the extended peroration that concluded this State of the Union address, the president alluded to his discussion of deliberative democracy in The Audacity of Hope. He called for “a better politics,” which he described as one where “we appeal to each other’s basic decency instead of our basest fears,” “where we debate without demonizing each other; where we talk issues and values, and principles and facts,” and “where we spend less time drowning in dark money for ads that pull us into the gutter, and spend more time lifting young people up with a sense of purpose and possibility.” He also returned to his 2004 speech to the Democratic National Convention in Boston, quoting a now famous passage, “there wasn’t a liberal America or a conservative America; a black America or a white America—but a United States of America.”
The president’s biracial background and his preference for “both/and” ways of framing conflicts have put him at odds with critics such as Cornel West and Tavis Smiley, who have faulted him for not paying sufficient attention to the specific problems of black America. The approach that Obama took in his address to the police killings of unarmed black men in Ferguson, Missouri, and New York City did not satisfy activists in the Black Lives Matter coalition, which issued a rebuttal to his address in the form of a State of the Black Union message. To the president’s claim that “The shadow of crisis has passed, and the State of the Union is strong,” the activists responded emphatically, offering a direct rebuttal in the subtitle of their manifesto: “The Shadow of Crisis has NOT Passed.” Rejecting his assertions of economic growth and social progress, they assembled a list of counterclaims.
The president came closest to engaging the concerns of the activists when he addressed the issue of violence and policing. “We may have different takes on the events of Ferguson and New York,” he noted, juxtaposing micronarratives of “a father who fears his son can’t walk home without being harassed” and “the wife who won’t rest until the police officer she married walks through the front door at the end of his shift.” By focusing on the concerns of a father and a wife, rather than the young man and the police officer at risk, he expanded the possibilities for identification in a manner that echoes his emphasis on family. The “State of the Black Union” extends the notion of difference in an alternative direction and responds with a macronarrative couched in terms of structural violence: “Our schools are designed to funnel our children into prisons. Our police departments have declared war against our community. Black people are exploited, caged, and killed to profit both the state and big business. This is a true State of Emergency. There is no place for apathy in this crisis. The US government has consistently violated the inalienable rights our humanity affords.”
To the president’s language of the nation as a family, and to his statement that “I want our actions to tell every child in every neighborhood, your life matters, and we are committed to improving your life chances[,] as committed as we are to working on behalf of our own kids,” the manifesto responds by rejecting his image of national solidarity and his generalization of the “black lives matter” slogan. Instead it offers a ringing indictment: “This corrupt democracy was built on Indigenous genocide and chattel slavery. And continues to thrive on the brutal exploitation of people of color. We recognize that not even a Black President will pronounce our truths. We must continue the task of making America uncomfortable about institutional racism. Together, we will re-imagine what is possible and build a system that is designed for Blackness to thrive.” After presenting a list of demands and declaring 2015 “the year of resistance,” the manifesto concludes with a nod to Obama’s 2008 speech on race, “A More Perfect Union”: “We the People, committed to the declaration that Black lives matter, will fight to end the structural oppression that prevents so many from realizing their dreams. We cannot, and will not stop until America recognizes the value of Black life.”
This call-and-response between the first African American president and a coalition of activists has two registers. One register involves the relationship between part and whole (e pluribus unum). President Obama responds to demands that he devote more attention to the challenges facing Black America by emphasizing that he is the president of the entire nation. What is at stake, he suggests, is the ability of an African American to represent a heterogeneous society.
The other register of the exchange exemplifies a persistent tension over the place of radicalism in relation to the institutions of democracy in the United States. The Black Lives Matter manifesto draws on critiques of American democracy in Black Nationalist, Black radical, and postcolonial thought. As I discuss in Imagining Deliberative Democracy in the Early American Republic, these critiques have roots reaching back before the Civil War, to abolitionist leaders such as David Walker and Maria Stewart, and even earlier to the Revolutionary War veteran and minister Lemuel Haynes. The recently released film Selma, which portrays the activism leading to the passage of the 1965 Voting Rights Act, highlights the tactics of Dr. King and his associates as they pressure President Johnson to take up the matter of voting. The film characterizes the radical politics of Malcolm X and the threat of violence as a means to enhance the appeal of King’s nonviolent approach, an argument that Malcolm himself made. It then includes a brief scene in which Malcolm meets with Coretta Scott King in a tentative rapprochement that occurred shortly before his assassination. This tripartite structure of the elected official, the moderate or nonviolent activist, and the radical activist willing to embrace violence has become a familiar paradigm of progressive social change.
Aspects of this paradigm inform Darryl Pinckney’s “In Ferguson.” Reporting on the violence that followed the grand jury’s failure to indict Officer Darren Wilson for Michael Brown’s killing, Pinckney quotes the Reverend Osagyefo Sekou, one of the leaders of the Don’t Shoot coalition, on the limits of electoral politics. Voting is “an insider strategy,” Sekou says. “If it’s only the ballot box, then we’re finished.” Pinckney also cites Hazel Erby, the only black member of the seven-member county council of Ferguson, who explained the overwhelmingly white police force as a result of low voter turnout. Pinckney summarizes: “The city manager of Ferguson and its city council appoint the chief of police, and therefore voting is critical, but the complicated structure of municipal government is one reason many people have been uninterested in local politics.” This type of local narrative has played a very minor role in the coverage. It occupies a register between President Obama’s micronarratives focused on individuals and families, on the one hand, and the structural violence macronarrative of the Black Lives Matter manifesto on the other. This middle register is where specific local situations are addressed and grassroots change happens. It can also provide insight into broad structural problems that might otherwise be invisible.
The value of this middle register of the local narrative emerges in the light that Rachel Aviv shines on police violence in an exposé of the Albuquerque Police Department. In “Your Son is Deceased,” Aviv focuses on the ordeal of the middle-class Torres family when Christopher Torres, a young man suffering from schizophrenia, is shot and killed by police in the backyard of the family home. Christopher’s parents, a lawyer and the director of human resources for the county, are refused information and kept from the scene of their son’s killing for hours. They learn what happened to Christopher only through news reports the following day. The parallels between the Torres and Brown cases are striking, as are the differences. Though the confrontation with the police that led to Torres’s death happened just outside his home, and though his parents knew and worked with city officials including the mayor, his death and the official response to it share haunting similarities with that of Brown. Aviv does not ignore the issue of race and ethnicity, mentioning the sometimes sharp conflicts in this borderlands region between Latino/as, Native Americans, and whites. But in presenting her narrative, she highlights the local factors that foster the corruption that she finds to be endemic in the Albuquerque Police Department; she also foregrounds mental illness as a decisive element in a number of police killings, one that crosses racial and economic boundaries.
There is a scene in Selma in which Dr. King invites his colleagues to explore the dimensions of the voter suppression problem. They begin listing the contributing factors—the literacy tests, the poll tax—and then one of the organizers mentions laws requiring that a sponsor who is a voter must vouch for someone who wishes to register. The sponsor must know the would-be voter and be able to testify to her or his character. In rural areas of the South, there might not be a registered black voter for a hundred miles, and so many potential voters could not find an acquaintance to sponsor them. The organizers agree this should be their first target, since without a sponsor, a potential voter cannot even reach the downstream hurdles of the literacy test and the poll tax. This practice of requiring a sponsor was specifically forbidden in the Voting Rights Act. At present, there are attempts to revive a version of the voucher test.
Selma as a whole, and this scene in particular, exemplifies many of the central features of democratic self-governance that Danielle Allen describes in Our Declaration: A Reading of the Declaration of Independence in Defense of Equality. Allen, a classicist and political theorist at Princeton’s Institute for Advanced Study, develops what she calls a “slow reading” of the Declaration of Independence in order to draw out the meaning of equality, which she relates to political processes focused on democratic deliberation and writing. From the language of the Declaration, Allen draws five interconnected facets of the ideal of equality. Equality, she explains, involves freedom from domination, for both states and individuals. It also involves “recognizing and enabling the general human capacity for political judgment” coupled with “access to the tool of government.” She finds equality to be produced through the Aristotelian “potluck method,” whereby individuals contribute their special forms of knowledge to foster social good, and through reciprocity or mutual responsiveness, which contributes to equality of agency. And she defines equality as “co-creation, where many people participate equally in creating a world together.”[i]
Selma illustrates all of these features of equality at work in the Civil Rights Movement, and the discussion of how to prioritize different aspects of voter suppression is a compelling dramatization of the “potluck method.” Following Allen, what is called for now is the sharing of special knowledge among individuals and communities affected by violent policing, including representatives of the police. The December killings of New York City police officers Wenjian Liu and Rafael Ramos further heightened the polarization between police and protestors. President Obama offered one strategy for defusing that polarization in his State of the Union address when he presented scenarios designed to evoke reciprocity and mutual responsiveness. Christopher Torres’s killing introduces an additional set of issues about the treatment of people with mental illness that complicates the image of a white supremacist state dominating black bodies—as does the fact that neither Liu nor Ramos was white.
What is needed now is a forum to produce and publicize a middle register of knowledge that addresses both local circumstances, such as the overly complicated government structure in Ferguson or the corruption in the Albuquerque Police Department, and more systemic problems such as the legacy of racism, a weak system of mental health care, and ready access to guns. Such a forum would exemplify the potluck method and embody the ideals of deliberative democracy as President Obama described them in The Audacity of Hope. Noting the diffuse operations of power in the government of the United States, he emphasized the importance of building a deliberative democracy where, “all citizens are required to engage in a process of testing their ideas against an external reality, persuading others of their point of view, and building shifting alliances of consent.” The present focus on police violence offers an opportunity to engage in such a democratic deliberation. The issues are emotional, and the stakes are high. But without the social sharing that Aristotle compared to a potluck meal, we will all remain hungry for solutions.
[i] In “Equality as Singularity: Rethinking Literature and Democracy,” I relate Allen’s treatment of equality to the approach developed by French theorist Pierre Rosanvallon and consider both in relation to literature. The essay appears in a forthcoming special issue of New Literary History devoted to political theory.
Sandra M. Gustafson is professor of English and American studies at the University of Notre Dame. She is writing a book on conflict and democracy in classic American fiction with funding from the National Endowment for the Humanities.
To read more about Imagining Deliberative Democracy in the Early American Republic, click here.
How Many is Too Many?: The Progressive Argument for Reducing Immigration into the United States
by Philip Cafaro
How many immigrants should we allow into the United States annually, and who gets to come?
The question is easy to state but hard to answer, for thoughtful individuals and for our nation as a whole. It is a complex question, touching on issues of race and class, morals and money, power and political allegiance. It is an important question, since our answer will help determine what kind of country our children and grandchildren inherit. It is a contentious question: answer it wrongly and you may hear some choice personal epithets directed your way, depending on who you are talking to. It is also an endlessly recurring question, since conditions will change, and an immigration policy that made sense in one era may no longer work in another. Any answer we give must be open to revision.
This book explores the immigration question in light of current realities and defends one provisional answer to it. By exploring the question from a variety of angles and making my own political beliefs explicit, I hope that it will help readers come to their own well-informed conclusions. Our answers may differ, but as fellow citizens we need to keep talking to one another and try to come up with immigration policies that further the common good.
Why are immigration debates frequently so angry? People on one side often seem to assume it is just because people on the other are stupid, or immoral. I disagree. Immigration is contentious because vital interests are at stake and no one set of policies can fully accommodate all of them. Consider two stories from among the hundreds I’ve heard while researching this book.
* * *
It is lunchtime on a sunny October day and I’m talking to Javier, an electrician’s assistant, at a home construction site in Longmont, Colorado, near Denver. He is short and solidly built; his words are soft-spoken but clear. Although he apologizes for his English, it is quite good. At any rate much better than my Spanish.
Javier studied to be an electrician in Mexico, but could not find work there after school. “You have to pay to work,” he explains: pay corrupt officials up to two years’ wages up front just to start a job. “Too much corruption,” he says, a refrain I find repeated often by Mexican immigrants. They feel that a poor man cannot get ahead there, can hardly get started.
So in 1989 Javier came to the United States, undocumented, working various jobs in food preparation and construction. He has lived in Colorado for nine years and now has a wife (also here illegally) and two girls, ages seven and three. “I like USA, you have a better life here,” he says. Of course he misses his family back in Mexico. But to his father’s entreaties to come home, he explains that he needs to consider his own family now. Javier told me that he’s not looking to get rich, he just wants a decent life for himself and his girls. Who could blame him?
Ironically, one of the things Javier likes most about the United States is that we have rules that are fairly enforced. Unlike in Mexico, a poor man does not live at the whim of corrupt officials. When I suggest that Mexico might need more people like him to stay and fight “corruption,” he just laughs. “No, go to jail,” he says, or worse. Like the dozens of other Mexican and Central American immigrants I have interviewed for this book, Javier does not seem to think that such corruption could ever change in the land of his birth.
Do immigrants take jobs away from Americans? I ask. “American people no want to work in the fields,” he responds, or as dishwashers in restaurants. Still, he continues, “the problem is cheap labor.” Too many immigrants coming into construction lowers wages for everyone—including other immigrants like himself.
“The American people say, all Mexicans the same,” Javier says. He does not want to be lumped together with “all Mexicans,” or labeled a problem, but judged for who he is as an individual. “I don’t like it when my people abandon cars, or steal.” If immigrants commit crimes, he thinks they should go to jail, or be deported. But “that no me.” While many immigrants work under the table for cash, he is proud of the fact that he pays his taxes. Proud, too, that he gives a good day’s work for his daily pay (a fact confirmed by his coworkers).
Javier’s boss, Andy, thinks that immigration levels are too high and that too many people flout the law and work illegally. He was disappointed, he says, to find out several years ago that Javier was in the country illegally. Still he likes and respects Javier and worries about his family. He is trying to help him get legal residency.
With the government showing new initiative in immigration enforcement—including a well-publicized raid at a nearby meat-packing plant that caught hundreds of illegal workers—there is a lot of worry among undocumented immigrants. “Everyone scared now,” Javier says. He and his wife used to go to restaurants or stores without a second thought; now they are sometimes afraid to go out. “It’s hard,” he says. But: “I understand. If the people say, ‘All the people here, go back to Mexico,’ I understand.”
Javier’s answer to one of my standard questions—“How might changes in immigration policy affect you?”—is obvious. Tighter enforcement could break up his family and destroy the life he has created here in America. An amnesty would give him a chance to regularize his life. “Sometimes,” he says, “I dream in my heart, ‘If you no want to give me paper for residence, or whatever, just give me permit for work.’ ”
* * *
It’s a few months later and I’m back in Longmont, eating a 6:30 breakfast at a café out by the Interstate with Tom Kenney. Fit and alert, Tom looks to be in his mid-forties. Born and raised in Denver, he has been spraying custom finishes on drywall for twenty-five years and has had his own company since 1989. “At one point we had twelve people running three trucks,” he says. Now his business is just him and his wife. “Things have changed,” he says.
Although it has cooled off considerably, residential and commercial construction was booming when I interviewed Tom. The main “thing that’s changed” is the number of immigrants in construction. When Tom got into it twenty-five years ago, construction used almost all native-born workers. Today estimates of the number of immigrant workers in northern Colorado range from 50% to 70% of the total construction workforce. Some trades, like pouring concrete and framing, use immigrant labor almost exclusively. Come in with an “all-white” crew of framers, another small contractor tells me, and people do a double-take.
Tom is an independent contractor, bidding on individual jobs. But, he says, “guys are coming in with bids that are impossible.” After all his time in the business, “no way they can be as efficient in time and materials as me.” The difference has to be in the cost of labor. “They’re not paying the taxes and insurance that I am,” he says. Insurance, workmen’s compensation, and taxes add about 40% to the cost of legally employed workers. When you add the lower wages that immigrants are often willing to take, there is plenty of opportunity for competing contractors to underbid Tom and still make a tidy profit. He no longer bids on the big new construction projects, and jobs in individual, custom-built houses are becoming harder to find.
“I’ve gone in to spray a house and there’s a guy sleeping in the bathtub, with a microwave set up in the kitchen. I’m thinking, ‘You moved into this house for two weeks to hang and paint it, you’re gonna get cash from somebody, and he’s gonna pick you up and drive you to the next one.’ ” He seems more upset at the contractor than at the undocumented worker who labors for him.
In this way, some trades in construction are turning into the equivalent of migrant labor in agriculture. Workers do not have insurance or workmen’s compensation, so if they are hurt or worn out on the job, they are simply discarded and replaced. Workers are used up, while the builders and contractors higher up the food chain keep more of the profits for themselves. “The quality of life [for construction workers] has changed drastically,” says Tom. “I don’t want to live like that. I want to go home and live with my family.”
Do immigrants perform jobs Americans don’t want to do? I ask. The answer is no. “My job is undesirable,” Tom replies. “It’s dirty, it’s messy, it’s dusty. I learned right away that because of that, the opportunity is available to make money in it. That job has served me well”—at least up until recently. He now travels as far away as Wyoming and southern Colorado to find work. “We’re all fighting for scraps right now.”
Over the years, Tom has built a reputation for quality work and efficient and prompt service, as I confirmed in interviews with others in the business. Until recently that was enough to secure a good living. Now though, like a friend of his who recently folded his small landscaping company (“I just can’t bid ’em low enough”), Tom is thinking of leaving the business. He is also struggling to find a way to keep up the mortgage payments on his house.
He does not blame immigrants, though. “If you were born in Mexico, and you had to fight for food or clothing, you would do the same thing,” Tom tells me. “You would come here.”
* * *
Any immigration policy will have winners and losers. So claims Harvard economist George Borjas, a leading authority on the economic impacts of immigration. My interviews with Javier Morales and Tom Kenney suggest why Borjas is right.
If we enforce our immigration laws, then good people like Javier and his family will have their lives turned upside down. If we limit the numbers of immigrants, then good people in Mexico (and Guatemala, and Vietnam, and the Philippines …) will have to forgo opportunities to live better lives in the United States.
On the other hand, if we fail to enforce our immigration laws or repeatedly grant amnesties to people like Javier who are in the country illegally, then we forfeit the ability to set limits to immigration. And if immigration levels remain high, then hard-working men and women like Tom and his wife and children will probably continue to see their economic fortunes decline. Economic inequality will continue to increase in America, as it has for the past four decades.
In the abstract neither of these options is appealing. When you talk to the people most directly affected by our immigration policies, the dilemma becomes even more acute. But as we will see further on when we explore the economics of immigration in greater detail, these appear to be the options we have.
Recognizing trade-offs—economic, environmental, social—is indeed the beginning of wisdom on the topic of immigration. We should not exaggerate such conflicts, or imagine conflicts where none exist, but neither can we ignore them. Here are some other trade-offs that immigration decisions may force us to confront:
- Cheaper prices for new houses vs. good wages for construction workers.
- Accommodating more people in the United States vs. preserving wildlife habitat and vital resources.
- Increasing ethnic and racial diversity in America vs. enhancing social solidarity among our citizens.
- More opportunities for Latin Americans to work in the United States vs. greater pressure on Latin American elites to share wealth and opportunities with their fellow citizens.
The best approach to immigration will make such trade-offs explicit, minimize them where possible, and choose fairly between them when necessary.
Since any immigration policy will have winners and losers, at any particular time there probably will be reasonable arguments for changing the mix of immigrants we allow in, or for increasing or decreasing overall immigration, with good people on all sides of these issues. Whatever your current beliefs, by the time you finish this book you should have a much better understanding of the complex trade-offs involved in setting immigration policy. This may cause you to change your views about immigration. It may throw your current views into doubt, making it harder to choose a position on how many immigrants to let into the country each year; or what to do about illegal immigrants; or whether we should emphasize country of origin, educational level, family reunification, or asylum and refugee claims, in choosing whom to let in. In the end, understanding trade-offs ensures that whatever policies we wind up advocating for are more consciously chosen, rationally defensible, and honest. For such a contentious issue, where debate often generates more heat than light, that might have to suffice.
* * *
Perhaps a few words about my own political orientation will help clarify the argument and goals of this book. I’m a political progressive. I favor a relatively equal distribution of wealth across society, economic security for workers and their families, strong, well-enforced environmental protection laws, and an end to racial discrimination in the United States. I want to maximize the political power of common citizens and limit the influence of large corporations. Among my political heroes are the three Roosevelts (Teddy, Franklin, and Eleanor), Rachel Carson, and Martin Luther King Jr.
I also want to reduce immigration into the United States. If this combination seems odd to you, you are not alone. Friends, political allies, even my mother the social worker shake their heads or worse when I bring up the subject. This book aims to show that this combination of political progressivism and reduced immigration is not odd at all. In fact, it makes more sense than liberals’ typical embrace of mass immigration: an embrace shared by many conservatives, from George W. Bush and Orrin Hatch to the editorial board of the Wall Street Journal and the US Chamber of Commerce.
In what follows I detail how current immigration levels—the highest in American history—undermine attempts to achieve progressive economic, environmental, and social goals. I have tried not to oversimplify these complex issues, or mislead readers by cherry-picking facts to support pre-established conclusions. I have worked hard to present the experts’ views on how immigration affects US population growth, poorer workers’ wages, urban sprawl, and so forth. Where the facts are unclear or knowledgeable observers disagree, I report that, too.
This book is divided into four main parts. Chapters 1 and 2 set the stage for us to consider how immigration relates to progressive political goals. Chapter 2, “Immigration by the Numbers,” provides a concise history of US immigration policy. It explains current policy, including who gets in under what categories of entry and how many people immigrate annually. It also discusses population projections for the next one hundred years under different immigration scenarios, showing how relatively small annual differences in immigration numbers quickly lead to huge differences in overall population.
Part 2 consists of chapters 3–5, which explore the economics of immigration, showing how flooded labor markets have driven down workers’ wages in construction, meatpacking, landscaping, and other economic sectors in recent decades, and increased economic inequality. I ask who wins and who loses economically under current immigration policies and consider how different groups might fare under alternative scenarios. I also consider immigration’s contribution to economic growth and argue that unlike fifty or one hundred years ago America today does not need a larger economy, with more economic activity or higher levels of consumption, but rather a fairer economy that better serves the needs of its citizens. Here as elsewhere, the immigration debate can clarify progressive political aspirations; in this case, helping us rethink our support for endless economic growth and develop a more mature understanding of our economic goals.
Part 3, chapters 6–8, focuses on the environment. Mass immigration has increased America’s population by tens of millions of people in recent decades and is set to add hundreds of millions more over the twenty-first century. According to Census Bureau data, our population now stands at 320 million people, the third-largest in the world, and at current immigration rates could balloon to over 700 million by 2100. This section examines the environmental problems caused by a rapidly growing population, including urban sprawl, overcrowding, habitat loss, species extinctions, and increased greenhouse gas emissions. I chronicle the environmental community’s historic retreat from population issues over the past four decades, including the Sierra Club’s failed attempts to adopt a consensus policy on immigration, and conclude that this retreat has been a great mistake. Creating an ecologically sustainable society is not just window dressing; it is necessary to pass on a decent future to our descendants and do our part to solve dangerous global environmental problems. Because sustainability is incompatible with an endlessly growing population, Americans can no longer afford to ignore domestic population growth.
Part 4, chapters 9–11, looks for answers. The chapter “Solutions” sketches out a comprehensive proposal for immigration reform in line with progressive political goals, focused on reducing overall immigration levels. I suggest shifting enforcement efforts from border control to employer sanctions—as several European nations have done with great success—and a targeted amnesty for illegal immigrants who have lived in the United States for years and built lives here (Javier and his wife could stay, but their cousins probably would not get to come). I propose changes in US trade and aid policies that could help people create better lives where they are, alleviating some of the pressure to emigrate. In these ways, Americans can meet our global responsibilities without doing so on the backs of our own poor citizens, or sacrificing the interests of future generations. A companion chapter considers a wide range of reasonable progressive “Objections” to this more restrictive immigration policy. I try to answer these objections honestly, focusing on the trade-offs involved. A short concluding chapter reminds readers of all that is at stake in immigration policy, and affirms that we will make better policy with our minds open.
How Many Is Too Many? shows that by thinking through immigration policy progressives can get clearer on our own goals. These do not include having the largest possible percentage of racial and ethnic minorities, but creating a society free of racial discrimination, where diversity is appreciated. They do not include an ever-growing economy, but an economy that works for the good of society as a whole. They most certainly do not include a crowded, cooked, polluted, ever-more-tamed environment, but instead a healthy, spacious landscape that supports us with sufficient room for wild nature. Finally, our goals should include playing our proper role as global citizens, while still paying attention to our special responsibilities as Americans. Like it or not, those responsibilities include setting US immigration policy.
* * *
Although I hope readers across the political spectrum will find this book interesting, I have written it primarily for my fellow progressives. Frankly, we need to think harder about this issue than we have been. Just because Rush Limbaugh and his ilk want to close our borders does not necessarily mean progressives should be for opening them wider. But this is not an easy topic to discuss and I appreciate your willingness to consider it with me. In fact I come to this topic reluctantly myself. I recognize immigration’s contribution to making the United States one of the most dynamic countries in the world. I also find personal meaning in the immigrant experience.
My paternal grandfather came to America from southern Italy when he was twelve years old. As a child I listened entranced to his stories, told in an accent still heavy after half a century in his adopted country. Stories of the trip over and how excited he was to explore everything on the big ship (a sailor, taking advantage of his curiosity, convinced him to lift some newspapers lying on deck, to see what was underneath …). Stories of working as a journeyman shoe repairman in cities and towns across upstate New York and Ohio (in one store, the foreman put my grandfather and his lathe in the front window so passers-by would stop to watch how fast and well he did his work). Stories of settling down and starting his own business, marrying Nana, raising a family.
I admired Grandpa’s adventurousness in coming to a new world, his self-reliance, his pride in his work, and his willingness to work hard to create a better future for himself and his family, including, eventually, me. Stopping by the store, listening to him chat with his customers, I saw clearly that he was a respected member of his community. When he and the relatives got together for those three-hour meals that grew ever longer over stories, songs, and a little wine, I felt part of something special, something different from my everyday life and beyond the experience of many of my friends.
So this book is not a criticism of immigrants! I know that many of today’s immigrants, legal and illegal, share my grandfather’s intelligence and initiative. The lives they are creating here are good lives rich in love and achievement. Nor is it an argument against all immigration: I favor reducing immigration into the United States, not ending it. I hope immigrants will continue to enrich America for many years to come. In fact, reducing current immigration levels would be a good way to ensure continued widespread support for immigration.
Still, Americans sometimes forget that we can have too much of a good thing. Sometimes when Nana passes the pasta, it’s time to say basta. Enough.
When to say enough, though, can be a difficult question. How do we know when immigration levels need to be scaled back? And do any of us, as the descendants of immigrants, have the right to do so?
Answering the first question, in detail, is one of the main goals of this book. Speaking generally, I think we need to reduce immigration when it seriously harms our society, or its weakest members. The issues are complex, but I think any country should consider reducing immigration:
- When immigration significantly drives down wages for its poorer citizens.
- When immigrants are regularly used to weaken or break unions.
- When immigration appears to increase economic inequality within a society.
- When immigration makes the difference between stabilizing a country’s population or doubling it within the next century.
- When immigration-driven population growth makes it impossible to rein in sprawl, decrease greenhouse gas emissions sufficiently, or take the other steps necessary to create an ecologically sustainable society.
- When rapid demographic shifts undermine social solidarity and a sense of communal purpose.
- When most of its citizens say that immigration should be reduced.
Of course, there may also be good reasons to continue mass immigration: reasons powerful enough to outweigh such serious social costs or the expressed wishes of a nation’s citizens. But they had better be important. And in the case at hand they had better articulate responsibilities that properly belong to the United States and its citizens—and not help our “sender” countries avoid their own problems and responsibilities. Reversing gross economic inequality and creating a sustainable society are the primary political tasks facing this generation of Americans. Progressives should think long and hard before we accept immigration policies that work against these goals.
But what about the second question: do Americans today have a right to reduce immigration? To tell Javier’s cousins, perhaps, that they cannot come to America and make better lives for themselves and their families?
Yes, we do. Not only do we have a right to limit immigration into the United States; as citizens we have a responsibility to do so if immigration levels get so high that they harm our fellow citizens, or society as a whole. Meeting this responsibility may be disagreeable, because it means telling good people that they cannot come to America to pursue their dreams. Still, it may need to be done.
Those of us who want to limit immigration are sometimes accused of selfishness: of wanting to hog resources or keep “the American way of life” for ourselves. There may be some truth in this charge, since many Americans’ interests are threatened by mass immigration. Still, some of those interests seem worth preserving. The union carpenter taking home $30 an hour who owns his own house, free and clear, or the outdoorsman walking quietly along the edge of a favorite elk meadow or trout stream, may want to continue to enjoy these good things and pass them on to their sons and daughters. What is wrong with that?
Besides, the charge of selfishness cuts both ways. Restaurant owners and software tycoons hardly deserve the Mother Teresa Self-Sacrifice Medal when they lobby Congress for more low-wage workers. The wealthy progressive patting herself on the back for her enlightened views on immigration probably hasn’t ever totaled up the many ways she and her family benefit from cheap labor.
In the end our job as citizens is to look beyond our narrow self-interest and consider the common good. Many of us oppose mass immigration not because of what it costs us as individuals, but because we worry about the economic costs to our fellow citizens, or the environmental costs to future generations. Most Americans enjoy sharing our country with foreign visitors and are happy to share economic opportunities with reasonable numbers of newcomers. We just want to make sure we preserve those good things that make this a desirable destination in the first place.
All else being equal, Americans would just as soon not interfere with other people’s decisions about where to live and work. In fact such a laissez-faire approach to immigration lasted for much of our nation’s history. But today all else is not equal. For one thing this is the age of jet airplanes, not tall-masted sailing ships or coal-fired steamers. It is much quicker and easier to come here than it used to be and the pool of would-be immigrants has increased by an order of magnitude since my grandfather’s day. (In 2006, there were 6 million applications for the 50,000 green cards available under that year’s “diversity lottery.”) For another, we do not have an abundance of unclaimed land for farmers to homestead, or new factories opening up to provide work for masses of unskilled laborers. Unemployment is high and projected to remain high for the foreseeable future. For a third, we recognize new imperatives to live sustainably and do our part to meet global ecological challenges. Scientists are warning that we run grave risks should we fail to do so.
Americans today overwhelmingly support immigration restrictions. We disagree about the optimal amount of immigration, but almost everyone agrees that setting some limits is necessary. Of course, our immigration policies should be fair to all concerned. Javier Morales came to America illegally, but for most of his time here our government just winked at illegal immigration. It also taxed his paychecks. After two and a half decades of hard work that has benefited our country, I think we owe Javier citizenship. But we also owe Tom Kenney something. Perhaps the opportunity to prosper, if he is willing to work hard. Surely, at a minimum, government policies that do not undermine his own attempts to prosper.
* * *
The progressive vision is alive and well in the United States today. Most Americans want a clean environment with flourishing wildlife, a fair economy that serves all its citizens, and a diverse society that is free from racism. Still, it will take a lot of hard work to make this vision a reality and success is not guaranteed. Progressives cannot shackle our hopes to an outmoded immigration policy that thwarts us at every turn.
Given the difficulties involved in getting 320 million Americans to curb consumption and waste, there is little reason to think we will be able to achieve ecological sustainability while doubling or tripling that number. Mass immigration ensures that our population will continue growing at a rapid rate and that environmentalists will always be playing catch-up. Fifty or one hundred years from now we will still be arguing that we should destroy this area rather than that one, or that we can make the destruction a little more aesthetically appealing—instead of ending the destruction. We will still be trying to slow the growth of air pollution, water use, or carbon emissions—rather than cutting them back.
But the US population would quickly stabilize without mass immigration. We can stop population growth—without coercion or intrusive domestic population policies—simply by returning to pre-1965 immigration levels.
Imagine an environmentalism that was not always looking to meet the next crisis and that could instead look forward to real triumphs. What if we achieved significant energy efficiency gains and were able to enjoy those gains with less pollution, less industrial development on public lands, and an end to oil wars, because those efficiency gains were not swallowed up by growing populations?
Imagine if the push to develop new lands largely ended and habitat for other species increased year by year, with a culture of conservation developed around restoring and protecting that habitat. Imagine if our demand for fresh water leveled off and instead of fighting new dam projects we could actually leave more water in our rivers.
And what of the American worker? It is hard to see how progressives will succeed in reversing current powerful trends toward ever greater economic inequality in a context of continued mass immigration, particularly with high numbers of relatively unskilled and poorly educated immigrants. Flooded labor markets will harm poorer workers directly, by driving down wages and driving up unemployment. Mass immigration will also continue to harm workers indirectly by making it harder for them to organize and challenge employers, by reducing the percentage of poor workers who are citizens and thus able to vote for politicians who favor the poor, and by limiting sympathy between the haves and have-nots, since with mass immigration they are more likely to belong to different ethnic groups.
But it does not have to be this way. We can tighten labor markets and get them working for working people in this country. Combined with other good progressive egalitarian measures—universal health care; a living minimum wage; a more progressive tax structure—we might even reverse current trends and create a more economically just country.
Imagine meatpacking plants and carpet-cleaning companies competing with one another for scarce workers, bidding up their wages. Imagine unions able to strike those companies without having to worry about scabs taking their members’ jobs. Imagine college graduates sifting through numerous job offers, like my father and his friends did fifty years ago during that era’s pause in mass immigration, instead of having to wait tables and just hope for something better.
Imagine poor children of color in our inner cities, no longer looked on as a problem to be warehoused in failing schools, or jails, but instead seen as an indispensable resource: the solution to labor shortages in restaurants and software companies.
Well, why not? Why are we progressives always playing catch-up? The right immigration policies could help lead us toward a more just, egalitarian, and sustainable future. They could help liberals achieve our immediate goals and drive the long-term political agenda. But we will not win these battles without an inspiring vision for a better society, or with an immigration policy that makes that vision impossible to achieve.
To read more about How Many is Too Many?, click here.
Adam Gopnik, writing in the New Yorker, recently profiled eminent American sociologist Howard S. Becker (Howie, please: “Only my mother ever called me Howard”), one of the biggest names in the field for over half a century, yet still, as with so many purveyors of haute critique, better known in France. Becker is no wilting lily on these shores, however—since the publication of his pathbreaking Outsiders: Studies in the Sociology of Deviance (1963), he’s been presiding as grand doyen over methodological confrontations with the particularly slippery slopes of human existence, including our very notion of “deviance.” All this, a half dozen or so honorary degrees, a lifetime achievement award, a smattering of our most prestigious fellowships, and the 86-year-old Becker is still going strong, with his most recent book published only this past year.
From the New Yorker profile:
This summer, Becker published a summing up of his life’s method and beliefs, called “What About Mozart? What About Murder?” (The title refers to the two caveats or complaints most often directed against his kind of sociology’s equable “relativism”: how can you study music as a mere social artifact—what about Mozart? How can you consider criminal justice a mutable convention—what about Murder?) The book is both a jocular personal testament of faith and a window into Becker’s beliefs. His accomplishment is hard to summarize in a sentence or catchphrase, since he’s resolutely anti-theoretical and suspicious of “models” that are too neat. He wants a sociology that observes the way people act around each other as they really do, without expectations about how they ought to.
The provenances of that sociology have included: jazz musicians, marijuana users, art world enthusiasts, social science researchers, medical students, musicologists, murderers, and “youth,” to name a few.
As mentioned earlier, his latest book What About Mozart? What About Murder? considers the pull of two methodologies: one, more pragmatic, which addresses its subjects with caution and rigor on a case-by-case basis, and the other, which employs a more speculative approach (guesswork) by asking “killer questions” that force us to reposition our stance on hypothetical situations, such as whether or not, indeed, murder is always already (*Becker might in fact kill me for a foray into that particular theoretical shorthand*) “deviant.”
His work is required reading in many French universities, even though it seems to be a model of American pragmatism, preferring narrow-seeming “How?” and “Who, exactly?” questions to the deeper “Why?” and “What?” supposedly favored by French theory. That may be exactly its appeal, though: for the French, Becker seems to combine three highly American elements—jazz, Chicago, and the exotic beauties of empiricism.
On the heels of his appearance in the New Yorker, Becker participated in a recent, brief sitdown with the New York Times, where he relayed thoughts on Charlie Hebdo and the French media, Nate Silver, and jazz trios, among other concerns.
From that New York Times Q & A:
I work out in a gym with a trainer twice a week. Oh, it’s pure torture, but I’m 86 so you’ve got to do something to stay in shape. I do a mixture of calisthenics, Pilates and yoga—a lot of work on balance. My trainer has this idea that every year on my birthday I should do the same number of push-ups as I have years old. We work up to it over the year. I was born on the anniversary of the great San Francisco earthquake and fire in 1906. It seems auspicious but I don’t know why.
To read more by Becker, click here.
Excerpted from Mayakovsky: A Biography by Bengt Jangfeldt
Mayakovsky returned to Moscow on 17 or 18 September. The following day, Krasnoshchokov was arrested, accused of a number of different offenses. He was supposed to have lent money to his brother Yakov, head of the firm American–Russian Constructor, at too low a rate of interest, and to have arranged drink- and sex-fueled orgies at the Hotel Europe in Petrograd, paying the Gypsy girls who entertained the company with pure gold. He was also accused of having passed on his salary from the Russian–American Industrial Corporation ($200 a month) to his wife (who had returned to the United States), of having bought his mistress flowers and furs out of state funds, of renting a luxury villa, and of keeping no fewer than three horses. Lenin was now so ill that he had not been able to intervene on Krasnoshchokov’s behalf even if he had wanted to.
His arrest was a sensation of the first order. It was the first time that such a highly placed Communist had been accused of corruption, and the event cast a shadow over the whole party apparatus. Immediately after Krasnoshchokov’s arrest, and in order to prevent undesired interpretations of what had happened, Valerian Kuybyshev, the commissar for Workers’ and Peasants’ Inspection, let it be known that “incontrovertible facts have come to light which show Krasnoshchokov has in a criminal manner exploited the resources of the economics department [of the Industry Bank] for his own use, that he has arranged wild orgies with these funds, and that he has used bank funds to enrich his relatives, etc.” He had, it was claimed, “in a criminal manner betrayed the trust placed in him and must be sentenced to a severe punishment.”
Krasnoshchokov was, in other words, judged in advance. There was no question of any objective legal process; the intention was to set an example: “The Soviet power and the Communist Party will […] root out with an iron hand all sick manifestations of the NEP and remind those who ‘let themselves be tempted’ by the joys of capitalism that they live in a workers’ state run by a Communist party.” Krasnoshchokov’s arrest was deemed so important that Kuybyshev’s statement was printed simultaneously in the party organ Pravda and the government organ Izvestiya. Kuybyshev was a close friend of the prosecutor Nikolay Krylenko, who had led the prosecution of the Socialist Revolutionaries the previous year, and who in time would turn show trials and false charges into an art form.
When Krasnoshchokov was arrested, Lili and Osip were still in Berlin. In the letter that Mayakovsky wrote to them a few days after the arrest, the sensational news is passed over in total silence. He gives them the name of the civil servant in the Berlin legation who can give them permission to import household effects (which they had obviously bought in Berlin) into Russia; he tells them that the squirrel which lives with them is still alive and that Lyova Grinkrug is in the Crimea. The only news item of greater significance is that he has been at Lunacharsky’s to discuss Lef and is going to visit Trotsky on the same mission. But of the event which the whole of Moscow was talking about, and which affected Lili to the utmost degree—not a word.
Krasnoshchokov’s trial took place at the beginning of March 1924. Sitting in the dock, apart from his brother Yakov, were three employees of the Industry Bank. Krasnoshchokov, who was a lawyer, delivered a brilliant speech in his own defense, explaining that, as head of the bank, he had the right to fix lending rates in individual cases and that one must be flexible in order to obtain the desired result. As for the charges of immoral behavior he maintained that his work necessitated a certain degree of official entertainment and that the “luxury villa” in the suburb of Kuntsevo was an abandoned dacha which in addition was his sole permanent dwelling. (It is one of the ironies of history that the house had been owned before the Revolution by the Shekhtel family and accordingly had often had Mayakovsky as a guest; see the chapter “Volodya.”) Finally, he pointed out that his private life was not within the jurisdiction of the law.
This opinion was not shared by the court, which ruled that Krasnoshchokov had lived an immoral life during a time when a Communist ought to have set a good example and not surrendered to the temptations offered by the New Economic Policy. Krasnoshchokov was also guilty of having used his position to “encourage his relatives’ private business transactions” and having caused the bank to lose 10,000 gold rubles. He was sentenced to six years’ imprisonment and in addition three years’ deprivation of citizen’s rights. Moreover, he was excluded from the Communist Party. His brother was given three years’ imprisonment, while the other three coworkers received shorter sentences.
Krasnoshchokov had in fact been a very successful bank director. Between January 1923 and his arrest in September he had managed to increase the Industry Bank’s capital tenfold, partly thanks to a flexible interest policy which led to large American investments in Russia. There is a good deal of evidence that the charges against him were initiated by persons within the Finance Commissariat and the Industry Bank’s competitor, the Soviet National Bank. Shortly before his arrest Krasnoshchokov had suggested that the Industry Bank should take over all the National Bank’s industrial–financial operations. Exactly the opposite happened: after Krasnoshchokov’s verdict was announced, the Industry Bank was subordinated to the Soviet National Bank.
There is little to suggest that the accusations of orgies were true. Krasnoshchokov was not known to be a rake, and his “entertainment expenses” were hardly greater than those of other highly placed functionaries. But he had difficulties defending himself, as he maintained not one mistress but two—although he had a wife and children. The woman who figured in the trial was not, as one might have expected, Lili, but a certain Donna Gruz—Krasnoshchokov’s secretary, who six years later would become his second wife. This fact undoubtedly undermined his credibility as far as his private life was concerned.
When Lili and Elsa showed Nadezhda Lamanova’s dresses in Paris in the winter of 1924, it attracted the attention of both the French and the British press, where this photograph was published with the caption “soviet sack fashion.—Because of the lack of textiles in Soviet Russia, Mme. Lamanoff, a Moscow fashion designer, had this dress made out of sackcloth from freight bales.”
By the time the judgment was announced, Lili had been in Paris for three weeks. She was there for her own amusement and does not seem to have had any particular tasks to fulfill. But she had with her dresses by the Soviet couturier Nadezhda Lamanova which she and Elsa showed off at two soirees organized by a Paris newspaper. She would like to go to Nice, she confided in a letter home to Moscow on 23 February, but her plans were frustrated by the fact that Russian emigrants were holding a congress there. She was thinking of traveling to Spain instead, or somewhere else in France, to “bake in the sun for a week or so.” But she remained in Paris, where she and Elsa went out dancing the whole time. Their “more or less regular cavaliers” were Fernand Léger (whom Mayakovsky had got to know in Paris in 1922) and an acquaintance from London who took them everywhere with him, “from the most chic of places to the worst of dives.” “It has been nothing but partying here,” she wrote. “Elsa has instituted a notebook in which she writes down all our rendezvous ten days in advance!” As clothes are expensive in Paris too, she asked Osip and Mayakovsky to send her a little money in the event of their managing to win “some mad sum of money” at cards.
When she was writing this letter, there were still two weeks to go before Krasnoshchokov’s trial. “How is A[lexander] M[ikhailovich]?” she asked, in the middle of reporting on the fun she was having. But she did not receive a reply, or if she did, it has not been preserved. On 26 March, after a month in Paris, she took the boat to England to visit her mother, who was in poor health, but that same evening she was forced to return to Calais after being stopped at passport control in Dover—despite having a British visa issued in Moscow in June 1923. What she did not know was that after her first visit to England in October 1922 she had been declared persona non grata, something which all British passport control points “for Europe and New York” had been informed of in a secret circular of 13 February 1923.
“You can’t imagine how humiliating it was to be turned back at the British border,” she wrote to Mayakovsky: “I have all sorts of theories about it, which I’ll tell you about when I see you. Strange as it may seem, I think they didn’t let me in because of you.” She guessed right: documents from the Home Office show that it was her relationship with Mayakovsky, who wrote “extremely libellous articles” in Izvestiya, which had proved her undoing. Strangely enough, despite being refused entry to Britain, she was able to travel to London three weeks later. The British passport authorities have no record of her entry to the country. Did she come in by an illegal route?
At the same time that Lili traveled to Paris, Mayakovsky set out on a recital tour in Ukraine. Recitals were an important source of income for him. During his stay in Odessa he mentioned in a newspaper interview that he was planning to set out soon on a trip round the world, as he had been invited to give lectures and read poems in the United States. Two weeks later he was back in Moscow, and in the middle of April he went to Berlin, where Lili joined him about a week later. According to one newspaper, Mayakovsky was in the German capital “on his way to America.”
The round-the-world trip did not come off, as Mayakovsky failed to obtain the necessary visas. It was not possible to request an American visa in Moscow, as the two countries lacked diplomatic ties. Mayakovsky’s plan was therefore to try to get into the United States via a third country. Britain’s first Labour government, under Ramsay MacDonald, had scarcely recognized the Soviet Union (on 1 February 1924) before Mayakovsky requested a British visa, on 25 March. From England he planned to continue his journey to Canada and India. In a letter to Ramsay MacDonald, Britain’s chargé d’affaires in Moscow asked for advice about the visa application. Mayakovsky was not known to the mission, he wrote, but was “a member of the Communist party and, I am told, is known as a Bolshevik propagandist.” Mr. Hodgson would not have needed to do this if he had known that on 9 February, the Home Office had also issued a secret circular about Mayakovsky, “one of the principal leaders of the ‘Communist’ propaganda and agitation section of the ‘ROSTA,’” who since 1921 had been writing propaganda articles for Izvestiya and “should not be given a visa or be allowed to land in the United Kingdom” or any of its colonies. In Mayakovsky’s case the circular was sent to every British port, consulate, and passport and military checkpoint, as well as to Scotland House and the India Office. But in the very place where people really ought to have known about it, His Majesty’s diplomatic mission in Moscow, they were completely unaware of it.
While he waited for an answer from the British, Mayakovsky made a couple of appearances in Berlin where he talked about Lef and recited his poems. On 9 May, tired of waiting for a notification that never came, he traveled back to Moscow in the company of Lili and Scotty, the Scottish terrier she had picked up in England. When he got to Moscow he found out that on 5 May London had instructed the British mission in Moscow to turn down his visa application.
The preliminary investigation and subsequent trial of Krasnoshchokov caused a great stir, but it would certainly have got even more column inches if it had not been played out in the shadow of a significantly more important event. On 21 January 1924, Vladimir Lenin died after several years of illness.
Among the thousands of people jostling one another in the queues which snaked around in front of Trade Unions House, where the leader of the Revolution lay in state, were Mayakovsky, Lili, and Osip. Lenin’s death affected Mayakovsky deeply. “It was a terrible morning when he died,” Lili recalled. “We wept in the queue in Red Square where we were standing in the freezing cold to see him. Mayakovsky had a press card, so we were able to bypass the queue. I think he viewed the body ten times. We were all deeply shaken.”
Mayakovsky with Scotty, whom Lili bought in England. The picture was taken in the summer of 1924 at the dacha in Pushkino. Scotty loved ice cream, and, according to Rodchenko, Mayakovsky regarded “with great tenderness how Scotty ate and licked his mouth.” “He took him in his arms and I photographed them in the garden,” the photographer remembered. “I took two pictures. Volodya kept his tender smile, wholly directed at Scotty.” The photograph with Scotty is in fact one of the few where Mayakovsky can be seen smiling.
The feelings awakened by Lenin’s death were deep and genuine, and not only for his political supporters. Among those queuing were Boris Pasternak and Osip Mandelstam, who shared a far more lukewarm attitude to the Revolution and its leader. “Lenin dead in Moscow!” exclaimed Mandelstam in his coverage of the event. “How can one fail to be with Moscow in this hour! Who does not want to see that dear face, the face of Russia itself? The time? Two, three, four? How long will we stand here? No one knows. The time is past. We stand in a wonderful nocturnal forest of people. And thousands of children with us.”
Shortly after Lenin’s death Mayakovsky tackled his most ambitious project to date: a long poem about the Communist leader. He had written about him before, in connection with his fiftieth birthday in 1920 (“Vladimir Ilyich!”), and when Lenin suffered his first stroke in the winter of 1923 (“We Don’t Believe It!”), but those were shorter poems. According to Mayakovsky himself, he began pondering a poem about Lenin as early as 1923, but that may well have been a rationalization after the event. What set his pen in motion was in any case Lenin’s death in January 1924.
Mayakovsky had only a superficial knowledge of Lenin’s life and work and was forced to read up on him before he could write about him. His mentor, as on so many other occasions, was Osip, who supplied him with books and gave him a crash course in Leniniana. Mayakovsky himself had neither the time nor the patience for such projects. The poem was written during the summer and was ready by the beginning of October 1924. It was given the title “Vladimir Ilyich Lenin” and was the longest poem Mayakovsky ever wrote; at three thousand lines, it was almost twice as long as “About This.” In the autumn of 1924 he gave several poetry readings and fragments of the poem were printed in various newspapers. It came out in book form in February 1925.
The line to the Trade Unions’ House in Moscow, where Lenin was lying in state.
So the lyrical “About This” was followed by an epic poem, in accordance with the conscious or unconscious scheme that directed the rhythm of Mayakovsky’s writing. If even a propaganda poem like “To the Workers in Kursk” was dedicated to Lili, such a dedication was impossible in this case. “Vladimir Ilyich Lenin” was dedicated to the Russian Communist Party, and Mayakovsky explains why, with a subtle but unambiguous reference to “About This”:
I can write
is not the time
as a poet
give to you,
In “Vladimir Ilyich Lenin” Lenin is portrayed as a Messiah-like figure, whose appearance on the historical scene is an inevitable consequence of the emergence of the working class. Karl Marx revealed the laws of history and, with his theories, “helped the working class to its feet.” But Marx was only a theoretician, who in the fullness of time would be replaced by someone who could turn theory into practice, that is, Lenin.
The poem is uneven, which is not surprising considering the format. From a linguistic point of view—the rhyme, the neologisms—it is undoubtedly comparable to the best of Mayakovsky’s other works, and the depiction of the sorrow and loss after Lenin’s death is no less than a magnificent requiem. But the epic, historical sections are too long and prolix. The same is true of the tributes to the Communist Party, which often rattle with empty rhetoric (which in turn can possibly be explained by the fact that Mayakovsky was never a member of the party):
once more to make the majestic word
Who needs that?!
The voice of an individual
is thinner than a cheep.
Who hears it—
except perhaps his wife?
The party
is a hand with millions of fingers
clenched
into a single destroying fist.
The individual is rubbish,
the individual is zero …
We say Lenin,
but mean the party.
We say the party,
but mean Lenin.
One of the few reviewers who paid any attention to the poem, the proletarian critic and anti-Futurist G. Lelevich, was quite right in pointing out that Mayakovsky’s “ultraindividualistic” lines in “About This” stand out as “uniquely honest” in comparison with “Vladimir Ilyich Lenin,” which “with few exceptions is rationalistic and rhetorical.” This was a “tragic fact” that Mayakovsky could only do something about by trying to “conquer himself.” The Lenin poem, wrote Lelevich, was a “flawed but meaningful and fruitful attempt to tread this path.”
Lelevich was right to claim that “About This” is a much more convincing poem than the ode to Lenin. But the “tragic” thing was not what Lelevich perceived as such, but something quite different, namely, Mayakovsky’s denial of the individual and his importance. In order to “conquer” himself, that is, the lyrical impulse within himself, he would have to take yet more steps in that direction—which he would in fact do, although it went against his innermost being.
If there is anything of lasting value in “Vladimir Ilyich Lenin,” it is not the paeans of praise to Lenin and the Communist Party—poems of homage are seldom good—but the warnings that Lenin, after his death, will be turned into an icon. The Lenin to whom Mayakovsky pays tribute was born in the Russian provinces as “a normal, simple boy” and grew up to be the “most human of all human beings.” If he had been “king-like and god-like” Mayakovsky would without a doubt have protested and taken a stance “opposed to all processions and tributes”:
to have found words
for lightning-flashing curses,
and my yell
were trampled underfoot
I should have
like bombs at the Kremlin
The worst thing Mayakovsky can imagine is that Lenin, like Marx, will become a “cooling plaster dotard imprisoned in marble.” This is a reference back to “The Fourth International,” in which Lenin is depicted as a petrified monument.
I am worried that
set in stone,
in syrup-smooth balsam—
Mayakovsky warns, clearly blind to the fact that he himself is contributing to this development with his seventy-five-page-long poem.
The fear that Lenin would be canonized after his death was deeply felt—and well grounded. It did not take long before Gosizdat (!) began advertising busts of the leader in plaster, bronze, granite, and marble, “life-size and double life-size.” The busts were produced from an original by the sculptor Merkurov—whom Mayakovsky had apostrophized in his Kursk poem—and with the permission of the Committee for the Perpetuation of the Memory of V. I. Lenin. The target groups were civil-service departments, party organizations and trade unions, cooperatives, and the like.
After his return from Berlin in May 1924, Mayakovsky met with the Japanese author Tamisi Naito, who was visiting Moscow. Seated at the table next to Mayakovsky and Lili is Sergey Tretyakov’s wife, Olga. To the left of Naito (standing in the center) are Sergey Eisenstein and Boris Pasternak.
The Lef members’ tribute to the dead leader was of a different nature. The theory section in the first issue of Lef for 1924 was devoted to Lenin’s language, with contributions by leading Formalists such as Viktor Shklovsky, Boris Eikhenbaum, Boris Tomashevsky, and Yury Tynyanov—groundbreaking attempts to analyze political language by means of structuralist methods. Lenin was said to have “decanonized” the language, “cut down the inflated style,” and so on, all in the name of linguistic efficiency. This striving for powerful simplicity was in line with the theoretical ambitions of the Lef writers but stood in stark contrast to the canonization of Lenin which was set in train by his successors as soon as his corpse was cold.
This entire issue of Lef was in actual fact a polemic against this development—indirectly, in the essays about Lenin’s language, and in a more undisguised way in the leader article. In a direct reference to the advertisements for Lenin busts, the editorial team at Lef in their manifesto “Don’t Trade in Lenin!” sent the following exhortation to the authorities:
Don’t make matrices out of Lenin.
Don’t print his portrait on posters, oilcloths, plates, drinking
vessels, cigarette boxes.
Don’t turn Lenin into bronze.
Don’t take from him his living gait and human physiognomy,
which he managed to preserve at the same time as he led history.
Lenin is still our present.
He is among the living.
We need him living, not dead.
Learn from Lenin, but don’t canonize him.
Don’t create a cult around a man who fought against all kinds of
cults throughout his life.
Don’t peddle artifacts of this cult.
Don’t trade in Lenin.
In view of the extravagant cult of Lenin that would develop later in the Soviet Union, the text is insightful to the point of clairvoyance. But the readers of Lef were never to see it. According to the list of contents, the issue began on page 3 with the leader “Don’t Trade in Lenin!” But in the copies that were distributed, this page is missing and the pagination begins instead on page 5. The leadership of Gosizdat, which distributed Lef, had been incensed by the criticism of the advertisements for Lenin busts and had removed the leader. As if by some miracle, it has been preserved in a few complimentary copies which made it to the libraries before the censor’s axe fell.
To read more about Mayakovsky, click here.
On January 3, 2015, scholar Daniel Albright (1945–2015), the Ernest Bernbaum Professor of Literature at Harvard University—whose honors included an NEH Fellowship, a Guggenheim Fellowship, and a Berlin Prize Fellowship at the American Academy in Berlin—passed away unexpectedly. The author of sixteen books, spanning interests from literary criticism and musicology to panaesthetics and the history of modernism, Albright taught in three departments at Harvard, where he had worked for the past decade.
From an article in the Harvard Crimson:
As an undergraduate at Rice University, Albright originally declared a major in mathematics before switching to English. Upon graduating from Rice in 1967, he attended Yale, where he received his M.Phil. in 1969 and his Ph.D. in 1970. Prior to his arrival at Harvard in 2003, Albright taught at the University of Virginia, the University of Munich, the University of Rochester, and the Eastman School of Music.
Once at Harvard, he taught in the English, Music, and Comparative Literature departments. English Department chair and professor W. James Simpson spoke highly of Albright’s career in Cambridge.
“Whenever Dan was in a room, the room was full of fun and amusement and delight because of his range of literary allusions and music allusions,” Simpson said. “He was constantly delighting an audience.”
Among those books he edited or authored were three published and/or distributed by the University of Chicago Press: Untwisting the Serpent: Modernism in Music, Literature, and Other Arts; Modernism and Music: An Anthology of Sources; and Evasions (Sylph Editions).
To read more about those works, click here.
To read a remembrance on Albright’s website, click here.
Below follows, in full, an interview with Alice Kaplan on the career of recent Nobel Laureate Patrick Modiano. The interview was originally published online via the French-language journal Libération, shortly after the Nobel announcement.
The American academic Alice Kaplan, author of the outstanding The Collaborator: The Trial and Execution of Robert Brasillach, and more recently, Dreaming in French, teaches Modiano at Yale University, where she chairs the Department of French. She evokes for us the particular aura of the French Nobel Laureate in the United States.
Is Patrick Modiano well known in American universities?
There have been sixteen PhD dissertations on Modiano in American universities since 1987, a significant number, given that he is both a foreigner and a contemporary novelist. Yale University Press has just published a trilogy of novels originally published by the Editions du Seuil under the title Suspended Sentences. Modiano’s attraction comes from his style, which is laconic and beautiful but also quite accessible, in English as well as in French. Then there is the particular genre he invented, inspired by detective fiction, familiar to American readers. The obstacle is obviously the number of references to specific places in Paris that are everywhere in his books—all those street names and addresses that capture so well the atmosphere of different neighborhoods, so that it’s probably necessary to have visited Paris at least once to really get him. At the same time, he knows exactly how to create an atmosphere. I always think about Edward Hopper when I read Modiano: there is a sense in his books that something horrible has happened, a crime floating in the air, someone or something missing. Modiano could write stories that take place in Brooklyn or in New Jersey. He has invented such a specific notion of place that you can think of certain places as being “Modianesque.”
Does he have many American readers?
It is difficult to say. He is published by David R. Godine, a publisher of fine literary fiction, not a mass-market house. My sense is that he is appreciated by the kind of reader who appreciates James Salter, for example, or by the kind of American reader who might have read Hélène Berr’s wartime diary in translation—which Modiano prefaced in the original French edition. His novel Dora Bruder has been widely read in the U.S. by intellectuals interested in the memory of the Holocaust—precisely because it is a book about forgetting. Historians find Modiano especially interesting because he offers a challenge to a certain kind of positivism with respect to memory. In the United States, people are always hunting for their identities. Genealogical search engines like ancestry.com and television shows like Finding Your Roots are phenomenally successful. And here comes a writer who understands the mixed nature of ancestry, the racial and cultural mix fundamental to American identity, and who describes the desire to understand where we come from, but also—and this is important—the impossibility of knowing everything, the confusion. I teach Dora Bruder in a class on archives and the relationship of history to literature, because it is a book that tells us that we must also respect what we can’t know. In Modiano’s books, the person who knows the answer has just died, or else the narrator is so tired of searching that he stops before he gets to the last garage on his list. Modiano is often compared to Proust and even considered a kind of modern-day Proust, and there is something to this, except that Proust never tires of searching for lost time! The fact that Modiano’s fiction seems to slip through our fingers is an integral part of his literary project. He helps us understand forgetting, the same way Proust helps us understand jealousy, or Stendhal, ambition.
What does research on Modiano look like?
The last thesis I directed on Modiano investigated his use of the first person, the fact that he doesn’t engage in what the French call “autofiction”—i.e., fictional autobiography—but in something much more subtle. You can’t read Modiano for the identity politics that have become so fundamental in the American academy, where we read systematically through the grid of race and gender. Modiano is always questioning those categories, by showing the error of simplistic shortcuts. You cannot, for example, categorize him as a “Jewish writer.”
How did you discover his books?
I read La Place de l’Etoile while I was working on Céline’s anti-Semitic writing. I find it astonishing that Modiano published that book (a parody of anti-Semitic prose) in 1968, well before the great wave of consciousness about French collaboration with the Nazis that came in the wake of Robert Paxton’s groundbreaking research. Think about it: Modiano was alone, not part of any literary school, and he wrote about French anti-Semitism with incredible intuition. He was especially attuned to the anti-Semitic rhetoric around Jewish names. After La Place de l’Etoile, I devoured all of his books.
To read more by Alice Kaplan, click here.
Our free e-book for January is Simon Kitson’s The Hunt for Nazi Spies: Fighting Espionage in Vichy France; read more about the book below:
From 1940 to 1942, French secret agents arrested more than two thousand spies working for the Germans and executed several dozen of them—all despite the Vichy government’s declared collaboration with the Third Reich. A previously untold chapter in the history of World War II, this duplicitous activity is the gripping subject of The Hunt for Nazi Spies, a tautly narrated chronicle of the Vichy regime’s attempts to maintain sovereignty while supporting its Nazi occupiers. Simon Kitson informs this remarkable story with findings from his investigation—the first by any historian—of thousands of Vichy documents seized in turn by the Nazis and the Soviets and returned to France only in the 1990s. His pioneering detective work uncovers a puzzling paradox: a French government that was hunting down left-wing activists and supporters of Charles de Gaulle’s Free French forces was also working to undermine the influence of German spies who were pursuing the same Gaullists and resisters. In light of this apparent contradiction, Kitson does not deny that Vichy France was committed to assisting the Nazi cause, but illuminates the complex agendas that characterized the collaboration and shows how it was possible to be both anti-German and anti-Gaullist.
Combining nuanced conclusions with dramatic accounts of the lives of spies on both sides, The Hunt for Nazi Spies adds an important new dimension to our understanding of the French predicament under German occupation and the shadowy world of World War II espionage.
Download your free copy (available through January 31) here
A little more than a year ago, we published Paddy Woodworth’s Our Once and Future Planet, an ambitious, even monumental account of the past, present, and future of the ecological restoration movement that was recently named one of the year’s Outstanding Academic Titles by the ALA’s Choice. Then, this past autumn, Paddy came to the States and spent a little over a month talking with people about the book in a variety of settings. Now that he’s back in Ireland and settling in for the holidays, we asked Paddy to offer some thoughts on what it’s like to hit the road promoting not just a book, but an idea.
Publishing a book is a little like casting a stone into a well. We write, as Seamus Heaney put it, “to set the darkness echoing.” And often we wait a long time for the echoes, and must count ourselves lucky if we hear any at all.
Our Once and Future Planet was published by the University of Chicago Press a year ago last October. It charts my journey into the challenging world of ecological restoration projects worldwide; it examines and ultimately finds precious if tenuous hope in restoration ecology’s promise to reverse the globalized degradation of our environment.
The echoes from the book returned slowly enough at first, especially in the US and UK print media, though at home the Irish media (press, radio, TV, and online) responded very quickly and positively. Individuals I respected in the restoration field, and readers I had never met, wrote to me praising the book generously. Two of its more controversial chapters also engendered pained, and sometimes painful, personal responses from protagonists, but no real public engagement. Sales were respectable, but the book was hardly flying off the shelves.
And, for a while thereafter, it seemed that that was that – the book was out there, somewhere, but not sending back many more messages.
Then, over the summer, new and louder echoes became audible. Excellent reviews in Bioscience and Science were particularly flattering for a journalist who had learned his ecology on the hoof and on the wing, as it were, while researching Planet.
But I had dreamed of reaching a public far beyond academia. The book’s narrative style, with a focus on the human dramas behind restoration projects, aims at engaging ordinary citizens with the urgent need to restore our landscapes, wherever we live. An opportunity to find this audience came, ironically enough, with an invitation from an academic to teach a seminar based entirely around my text, at DePaul University, Chicago.
Holding the attention of undergraduates is an acid test for any writer. I had some doubts, for sure: did my work really offer strong enough material to engage fifteen bright kids, with lots of other study and leisure options, over five three-hour sessions?
It was very heartening to find that it did: the seminar, co-taught with the invaluable and congenial support of my host, Dr Liam Heneghan, got gratifying reviews from students on Facebook: “by far one of the most stimulating courses I’ve had to date. . . . This is just what I needed,” one student wrote.
It was exciting to find that key restoration questions, like “who decides when and where to restore, and what target to restore to?” and issues like the “novel ecosystem” concept, aroused passionate and well-informed debates. And the quality of the written responses on restoration topics was, Liam and I agreed, exceptionally high.
The students’ first instinct was often to take the position that it must be scientists, or at least “experts” who decide on restoration targets. But as we excavated that idea, and found that leading restoration ecologists, looking at the same ecological and cultural context, often disagree on the correct managerial response, many of them swung towards giving the final say to “community” consensus.
But again, examining the painful history of the North Branch Restoration Project in Chicago, there was a recognition that the comforting yet fuzzy notion of ‘community’ is deeply problematic; different stake-holding communities may have radically different views as to the appropriate ecological vision for a local landscape they all love in different ways.
“I had clear ideas about how to do restoration when I started the class,” one student said in our final session. “Now I can only see how complex it all is.” I guess that is a fair definition of an educational advance.
It was enlightening, too, to explore these topics at discussions organized through DePaul’s Institute of Nature and Culture, where I was a Visiting Fellow. The participants were faculty members from fields as diverse as politics, literature, religion, philosophy and economics. To hear the arguments from my book explored, challenged and advanced through debate, at both undergraduate and faculty level, was a writer’s dream come true. This was the darkness echoing, indeed.
It was particularly illuminating for me to hear academics from the humanities endorse my book’s analysis of the polemical series of articles and books advocating the “novel ecosystems” concept. In my view, their authors use dangerously misleading rhetorical devices to inflate policy arguments in favour of abandoning classic restoration targets. In language heavy with polemic and light on data, they argue for settling for the management of so-called “novel” ecosystems – “chronically degraded” is a much more accurate and appropriate term — for whatever limited goods and services they may offer. They thus undermine the case for investing in restoration, just at the moment when restoration science and practice are producing the most fruitful results in this young movement’s short history.
But equally, it was enormously refreshing to find that one of our brightest undergraduate students felt that we had all reached too cosy a consensus against the “novel” ecosystems concept, and set out to defend the contrary position, very cogently, in her final paper. That is, after all, what truly open debate is all about.
A growing sense that ecological restoration is now getting traction as a key conservation strategy, and provoking questions that help us reframe our troubled relationship to the natural world in imaginative and positive ways, was confirmed again and again in these classes, and on visits to other colleges. I was also invited to teach, in October and November, at the universities of Iowa, Wisconsin-Madison, Loyola (Chicago), Missouri, William & Mary, Dartmouth College and Mount Holyoke.
And I found that restoration ecology now has a heightened profile on the core research and conservation agenda at Missouri Botanical Garden, where I was honoured to be made a research associate during a lecture visit.
Through this whole process, I was repeatedly challenged to reassess my own work. A book is a frozen moment in one’s development, and while I was happy to find that many of its arguments stood up well, I also found that my own position was evolving beyond some of my year-old conclusions. It was Curt Meine, the lucid historian of American conservation thinking, who alerted me to this.
He asked me, after I had given a lecture on the “novel ecosystem” concept at the Nelson Institute in Madison, whether I had not become more sharply critical about the concept than I had been in the final chapter in the book. And I realized that he was quite right: my erstwhile effort to synthesise opposing arguments, made by respected colleagues in good faith, was yielding to a more sharply honed opposition to proposals I saw doing real damage in the world.
What was perhaps most encouraging about this US trip, however, were the indications that a much broader American audience now wants to learn about restoration issues. Public lectures organized by Morton Arboretum and by the Kane County Wild Ones in Illinois, and by Mount Kendal Retirement Home in New Hampshire and the Osher Institute at Dartmouth, all drew full houses, with more than 100 people at some events, along with substantial book sales.
The challenge now is to find ways of identifying and reaching the networks, in the US and elsewhere, where there is hunger for discussion about restoration, and about how this strategy is being adopted in radically different socio-ecological contexts. Suggestions welcome!
“Best,” from the Old English betest (adjective), betost, betst (adverb), of Germanic origin; related to Dutch and German best, also “to better.”*
*To better your end-of-the-year book perusing considerations, here’s a list of our titles we were geeked to see on many of the year’s Best of 2014 lists, from non-fiction inner-city ethnographies to the taxonomies of beetle sheaths:
John Drury’s Music at Midnight was named one of the ten best nonfiction books of 2014 by the Wall Street Journal.
Alice Goffman’s On the Run was named one of the 100 Notable Books of 2014 by the New York Times Book Review, one of only two university press books on the list, and one of the 30 best nonfiction books of 2014 by Publishers Weekly.
Rachel Sussman’s Oldest Living Things in the World was named one of the 100 best books of 2014 by Amazon, was one of three Chicago books on the Wall Street Journal’s six-book list of the Best Nature Gift Books of 2014, and topped Maria Popova’s list of the year’s best art, design, and photo books at Brainpickings.
Mark E. Hauber’s The Book of Eggs was one of three Chicago books on the Wall Street Journal’s six-book list of the Best Nature Gift Books of 2014.
Helen and William Bynum’s Remarkable Plants that Shape Our World was one of three Chicago books on the Wall Street Journal’s six-book list of the Best Nature Gift Books of 2014, was named by the Guardian’s Grrl Scientist to her list of the year’s best science books, and was named to the Globe and Mail’s Best Gift Books list for 2014.
Sylvia Sumira’s Globes was named one of the best gift books of the year in the design category by the Wall Street Journal.
Atif Mian and Amir Sufi’s House of Debt was named one of the best economics books of the year by the Financial Times.
Patrice Bouchard’s Book of Beetles was named one of the best science books of the year by Wired and was named to the Globe and Mail’s Best Gift Books list for 2014.
Scott Richard Shaw’s Planet of the Bugs was named by the Guardian’s Grrl Scientist to her list of the year’s best science books.
James W. P. Campbell’s The Library was named to Buzzfeed’s 47 Incredibly Unique Books to Buy Everyone on Your List.
Howard S. Becker’s What About Mozart? What About Murder? was named a Book of the Year by two different contributors to the Times Higher Education.
Marta Gutman’s A City for Children was named a Book of the Year by the Times Higher Education.
Retha Edens-Meier and Peter Bernhardt’s Darwin’s Orchids was named one of the best science books for the holidays by American Scientist.
Donald Westlake’s The Getaway Car was named to the Favorite Books of 2014 list by Christianity Today’s Books and Culture magazine and one of the five best suspense books to give as gifts this year in the Chicago Tribune’s Printers Row supplement.
Philip Ball’s Serving the Reich was named by Physics World as one of the top ten physics books of 2014.
Barbara Taylor’s The Last Asylum was named one of the Guardian’s best psychology books for 2014, one of the Guardian’s Best Books of 2014 (twice!), and a Book of the Year by two contributors to the Times Higher Education.
Five books in paperback by Peter De Vries made William Giraldi’s Year in Reading list for the Millions.
Peggy Shinner’s You Feel So Mortal was recommended as a gift book in the Chicago Tribune’s Printers Row supplement.
Margaret F. Brinig and Nicole Stelle Garnett’s Lost Classroom, Lost Community was named the best book of the year by Education Next.
Happy holidays! Buy a book!
Irina Baronova and the Ballets Russes de Monte Carlo chronicles one of the most acclaimed touring ballet companies of the twentieth century, along with its prima ballerina and muse, the incomparable Irina Baronova. Along the way, it expands upon the rise of modern ballet as a medium, through an unprecedented archive of letters (over 2,000 of them), photographs, oral histories, and interviews conducted by Victoria Tennant, the book’s author and Baronova’s daughter. Earlier this month, the book was feted at a launch hosted by none other than Mikhail Baryshnikov at his eponymous Arts Center in New York City. Below follow some candid photos from the event, admittedly less sumptuous than those collected in the book:
Victoria Tennant and ballerina Wendy Whelan
Mikhail Baryshnikov, Tennant, and Blythe Danner (L to R)
Bebe Neuwirth, Tennant, and Chris Calkins (L to R)
To read more about Irina Baronova and the Ballets Russes de Monte Carlo, click here.
Atif Mian and Amir Sufi’s House of Debt, a polemic about the Great Recession and a call to action about the borrowing and lending practices that led us down the fiscal pits, already made a splash on the shortlist for the Financial Times’s Best Business Book of 2014. Now, over at the Independent, the book tops another Best of 2014 list, this time proclaimed “the jewel of 2014.” From Ben Chu’s review, which also heralds another university press title—HUP’s blockbuster Capital by Thomas Piketty (“the asteroid”):
As with Capital, House of Debt rests on some first-rate empirical research. Using micro data from America, the professors show that the localities where the accumulation of debt by households was the most rapid were also the areas that cut back on spending most drastically when the bubble burst. Mian and Sufi argue that policymakers across the developed world have had the wrong focus over the past half decade. Instead of seeking to restore growth by encouraging bust banks to lend, they should have been writing down household debts. If the professors are correct—and the evidence they assemble is powerful indeed—this work will take its place in the canon of literary economic breakthroughs.
We’ve blogged about the book previously here and here, and no doubt it will appear on more “Best of” lists for business and economics—it’s a read with teeth and legs, and the advice it offers for avoiding future crises points a finger at criminal lending practices, greedy sub-prime investments, and our failure to share—financially and conceptually—the risk-taking in our monetary practices.
You can read more about House of Debt here.
David Drummond’s cover for The Hoarders, one of Paste Magazine’s 30 Best Book Covers of 2014.
This past week, New Yorker critic Joan Acocella profiled Scott Herring’s The Hoarders, a foray into the history of material culture from the perspective of clutter fetish and our fascination with the perils surrounding the urge to organize. The question Herring asks, namely, “What counts as an acceptable material life—and who decides?,” takes on a gradient of meaning for Acocella, who confronts the material preferences of her ninety-three-year-old mother, which prove to be in accord with the DSM V’s suggestion that “hoarding sometimes begins in childhood, but that by the time the hoarders come to the attention of the authorities they tend to be old.”
In The Hoarders, Herring tells the tale of Homer and Langley Collyer, two brothers to whom we can trace a legend (um, legacy?) of modern hoarding, whose eccentricity and ill health (Langley took care of Homer, who was both rheumatic and blind) led to a lion’s den of accrual, and a rather unfortunate end. As Acocella explains:
In 1947, a caller alerted the police that someone in the Collyer mansion may have died. After a day’s search, the police found the body of Homer, sitting bent over, with his head on his knees. But where was Langley? It took workers eighteen days to find him. The house contained what, in the end, was said to have been more than a hundred and seventy tons of debris. There were toys, bicycles, guns, chandeliers, tapestries, thousands of books, fourteen grand pianos, an organ, the chassis of a Model T Ford, and Dr. Collyer’s canoe. There were also passbooks for bank accounts containing more than thirty thousand dollars, in today’s money.
As Herring describes it, the rooms were packed almost to the ceilings, but the mass, like a Swiss cheese, was pierced by tunnels, which Langley had equipped with booby traps to foil burglars. It was in one of those tunnels that his corpse, partly eaten by rats, was finally discovered, only a few feet away from where Homer’s body had been found. He was apparently bringing Homer some food when he accidentally set off one of his traps and entombed himself. The medical examiner estimated that Langley had been dead for about a month. Homer seems to have died of starvation, waiting for his dinner.
The New Yorker piece also confronts the grand dames of American hoarding, Little Edie and Big Edie Bouvier Beale, cousins to Jacqueline Kennedy Onassis, and the subjects of Albert and David Maysles’ cult classic documentary Grey Gardens. Acocella positions the Beales as camped-out, if charming, odd fellows, but also points to an underlying class-based assumption in our embrace of their peculiarities:
If they crammed twenty-eight rooms with junk, that’s in part because they had a house with twenty-eight rooms. And if they declined to do the dishes, wouldn’t you, on many nights, have preferred to omit that task?
This is only slightly out of sorts with the stance Herring adopts in the book: as material culture changes, so too do our interactions with it—and each other. You can read much more from Acocella in her profile, but her take-home—we are what we stuff, except what we stuff is subject to the scrutinies and perversions of our social order (class and race among them)—is worth mentioning: we should get out from under it now, because it’s only going to get worse.
Read more about The Hoarders here.
It’s unconventional, to say the least, for a university press to publish a cookbook. But an exception to this rule, coming in Spring 2015, is Paul Fehribach’s Big Jones Cookbook, which expands upon the southern Lowcountry cuisine of the eponymous Chicago restaurant. As mentioned in the book’s catalog copy, “from its inception, Big Jones has focused on cooking with local and sustainably grown heirloom crops and heritage livestock, reinvigorating southern cooking through meticulous technique and the unique perspective of its Midwest location.” More expansively, Fehribach’s restaurant positions the social and cultural inheritances involved in regional cooking at the forefront, while the cookbook expands upon the associated recipes by situating their ingredients (and the culinary alchemy involved in their joining!) as part of a rich tradition invigorated by a kind of heirloom sociology, as well as a sustainable farm-to-table tradition.
This past week, as part of the University of Chicago Press’s Spring 2015 sales conference, much of the Book Division took to a celebratory meal at Big Jones, and the photos below, by editorial director Alan Thomas, both show Fehribach in his element, as well as commemorate the occasion:
To read more about The Big Jones Cookbook, forthcoming in Spring 2015, click here.
On this day in 1931, Jane Addams became the first woman to win the Nobel Peace Prize. Read an excerpt from Louise W. Knight’s Citizen: Jane Addams and the Struggle for Democracy, about the ethics and deeply held moral beliefs permeating the labor movement—and Addams’s own relationship to it—after the jump.
From Chapter 13, “Claims” (1894)
On May 11 Addams, after giving a talk at the University of Wisconsin and visiting Mary Addams Linn in Kenosha, wrote Alice that their sister’s health was improving. The same day, a major strike erupted at the Pullman Car Works, in the southernmost part of Chicago. The immediate cause of the strike was a series of wage cuts the company had made in response to the economic crisis. Since September the company had hired back most of the workers it had laid off at the beginning of the depression, but during the same period workers’ wages had also fallen an average of 30 percent. Meanwhile, the company, feeling pinched, was determined to increase its profits from rents. In addition to the company’s refusing to lower the rent rate to match the wage cuts, its foremen threatened to fire workers living outside of Pullman who did not relocate to the company town. The result was that two-thirds of the workforce was soon living in Pullman. By April, many families were struggling to pay the rents and in desperate straits; some were starving. The company’s stance was firm. “We just cannot afford in the present state of commercial depression to pay higher wages,” Vice President Thomas H. Wickes said. At the same time, the company continued to pay its stockholders dividends at the rate of 8 percent per annum, the same rate it had paid before the depression hit.
The workers had tried to negotiate. After threatening on May 5 to strike if necessary, leaders of the forty-six-member workers’ grievance committee met twice with several company officials, including, at the second meeting, George Pullman, the company’s founder and chief executive, to demand that the company reverse the wage cuts and reduce the rents. The company refused, and on May 11, after three of the leaders of the grievance committee had been fired and a rumor had spread that the company would lock out all employees at noon, twenty-five hundred of the thirty-one hundred workers walked out. Later that day, the company laid off the remaining six hundred. The strike had begun. “We struck at Pullman,” one worker said, “because we were without hope.”
For Addams, the coincidental timing of the strike and Mary’s illness, both of which would soon worsen, made each tragedy, if possible, a greater sorrow. The strike was a public crisis. Its eruption raised difficult questions for Addams about the ethics of the industrial relationship. What were George Pullman’s obligations to his employees? And what were his employees’ to him? Was it disloyal of him to treat his workers as cogs in his economic machine? Or was it disloyal of his workers to strike against an employer who supplied them with a fine town to live in? Who had betrayed whom? Where did the moral responsibility lie? Mary’s illness was Addams’s private crisis. Mary was the faithful and loving sister whose affection Addams had always relied on and whose life embodied the sacrifices a good woman made for the sake of family. Mary had given up her chance for further higher education for her family’s sake and had been a devoted wife to a husband who had repeatedly failed to support her and their children. The threat of her death stirred feelings of great affection and fears of desperate loss in Addams.
As events unfolded, the two crises would increasingly compete for Addams’s loyalty and time. She would find herself torn, unsure whether she should give her closest attention to her sister’s struggle against death or to labor’s struggle against the capitalist George Pullman. It was a poignant and unusual dilemma; still, it could be stated in the framework she had formulated in “Subjective Necessity”: What balance should she seek between the family and the social claim?
The causes of the Pullman Strike went deeper than the company’s reaction to the depression. For the workers who lived in Pullman, the cuts in wages and the high rents of 1893–94 were merely short-term manifestations of long-term grievances, all of them tied to company president George Pullman’s philosophy of industrial paternalism. These included the rules regarding life in Pullman, a privately owned community located within the city of Chicago. Pullman had built the town in 1880 to test his theory that if the company’s workers lived in a beautiful, clean, liquor- and sin-free environment, the company would prosper. Reformers, social commentators, and journalists across the country were fascinated by Pullman’s “socially responsible” experiment. Addams would later recall how he was “dined and feted throughout Europe . . . as a friend and benefactor of workingmen.” The workers, however, thought the Pullman Company exercised too much control. Its appointees settled community issues that elsewhere would have been dealt with by an elected government, company policy forbade anyone to buy a house, the town newspaper was a company organ, labor meetings were banned, and company spies were everywhere. Frustrated by this as well as by various employment practices, workers organized into unions according to their particular trades (the usual practice), and these various unions repeatedly struck the Pullman Company in the late 1880s and early 1890s. The May 1894 strike was the first that was companywide.
Behind that accomplishment lay the organizing skills of George Howard, vice president of the American Railway Union (ARU), the new cross-trades railroad union that Eugene Debs, its president, had founded the previous year. To organize across trades was a bold idea. Howard had been in Chicago since March signing up members, and by early May he was guiding the workers in their attempted negotiations with the company. The ARU’s stated purpose was to give railroad employees “a voice in fixing wages and in determining conditions of employment.” Only one month earlier it had led railroad workers at the Great Northern Railroad through a successful strike. Thanks to the ARU as well as to the mediating efforts of some businessmen from St. Paul, Minnesota, voluntary arbitration had resolved the strike, and three-fourths of the wage cut of 30 percent had been restored. Impressed, 35 percent of Pullman’s workers joined the ARU in the weeks that followed, hoping that the new union could work the same magic on their behalf.
At first, the prospects for a similar solution at Pullman did not look promising. After the walkout, George Pullman locked out all employees and, using a business trip to New York as his excuse, removed himself from the scene. Meanwhile, a few days after the strike began, Debs, a powerful orator, addressed the strikers to give them courage. He had the rare ability to elevate a controversy about wages into a great moral struggle. The arguments he used that day, familiar ones in the labor movement, would be echoed in Jane Addams’s eventual interpretation of the Pullman Strike. “I do not like the paternalism of Pullman,” he said. “He is everlastingly saying, what can we do for the poor workingmen? . . . The question,” he thundered, “is what can we do for ourselves?”
At this point, the Civic Federation of Chicago decided to get involved. Its president, Lyman Gage, an enthusiast for arbitration, appointed a prestigious and diverse conciliation board to serve as a neutral third party to bring the disputing sides before a separate arbitration panel. Made up partly of members of the federation’s Industrial Committee, on which Addams sat, it was designed to be representative of various interests, particularly those of capital, labor, academia, and reform. It included bank presidents, merchants, a stockbroker, an attorney, presidents of labor federations, labor newspaper editors, professors, and three women civic activists: Jane Addams, Ellen Henrotin, and Bertha Palmer.
The board divided itself into five committees. In the early phase of the strike it would meet nightly, in Addams’s words, to “compare notes and adopt new tactics.” Having had some success in arranging arbitrations in the Nineteenth Ward, Addams was eager to see the method tried in the Pullman case. She would soon emerge as the driving force and the leading actor in the initiative.
The first question the board discussed was whether the Pullman workers wanted the strike to be arbitrated. Addams investigated the question by visiting the striking workers in Pullman, eating supper with some of the women workers, touring the tenement housing, and asking questions. Afterwards, she asked the president of the local ARU chapter, Thomas Heathcoate, and ARU organizer George Howard to allow the conciliation board to meet with the Strike Committee. Refusing her request, Howard told her that the ARU was willing to have the committee meet with the board but that first the Pullman Company would have to state its willingness to go to arbitration.
Meanwhile, three men from the conciliation board were supposed to try to meet with the Pullman Company. The board’s president, A. C. Bartlett, a businessman, was to arrange the meeting but, as of May 30, two weeks into the strike, he had done nothing. Frustrated, Addams stepped in. On June 1 she arranged for Bartlett, Ralph Easley (the Civic Federation’s secretary), and herself to meet with Vice President Wickes and General Superintendent Brown. At the meeting, which Bartlett failed to attend, Wickes merely repeated the company’s well-known position: that it had “nothing to arbitrate.”
Thwarted, Addams decided, with the board’s support, to try again to arrange for the board to meet with the Strike Committee. At a Conciliation Board meeting, Lyman Gage suggested that she propose that rent be the first issue to be arbitrated. Agreeing, Addams decided that, instead of taking the idea to the uncooperative Howard, she would take it over his head to Debs. Persuaded by Addams, Debs immediately arranged for members of the board to speak that night to the Strike Committee about the proposal. Once again, however, Addams’s colleagues failed to follow through. She was the only board member to turn up.
At the meeting, the strike leaders were suspicious, believing that arbitration was the company’s idea. No report survives of how Addams made her case to them, but one can glean impressions from a description of Addams that a reporter published in a newspaper article in June 1894. She described Addams as a “person of marked individuality[;] she strikes one at first as lacking in suavity and graciousness of manner but the impression soon wears away before [her] earnestness and honesty.” She was struck, too, by Addams’s paleness, her “deep” eyes, her “low and well-trained voice,” and the way her face was “a window behind which stands her soul.”
Addams must have made a powerful presentation to the Strike Committee. After she spoke, it voted to arbitrate not only the rents but any point. It was the breakthrough Addams had been hoping for. “Feeling that we had made a beginning toward conciliation,” Addams remembered, she reported her news to the board.
Meanwhile, with the workers and their families’ hunger and desperation increasing, tensions were mounting. Wishing to increase the pressure on the company, Debs had declared on June 1 that the ARU was willing to organize a nationwide sympathy boycott of Pullman cars among railroad employees generally if the company did not negotiate. The Pullman Company’s cars, though owned and operated by the company, were pulled by various railroads. A national boycott of Pullman cars could bring the nation’s already devastated economy to a new low point. Meanwhile, the ARU opened its national convention in Chicago on June 12. Chicago was nervous. Even before the convention began, Addams commented to William Stead, in town again for a visit, that “all classes of people” were feeling “unrest, discontent, and fear. We seem,” she added, “to be on the edge of some great upheaval but one can never tell whether it will turn out a tragedy or a farce.”
Several late efforts at negotiation were made. On June 15 an ARU committee of twelve, six of them Pullman workers, met with Wickes to ask again whether the company would arbitrate. His answer was the same: there was nothing to arbitrate and the company would not deal with a union. Soon afterward George Pullman returned to town. He agreed to meet with the conciliation board but, perhaps sensing the danger that the sincere and persuasive Jane Addams posed, only with its male members. At the meeting he restated his position: no arbitration. At this point, Addams recalled, the board’s effort collapsed in “failure.” The strike was now almost two months old. Addams had done everything she could to bring about arbitration. Resourceful, persistent, even wily, she had almost single-handedly brought the workers to the table, but because she was denied access to George Pullman on the pretext of her gender, she had failed to persuade the company. Her efforts, however, had made something very clear to herself and many others—that George Pullman’s refusal to submit the dispute to arbitration was the reason the strike was continuing.
The situation now became graver. At the ARU convention, the delegates voted on June 22 to begin a national boycott of Pullman cars on June 26 if no settlement were reached. Abruptly, on the same day as the vote, a powerful new player, the General Managers Association (GMA), announced its support for the company. The GMA had been founded in 1886 as a cartel to consider “problems of management” shared by the twenty-four railroad companies serving Chicago; it had dabbled in wage-fixing and had long been opposed to unions. George Pullman’s refusal to arbitrate had been, among other things, an act of solidarity with these railroad companies, his business partners. Disgusted with the outcome of the Great Northern Strike, they were determined to break the upstart ARU, which threatened to shrink the profits of the entire industry. Pullman departed the city again in late June for his vacation home in New Jersey, leaving the GMA in charge of the antistrike strategy. It announced that any railroad worker who refused to handle Pullman cars would be fired.
The ARU was undaunted. On June 26 the boycott began. Within three days, one hundred thousand men had stopped working and twenty railroads were frozen. Debs did not mince words in his message to ARU members and their supporters. This struggle, he said, “has developed into a contest between the producing classes and the money power of this country.” Class warfare was at hand.
Jane Addams was not in the city when the ARU voted for the boycott. She had gone to Cleveland to give a commencement speech on June 19 at the College for Women of Western Reserve University. But she was in Chicago when the boycott began. Chicago felt its impact immediately. There was no railroad service into or out of the city, and public transportation within the city also ceased as the streetcar workers joined the boycott. With normal life having ground to a halt, the city’s mood, which had been initially sympathetic to the workers, began to polarize along class lines. Working people’s sympathies for the railroad workers and hostility toward capitalists rose to a fever pitch while many people in the middle classes felt equally hostile toward the workers; some thought that the strikers should be shot. In Twenty Years Addams writes, “During all those dark days of the Pullman strike, the growth of class bitterness was most obvious.” It shocked her. Before the strike, she writes, “there had been nothing in my experience [that had] reveal[ed] that distinct cleavage of society which a general strike at least momentarily affords.”
The boycott quickly spread, eventually reaching twenty-seven states and territories and involving more than two hundred thousand workers. It had become the largest coordinated work stoppage in the nation’s history and the most significant exercise of union strength the nation had ever witnessed. The workers were winning through the exercise of raw economic power. Virtually the only railcars moving were the federal mail cars, which the boycotting railroad workers continued to handle, as required by federal law and as Debs had carefully instructed them. The railroad yards in the city of Chicago were full of striking workers and boycotters determined to make sure that other railroad cars did not move and to protect them from vandalism.
Now the GMA took aggressive steps that would change the outcome of the strike. On June 30 it used its influence in Washington to arrange for its own lawyer, Edwin Walker, to be named a U.S. Special Attorney. Walker then hired four hundred unemployed men, deputized them as U.S. Marshals, armed them, and sent them to guard the federal mail cars in the railroad yards to be sure the mail got through. In the yards, the strikers and marshals eyed each other nervously.
Meanwhile, on June 29, Jane Addams’s family crisis worsened. Jane had visited Mary on June 28 but returned to Chicago the same day. That night she received word that her sister’s condition suddenly had become serious, and the following day she rushed back to Kenosha accompanied by Mary’s son Weber Linn (apparently they traveled in a mail car thanks to Addams’s ties to the strikers). She deeply regretted having been gone so much. “My sister is so pleased to have me with her,” she wrote Mary Rozet Smith, “that I feel like a brute when I think of the days I haven’t been here.” As Addams sat by Mary’s bed in Kenosha, the situation in Chicago remained relatively calm. Nevertheless, the GMA now took two more steps that further drew the federal government into the crisis. Claiming that the strikers were blocking the movement of the federal mails (although the subsequent federal investigation produced no evidence that this was true), the GMA asked the U.S. attorney general to ask President Grover Cleveland to send federal troops to shut down the strike. Cleveland agreed, and on July 3 the first troops entered the city. The same day the attorney general ordered Special Attorney Walker to seek an injunction in federal court against the ARU in order to block the union from preventing workers from doing their work duties. The injunction was immediately issued.
By July 1 it was clear that Mary Addams Linn was dying. Addams wired her brother-in-law John and the two younger children, Esther, thirteen, and Stanley, who had recently turned eleven, to come from Iowa to Kenosha. By July 3 they had somehow reached Chicago but, because of the boycott, they could not find a train for the last leg of their trip. At last John signed a document relieving the railroad of liability; then, he, Esther, and Stanley boarded a train (probably a mail train) and within hours had arrived in Kenosha, protected, or so Esther later believed, by the fact that they were relatives of Jane Addams, “who was working for the strikers.” Mary’s family was now all gathered around her except for her oldest son, John, who was still in California. Unconscious by the time they arrived, she died on July 6.
Illinois National Guard troops in front of the Arcade Building in Pullman during the Pullman Strike. [Neg. i21195aa.tif, Chicago Historical Society.]
While Jane Addams’s private world was crumbling, so was Chicago’s civic order. On July 4, one thousand federal troops set up camp around the Post Office and across the city, including the Pullman headquarters. On July 5 and 6 thousands of unarmed strikers and boycotters crowded the railroad yards, joined by various hangers-on—hungry, angry, unemployed boys and men. Many were increasingly outraged by the armed marshals and the troops’ presence. Suddenly a railroad agent shot one of them, and they erupted into violence. Hundreds of railroad cars burned as the troops moved in. Now the strikers were fighting not only the GMA but also the federal government. This had been the GMA’s aim all along. In the days to come, thousands more federal troops poured into the city.
After attending Mary’s funeral in Cedarville, Jane Addams returned on July 9 to find Chicago an armed camp and class warfare on everyone’s minds. In working-class neighborhoods such as the Nineteenth Ward, people wore white ribbons in support of the strike and the boycott. Across town, middle-class people were greeted at their breakfast tables by sensational newspaper headlines claiming that the strikers were out to destroy the nation. One Tribune headline read, “Dictator Debs versus the Federal Government.” The national press echoed the theme of uncontrolled disorder. Harper’s Weekly called the strikers “anarchists.” And the nation remained in economic gridlock. Farmers and producers were upset that they could not move their produce to market. Passengers were stranded. Telegrams poured into the White House.
Like the strikers’ reputation, Hull House’s was worsening daily. Until the strike took place, Addams later recalled, the settlement, despite its radical Working People’s Social Science Club, had been seen as “a kindly philanthropic undertaking whose new form gave us a certain idealistic glamour.” During and after the strike, the situation “changed markedly.” Although Addams had tried to “maintain avenues of intercourse with both sides,” Hull House was now seen as pro-worker and was condemned for being so. Some of the residents were clearly pro-worker. Florence Kelley and one of her assistant factory inspectors, Alzina Stevens, befriended Debs during the strike and its aftermath. Stevens sheltered him for a time in her suburban home when authorities were trying to arrest him; Kelley tried to raise money for his bail after he was arrested later in July.
Addams and Hull House began to be severely criticized. Donors refused to give. Addams told John Dewey, who had come to town to take up his new position at the University of Chicago, that she had gone to meet with Edward Everett Ayer, a Chicago businessman with railroad industry clients who had often supported Hull House’s relief work, to ask him for another gift. Dewey wrote his wife, “[Ayer] turned on her and told her that she had a great thing and now she had thrown it away; that she had been a trustee for the interests of the poor, and had betrayed it [sic]—that like an idiot she had mixed herself in something which was none of her business and about which she knew nothing, the labor movement and especially Pullman, and had thrown down her own work, etc., etc.” That autumn Addams had “a hard time financing Hull-House,” a wealthy friend later recalled. “Many people felt she was too much in sympathy with the laboring people.” Addams merely notes in Twenty Years that “[in] the public excitement following the Pullman Strike Hull House lost many friends.”
And there were public criticisms as well. Some middle- and upper-class people attacked Addams, one resident remembered, as a “traitor to her class.” When Eugene Debs observed that “epithets, calumny, denunciation . . . have been poured forth in a vitriolic tirade to scathe those who advocated and practiced . . . sympathy,” one suspects that he had in mind the treatment Jane Addams received. Meanwhile, the workers were angry that Addams would not more clearly align herself with their cause. Her stance—that she would take no side—guaranteed that nearly everyone in the intensely polarized city would be angry with her.
Standing apart in this way was extremely painful. She was “very dependent on a sense of warm comradeship and harmony with the mass of her fellowmen,” a friend, Alice Hamilton, recalled. “The famous Pullman strike” was “for her the most painful of experiences, because . . . she was forced by conviction to work against the stream, to separate herself from the great mass of her countrymen.” The result was that Addams “suffered from . . . spiritual loneliness.” In these circumstances, no one could mistake Addams’s neutrality for wishy-washiness. Practicing neutrality during the Pullman Strike required integrity and courage. In being true to her conscience, she paid a tremendous price.
Of course, the strike was not the only reason she was lonely. Mary’s death was the other. And if, as we may suspect, Mary’s passing evoked the old trauma for Jane of their mother Sarah’s passing, not to mention the later losses of their sister Martha and their father, then the loneliness Addams felt in the last days of the strike and the boycott was truly profound.
She does not describe these feelings when she writes about the strike and Mary’s death in Twenty Years, but in the chapter about Abraham Lincoln, she conveys her feelings well enough. She tells about a walk she took in the worst days of the strike. In that “time of great perplexity,” she writes, she decided to seek out Lincoln’s “magnanimous counsel.” In the sweltering heat, dressed in the long skirt and long-sleeved shirtwaist that were then the fashion, Addams walked—because the streetcars were on strike—four and a half “wearisome” miles to St. Gaudens’s fine new statue of Lincoln, placed at the entrance to Lincoln Park just two years earlier, and read the words cut in stone at the slain president’s feet: “With charity towards all.” And then, still bearing on her shoulders the burden of public hatred that Lincoln had also borne, she walked the four and a half miles home.
Although the deployment of troops had broken the strike’s momentum, the government needed to put the strike’s leader behind bars to bring the strike to an end. On July 10 Debs was indicted by a grand jury for violating the injunction and arrested. Bailed out two days later, he was arrested again on July 17 to await trial in jail. However, when the government prosecutor, the ubiquitous Edwin Walker, became ill, the trial was postponed, and Debs went home to Indiana, where he collapsed gratefully into bed. The trial was held in November 1894; Debs would begin serving his six-month sentence in January 1895.
With Debs removed from leadership and fourteen thousand armed troops, police, and guardsmen bivouacked in Chicago, the strike and the boycott soon collapsed. On August 2 the ARU called off the strike, and on the same day the Pullman Company partially reopened. The railroads were soon running again. The anti-labor forces had won. Private industry and the federal government had shown that, when they stood united with the power of the law on their side, no one, not even the hundreds of thousands of workers who ran the nation’s most crucial industry, could defeat them. If the strike had been successful, it would have turned the ARU into the nation’s most powerful union. Given that the strike failed, the opposite result took place. As the GMA had intended, the ARU died. After Debs was released from jail, he did not resurrect the union.
Although the strike was over, innumerable questions remained unanswered. For the country as a whole, whose only sources of information had been sensational news stories and magazine articles, the first question was: What were the facts? To sort these out, President Grover Cleveland appointed a three-person fact-finding commission to investigate and issue a report. Jane Addams would testify before the United States Strike Commission in August, as would George Pullman.
Meanwhile, for Addams and other labor and middle-class reformers in Chicago, the question was how to prevent or resolve future strikes. The Conciliation Board’s effort to promote voluntary arbitration had been promising, but its failure revealed, Addams believed, certain “weaknesses in the legal structure,” that is, in state and federal laws. On July 19, two days after Debs’s second arrest, as the troops began slowly to withdraw from the city, the Central Council of the Civic Federation met at the Commerce Club. At the meeting, M. C. Carroll, editor of a labor magazine and a member of the Conciliation Board, proposed that the federation host a conference “on arbitration or conciliation” to seek ideas about ways to avert “strikes and boycotts in the future.” The Central Council “enthusiastically endorsed” the proposal and appointed a committee to devise a plan. The hope was to do something immediately, while interest was high, to increase public support for arbitration legislation in Illinois and across the nation.
Addams missed the meeting because she was assisting at the Hull House Summer School at Rockford College, which began on July 10. But she was back in Chicago by the second week in August and had soon joined the arbitration conference committee. It devised a three-part strategy. First, it would convene “capital and labor” at a national conference titled “Industrial Conciliation and Arbitration” in Chicago in November to provide a forum for “calm discussion” of the questions raised by the strike and bring together information about methods of arbitration and conciliation. Second, conference participants from Illinois would press the Illinois General Assembly to pass a law creating a state board of arbitration. Third, a national commission would be named at the end of the conference to press for federal legislation. Elected as secretary to the committee, Jane Addams threw herself into organizing the event.
At the same time, she took on new family responsibilities. With Mary’s death, Jane Addams, at thirty-three, became the guardian and mother of the two younger Linn children. Their father had decided he could not afford to keep them. For the fall, she and Alice agreed that Stanley would live at Hull House and Esther would attend the preparatory boarding school that was affiliated with Rockford College. Weber, nineteen, was still a student at the University of Chicago. He would spend his vacations at Hull House. The oldest son, John, twenty-two, having returned from California, was once again a resident at Hull House and studying for the Episcopalian priesthood. Esther remembered Addams as taking “me and my brothers in as her own children. . . . [She] was a wonderful mother to us all.” Addams was particularly close to Stanley, who, according to Alice’s daughter Marcet, “became . . . Aunt Jane’s very own little boy[;] . . . he was always like a son to her.”
Jane Addams would honor this family claim for the rest of her life. Her niece and nephews, later joined by their children, would gather with her for holidays, live with her at Hull House at various times in their lives, and rely on her for advice, as well as for a steady supply of the somewhat shapeless sweaters that she would knit for them. Because few letters between Addams and the Linn children have survived, the historical record is mostly silent about the affectionate bonds that linked them and the faithfulness with which she fulfilled the maternal role. Her devotion arose from a deep understanding of what it felt like for a child to lose its mother and from a deep gratitude that she could give to Mary’s children the gift Mary had given her.
The Pullman Strike was a national tragedy that aroused fierce passions and left many scars. For many in the middle classes, including Jane Addams, some of the most painful scars were the memories of the intense hatred the strike had evoked between the business community and the workers. Was such class antagonism inevitable? Many were saying so, but Addams, committed as she was to Tolstoyan and Christian nonviolence, social Christian cooperation, and Comtean societal unity, found it impossible to accept the prevailing view. That fall she and John Dewey, now the first chair of the Department of Philosophy at the University of Chicago, discussed this question. In a letter to his wife, Alice, Dewey reported telling Addams that conflict was not only inevitable but possibly a good thing. Addams disagreed. She “had always believed and still believed,” he wrote, that “antagonism was not only useless and harmful, but entirely unnecessary.” She based her claim on her view that antagonisms were not caused by, in Dewey’s words, “objective differences, which would always grow into unity if left alone, but [by] a person’s mixing in his own personal reactions.” A person was antagonistic because he took pleasure in opposing others, because he desired not to be a “moral coward,” or because he felt hurt or insulted. These were all avoidable and unnecessary reactions. Only evil, Addams said, echoing Tolstoy, could come from antagonism.
During their conversation, she asked Dewey repeatedly what he thought. Dewey admitted that he was uncomfortable with Addams’s theory. He agreed that personal reactions often created antagonism, but as for history, he was enough of a social Darwinist and a Hegelian to believe that society progressed via struggle and opposition. He questioned her. Did she not think that, in addition to conflict between individuals, there were conflicts between ideas and between institutions, for example, between Christianity and Judaism and between “Labor and Capital”? And was not the “realization of . . . antagonism necessary to an appreciation of the truth and to a consciousness of growth”?
Again she disagreed. To support her case Addams gave two examples of apparently inevitable conflicts involving ideas or institutions that she interpreted differently. When Jesus angrily drove the moneychangers out of the temple, she argued, his anger was personal and avoidable. He had “lost his faith,” she said, “and reacted.” Or consider the Civil War. Through the antagonism of war, we freed the slaves, she observed, but they were still not free individually, and in addition we have had to “pay the costs of war and reckon with the added bitterness of the Southerner besides.” The “antagonisms of institutions,” Dewey told Alice, summarizing Addams’s response, “were always” due to the “injection of the personal attitude and reaction.”
Dewey was stunned and impressed. Addams’s belief struck him as “the most magnificent exhibition of intellectual & moral faith” that he had ever seen. “[W]hen you think,” he wrote Alice, “that Miss Addams does not think this as a philosophy, but believes it in all her senses & muscles—Great God.” Dewey, gripped by the power of Addams’s grand vision, told Alice, “I never had anything take hold of me so.”
But his intellect lagged behind. Struggling to find a way to reconcile his and Addams’s views, Dewey attempted a formulation that honored Addams’s devotion to unity, which he shared, while retaining the principle of antagonistic development that Addams rejected but he could not abandon. “[T]he unity [is not] the reconciliation of opposites,” he explained to his wife. Rather, “opposites [are] the unity in its growth.” But he knew he had avoided a real point of disagreement between them. He admitted to Alice, “[M]y pride of intellect . . . revolts at thinking” that conflict between ideas or institutions “has no functional value.” His and Addams’s disagreement—was it an antagonism?—was real, and in discovering it, the two had taken each other’s measure. Addams’s principled vision and spiritual charisma had met their match in the cool machinery of John Dewey’s powerful mind.
Two days later Dewey sent Addams a short note in which he retracted part of what he had said. He was now willing to agree, he wrote, that a person’s expectation of opposition was in and of itself not good and even that it caused antagonism to arise. “[T]he first antagonism always come[s] back to the assumption that there is or may be antagonism,” he wrote, and this assumption is “bad.” In other words, he was agreeing with Addams’s points that antagonism was evil and that it always began in the feelings or ideas of the individual. Dewey did not, however, retract his claim that conflict had its historical uses. These were, as he had said, to appreciate truth and to be conscious of its growth, that is, its spread. He was speaking as the Christian idealist he still was—someone who saw truth as God’s revelation. Antagonism, in other words, helped bring man to see the truth, and this was its value.
When Dewey agreed with Addams that opposition originated in individual feelings, he was joining her in rejecting the usual view that objective differences justified antagonism. This was the view that unions held. Workers believed that the antagonism between themselves and employers arose because workers lacked something real and necessary: sufficient negotiating power in the relationship. In denying this, Dewey and Addams were being, in the simplest sense, determinedly apolitical. Addams, despite her recent involvement with strikes and politics, still refused to believe that actual conditions could provide legitimate grounds for opposition. Her idealism, expressed in her fierce commitment to cooperation, Christian love, nonresistance, and unity, stood like a wall preventing her from seeing that power, as much as personal feelings, soured human relations. A strong mind is both an asset and a liability.
That fall, Hull House, returning to normalcy, resumed its rich schedule of classes, club meetings, lectures, and exhibits. As usual, Addams was seriously worried about the settlement’s finances. The size of the total deficit for the year is unknown, but her awareness that the household operating account was $888 in arrears surfaced in a letter to Mary Rozet Smith. As she had in previous years, Addams paid for part of the debt herself (how much is unclear; the documentation does not survive). Smith, among others, sent a generous check. “It gives me a lump in my throat,” Addams wrote her in appreciation, “to think of the dollars you have put . . . into the . . . prosaic debt when there are so many more interesting things you might have done and wanted to do.” Aware of the delicacy of asking a close friend for donations, Addams sounded a note of regret. “It grieves me a little lest our friendship should be jarred by all these money transactions.”
As before, the residents were in the dark about the state of Hull House’s finances. Despite her intentions to keep them informed, Addams had convened no Residents’ Meeting between April and October, perhaps because the strike and Mary’s illness had absorbed so much of her attention. Finally, in early November she and the residents had “a long solemn talk,” as she wrote Mary. She had laid “before folks” the full situation and asked them “for help and suggestions.” And she had vowed that she would “never . . .let things get so bad again” before she consulted them. “I hope,” she told Mary, “we are going to be more intimate and mutually responsible on the financial side.”
Addams was renewed in her determination for two reasons. First, there was the problem of her own worsening finances. Since July she had assumed the new financial burden, apparently without any help from Alice, of raising Mary’s two younger children. Second, there was her increasing fear, as the depression deepened and donations dropped because of Hull House’s involvement with the Pullman Strike, that her personal liability for Hull House’s debts could literally put her in the Dunning poorhouse. Meanwhile, she pushed herself to speak as often as she could to earn lecture fees. In October she reported to Alice that she had given five talks in one week. In November, she gave lectures in three states—Illinois, Wisconsin, and Michigan. It was all that she could think of to do: to work harder.
Hull House was doing well enough by other measures. The residents’ group continued to grow. Despite the house’s recently stained reputation and the risky state of its finances, five new residents arrived, all women, bringing the total to twenty. For 1894–95, the residents had decided, probably at Addams’s urging, to limit the size of the residents’ group to that number. There was now a good mix of old and new, with the majority, like Starr, Lathrop, and Kelley, having been there two years or more. The number of men had shrunk from seven to two, but in a few years it would be back to five. Addams, as always, took her greatest pleasure in the effervescent dailiness of it all. The settlement was first and foremost something “organic,” a “way of life,” she told an audience at the University of Chicago that fall.
Furthermore, the residents’ book of maps was moving toward completion. Conceived originally as a way to publicize some of the data about the neighborhood from the Department of Labor study, it had expanded to include a collection of essays on various related subjects and had acquired a sober New York publisher, Thomas Y. Crowell and Company, and a glorious title, Hull-House Maps and Papers: A Presentation of Nationalities and Wages in a Congested District of Chicago, Together with Comments and Essays on Problems Growing Out of the Social Conditions. The byline, it was agreed, would read “Residents of Hull-House.” It would be published in March 1895. Five of the essays, those by Kelley, Lathrop, Starr, and Addams, were much-expanded versions of the presentations they had made at the Congress on Social Settlements the previous year. Five others rounded out the collection. These dealt with the Bohemians, the Italians, and the Jews of the neighborhood, the maps, and the wages and expenses of cloakmakers in Chicago and New York. The maps were the book’s original inspiration and its most extravagant feature. Printed on oiled paper, folded and tucked into special slots in the book’s front and back covers, they displayed, block by block and in graphic, color-coded detail, where people of different nationalities lived in the ward and the range of wages they earned.
Addams, happy to be back in the editor’s chair, wrote the prefatory note, edited essays, and wrote the meaty appendix that described the settlement’s activities and programs. The book’s title was likely also her handiwork. Descriptive, indeed, exhaustive, it was the sort of title in which she specialized. As she once admitted to Weber Linn, “I am very poor at titles.” The book was very close to her heart. When she wrote Henry Demarest Lloyd on December 1 to thank him for sending the house a copy of his Wealth Against Commonwealth, she observed, “I have a great deal of respect for anyone who writes a good book.” After Maps was published she noted to those to whom she sent copies, “We are very proud of the appearance of the child.”
Jane Addams’s contribution to Maps was her essay “The Settlement as a Factor in the Labor Movement.” Her intention was to give a history of Hull House’s relations with unions as a sort of case study and to examine why and how settlements should be engaged with the labor movement. The piece is straightforward in tone, nuanced, not polemical. In it she settles fully into the even-handed interpretive role she had first attempted in her speech on domestic servants eighteen months earlier.
But the essay also burns with the painful knowledge she gained from the Pullman Strike. She wrestles with the tension between the labor movement’s loyalty to its class interests and her own vision of a classless, universalized, democratic society. And she probes the philosophical question she and Dewey had been debating: Are (class) antagonisms inevitable? Are antagonisms useful? The resulting essay was the most in-depth exploration of the subject of class that Addams would ever write. She was trying to find her way back from the edge of the cliff—class warfare—to which the Pullman Strike had brought her and the nation.
On the question of what the strike accomplished, her thoughts had shifted somewhat. Although she had told Dewey that antagonism was always useless, she argues in “The Settlement as a Factor” that strikes, which certainly were a form of antagonism, can be useful and necessary. Strikes are often “the only method of arresting attention to [the workers’] demands”; they also offer the permanent benefits of strengthening the ties of “brotherhood” among the strikers and producing (at least when successful) a more “democratic” relation between workers and their employer. Perhaps Dewey had been more persuasive than he realized.
She still felt, however, that personal emotion was the main cause of antagonisms, including strikes. She admits that labor has a responsibility to fight for the interests of the working people (that is, more leisure and wealth) but only because achieving them would help the workingman feel less unjustly treated. She charges labor with storing up “grudges” against “capitalists” and calls this “selfish.” She ignores the question of whether low wages and long hours are fair. Social justice is not a touchstone for her arguments in this essay.
Instead, Addams stresses the ideal she had emphasized since coming to Chicago: that of a society united by its sense of common humanity. She writes prophetically of “the larger solidarity which includes labor and capital” and that is based on a “notion of universal kinship” and “the common good.” One might read into her argument the conclusion of social justice, yet the principle remains uninvoked. Instead, Addams stays focused on feelings. She is calling for sympathy for others’ suffering, not for a change in workers’ physical condition.
Addams disapproves of capitalism but not because of its effects on the workers. The moral failings of the individual capitalist trouble her. She slips in a rather radical quotation by an unnamed writer: “The crucial question of the time is, ‘In what attitude stand ye toward the present industrial system? Are you content that greed . . . shall rule your business life, while in your family and social life you live so differently? Shall Christianity have no play in trade?’” In one place, although only one place, she takes the workers’ perspective and refers to capitalists as “the power-holding classes.” (Here at last was a glancing nod toward power.) The closest she comes to making a social justice argument is in a sentence whose Marxist flavor, like the previous phrase, suggests Florence Kelley’s influence, yet it, too, retains Addams’s characteristic emphasis on feelings. She hopes there will come a time “when no factory child in Chicago can be overworked and underpaid without a protest from all good citizens, capitalist and proletarian.” While Debs had wanted to arouse middle-class sympathies as a way to improve the working conditions of the Pullman laborers, Addams wanted the labor movement to cause society to be more unified in its sympathies. Their means and ends were reversed.
Addams found the idea that labor’s organizing efforts could benefit society compelling. “If we can accept” that possibility, she adds, then the labor movement is “an ethical movement.” The claim was a startling one for her to make. It seems the strike had shown her at least one moral dimension to the workers’ struggle. The negative had become the potentially positive. Instead of seeing labor’s union organizing as a symptom of society’s moral decay, as she once had and many other middle-class people still did, she was considering the hypothesis that labor organizing was a sign of society’s moral redemption.
The Pullman Strike also cracked her moral absolutism. In “The Settlement as a Factor” she argues for the first time that no person or group can be absolutely right or absolutely wrong. “Life teaches us,” she writes, that there is “nothing more inevitable than that right and wrong are most confusingly mixed; that the blackest wrong [can be] within our own motives.” When we triumph, she adds, we bear “the weight of self-righteousness.” In other words, no one—not unions and working-class people, not businesses and middle-class people, not settlement workers and other middle-class reformers—could claim to hold or ever could hold the highest moral ground. The absolute right did not exist.
For Addams, rejecting moral absolutism was a revolutionary act. She had long believed that a single true, moral way existed and that a person, in theory, could find it. This conviction was her paternal inheritance (one recalls her father’s Christian perfectionism) and her social-cultural inheritance. Moral absolutism was the rock on which her confident Anglo-American culture was grounded. (It is also the belief that most sets the nineteenth century in the West apart from the twenty-first century.) Now she was abandoning that belief. In the territory of her mind, tectonic plates were shifting and a new land mass of moral complexity was arising.
In the fall of 1894, as she was writing “The Settlement as a Factor,” this new perspective became her favorite theme. In October she warned the residents of another newly opened settlement, Chicago Commons, “not to be alarmed,” one resident recalled, “if we found our ethical standards broadening as we became better acquainted with the real facts of the lives of our neighbors.” That same month, speaking to supporters of the University of Chicago Settlement, she hinted again at the dangers of moral absolutism. Do not, she said, seek “to do good.” Instead, simply try to understand life. And when a group of young men from the neighborhood told her they proposed to travel to New York City that fall to help end political corruption and spoke disdainfully of those who were corrupt, she admonished them against believing that they were purer than others and asked them if they knew what harm they did in assuming that they were right and others were wrong.
What had she seen during the Pullman Strike that led to this new awareness? She had seen the destructive force of George Pullman’s moral self-righteousness. It seemed to her that his lack of self-doubt, that is, his unwillingness to negotiate, had produced a national tragedy; his behavior and its consequences had revealed the evil inherent in moral absolutism. In Twenty Years she writes of how, in the midst of the strike’s worst days, as she sat by her dying sister’s bedside, she was thinking about “that touch of self-righteousness which makes the spirit of forgiveness well-nigh impossible.”
She grounded her rejection of absolute truth in her experience. “Life teaches us,” she wrote. This was as revolutionary for her as the decision itself. In “Subjective Necessity” she had embraced experience as a positive teacher in a practical way. Here she was allowing experience to shape her ethics. The further implication was that ethics might evolve, but the point is not argued in “The Settlement as a Factor.” Still, in her eyes ideas no longer had the authority to establish truth that they once had. Her pragmatism was strengthening, but it had not yet blossomed into a full-fledged theory of truth.
The Pullman Strike taught her in a compelling way that moral absolutism was dangerous, but she had been troubled by its dangers before. She had made her own mistakes and, apparently, a whole train of them related to self-righteousness. The details have gone unrecorded, but they made her ready to understand, and not afterwards forget, something James O. Huntington, the Episcopal priest who had shared the podium with her at the Plymouth conference, had said in a speech at Hull House the year before the strike. “I once heard Father Huntington say,” she wrote in 1901, that it is “the essence of immorality to make an exception of one’s self.” She elaborated. “[T]o consider one’s self as . . . unlike the rank and file is to walk straight into the pit of self-righteousness.” As Addams interpreted Huntington, he meant there was no moral justification for believing in one’s superiority, not even a belief that one was right and the others wrong.
A deeply held, central moral belief is like a tent pole: it influences the shape of the entire tent that is a person’s thought. A new central belief is like a taller or shorter tent pole; it requires the tent to take a new shape. The tent stakes must be moved. Jane Addams had decided there was no such thing as something or someone that was purely right or purely wrong, but the rest of her thought had yet to be adjusted. Among other things, she still believed that a person of high culture was superior to those who lacked it; that is, she still believed that cultural accomplishment could justify self-righteousness.
Some hints of this can be found in the adjectives Addams attaches to democracy in “The Settlement as a Factor.” After proposing that the workers might lead the ethical movement of democracy, she anticipates the fear her readers might feel at this idea. “We must learn to trust our democracy,” she writes, “giant-like and threatening as it may appear in its uncouth strength and untried applications.” Addams was edging toward trusting that working-class people, people without the cultural training in “the best,” could set their own course. Such trust, should she embrace it, would require her to go beyond her old ideas—her enthusiasm for egalitarian social etiquette, for the principle of cooperation, and for the ideal of a unified humanity. Not feeling such trust yet, she was unable to give working people’s power a ringing endorsement. The essay is therefore full of warnings about the negative aspects of the labor movement.
These radical claims—that the labor movement was or could become ethical, that the movement was engaged in a struggle that advanced society morally, that capitalists were greedy and ethically compromised, and that there was no absolute right or wrong—opened up a number of complicated issues. Addams decided she needed to write a separate essay—would it be a speech?—to make these points more fully and to make them explicitly, as honesty compelled her to do, about the Pullman strike. Sometime in 1894, she began to write it. A page from the first draft, dated that year, survives with the title “A Modern Tragedy.” In its first paragraph she writes that, because we think of ourselves as modern, “it is hard to remember that the same old human passions persist” and can often lead to “tragedy.” She invited her readers to view “one of these great tragedies” from “the historic perspective,” to seek an “attitude of mental detachment” and “stand aside from our personal prejudices.” Still grieving over what had happened, Addams was hoping that the wisdom of culture, of the humanities, of Greek and Shakespearean tragedy could give her the comfort of emotional distance. But she had pulled too far back. The opening was so blandly vague and philosophical that no one could tell what the essay was about. She set the piece aside.
To read more about Citizen, click here.
A perfect fish in the evolutionary sense, the broadbill swordfish derives its name from its distinctive bill—much longer and wider than the bill of any other billfish—which is flattened into the sword we all recognize. And though the majesty and allure of this warrior fish has commanded much attention—from adventurous sportfishers eager to land one to ravenous diners eager to taste one—no one has yet been bold enough to truly take on the swordfish as a biographer. Who better to do so than Richard Ellis, a master of marine natural history? Swordfish: A Biography of the Ocean Gladiator is his masterly ode to this mighty fighter.
The swordfish, whose scientific name means “gladiator,” can take on anyone and anything, including ships, boats, sharks, submarines, divers, and whales, and in this book Ellis regales us with tales of its vitality and strength. Ellis makes it easy to understand why it has inspired so many to take up the challenge of epic sportfishing battles as well as the longline fishing expeditions recounted by writers such as Linda Greenlaw and Sebastian Junger. Ellis shows us how the bill is used for defense—contrary to popular opinion, it is not used to spear prey but to slash and debilitate, like a skillful saber fencer. Swordfish, he explains, hunt at the surface as well as thousands of feet down in the depths, and like tuna and some sharks, have an unusual circulatory system that gives them a significant advantage over their prey, no matter the depth in which they hunt. Their adaptability enables them to swim in waters the world over—tropical, temperate, and sometimes cold—and the largest ever caught on rod and reel was landed in Chile in 1953, weighing in at 1,182 pounds (and this heavyweight fighter, like all the largest swordfish, was a female).
Ellis’s detailed and fascinating, fact-filled biography takes us behind the swordfish’s huge, cornflower-blue eyes and provides a complete history of the fish from prehistoric fossils to its present-day endangerment, as our taste for swordfish has had a drastic effect on their population the world over. Throughout, the book is graced with many of Ellis’s own drawings and paintings, which capture the allure of the fish and bring its splendor and power to life for armchair fishermen and landlocked readers alike.
To download your free copy, click here
An excerpt from Marked: Race, Crime, and Finding Work in an Era of Mass Incarceration
by Devah Pager
At the start of the 1970s, incarceration appeared to be a practice in decline. As incarceration came under criticism for its overuse and detrimental effects, practitioners and reformers looked to community-based alternatives as a more promising strategy for managing criminal offenders. A 1967 report published by the President’s Commission on Law Enforcement and Administration of Justice concluded: “Life in many institutions is at best barren and futile, at worst unspeakably brutal and degrading. The conditions in which [prisoners] live are the poorest possible preparation for their successful reentry into society, and often merely reinforces in them a pattern of manipulation or destructiveness.” The commission’s primary recommendation involved developing “more extensive community programs providing special, intensive treatment as an alternative to institutionalization for both juvenile and adult offenders.” Echoing this sentiment, a 1973 report by the National Advisory Commission on Criminal Justice Standards and Goals took a strong stand against the use of incarceration. “The prison, the reformatory, and the jail have achieved only a shocking record of failure. There is overwhelming evidence that these institutions create crime rather than prevent it.” The commission firmly recommended that “no new institutions for adults should be built and existing institutions for juveniles should be closed.” Following what appeared to be the current of the time, historian David Rothman in 1971 confidently proclaimed, “We have been gradually escaping from institutional responses and one can foresee the period when incarceration will be used still more rarely than it is today.”
Quite opposite to the predictions of the time, incarceration began a steady ascent, with prison populations expanding sevenfold over the next three decades. Today the United States boasts the highest rate of incarceration in the world, with more than two million individuals currently behind bars. Characterized by a rejection of the ideals of rehabilitation and an emphasis on “tough on crime” policies, the practice of punishment over the past thirty years has taken a radically different turn from earlier periods in history. Reflecting the stark shift in orientation, the U.S. Department of Justice released a report in 1992 stating “there is no better way to reduce crime than to identify, target, and incapacitate those hardened criminals who commit staggering numbers of violent crimes whenever they are on the streets.” Far removed from earlier calls for decarceration and community supervision, recent crime policy has emphasized containment and harsh punishment as a primary strategy of crime control.
The revolving door
Since the wave of tough-on-crime rhetoric spread throughout the nation in the early 1970s, the dominant concern of crime policy has been getting criminals off the streets. Surprisingly little thought, however, has gone into developing a longer-term strategy for coping with criminal offenders. With more than 95 percent of those incarcerated eventually released, the problems of offender management do not end at the prison walls. According to one estimate, there are currently more than twelve million ex-felons in the United States, representing roughly 9 percent of the male working-age population. The yearly influx of returning inmates is double the current number of legal immigrants entering the United States from Mexico, Central America, and South America combined.
Despite the vast numbers of inmates leaving prison each year, little provision has been made for their release; as a result, many do not remain out for long. Of those recently released, nearly two-thirds will be charged with new crimes, and more than 40 percent will return to prison within three years. In fact, the revolving door of the prison has now become its own source of growth, with the faces of former inmates increasingly represented among annual admissions to prison. By the end of the 1990s, more than a third of those entering state prison had been there before.
The revolving door of the prison is fueled, in part, by the social contexts in which crime flourishes. Poor neighborhoods, limited opportunities, broken families, and overburdened schools each contribute to the onset of criminal activity among youth and its persistence into early adulthood. But even beyond these contributing factors, evidence suggests that experience with the criminal justice system in itself has adverse consequences for long-term outcomes. In particular, incarceration is associated with limited future employment opportunities and earnings potential, which themselves are among the strongest predictors of desistance from crime. Given the immense barriers to successful reentry, it is little wonder that such a high proportion of those released from prison quickly make their way back through the prison’s revolving door.
The criminalization of young, black men
As the cycle of incarceration and release continues, an ever greater number of young men face prison as an expected marker of adulthood. But the expansive reach of the criminal justice system has not affected all groups equally. More than any other group, African Americans have felt the impact of the prison boom, comprising more than 40 percent of the current prison population while making up just 12 percent of the U.S. population. At any given time, roughly 12 percent of all young black men between the ages of twenty-five and twenty-nine are behind bars, compared to less than 2 percent of white men in the same age group; roughly a third are under criminal justice supervision. Over the course of a lifetime, nearly one in three young black men–and well over half of young black high school dropouts–will spend some time in prison. According to these estimates, young black men are more likely to go to prison than to attend college, serve in the military, or, in the case of high school dropouts, be in the labor market. Prison is no longer a rare or extreme event among our nation’s most marginalized groups. Rather it has now become a normal and anticipated marker in the transition to adulthood.
There is reason to believe that the consequences of these trends extend well beyond the prison walls, with widespread assumptions about the criminal tendencies among blacks affecting far more than those actually engaged in crime. Blacks in this country have long been regarded with suspicion and fear; but unlike progressive trends in other racial attitudes, associations between race and crime have changed little in recent years. Survey respondents consistently rate blacks as more prone to violence than any other American racial or ethnic group, with the stereotype of aggressiveness and violence most frequently endorsed in ratings of African Americans. The stereotype of blacks as criminals is deeply embedded in the collective consciousness of white Americans, irrespective of the perceiver’s level of prejudice or personal beliefs.
While it would be impossible to trace the source of contemporary racial stereotypes to any one factor, the disproportionate growth of the criminal justice system in the lives of young black men–and the corresponding media coverage of this phenomenon, which presents an even more skewed representation–has likely played an important role. Experimental research shows that exposure to news coverage of a violent incident committed by a black perpetrator not only increases punitive attitudes about crime but further increases negative attitudes about blacks generally. The more exposure we have to images of blacks in custody or behind bars, the stronger our expectations become regarding the race of assailants or the criminal tendencies of black strangers.
The consequences of mass incarceration then may extend far beyond the costs to the individual bodies behind bars, and to the families that are disrupted or the communities whose residents cycle in and out. The criminal justice system may itself legitimate and reinforce deeply embedded racial stereotypes, contributing to the persistent chasm in this society between black and white.
The credentialing of stigma
The phenomenon of mass incarceration has filtered into the public consciousness through cycles of media coverage and political debates. But a more lasting source of information detailing the scope and reach of the criminal justice system is generated internally by state courts and departments of corrections. For each individual processed through the criminal justice system, police records, court documents, and corrections databases detail dates of arrest, charges, conviction, and terms of incarceration. Most states make these records publicly available, often through on-line repositories, accessible to employers, landlords, creditors, and other interested parties. With increasing numbers of occupations, public services, and other social goods becoming off-limits to ex-offenders, these records can be used as the official basis for eligibility determination or exclusion. The state in this way serves as a credentialing institution, providing official and public certification of those among us who have been convicted of wrongdoing. The “credential” of a criminal record, like educational or professional credentials, constitutes a formal and enduring classification of social status, which can be used to regulate access and opportunity across numerous social, economic, and political domains.
Within the employment domain, the criminal credential has indeed become a salient marker for employers, with increasing numbers using background checks to screen out undesirable applicants. The majority of employers claim that they would not knowingly hire an applicant with a criminal background. These employers appear less concerned about specific information conveyed by a criminal conviction and its bearing on a particular job, but rather view this credential as an indicator of general employability or trustworthiness. Well beyond the single incident at its origin, the credential comes to stand for a broader internal disposition.
The power of the credential lies in its recognition as an official and legitimate means of evaluating and classifying individuals. The negative credential of a criminal record represents one such tool, offering formal certification of the offenders among us and official notice of those demographic groups most commonly implicated. To understand fully the impact of this negative credential, however, we must rely on more than speculation as to when and how these official labels are invoked as the basis for enabling or denying opportunity. Because credentials are often highly correlated with other indicators of social status or stigma (e.g., race, gender, class), we must examine their direct and independent impact. In addition, credentials may affect certain groups differently than others, with the official marker of criminality carrying more or less stigma depending on the race of its bearer. As increasing numbers of young men are marked by their contact with the criminal justice system, it becomes a critical priority to understand the costs and consequences of this now prevalent form of negative credential.
What do we know about the consequences of incarceration?
Despite the vast political and financial resources that have been mobilized toward prison expansion, very little systematic attention has been focused on the potential problems posed by the large and increasing number of inmates being released each year. A snapshot of ex-offenders one year after release reveals a rocky path of reintegration, with rates of joblessness in excess of 75 percent and rates of rearrest close to 45 percent. But one simple question remains unanswered: Are the employment problems of ex-offenders caused by their offender status, or does this population simply comprise a group of individuals who were never very successful at mainstream involvement in the first place? This question is important, for its answer points to one of two very different sets of policy recommendations. To the extent that the problems of prisoner reentry reflect the challenges of a population poorly equipped for conventional society, our policies would be best targeted toward some combination of treatment, training, and, at the extreme, containment. If, on the other hand, the problems of prisoner reentry are to some degree caused by contact with the criminal justice system itself, then a closer examination of the (unintended) consequences of America’s war on crime may be warranted. Establishing the nature of the relationship between incarceration and subsequent outcomes, then, is critical to developing strategies best suited to address this rapidly expanding ex-offender population.
In an attempt to resolve the substantive and methodological questions surrounding the consequences of incarceration, this book provides both an experimental and an observational approach to studying the barriers to employment for individuals with criminal records. The first stage observes the experiences of black and white job seekers with criminal records in comparison to equally qualified nonoffenders. In the second stage, I turn to the perspectives of employers in order to better understand the concerns that underlie their hiring decisions. Overall, this study represents an accounting of trends that have gone largely unnoticed or underappreciated by academics, policy makers, and the general public. After thirty years of prison expansion, only recently has broad attention turned to the problems of prisoner reentry in an era of mass incarceration. By studying the ways in which the mark of a criminal record shapes and constrains subsequent employment opportunities, this book sheds light on a powerful, emergent mechanism of labor market stratification. Further, this analysis recognizes that an investigation of incarceration in the contemporary United States would be inadequate without careful attention to the dynamics of race. As described earlier, there is a strong link between race and crime, both real and perceived, and yet the implications of this relationship remain poorly understood. This study takes a hard look at the labor market experiences of young black men, both with and without criminal pasts. In doing so, we gain a close-up view of the powerful role race continues to play in shaping the labor market opportunities available to young men. The United States remains sharply divided along color lines. Understanding the mechanisms that perpetuate these divisions represents a crucial step toward their resolution.
To read more about Marked, click here.
To follow up on yesterday’s post, here’s an excerpt from Eric Weisbard’s Top 40 Democracy: The Rival Mainstreams of American Music.
“The Logic of Formats”
Nearly every history of Top 40 launches from an anecdote about how radio station manager Todd Storz came up with the idea sometime between World War II and the early 1950s, watching with friends in a bar in Omaha as customers repeatedly punched up the same few songs on the jukebox. A waitress, after hearing the tunes for hours, paid for more listens, though she was unable to explain herself. “When they asked why, she replied, simply: ‘I like ’em.’ ” As Storz said on another occasion, “Why this should be, I don’t know. But I saw waitresses do this time after time.” He resolved to program a radio station following the same principles: the hits and nothing but the hits.
Storz’s aha moment has much to tell about Top 40’s complicated relationship to musical diversity. He might be seen as an entrepreneur with his ear to the ground, like the 1920s furniture salesman who insisted hillbilly music be recorded or the 1970s Fire Island dancer who created remixes to extend the beat. Or he could be viewed as a schlockmeister lowering standards for an inarticulate public, especially women —so often conceived as mass-cultural dupes. Though sponsored broadcasting had been part of radio in America, unlike much of the rest of the world, since its beginnings, Top 40 raised hackles in a postwar era concerned about the numbing effects of mass culture. “We become a jukebox without lights,” the Radio Advertising Bureau’s Kevin Sweeney complained. Time called Storz the “King of the Giveaway” and complained of broadcasting “well larded with commercials.”
Storz and those who followed answered demands that licensed stations serve a communal good by calling playlist catholicity a democracy of sound: “If the public suddenly showed a preference for Chinese music, we would play it . . . I do not believe there is any such thing as better or inferior music.” Top 40 programmer Chuck Blore, responding to charges that formats stifled creative DJs, wrote, “He may not be as free to inflict his musical taste on the public, but now, and rightfully, I think, the public dictates the popular music of the day.” Mike Joseph boasted, “When I first go into a market, I go into every record store personally. I’ll spend up to three weeks doing interviews, with an average of forty-five minutes each. And I get every single thing I can get: the sales on every configuration, every demo for every single, the gender of every buyer, the race of every buyer. . . . I follow the audience flow of the market around the clock.” Ascertaining public taste became a matter of extravagant claim for these professional intermediaries: broadcasting divided into “dayparts” to impact commuters, housewives, or students.
Complicating the tension between seeing formats as pandering or as deferring to popular taste was a formal quality that Top 40 also shared with the jukebox: it could encompass many varieties of hits or group a subset for a defined public. This duality blurred categories we often keep separate. American show business grew from blackface minstrelsy and its performative rather than innate notion of identity —pop as striking a pose, animating a mask, putting on style or a musical. More folk and genre-derived notions of group identity, by contrast, led to the authenticity-based categories of rock, soul, hip-hop, and country. Top 40 formats drew on both modes, in constantly recalibrated proportions. And in doing so, the logic of formats, especially the 1970s format system that assimilated genres, unsettled notions of real and fake music.
Go back to Storz’s jukebox. In the late 1930s, jukeboxes revived a record business collapsed by free music on radio and the Great Depression. Jack Kapp in particular, working for the US branch of British-owned Decca, tailored the records he handled to boom from the pack: swing jazz dance beats, slangy vernacular from black urban culture, and significant sexual frankness. This capitalized on qualities inherent in recordings, which separated sound from its sources in place, time, and community, allowing both new artifice — one did not know where the music came from, exactly — and new realism: one might value, permanently, the warble of a certain voice, suggesting a certain origin. Ella Fitzgerald, eroticizing the nursery rhyme “A-Tisket, A-Tasket” in 1938 on Decca, with Chick Webb’s band behind her, could bring more than a hint of Harlem’s Savoy Ballroom to a place like Omaha, as jukeboxes helped instill a national youth culture. Other jukeboxes highlighted the cheating songs of honky-tonk country or partying R&B: urban electrifications of once-rural sounds. By World War II, pop was as much these brash cross-genre jukebox blends as it was the Broadway-Hollywood-network radio axis promoting Irving Berlin’s genteel “White Christmas.”
Todd Storz’s notion of Top 40 put the jukebox on the radio. Records had not always been a radio staple. Syndicated network stations avoided “canned music”; record labels feared the loss of sales and often stamped “Not Licensed for Radio Broadcast” on releases. So the shift that followed television’s takeover of original network programming was twofold: local radio broadcasting that relied on a premade consumer product. Since there were many more records to choose from than network shows, localized Top 40 fed a broader trend that allowed an entrepreneurial capitalism — independent record-label owners such as Sam Phillips of Sun Records, synergists such as American Bandstand host Dick Clark, or station managers such as Storz — to compete with corporations like William Paley’s Columbia Broadcasting System, the so-called Tiffany Network, which included Columbia Records. The result, in part, was rock and roll, which had emerged sonically by the late 1940s but needed the Top 40 system to become dominant with young 45 RPM singles buyers by the end of the 1950s.
An objection immediately presents itself, one that will recur throughout this study: Was Top 40 rock and roll at all, or a betrayal of the rockabilly wildness that Sam Phillips’s roster embodied for the fashioning of safe teen idols by Dick Clark? Did the format destroy the genre? The best answer interrogates the question: Didn’t the commerce-first pragmatism of formatting, with its weak boundaries, free performers and fans inhibited by tighter genre codes? For Susan Douglas, the girl group records of the early 1960s made possible by Top 40 defy critics who claim that rock died between Elvis Presley’s army induction and the arrival of the Beatles. Yes, hits like “Leader of the Pack” were created by others, often men, and were thoroughly commercial. Yes, they pulled punches on gender roles even as they encouraged girls to identify with young male rebels. But they “gave voice to all the warring selves inside us struggling.” White girls admired black girls, just as falsetto harmonizers like the Beach Boys allowed girls singing along to assume male roles in “nothing less than musical cross-dressing.” Top 40’s “euphoria of commercialism,” Douglas argues, did more than push product; “tens of millions of young girls started feeling, at the same time, that they, as a generation, would not be trapped.” Top 40, like the jukebox before it and MTV afterward, channeled cultural democracy: spread it but contained it within a regulated, commercialized path.
We can go back further than jukebox juries becoming American Bandstands. Ambiguities between democratic culture and commodification are familiar within cultural history. As Jean-Christophe Agnew points out in his study Worlds Apart, the theater and the marketplace have been inextricable for centuries, caught up as capitalism developed in “the fundamental problematic of a placeless market: the problems of identity, intentionality, accountability, transparency, and reciprocity that the pursuit of commensurability invariably introduces into that universe of particulate human meanings we call culture.” Agnew’s history ranges from Shakespeare to Melville’s The Confidence-Man, published in 1857. At that point in American popular culture, white entertainers often performed in blackface, jumping Jim Crow and then singing a plaintive “Ethiopian” melody by Stephen Foster. Eric Lott’s book on minstrelsy gives this racial mimicry a handy catchphrase: Love and Theft. Tarred-up actors, giddy with the new freedoms of a white man’s democracy but threatened by industrial “wage slavery,” embodied cartoonish blacks for social comment and anti-bourgeois rudeness. Amid vicious racial stereotyping could be found performances that respectable theater disavowed. Referring to a popular song of the era, typically performed in drag, the New York Tribune wrote in 1853, “ ‘Lucy Long’ was sung by a white negro as a male female danced.” And because of minstrelsy’s fixation on blackness, African Americans after the Civil War found an entry of sorts into entertainment: as songwriter W. C. Handy unceremoniously put it, “The best talent of that generation came down the same drain. The composers, the singers, the musicians, the speakers, the stage performers — the minstrel shows got them all.” If girl groups showcase liberating possibility in commercial constraints, minstrelsy challenges unreflective celebration.
Entertainment, as it grew into the brashest industry of modernizing America, fused selling and singing as a matter of orthodoxy. The three-act minstrel show stamped formats on show business early on, with its song-and-dance opening, variety-act olio, and dramatic afterpiece, its interlocutors and end men. Such structures later migrated to variety, vaudeville, and Broadway. After the 1890s, tunes were supplied by Tin Pan Alley sheet-music publishers, who professionalized formula songwriting and invented “payola” — ethically dubious song plugging. These were song factories, unsentimental about creativity, yet the evocation of cheap tinniness in the name was deliberately outrageous, announcing the arrival of new populations — Siberian-born Irving Berlin, for example, the Jew who wrote “White Christmas.” Tin Pan Alley’s strictures of form but multiplicity of identity paved the way for the Brill Building teams who wrote the girl group songs, the Motown Records approach to mainstreaming African American hits, and even millennial hitmakers from Korean “K-Pop” to Sweden’s Cheiron Studios. Advertisers, Timothy Taylor’s history demonstrates, used popular music attitude as early as they could — sheet-music parodies, jingles, and the showmanship of radio hosts like crooner Rudy Vallee designed to give products “ginger, pep, sparkle, and snap.”
The Lucky Strike Hit Parade, a Top 40 forerunner with in-house vocalists performing the leading tunes, was “music for advertising’s sake,” its conductor said in 1941.
Radio, which arrived in the 1920s, was pushed away from a BBC model and toward what Thomas Streeter calls “corporate liberalism” by leaders like Herbert Hoover, who declared as commerce secretary, “We should not imitate some of our foreign colleagues with governmentally controlled broadcasting supported by a tax upon the listener.” In the years after the 1927 Radio Act, the medium consolidated around sponsor-supported syndicated network shows, successfully making radio present by 1940 in 86 percent of American homes and some 6.5 million cars, with average listening of four hours a day. The programming, initially local, now fused the topsy-turvy theatrics of vaudeville and minstrelsy —Amos ’n’ Andy ranked for years with the most popular programs —with love songs and soap operas aimed at the feminized intimacy of the bourgeois parlor. Radio’s mass orientation meant immigrants used it to embrace a mainstream American identity; women confessed sexual feelings for the likes of Vallee as part of the bushels of letters sent to favored broadcasters; and Vox Pop invented the “man on the street” interview, connecting radio’s commercialized public with more traditional political discourse and the Depression era’s documentary impulse. While radio scholars have rejected the view of an authoritarian, manipulative “culture industry,” classically associated with writers such as the Frankfurt School’s Theodor Adorno, historian Elena Razlogova offers an important qualification: “by the 1940s both commercial broadcasters and empirical social scientists . . . shared Adorno’s belief in expert authority and passive emotional listening.” Those most skeptical of mass culture often worked inside the beast.
Each network radio program had a format. So, for example, Kate Smith, returning for a thirteenth radio season in 1942, offered a three-act structure within each broadcast: a song and comedy slot, ad, drama, ad, and finally a segment devoted to patriotism —fitting for the singer of “God Bless America.” She was said by Billboard, writing with the slangy prose that characterized knowing and not fully genteel entertainment professionals, to have a show that “retains the format which, tho often heavy handed and obvious, is glovefit to keep the tremendous number of listeners it has acquired and do a terrific selling job for the sponsor”— General Foods. The trade journal insisted, “Next to a vocal personality, a band on the air needs a format —an idea, a framework of showmanship.”
Top 40 formats addressed the same need to fit broadcast, advertiser, and public, but through a different paradigm: what one branded with an on-air jukebox approach was now the radio station itself, to multiple sponsors. Early on, Top 40s competed with nonformat stations, the “full service” AM’s that relied on avuncular announcers with years of experience, in-house news, community bulletins, and songs used as filler. As formats came to dominate, with even news and talk stations formatted for consistent sound, competing sonic configurations hailed different demographics. But no format was pure: to secure audience share in a crowded market, a programmer might emphasize a portion of a format (Quiet Storm R&B) or blur formats (country crossed with easy listening). Subcategories proliferated, creating what a 1978 how-to book called “the radio format conundrum.” The authors, listing biz slang along the lines of MOR, Good Music, and Chicken Rock, explained, “Words are coined, distorted and mutilated, as the programmer looks for ways to label or tag a format, a piece of music, a frame of mind.”
A framework of showmanship in 1944 had become a frame of mind in 1978. Formats began as theatrical structures but evolved into marketing devices — efforts to convince sponsors of the link between a mediated product and its never fully quantifiable audience. Formats did not idealize culture; they sold it. They structured eclecticism rather than imposing aesthetic values. It was the customer’s money —a democracy of whatever moved people.
The Counterlogic of Genres
At about the same time Todd Storz watched the action at a jukebox in Omaha, sociologist David Riesman was conducting in-depth interviews with young music listeners. Most, he found, were fans of what was popular— uncritical. But a minority of interviewees disliked “name bands, most vocalists (except Negro blues singers), and radio commercials.” They felt “a profound resentment of the commercialization of radio and musicians.” They were also, Riesman reported, overwhelmingly male.
American music in the twentieth century was vital to the creation of what Grace Hale’s account calls “a nation of outsiders.” “Hot jazz” adherents raved about Louis Armstrong’s solos in the 1920s, while everybody else thought it impressive enough that Paul Whiteman’s orchestra could syncopate the Charleston and introduce “Rhapsody in Blue.” By the 1930s, the in-crowd were Popular Front aligned, riveted at the pointedly misnamed cabaret Café Society, where doormen had holes in their gloves and Billie Holiday made the anti-lynching, anti-minstrelsy “Strange Fruit” stop all breathing. Circa Riesman’s study, the hipsters Norman Mailer and Jack Kerouac would celebrate redefined hot as cool, seeding a 1960s San Francisco scene that turned hipsters into hippie counterculture.
But the urge to value music as an authentic expression of identity appealed well beyond outsider scenes and subcultures. Hank Williams testified, “When a hillbilly sings a crazy song, he feels crazy. When he sings, ‘I Laid My Mother Away,’ he sees her a-laying right there in the coffin. He sings more sincere than most entertainers because the hillbilly was raised rougher than most entertainers. You got to know a lot about hard work. You got to have smelt a lot of mule manure before you can sing like a hillbilly. The people who has been raised something like the way the hillbilly has knows what he is singing about and appreciates it.” Loretta Lynn reduced this to a chorus: “If you’re looking at me, you’re looking at country.” Soul, rock, and hip-hop offered similar sentiments. An inherently folkloric valuation of popular music, Karl Miller has written, “so thoroughly trounced minstrelsy that historians rarely discuss the process of its ascendance. The folkloric paradigm is the air that we breathe.”
For this study, I want to combine subcultural outsiders and identity-group notions of folkloric authenticity into a single opposition to formats: genres. If entertainment formats are an undertheorized category of analysis, though a widely used term, genres have been highly theorized. By sticking with popular music, however, we can identify a few accepted notions. Music genres have rules: socially constructed and accepted codes of form, meaning, and behavior. Those who recognize and are shaped by these rules belong to what pioneering pop scholar Simon Frith calls “genre worlds”: configurations of musicians, listeners, and figures mediating between them who collectively create a sense of inclusivity and exclusivity. Genres range from highly specific avant-gardes to scenes, industry categories, and revivals, with large genre “streams” to feed subgenres. If music genres cannot be viewed —as their adherents might prefer —as existing outside of commerce and media, they do share a common aversion: to pop shapelessness.
Deconstructing genre ideology within music can be as touchy as insisting on minstrelsy’s centrality: from validating Theft to spitting in the face of Love. Producer and critic John Hammond, progressive in music and politics, gets rewritten as the man who told Duke Ellington that one of his most ambitious compositions featured “slick, un-negroid musicians,” guilty of “aping Tin Pan Alley composers for commercial reasons.” A Hammond obsession, 1930s Mississippi blues guitarist Robert Johnson has his credentials to be called “King of the Delta Blues” and revered by the likes of Bob Dylan, Eric Clapton, and the Rolling Stones questioned by those who want to know why Delta blues, as a category, was invented and sanctified after the fact and how that undercut more urban and vaudeville-inflected, not to mention female, “classic” blues singers such as Ma Rainey, Mamie Smith, and Bessie Smith.
The tug-of-war between format and genre, performative theatrics and folkloric authenticity, came to a head with rock, the commercially and critically dominant form of American music from the late 1960s to the early 1990s. Fifties rock and roll had been the music of black as much as white Americans, southern as much as northern, working class far more than middle class. Rock was both less inclusive and more ideological: what Robert Christgau, aware of the politics of the shift from his first writing as a founding rock critic, called “all music deriving primarily from the energy and influence of the Beatles—and maybe Bob Dylan, and maybe you should stick pretensions in there someplace.” Ellen Willis, another pivotal early critic, centered her analysis of the change on the rock audience’s artistic affiliations: “I loved rock and roll, but I felt no emotional identification with the performers. Elvis Presley was my favorite singer, and I bought all his records; just the same, he was a stupid, slicked-up hillbilly, a bit too fat and soft to be really good-looking, and I was a middle-class adolescent snob.” Listening to Mick Jagger of the Rolling Stones was a far different process: “I couldn’t condescend to him — his ‘vulgarity’ represented a set of social and aesthetic attitudes as sophisticated as mine.”
The hippies gathered at Woodstock were Riesman’s minority segment turned majority, but with a difference. They no longer esteemed contemporary versions of “Negro blues singers”: only three black artists played Woodstock. Motown-style format pop was dismissed as fluff in contrast to English blues-rock and other music with an overt genre lineage. Top 40 met disdain, as new underground radio centered on “freeform” — meaning free of format. Music critics like Christgau, Willis, and Frith challenged these assumptions at the time, with Frith’s Sound Effects the strongest account of rock’s hypocritical “intimations of sincerity, authenticity, art — noncommercial concerns,” even as “rock became the record industry.” In a nation of outsiders, rock ruled, or as a leftist history, Rock ’n’ Roll Is Here to Pay, snarked, “Music for Music’s Sake Means More Money.” Keir Keightley elaborates, “One of the great ironies of the second half of the twentieth century is that while rock has involved millions of people buying a mass-marketed, standardized commodity (CD, cassette, LP) that is available virtually everywhere, these purchases have produced intense feelings of freedom, rebellion, marginality, oppositionality, uniqueness and authenticity.” In 1979, rock fans led by a rock radio DJ blew up disco records; as late as 2004, Kelefa Sanneh felt the need to deconstruct rockism in the New York Times.
Yet it would be simplistic to reduce rockism to its disproportions of race, gender, class, and sexuality. What fueled and fuels such attitudes toward popular music, ones hardly limited to rock alone, is the dream of music as democratic in a way opposite to how champions of radio formats justified their playlists. Michael Kramer, in an account of rock far more sympathetic than most others of late, argues that the countercultural era refashioned the bourgeois public sphere for a mass bohemia: writers and fans debated in music publications, gathered with civic commitment at music festivals, and shaped freeform radio into a community instrument. From the beginning, “hip capitalism” battled movement concerns, but the notion of music embodying anti-commercial beliefs, of rock as revolutionary or at least progressive, was genuine. The unity of the rock audience gave it more commercial clout: not just record sales, but arena-sized concerts, the most enduring music publication in Rolling Stone, and ultimately a Rock and Roll Hall of Fame to debate rock against rock and roll or pop forever. Discursively, if not always in commercial reality, this truly was the Rock Era.
The mostly female listeners of the Top 40 pop formats bequeathed by Storz’s jukebox thus confronted, on multiple levels, the mostly male listeners of a rock genre that traced back to the anti-commercial contingent of Riesman’s interviewees. A democracy of hit songs, limited by its capitalist nature, was challenged by a democracy of genre identity, limited by its demographic narrowness. The multi-category Top 40 strands I will be examining were shaped by this enduring tension.
Pop Music in the Rock Era
Jim Ladd, a DJ at the Los Angeles freeform station KASH-FM, received a rude awakening in 1969 when a new program director laid down some rules. “We would not be playing any Top 40 bullshit, but real rock ’n’ roll; and there was no dress code. There would, however, be something known as ‘the format.’” Ladd was now told what to play. He writes bitterly about those advising stations: “The radio consultant imposed a statistical grid over the psychedelic counterculture, and reduced it to demographic research. Do you want men 18–24, adults 18–49, women 35–49, or is your target audience teens? Whatever it may be, the radio consultant had a formula.” Nonetheless, the staff was elated when, in 1975, KASH beat Top 40 KHJ, “because to us, it represented everything that we were trying to change in radio. Top 40 was slick, mindless pop pap, without one second of social involvement in its format.” Soon, however, KAOS topped KASH with a still tighter format: “balls-out rock ’n’ roll.”
Ladd’s memoir, for all its biases, demonstrates despite itself why it would be misleading to view rock/pop or genre/format dichotomies as absolute divisions. By the mid-1970s, album-oriented rock (AOR) stations, like soul and country channels, pursued a format strategy as much as Top 40 or AC, guided by consultants and quarterly ratings. Rock programmers who used genre rhetoric of masculine rebellion (“balls-out rock ’n’ roll”) still honored Storz’s precept that most fans wanted the same songs repeated. Stations divided listeners explicitly by age and gender and tacitly by race and class. The division might be more inclusive (adults, 18–49) or less so (men, 18–34). The “psychedelic counterculture” ideal of dropping out from the mass had faded, but so had some of the mass: crossover appeal was one, not always desirable, demographic. And genre longings remained, with Ladd’s rockist disparagement of Top 40 symptomatic: many, including those in the business, quested for “social involvement” and disdained format tyranny. If AOR was formatted à la pop, pop became more like rock and soul, as seen in the power ballad, which merged rock’s amplification of sound and self with churchy and therapeutic exhortation.
Pop music in the rock era encompassed two strongly appealing, sometimes connected, but more often opposed impulses. The logic of formats celebrated the skillful matching of a set of songs with a set of people: its proponents idealized generating audiences, particularly new audiences, and prided themselves on figuring out what people wanted to hear. To believe in formats could mean playing it safe, with the reliance on experts and contempt for audiences that Razlogova describes in an earlier radio era: one cliché in radio was that stations were never switched off for the songs they didn’t play, only the ones they did. But there were strong business reasons to experiment with untapped consumer segments, to accentuate the “maturation” of a buying group with “contemporary” — a buzzword of the times — music to match. Successfully developing a new format, like the urban contemporary approach to black middle-class listeners, marked a great program director or consultant, and market-to-market experimentation in playlist emphasis was constant. Record companies, too, argued that a song like “Help Me Make It through the Night,” Kris Kristofferson’s explicit 1971 hit for Sammi Smith, could attract classier listeners for the country stations that played it.
By contrast, the logic of genres — accentuated by an era of counterculture, black power, feminism, and even conservative backlash — celebrated the creative matching of a set of songs and a set of ideals: music as artistic expression, communal statement, and coherent heritage. These were not necessarily anti-commercial impulses. Songwriters had long since learned the financial reasons to craft a lasting Broadway standard rather than cash in overnight with a disposable Tin Pan Alley ditty. As Keightley shows, the career artist, steering his or her own path, was adult pop’s gift to the rock superstars. Frank Sinatra, Chairman of the Board, did not only symbolically transform into Neil Young, driving into the ditch if he chose. Young actually recorded for Reprise Records, the label that Sinatra had founded in 1960, whose president, Mo Ostin, went on to merge it with, and run, the artist-friendly and rock-dominated major label Warner Bros. Records.
Contrast Ladd’s or Young’s sour view of formatting with Clive Davis, who took over as president of Columbia Records during the rise of the counterculture. Writing just after the regularizing of multiple Top 40 strands, Davis found the mixture of old-school entertainment and new-school pop categories he confronted, the tensions between format and genre, endlessly fascinating. He was happy to discourse on the reasons why an MOR release by Ray Conniff might outsell an attention-hogging album by Bob Dylan, then turn around and explain why playing Las Vegas had tainted the rock group Blood, Sweat & Tears by rebranding them as MOR. Targeting black albums, rather than singles, to music buyers intrigued him, and here he itemized how he accepted racial divisions as market realities, positioning funk’s Earth, Wind & Fire as “progressive” to white rockers while courting soul nationalists too. “Black radio was also becoming increasingly militant; black program directors were refusing to see white promotion men. . . . If a record is ripe to be added to the black station’s play list, but is not quite a sure thing, it is ridiculous to have a white man trying to convince the program director to put it on.”
The incorporation of genre by formats proved hugely successful from the 1970s to the 1990s. Categories of mainstream music multiplied, major record labels learned boutique approaches to rival indies in what Timothy Dowd calls “decentralized” music selling, and the global sounds that Israeli sociologist Motti Regev sums up as “pop-rock” fused national genres with a common international structure of hitmaking, fueled by the widespread licensing in the 1980s of commercial radio channels in countries formerly limited to government broadcasting. In 2000, I was given the opportunity, for a New York Times feature, to survey a list of the top 1,000 selling albums and top 200 artists by total US sales, as registered by SoundScan’s barcode-scanning process since the service’s introduction in 1991. The range was startling: twelve albums on the list by Nashville’s Garth Brooks, but also twelve by the Beatles and more than twenty linked to the gangsta rappers in N.W.A. Female rocker Alanis Morissette topped the album list, with country and AC singer Shania Twain not far behind. Reggae’s Bob Marley had the most popular back-catalogue album, with mammoth total sales for pre-rock vocalist Barbra Streisand and jazz’s Miles Davis. Even “A Horse with No Name” still had fans: America’s Greatest Hits made a top 1,000 list that was 30 percent artists over forty years old in 2000 and one-quarter 1990s teen pop like Backstreet Boys. Pop meant power ballads (Mariah Carey, Celine Dion), rock (Pink Floyd, Metallica, Pearl Jam), and Latin voices (Selena, Marc Anthony), five mellow new age Enya albums, and four noisy Jock Jams compilations.
Yet nearly all this spectrum of sound was owned by a shrinking number of multinationals, joined as the 1990s ended by a new set of vast radio chains like Clear Channel, allowed by a 1996 Telecommunications Act in the corporate liberal spirit of the 1927 policies. The role of music in sparking countercultural liberation movements had matured into a well-understood range of scenes feeding into mainstreams, or train-wreck moments by tabloid pop stars appreciated with camp irony by omnivorous tastemakers. The tightly formatted world that Jim Ladd feared and Clive Davis coveted had come to pass. Was this true diversity, or a simulation? As Keith Negus found when he spoke with those participating in the global pop order, genre convictions still pressed against format pragmatism. Rock was overrepresented at record labels. Genre codes shaped the corporate cultures that framed the selling of country music, gangsta rap, and Latin pop. “The struggle is not between commerce and creativity,” Negus concluded, “but about what is to be commercial and creative.” The friction between competing notions of how to make and sell music had resulted in a staggering range of product, but also intractable disagreements over that product’s value within cultural hierarchies.
To read more about Top 40 Democracy, click here.
Eric Weisbard’s Top 40 Democracy: The Rival Mainstreams of American Music considers the shifting terrain of the pop music landscape, in which FM radio (once an indisputably dominant medium) constructed multiple mainstreams, tailoring each to target communities built on race, gender, class, and social identity. Charting (no pun intended) how categories rivaled and pushed against each other in their rise to reach American audiences, the book posits a counterintuitive notion: when even the blandest incarnation of a particular sub-group (the Isley Brothers version of R & B, for instance) rose to the top of the charts, so too did the visibility of that group’s culture and perspective, making musical formatting one of the master narratives of late-twentieth-century identity.
In a recent piece for the Sound Studies blog, Weisbard wrote about the rise of both Taylor Swift and, via mid-term elections, the Republican Party:
The genius, and curse, of the commercial-cultural system that produced Taylor Swift’s Top 40 democracy win in the week of the 2014 elections, is that its disposition is inherently centrist. Our dominant music formats, rival mainstreams engaged in friendly combat rather than culture war, locked into place by the early 1970s. That it happened right then was a response to, and recuperation from, the splintering effects of the 1960s. But also, a moment of maximum wealth equality in the U.S. was perfect to persuade sponsors that differing Americans all deserved cultural representation.
And, as Weisbard concludes:
Pop music democracy too often gives us the formatted figures of diverse individuals triumphing, rather than collective empowerment. It’s impressive what Swift has accomplished; we once felt that about President Obama, too. But she’s rather alone at the top.
“Academic Freedom Studies: The Five Schools”
In 2009 Terrence Karran published an essay with the title “Academic Freedom: In Justification of a Universal Ideal.” Although it may not seem so at first glance, the title is tendentious, for it answers in advance the question most often posed in the literature: How does one justify academic freedom? One justifies academic freedom, we are told before Karran’s analysis even begins, by claiming for it the status of a universal ideal.
The advantage of this claim is that it disposes of one of the most frequently voiced objections to academic freedom: Why should members of a particular profession be granted latitudes and exemptions not enjoyed by other citizens? Why, for example, should college and university professors be free to criticize their superiors when employees in other workplaces might face discipline or dismissal? Why should college and university professors be free to determine and design the condition of their workplace (the classroom) while others must adhere to a blueprint laid down by a supervisor? Why should college and university professors be free to choose the direction of their research while researchers who work for industry and government must go down the paths mandated by their employers? We must ask, says Frederick Schauer (2006), “whether academics should, by virtue of their academic employment and/or profession, have rights (or privileges, to be more accurate) not possessed by others” (913).
The architects of the doctrine of academic freedom were not unaware of these questions, and, in anticipation of others raising them, raised them themselves. Academic freedom, wrote Arthur O. Lovejoy (1930), might seem “peculiar chiefly in that the teacher is . . . a salaried employee and that the freedom claimed for him implies a denial of the right of those who provide or administer the funds from which he is paid to control the content of his teaching” (384). But this denial of the employer’s control of the employee’s behavior is peculiar only if one assumes, first, that college and university teaching is a job like any other and, second, that the college or university teacher works for a dean or a provost or a board of trustees. Those assumptions are directly challenged and rejected by the American Association of University Professors’ 1915 Declaration of Principles on Academic Freedom and Academic Tenure, a founding document (of which Lovejoy was a principal author) and one that is, in many respects, still authoritative. Here is a key sentence:
The responsibility of the university teacher is primarily to the public itself, and to the judgment of his own profession; and while, with respect to certain external conditions of his vocation, he accepts a responsibility to the authorities of the institution in which he serves, in the essentials of his professional activity his duty is to the wider public to which the institution itself is morally amenable.
There are four actors and four centers of interest in this sentence: the public, the institution of the academy, the individual faculty member, and the individual college or university. The faculty member’s allegiance is first to the public, an abstract entity that is not limited to a particular location. The faculty member’s secondary allegiance is to the judgment of his own profession, but since, as the text observes, the profession’s responsibility is to the public, it amounts to the same thing. Last in line is the actual college or university to which the faculty member is tied by the slightest of ligatures. He must honor the “external conditions of his vocation” — conditions like showing up in class, assigning grades, holding office hours, and teaching to the syllabus and course catalog (although, as we shall see, those conditions are not always considered binding) — but since it is a “vocation” to which the faculty member is responsible, he will always have his eye on what is really essential, the “universal ideal” that underwrites and justifies his labors.
Here in 1915 are the seeds of everything that will flower in the twenty-first century. The key is the distinction between a job and a vocation. A job is defined by an agreement (often contractual) between a worker and a boss: you will do X and I will pay you Y; and if you fail to perform as stipulated, I will discipline or even dismiss you. Those called to a vocation are not merely workers; they are professionals; that is, they profess something larger than the task immediately at hand — a religious faith, a commitment to the rule of law, a dedication to healing, a zeal for truth — and in order to become credentialed professors, as opposed to being amateurs, they must undergo a rigorous and lengthy period of training. Being a professional is less a matter of specific performance (although specific performances are required) than of a continual, indeed lifelong, responsiveness to an ideal or a spirit. And given that a spirit, by definition, cannot be circumscribed, it will always be possible (and even thought mandatory and laudable) to expand the area over which it is said to preside.
The history of academic freedom is in part the history of that expansion as academic freedom is declared to be indistinguishable from, and necessary for, the flourishing of every positive value known to humankind. Here are just a few quotations from Karran’s essay:
Academic freedom is important to everyone’s well-being, as well as being particularly pertinent to academics and their students. (The Robbins Committee on Higher Education in the UK, 1963)
Academic freedom is but a facet of freedom in the larger society. (R. M. O. Pritchard, “Academic Freedom and Autonomy in the United Kingdom and Germany,” 1998)
A democratic society is hardly conceivable . . . without academic freedom. (S. Bergan, “Institutional Autonomy: Between Myth and Responsibility,” 2002)
In a society that has a high regard for knowledge and universal values, the scope of academic freedom is wide. (Wan Manan, “Academic Freedom: Ethical Implications and Civic Responsibilities,” 2000)
The sacred trust of the universities is to carry the torch of freedom. (J. W. Boyer, “Academic Freedom and the Modern University: The Experience of the University of Chicago,” 2002)
Notice that in this last statement, freedom is not qualified by the adjective academic. Indeed, you can take it as a rule that the larger the claims for academic freedom, the less the limiting force of the adjective academic will be felt. In the taxonomy I offer in this book, the movement from the most conservative to the most radical view of academic freedom will be marked by the transfer of emphasis from academic, which names a local and specific habitation of the asserted freedom, to freedom, which does not limit the scope or location of what is being asserted at all.
Of course, freedom is itself a contested concept and has many possible meanings. Graeme C. Moodie sorts some of them out and defines the freedom academics might reasonably enjoy in terms more modest than those suggested by the authors cited in Karran’s essay. Moodie (1996) notes that freedom is often understood as the “absence of constraint,” but that, he argues, would be too broad an understanding if it were applied to the activities of academics. Instead he would limit academic freedom to faculty members who are “exercising academic functions in a truly academic matter” (134). Academic freedom, in his account, follows from the nature of academic work; it is not a personal right of those who choose to do that work. That freedom — he calls it an “activity freedom” because it flows from the nature of the job and not from some moral abstraction — “can of course only be exercised by persons, but its justification, and thus its extent, must clearly and explicitly be rooted in its relationship to academic activities rather than (or only consequentially) to the persons who perform them” (133). In short, he concludes, “the special freedom(s) of academics is/are conditional on the fulfillment of their academic obligations” (134).
Unlike those who speak of a universal ideal and of the torch of freedom being carried everywhere, Moodie is focused on the adjective academic. He begins with it and reasons from it to the boundaries of the freedom academics can legitimately be granted. To be sure, the matter is not so cut and dried, for academic must itself be defined so that those boundaries can come clearly into view and that is no easy matter. No one doubts that classroom teaching and research and scholarly publishing are activities where the freedom in question is to be accorded, at least to some extent. But what about the freedom to criticize one’s superiors; or the freedom to configure a course in ways not standard in the department; or the freedom to have a voice in the building of parking garages, or in the funding of athletic programs, or in the decision to erect a student center, or in the selection of a president, or in the awarding of honorary degrees, or in the inviting of outside speakers? Is academic freedom violated when faculty members have minimal input into, or are shut out entirely from, the consideration of these and other matters?
To that question, Mark Yudof, who has been a law school dean and a university president, answers a firm “no.” Yudof (1988) acknowledges that “there are many elements necessary to sustain the university,” including “salaries,” “library collections,” a “comfortable workplace,” and even “a parking space” (1356), but do academics have a right to these things or a right to participate in discussions about them (a question apart from the question of whether it is wise for an administration to bring them in)? Only, says Yudof, if you believe “that any restrictions, however indirectly linked to teaching and scholarship, will destroy the quest for knowledge” (1355). And that, he observes, would amount to “a kind of unbridled libertarianism for academicians,” who could say anything they liked in a university setting without fear of reprisal or discipline (1356).
Better, Yudof concludes, to define academic freedom narrowly, if only so those who are called upon to defend it can offer a targeted, and not wholly diffuse, rationale. Academic freedom, he declares, “is what it is” (of course that’s the question; what is it?), and it is “not general liberty, pleasant working conditions, equality, self-realization, or happiness,” for “if academic freedom is thought to include all that is desirable for academicians, it may come to mean quite little to policy makers and courts” (1356). Moodie (1996) gives an even more pointed warning: “Scholars only invite ridicule, or being ignored, when they seem to suggest that every issue that directly affects them is a proper sphere for academic rule” (146). (We shall revisit this issue when we consider the relationship between academic freedom, shared governance, and public employee law.)
So we now have as a working hypothesis an opposition between two views of academic freedom. In one, freedom is a general, overriding, and ever-expanding value, and the academy is just one of the places that house it. In the other, the freedom in question is peculiar to the academic profession and limited to the performance of its core duties. When performing those duties, the instructor is, at least relatively, free. When engaged in other activities, even those that take place within university precincts, no such freedom or special latitude obtains. This modest notion of academic freedom is strongly articulated by J. Peter Byrne (1989): “The term ‘academic freedom’ should be reserved for those rights necessary for the preservation of the unique functions of the university” (262).
These opposed accounts of academic freedom do not exhaust the possibilities; there are extremes to either side of them, and in the pages that follow I shall present the full range of the positions currently available. In effect I am announcing the inauguration of a new field — Academic Freedom Studies. The field is still in a fluid state; new variants and new theories continue to appear. But for the time being we can identify five schools of academic freedom, plotted on a continuum that goes from right to left. The continuum is obviously a political one, but the politics are the politics of the academy. Any correlation of the points on the continuum with real world politics is imperfect, but, as we shall see, there is some. I should acknowledge at the outset that I shall present these schools as more distinct than they are in practice; individual academics can be members of more than one of them. The taxonomy I shall offer is intended as a device of clarification. The inevitable blurring of the lines comes later.
As an aid to the project of sorting out the five schools, here is a list of questions that would receive different answers depending on which version of academic freedom is in place:
Is academic freedom a constitutional right?
What is the relationship between academic freedom and the First Amendment?
What is the relationship between academic freedom and democracy?
Does academic freedom, whatever its scope, attach to the individual faculty member or to the institution?
Do students have academic freedom rights?
What is the relationship between academic freedom and the form of governance at a college or university?
In what sense, if any, are academics special?
Does academic freedom include the right of a professor to criticize his or her organizational superiors with impunity?
Does academic freedom allow a professor to rehearse his or her political views in the classroom?
What is the relationship between academic freedom and political freedom?
What views of education underlie the various positions on academic freedom?
As a further aid, it would be good to have in mind some examples of incidents or controversies in which academic freedom has been thought to be at stake.
In 2011, the faculty of John Jay College nominated playwright Tony Kushner to be the recipient of an honorary degree from the City University of New York. Normally approval of the nomination would have been pro forma, but this time the CUNY Board of Trustees tabled, and thus effectively killed, the motion supporting Kushner’s candidacy because a single trustee objected to his views on Israel. After a few days of outrage and bad publicity the board met again and changed its mind. Was the board’s initial action a violation of academic freedom, and if so, whose freedom was being violated? Or was the incident just one more instance of garden-variety political jockeying, a tempest in a teapot devoid of larger implications?
In the same year Professor John Michael Bailey of Northwestern University permitted a couple to perform a live sex act at an optional session of his course on human sexuality. The male of the couple brought his naked female partner to orgasm with the help of a device known as a “fucksaw.” Should Bailey have been reprimanded and perhaps disciplined for allowing lewd behavior in his classroom or should the display be regarded as a legitimate pedagogical choice and therefore protected by the doctrine of academic freedom?
In 2009 sociology professor William Robinson of the University of California at Santa Barbara, after listening to a tape of a Martin Luther King speech protesting the Vietnam War, sent an e-mail to the students in his sociology of globalization course that began:
If Martin Luther King were alive on this day of January 19th, there is no doubt that he would be condemning the Israeli aggression against Gaza along with U.S. military and political support for Israeli war crimes, or that he would be standing shoulder to shoulder with the Palestinians.
The e-mail went on to compare the Israeli actions against Gaza to the Nazi actions against the Warsaw ghetto, and to characterize Israel as “a state founded on the negation of a people.” Was Robinson’s e-mail an intrusion of his political views into the classroom or was it a contribution to the subject matter of his course and therefore protected under the doctrine of academic freedom?
As the 2008 election approached, an official communication from the administration of the University of Illinois listed as prohibited political activities the wearing of T-shirts or buttons supporting candidates or parties. Were faculty members being denied their First Amendment and academic freedom rights?
BB&T, a bank holding company, funds instruction in ethics on the condition that the courses it supports include as a required reading Ayn Rand’s Atlas Shrugged (certainly a book concerned with issues of ethics). If a university accepts this arrangement (as Florida State University did), has it traded its academic freedom for cash or is it (as the dean at Florida State insisted) merely accepting help in a time of financial exigency?
In 1996, the state of Virginia passed a law forbidding state employees from accessing pornographic materials on state-owned computers. The statute included a waiver for those who could convince a supervisor that the viewing of pornographic material was part of a bona fide research project. Was the academic freedom of faculty members in the state university system violated because they were prevented from determining for themselves and without government monitoring the course of their research?
Just as my questions would be answered differently by proponents of different accounts of academic freedom, so would these cases be assessed differently depending on which school of academic freedom a commentator belongs to.
Of course I have yet to name the schools, and I will do that now.
(1)— The “It’s just a job” school. This school (which may have only one member and you’re reading him now) rests on a deflationary view of higher education. Rather than being a vocation or holy calling, higher education is a service that offers knowledge and skills to students who wish to receive them. Those who work in higher education are trained to impart that knowledge, demonstrate those skills and engage in research that adds to the body of what is known. They are not exercising First Amendment rights or forming citizens or inculcating moral values or training soldiers to fight for social justice. Their obligations and aspirations are defined by the distinctive task— the advancement of knowledge— they are trained and paid to perform, defined, that is, by contract and by the course catalog rather than by a vision of democracy or world peace. College and university teachers are professionals, and as such the activities they legitimately perform are professional activities, activities in which they have a professional competence. When engaged in those activities, they should be accorded the latitude— call it freedom if you like— necessary to their proper performance. That latitude does not include the performance of other tasks, no matter how worthy they might be. According to this school, academics are not free in any special sense to do anything but their jobs.
(2)— The “For the common good” school. This school has its origin in the AAUP Declaration of Principles (1915), and it shares some arguments with the “It’s just a job” school, especially the argument that the academic task is distinctive. Other tasks may be responsible to market or political forces or to public opinion, but the task of advancing knowledge involves following the evidence wherever it leads, and therefore “the first condition of progress is complete and unlimited freedom to pursue inquiry and publish its results.” The standards an academic must honor are the standards of the academic profession; the freedom he enjoys depends on adherence to those standards: “The liberty of the scholar . . . to set forth his conclusions . . . is conditioned by their being conclusions gained by a scholar’s method and held in a scholar’s spirit.” That liberty cannot be “used as a shelter . . . for uncritical and intemperate partisanship,” and a teacher should not inundate students with his “own opinions.”
With respect to pronouncements like these, the “For the common good” school and the “It’s just a job” school seem perfectly aligned. Both paint a picture of a self-enclosed professional activity, a transaction between teachers, students, and a set of intellectual questions with no reference to larger moral, political, or societal considerations. But the opening to larger considerations is provided, at least potentially, by a claimed connection between academic freedom and democracy. Democracy, say the authors of the 1915 Declaration, requires “experts . . . to advise both legislators and administrators,” and it is the universities that will supply them and thus render a “service to the right solution of . . . social problems.” Democracy’s virtues, the authors of the Declaration explain, are also the source of its dangers, for by repudiating despotism and political tyranny, democracy risks legitimizing “the tyranny of public opinion.” The academy rides to the rescue by working “to help make public opinion more self-critical and more circumspect, to check the more hasty and unconsidered impulses of popular feeling, to train the democracy.” By thus offering an external justification for an independent academy— it protects us from our worst instincts and furthers the realization of democratic principles— the “For the common good” school moves away from the severe professionalism of the “It’s just a job” school and toward an argument in which professional values are subordinated to the higher values of democracy or justice or freedom; that is, to the common good.
(3)— The “Academic exceptionalism or uncommon beings” school. This school is a logical extension of the “For the common good” school. If academics are charged not merely with the task of adding to our knowledge of natural and cultural phenomena, but with the task of providing a counterweight to the force of common popular opinion, they must themselves be uncommon, not only intellectually but morally; they must be, in the words of the 1915 Declaration, “men of high gift and character.” Such men (and now women) not only correct the errors of popular opinion, they escape popular judgment and are not to be held accountable to the same laws and restrictions that constrain ordinary citizens.
The essence of this position is displayed by the plaintiff’s argument in Urofsky v. Gilmore (2000), a Fourth Circuit case revolving around Virginia’s law forbidding state employees from accessing explicitly sexual material on state-owned computers without the permission of a supervisor. The phrase that drives the legal reasoning in the case is “matter of public concern.” In a series of decisions the Supreme Court had ruled that if public employees speak out on a matter of public concern, their First Amendment rights come into play and might outweigh the government’s interest in efficiency and organizational discipline. (A balancing test is triggered.) If, however, the speech is internal to the operations of the administrative unit, no such protection is available. The Urofsky court determined that the ability of employees to access pornography was not a matter of public concern. The plaintiffs, professors in the state university system, then detached themselves from the umbrella category of “public employees” and claimed a special status. They argued that “even if the Act is valid as to the majority of state employees, it violates the . . . academic freedom rights of professors . . . and thus is invalid as to them.” In short, we’re exceptional.
(4)— The “Academic freedom as critique” school. If academics have the special capacity to see through the conventional public wisdom and expose its contradictions, exercising that capacity is, when it comes down to it, the academic’s real job; critique— of everything— is the continuing obligation. While the “It’s just a job” school and the “For the common good” school insist that the freedom academics enjoy is exercised within the norms of the profession, those who identify academic freedom with critique (because they identify education with critique) object that this view reifies and naturalizes professional norms which are themselves the products of history, and as such are, or should be, challengeable and revisable. One should not rest complacently in the norms and standards presupposed by the current academy’s practices; one should instead interrogate those norms and make them the objects of critical scrutiny rather than the baseline parameters within which critical scrutiny is performed.
Academic freedom is understood by this school as a protection for dissent and the scope of dissent must extend to the very distinctions and boundaries the academy presently enforces. As Judith Butler (2006a) puts it, “as long as voices of dissent are only admissible if they conform to accepted professional norms, then dissent itself is limited so that it cannot take aim at those norms that are already accepted” (114). One of those norms enforces a separation between academic and political urgencies, but, Butler contends, they are not so easily distinguishable and the boundaries between them blur and change. Fixing boundaries that are permeable, she complains, has the effect of freezing the status quo and of allowing distinctions originally rooted in politics to present themselves as apolitical and natural. The result can be “a form of political liberalism that is coupled with a profoundly conservative intellectual resistance to . . . innovation” (127). From the perspective of critique, established norms are always conservative and suspect and academic freedom exists so that they can be exposed for what they are. Academic freedom, in short, is an engine of social progress and is thought to be the particular property of the left on the reasoning (which I do not affirm but report) that conservative thought is anti- progressive and protective of the status quo. It’s only a small step, really no step at all, from academic freedom as critique to the fifth school of thought.
(5)— The “Academic freedom as revolution” school. With the emergence of this school the shift from academic as a limiting adjective to freedom as an overriding concern is complete and the political agenda implicit in the “For the common good” school and the “Academic freedom as critique” school is made explicit. If Butler wants us to ask where the norms governing academic practices come from, the members of this school know: they come from the corrupt motives of agents who are embedded in the corrupt institutions that serve and reflect the corrupt values of a corrupt neoliberal society. (Got that?) The view of education that lies behind and informs this most expansive version of academic freedom is articulated by Henry Giroux (2008). The “responsibilities that come along with teaching,” he says, include fighting for
an inclusive and radical democracy by recognizing that education in the broadest sense is not just about understanding, . . . but also about providing the conditions for assuming the responsibilities we have as citizens to expose human misery and to eliminate the conditions that produce it. (128)
In this statement the line between the teacher as a professional and the teacher as a citizen disappears. Education “in the broadest sense” demands positive political action on the part of those engaged in it. Adhering to a narrow view of one’s responsibilities in the classroom amounts to a betrayal both of one’s political being and one’s pedagogical being. Academic freedom, declares Grant Farred (2008–2009), “has to be conceived as a form of political solidarity”; and he doesn’t mean solidarity with banks, corporations, pharmaceutical firms, oil companies or, for that matter, universities (355). When university obligations clash with the imperative of doing social justice, social justice always trumps. The standard views of academic freedom, members of this school complain, sequester academics in an intellectual ghetto where, like trained monkeys, they perform obedient and sterile routines. It follows, then, that one can only be true to the academy by breaking free of its constraints.
The poster boy for the “Academic freedom as revolution” school is Denis Rancourt, a physics professor at the University of Ottawa (now removed from his position) who practices what he calls “academic squatting”— turning a course with an advertised subject matter and syllabus into a workshop for revolutionary activity. Rancourt (2007) explains that one cannot adhere to the customary practices of the academy without becoming complicit with the ideology that informs them: “Academic squatting is needed because universities are dictatorships, devoid of real democracy, run by self-appointed executives who serve private capital interests.”
To read more about Versions of Academic Freedom, click here.
“Physics Must Be Rebuilt”
from Serving the Reich: The Struggle for the Soul of Physics under Hitler by Philip Ball
Quantum theory, with its paradoxes and uncertainties, its mysteries and challenges to intuition, is something of a refuge for scoundrels and charlatans, as well as a fount of more serious but nonetheless fantastic speculation. Could it explain consciousness? Does it undermine causality? Everything from homeopathy to mind control and manifestations of the paranormal has been laid at its seemingly tolerant door.
Mostly that represents a blend of wishful thinking, misconception and pseudoscience. Because quantum theory defies common sense and ‘rational’ expectation, it can easily be hijacked to justify almost any wild idea. The extracurricular uses to which quantum theory has been put tend inevitably to reflect the preoccupations of the times: in the 1970s parallels were drawn with Zen Buddhism; today alternative medicine and theories of mind are in vogue.
Nevertheless, the fact remains that fundamental aspects of quantum physics are still not fully understood, and it has genuinely profound philosophical implications. Many of these aspects were evident to the early pioneers of the field – indeed, in the transformation of scientific thought that quantum theory compelled, they were impossible to ignore. Yet while several of the theory’s persistent conundrums were identified in its early days, one can’t say that the physicists greatly distinguished themselves in their response. This is hardly surprising: neither scientists nor philosophers in the early twentieth century had any preparation for thinking in the way quantum physics demands, and if the physicists tended to retreat into vagueness, near-tautology and mysticism, the philosophers and other intellectuals often just misunderstood the science.
This penchant for pondering the deeper meanings of quantum theory was particularly evident in Germany, proud of its long tradition of philosophical enquiry into nature and reality. The British, American and Italian physicists, in contrast, tended to conform to their stereotypical national pragmatism in dealing with quantum matters. But even if they were rather more content to apply the mathematics and not wonder too hard about the ontology, these other scientists relied strongly on the Germanic nations for those theoretical formulations in the first place. Germany, more than any other country, showed how to turn the microscopic fragmentation of nature into a useful, predictive, quantitative and explanatory science. If you were a theoretical physicist in Germany, it was hard to resist the gravitational pull of quantum theory: where Planck and Einstein led, Arnold Sommerfeld, Peter Debye, Werner Heisenberg, Max Born, Erwin Schrödinger, Wolfgang Pauli and others followed.
This being so, it was inevitable that the philosophical aspects of quantum physics should have been coloured by the political and social preoccupations of Germany. As we shall see, it was not the only part of physics to become politicized. These tendencies rocked the ivory tower: the kind of science you pursued became a statement about the sort of person you were, and the sympathies you harboured.
Unpeeling the atom
The realization that light and energy were granular had profound implications for the emerging understanding of how atoms are constituted. In 1907 New Zealander Ernest Rutherford, working at Manchester University in England, found that most of the mass of an atom is concentrated in a small, dense nucleus with a positive electrical charge. He concluded that this kernel was surrounded by a cloud of electrons, the particles found in 1897 to be the constituents of cathode rays by J. J. Thomson at Cambridge. Electrons possess a negative electrical charge that collectively balances the positive charge of the nucleus. In 1911 Rutherford proposed that the atom is like a solar system in miniature, a nuclear sun orbited by planetary electrons, held there not by gravity but by electrical attraction.
But there was a problem with that picture. According to classical physics, the orbiting electrons should radiate energy as electromagnetic rays, and so would gradually relinquish their orbits and spiral into the nucleus: the atom should rapidly decay. In 1913 the 28-year-old Danish physicist Niels Bohr showed that the notion of quantization – discreteness of energy – could solve this problem of atomic stability, and at the same time account for the way atoms absorb and emit radiation. The quantum hypothesis gave Bohr permission to prohibit instability by fiat: if the electron energies can only take discrete, quantized values, he said, then this gradual leakage of energy is prevented: the particles remain orbiting indefinitely. Electrons can lose energy, but only by making a hop (‘quantum jump’) to an orbit of lower energy, shedding the difference in the form of a photon of a specific wavelength. By the same token, an electron can gain energy and jump to a higher orbit by absorbing a photon of the right wavelength. Bohr went on to postulate that each orbit can accommodate only a fixed number of electrons, so that downward jumps are impossible unless a vacancy arises.
It was well established experimentally that atoms do absorb and emit radiation at particular, well-defined wavelengths. Light passing through a gas has ‘missing wavelengths’ – a series of dark, narrow bands in the spectrum. The emission spectrum of the same vapour is made up of corresponding bright bands, accounting for example for the characteristic red glow of neon and the yellow glare of sodium vapour when they are stimulated by an electrical discharge. These photons absorbed or emitted, said Bohr, have energies precisely equal to the energy difference between two electron orbits.
By assuming that the orbits are each characterized by an integer ‘quantum number’ related to their energy, Bohr could rationalize the wavelengths of the emission lines of hydrogen. This idea was developed by Arnold Sommerfeld, professor of theoretical physics at the University of Munich. He and his student Peter Debye worked out why the spectral emission lines are split by a magnetic field – an effect discovered by the Dutch physicist Pieter Zeeman in work that won him the 1902 Nobel Prize. (This Zeeman effect is the magnetic equivalent of the line-splitting by an electric field discovered by the German physicist Johannes Stark – see page 88.)
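Bohr's scheme can be made concrete with a quick illustrative calculation (our sketch, not from Ball's book): taking hydrogen's quantized orbit energies as E_n = −13.6 eV/n², the wavelength of each emission line follows directly from the energy difference between two orbits.

```python
# Bohr model of hydrogen: E_n = -13.6057 eV / n^2.
# A photon emitted in a jump n_hi -> n_lo carries away the energy
# difference, so its wavelength is lambda = h*c / (E_hi - E_lo).
H_C_EV_NM = 1239.841  # Planck's constant times the speed of light, in eV*nm

def energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -13.6057 / n**2

def emission_wavelength_nm(n_hi, n_lo):
    """Wavelength (nm) of the photon emitted in the jump n_hi -> n_lo."""
    return H_C_EV_NM / (energy(n_hi) - energy(n_lo))

# Jumps down to the n = 2 orbit give the Balmer series, hydrogen's
# visible emission lines.
for n in range(3, 7):
    print(f"{n} -> 2: {emission_wavelength_nm(n, 2):.1f} nm")
```

Running this reproduces the familiar Balmer wavelengths, including the red line near 656 nm that dominates hydrogen's visible glow, which is exactly the kind of spectral regularity Bohr's quantum numbers rationalized.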
But this was still a rather ad hoc picture, justified only because it seemed to work. What are the rules that govern the energy levels of electrons in atoms, and the jumps between them? In the early 1920s Max Born at the University of Göttingen set out to address those questions, assisted by his brilliant students Wolfgang Pauli, Pascual Jordan and Werner Heisenberg.
Heisenberg, another of Sommerfeld’s protégés, arrived from Munich in October 1922 to become Born’s private assistant, looking as Born put it ‘like a simple farm boy, with short fair hair, clear bright eyes, and a charming expression’. He and Born sought to apply Bohr’s empirical description of atoms in terms of quantum numbers to the case of helium, the second element in the periodic table after hydrogen. Given Bohr’s prescription for how quantum numbers dictate electron energies, one could in principle work out what the energies of the various electron orbits are, assuming that the electrons are held in place by their electrostatic attraction to the nucleus. But that works only for hydrogen, which has a single electron. With more than one electron in the frame, the mathematical elegance is destroyed by the repulsive electrostatic influence that electrons exert on each other. This is not a minor correction: the force between electrons is about as strong as that between electron and nucleus. So for any element aside from hydrogen, Bohr’s appealing model becomes too complicated to work out exactly.
In trying to go beyond these limitations, however, Born was not content to fit experimental observations to improvised quantum hypotheses as Bohr had done. Rather, he wanted to calculate the disposition of the electrons using principles akin to those that Isaac Newton used to explain the gravitationally bound solar system. In other words, he sought the rules that governed the quantum states that Bohr had adduced.
It became clear to Born that what he began to call a ‘quantum mechanics’ could not be constructed by a minor amendment of classical, Newtonian mechanics. ‘One must probably introduce entirely new hypotheses’, Heisenberg wrote to Pauli – another former pupil of Sommerfeld in Munich, where the two had become friends – in early 1923. Born agreed, writing that summer that ‘not only new assumptions in the usual sense of physical hypotheses will be necessary, but the entire system of concepts of physics must be rebuilt from the ground up’.
That was a call for revolution, and the ‘new concepts’ that emerged over the next four years amounted to nothing less. Heisenberg began formulating quantum mechanics by writing the energies of the quantum states of an atom as a matrix, a kind of mathematical grid. One could specify, for example, a matrix for the positions of the electrons, and another for their momenta (mass times velocity). Heisenberg’s version of quantum theory, devised with Born and Jordan in 1925, became known as matrix mechanics.
It wasn’t the only way to set out the problem. From early 1926 the Austrian physicist Erwin Schrödinger, working at the University of Zurich, began to explicate a different form of quantum mechanics based not on matrices but on waves. Schrödinger postulated that all the fundamental properties of a quantum particle such as an electron, or a collection of such particles, can be expressed as an equation describing a wave, called a wavefunction. The obvious question was: a wave of what? The wave itself is a purely mathematical entity, incorporating ‘imaginary numbers’ derived from the square root of -1 (denoted i), which, as the name implies, cannot correspond to any observable quantity. But if one calculates the square of a wavefunction – that is, if one multiplies this mathematical entity by itself (more strictly, one multiplies the wavefunction by its so-called complex conjugate, a twin identical except that the imaginary parts have opposite signs: +i and -i) – then the imaginary numbers go away and only real ones remain, which means that the result may correspond to something concrete that can be measured in the real world. At first Schrödinger thought that the square of the wavefunction produces a mathematical expression describing how the density of the corresponding particle varies from one place to another, rather as the density of air varies through space in a sound wave. That was already weird enough: it meant that quantum particles could be regarded as smeared-out waves, filling space like a gas. But Born – who, to Heisenberg’s dismay, was enthusiastic about Schrödinger’s rival ‘wave mechanics’ – argued that the squared wavefunction denoted something even odder: the probability of finding the particle at each location in space.
Think about that for a moment. Schrödinger was asserting that the wavefunction says all that can be said about a quantum system. And apparently, all that can be said is not where the particle is, but what the chance is of finding it here or there. This is not a question of incomplete knowledge – of knowing that a friend might be at the cinema or the restaurant, but not knowing which. In that case she is one place or another, and you are forced to talk of probabilities just because you lack sufficient information. Schrödinger’s wave-based quantum mechanics is different: it insists that there is no answer to the question beyond the probabilities. To ask where the particle really is has no physical meaning. At least, it doesn’t until you look – but that act of looking doesn’t then disclose what was previously hidden, it determines what was previously undecided.
Whereas Heisenberg’s matrix mechanics was a way of formalizing the quantum jumps that Bohr had introduced, Schrödinger’s wave mechanics seemed to do away with them entirely. The wavefunction made everything smooth and continuous again. At least, it seemed to. But wasn’t that just a piece of legerdemain? When an electron jumps from one atomic orbit to another, the initial and the final states are both described by wavefunctions. But how did one wavefunction change into the other? The theory didn’t specify that – you had to put it in by hand. And you still do: there remains no consensus about how to build quantum jumps into quantum theory. All the same, Schrödinger’s description has prevailed over Heisenberg’s – not because it is more correct, but because it is more convenient and useful. What’s more, Heisenberg’s quantum matrices were abstract, giving scant purchase to an intuitive understanding, while Schrödinger’s wave mechanics offered more sustenance to the imagination.
The probabilistic view of quantum mechanics is famously what disconcerted Einstein. His scepticism eventually isolated him from the evolution of quantum theory and left him unable to contribute further to it. He remained convinced that there was some deeper reality below the probabilities that would rescue the precise certainties of classical physics, restoring a time and a place for everything. This is how it has always been for quantum theory: those who make great, audacious advances prove unable to reconcile them to the still more audacious notions of the next generation. It seems that one’s ability to ‘suppose’ – ‘understanding’ quantum theory is largely a matter of reconciling ourselves to its counter-intuitive claims – is all too easily exhausted by the demands that the theory makes.
Schrödinger wasn’t alone in accepting and even advocating indeterminacy in the quantum realm. Heisenberg’s matrix mechanics seemed to insist on a very strange thing. If you multiply together the matrices describing the position and the momentum of a particle, you get a different result depending on which matrix you put first in the arithmetic. In the classical world the order of multiplication of two quantities is irrelevant: two times three is the same as three times two, and an object’s momentum is the same expressed as mass times velocity or velocity times mass. For some pairs of quantum properties, such as position and momentum, that was evidently no longer the case.
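The order-dependence Heisenberg encountered is a property of matrix multiplication itself, as a short sketch shows (these two small matrices are generic illustrations of ours, not Heisenberg's actual position and momentum matrices):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1],
     [1, 0]]
P = [[0, -1],
     [1,  0]]

print(matmul(X, P))  # [[1, 0], [0, -1]]
print(matmul(P, X))  # [[-1, 0], [0, 1]] -- a different answer: XP != PX
```

Unlike ordinary numbers, where two times three equals three times two, these grids of numbers remember the order in which they are multiplied, and for quantum position and momentum the mismatch turns out to be exactly what the uncertainty principle quantifies.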
This might seem an inconsequential quirk. But Heisenberg discovered that it had the most bizarre corollary, as foreshadowed in the portentous title of the paper he published in March 1927: ‘On the perceptual content of quantum-theoretical kinematics and mechanics’. Here he showed that the theory insisted on the impossibility of knowing at any instant the precise position and momentum of a quantum particle. As he put it, ‘The more precisely we determine the position, the more imprecise is the determination of momentum in this instant, and vice versa.’
This is Heisenberg’s uncertainty principle. He sought to offer an intuitive rationalization of it, explaining that one cannot make a measurement on a tiny particle such as an electron without disturbing it in some way. If it were possible to see the particle in a microscope (in fact it is far too small), that would involve bouncing light off it. The more accurately you wish to locate its position, the shorter the wavelength of light you need (crudely speaking, the finer the divisions of the ‘ruler’ need to be). But as the wavelength of photons gets shorter, their energy increases – that’s what Planck had said. And as the energy goes up, the more the particle recoils from the impact of a photon, and so the more you disturb its momentum.
This thought experiment is of some value for grasping the spirit of the uncertainty principle. But it has fostered the misconception that the uncertainty is a result of the limitations of experimentation: you can’t look without disturbing. The uncertainty is, however, more fundamental than that: again, it’s not that we can’t get at the information, but that this information does not exist. Heisenberg’s uncertainty principle has also become popularly interpreted as imputing fuzziness and imprecision to quantum mechanics. But that’s not quite right either. Rather, it places very precise limits on what we can know. Those limits, it transpires, are determined by Planck’s constant, which is so small that the uncertainty becomes significant only at the tiny scale of subatomic particles.
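In modern notation (ours, not Ball's), those precise limits are usually written as

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad \hbar \equiv \frac{h}{2\pi} \approx 1.05\times 10^{-34}\ \mathrm{J\,s},
```

where Δx and Δp are the spreads (standard deviations) in position and momentum and h is Planck's constant; because ħ is so minute, the bound bites only at the scale of subatomic particles.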
Both Schrödinger’s wavefunction and Heisenberg’s uncertainty principle seemed to be insisting on aspects of quantum theory that verged on the metaphysical. For one thing, they placed bounds on what is knowable. This appeared to throw causality itself – the bedrock of science – into question. Within the blurred margins of quantum phenomena, how can we know what is cause and what is effect? An electron could turn up here, or it could instead be there, with no apparent causal principle motivating those alternatives.
Moreover, the observer now intrudes ineluctably into the previously objective, mechanistic realm of physics. Science purports to pronounce on how the world works. But if the very act of observing it changes the outcome – for example, because it transforms the wavefunction from a probability distribution of situations into one particular situation, commonly called ‘collapsing’ the wavefunction – then how can one claim to speak about an objective world that exists before we look?
Today it is generally thought that quantum theory offers no obvious reason to doubt causality, at least at the level at which we can study the world, although the precise role of the observer is still being debated. But for the pioneers of quantum theory these questions were profoundly disturbing. Quantum theory worked as a mathematical description, but without any consensus about its interpretation, which seemed to be merely a matter of taste. Many physicists were content with the prescription devised between 1925 and 1927 by Bohr and Heisenberg, who visited the Dane in Copenhagen. Known now as the Copenhagen interpretation, this view of quantum physics demanded that centuries of classical preconceptions be abandoned in favour of a capitulation to the maths. At its most fundamental level, the physical world was unknowable and in some sense indeterminate. The only reality worthy of the description is what we can access experimentally – and that is all that quantum theory prescribes. To look for any deeper description of the world is meaningless. To Einstein and some others, this seemed to be surrendering to ignorance. Beneath the formal and united appearance of the Solvay group in 1927 lies a morass of contradictory and seemingly irreconcilable views.
These debates were not limited to the physicists. If even they did not fully understand quantum theory, how much scope there was then for confusion, distortion and misappropriation as they disseminated these ideas to the wider world. Much of the blame for this must be laid at the door of the scientists themselves, including Bohr and Heisenberg, who threw caution to the wind when generalizing the narrow meaning of the Copenhagen interpretation in their public pronouncements. For Bohr, a crucial part of this picture was the notion of complementarity, which holds that two apparently contradictory descriptions of a quantum system can both be valid under different observational circumstances. Thus a quantum entity, be it an insubstantial photon or an electron graced with mass, can behave at one time as a particle, at another as a wave. Bohr’s notion of complementarity is scarcely a scientific theory at all, but rather, another characteristic expression of the Copenhagen affirmation that ‘this is just how things are’: it is not that there is some deeper behaviour that sometimes looks ‘wave-like’ and sometimes ‘particle-like’, but rather, this duality is an intrinsic aspect of nature. However one feels about Bohr’s postulate, there was little justification for his enthusiastic extension of the complementarity principle to biology, law, ethics and religion. Such claims made quantum physics a political matter.
The same is true for Heisenberg’s insistence that, via the uncertainty principle, ‘the meaninglessness of the causal law is definitely proved’. He tried to persuade philosophers to come to terms with this abolition of determinism and causality, as though this had moreover been established not as an (apparent) corollary of quantum theory but as a general law of nature.
This quasi-mystical perspective on quantum theory that the physicists appeared to encourage was attuned to a growing rejection, during the Weimar era, of what were viewed as the maladies of materialism: commercialism, avarice and the encroachment of technology. Science in general, and physics in particular, were apt to suffer from association with these supposedly degenerate values, making them inferior in the eyes of many intellectuals to the noble aspirations of art and ‘higher culture’. While it would be too much to say that an emphasis on the metaphysical aspects of quantum mechanics was cultivated in order to rescue physics from such accusations, that desideratum was not overlooked. Historian Paul Forman has argued that the quantum physicists explicitly accommodated their interpretations to the prevailing social ethos of the age, in which ‘the concept – or the mere word – “causality” symbolized all that was odious in the scientific enterprise’. In his 1918 book Der Untergang des Abendlandes (The Decline of the West), the German philosopher and historian Oswald Spengler more or less equated causality with physics, while making it a concept deserving of scorn and standing in opposition to life itself. Spengler saw in modern physicists’ doubts about causality a symptom of what he regarded as the moribund nature of science itself. Here he was thinking not of quantum theory, which was barely beginning to reach the public consciousness at the end of the First World War, but of the probabilistic microscopic theory of matter developed by the Scottish physicist James Clerk Maxwell and the Austrian Ludwig Boltzmann, which had already renounced claims to a precise, deterministic picture of atomic motions.
Spengler’s book was read and discussed throughout the German academic world. Einstein and Born knew it, as did many of the other leading physicists, and Forman believes that it fed the impulse to realign modern physics with the spirit of the age, leading theoretical physicists and applied mathematicians to ‘denigrat[e] the capacity of their discipline to attain true, or even valuable, knowledge’. They began to speak of science as an essentially spiritual enterprise, unconnected to the demands and depredations of technology but, as Wilhelm Wien put it, arising ‘solely from an inner need of the human spirit’. Even Einstein, who deplored the rejection of causality that he saw in many of his colleagues, emphasized the roles of feeling and intuition in science.
In this way the physicists were attempting to reclaim some of the prestige that science had lost to the neo-Romantic spirit of the times. Causality was a casualty. Only once we have ‘liberation from the rooted prejudice of absolute causality’, said Schrödinger in 1922, would the puzzles of atomic physics be conquered. Bohr even spoke of quantum theory having an ‘inherent irrationality’. And as Forman points out, many physicists seemed to accept these notions not with reluctance or pain but with relief and with the expectation that they would be welcomed by the public. He does not see in all this simply an attempt to ingratiate physics to a potentially hostile audience, but rather, an unconscious adaptation to the prevailing culture, made in good faith. When Einstein expressed his reservations about the trend in a 1932 interview with the Irish writer James Gardner Murphy, Murphy responded that even scientists surely ‘cannot escape the influence of the milieu in which they live’. And that milieu was anti-causal.
Equally, the fact that both quantum theory and relativity were seen to be provoking crises in physics was consistent with the widespread sense that crises pervaded Weimar culture – economically, politically, intellectually and spiritually. ‘The idea of such a crisis of culture’, said the French politician Pierre Viénot in 1931, ‘belongs today to the solid stock of the common habit of thought in Germany. It is a part of the German mentality.’ The applied mathematician Richard von Mises spoke of ‘the present crisis in mechanics’ in 1921; another mathematician, Hermann Weyl (one of the first scientists openly to question causality) claimed there was a ‘crisis in the foundations of mathematics’, and even Einstein wrote for a popular audience on ‘the present crisis in theoretical physics’ in 1922. (Experimental physicist Johannes Stark’s 1921 book The Present Crisis in German Physics used the same trope but spoke to a very different perception: that his kind of physics was being eclipsed by an abstract, degenerate form of theoretical physics – see page 91.) One has the impression that these crises were not causing much dismay, but rather, reassured physicists that they were in the same tumultuous flow as the rest of society.
This was, however, a dangerous game. Some outsiders drew the conclusion that quantum mechanics pronounced on free will, and it was only a matter of time before the new physics was being enlisted for political ends. Some even managed to claim that it vindicated the policies of the National Socialists.
Moreover, if physics was being in some sense shaped to propitiate Spenglerism, it risked seeming to endorse also Spengler’s central thesis of relativism: that not only art and literature but also science and mathematics are shaped by the culture in which they arise and are invalid and indeed all but incomprehensible outside that culture. It is tempting to find here a presentiment of the ‘Aryan physics’ propagated by Nazi sympathizers in the 1930s (see Chapter 6), which contrasted healthy Germanic science with decadent, self-serving Jewish science. And given Spengler’s nationalism, rejection of Weimar liberalism, support for authoritarianism and belief in historical destiny, it is no surprise that he was initially lauded by the Nazis, especially Joseph Goebbels, nor that he voted for Hitler in 1932. (Spengler was too much of an intellectual for his advocacy to survive close contact. After meeting Hitler in 1933, he distanced himself from the Nazis’ vulgar posturing and anti-Semitism, and was no favourite of the Reich by the time he died in Munich in 1936.)
One way or another, then, by the 1920s physics was becoming freighted with political implications. Without intending it, the physicists themselves had encouraged this. But they hadn’t grasped – were perhaps unable to grasp – what it would soon imply.
To read more about Serving the Reich, click here.
By: Kristi McGuire
On the Run: Fugitive Life in an American City chronicles the effects the War on Drugs levied on one inner-city Philadelphia neighborhood and its largely African American population. Based on Alice Goffman’s six years of ethnographic fieldwork as a participant-observer in the community, the book considers how a cycle of presumed criminality engendered by pervasive policing transforms the friendships, associations, and daily lives of a group of residents, small-time drug dealers and everyday persons alike, into nodes in a network of surveillance operating 24 hours a day, and reckons with the very human costs involved. The book was recently named to Publishers Weekly’s list of the Best Nonfiction of 2014, after garnering praise from both the New Yorker and the New York Times Book Review.
You can read an excerpt from the book, “The Art of Running,” here.
To read more, click here.
Lee Alan Dugatkin’s Mr. Jefferson and the Giant Moose, our free e-book for November, reconsiders the crucial supporting role played by a moose carcass in Jeffersonian democracy.
Thomas Jefferson—author of the Declaration of Independence, US president, and ardent naturalist—spent years countering the French naturalist Buffon’s theory of American degeneracy. His Notes on Virginia systematically and scientifically dismantled Buffon’s case through a series of tables and equally compelling writing on the nature of his home state. But the book did little to counter the arrogance of the French and hardly satisfied Jefferson’s quest to demonstrate that his young nation was every bit the equal of a well-established Europe. Enter the giant moose.
The American moose, which Jefferson claimed was so enormous a European reindeer could walk under it, became the cornerstone of his defense. Convinced that the sight of such a magnificent beast would cause Buffon to revise his claims, Jefferson had the remains of a seven-foot ungulate shipped first class from New Hampshire to Paris. Unfortunately, Buffon died before he could make any revisions to his Histoire Naturelle, but the legend of the moose makes for a fascinating tale about Jefferson’s passion to prove that American nature deserved prestige.
In Mr. Jefferson and the Giant Moose, Lee Alan Dugatkin vividly recreates the origin and evolution of the debates about natural history in America and, in so doing, returns the prize moose to its rightful place in American history.
To download your free copy, click here.
Welcome to the third annual #UPWeek blog tour—we’re excited to contribute under Monday’s umbrella theme, “Collaboration,” with a post on the Turabian Teacher Collaborative. To get the ball rolling and further the mission, here’s where you can find other university presses, big and small, far and wide, posting on similarly synergetic projects today: the University Press of Colorado on veterinary immunology, the University of Georgia Press on the New Georgia Encyclopedia Project, Duke University Press on Eben Kirksey’s The Multispecies Salon, the University of California Press on Dr. Paul Farmer and Dr. Jim Yong Kim’s work on the Ebola epidemic in West Africa, the University of Virginia Press on their project Chasing Shadows (a special e-book and website devoted to Watergate-era Oval Office conversations), McGill-Queen’s University Press on the online gallery Landscape Architecture in Canada, Texas A&M University Press on a new consumer health advocacy series, Project MUSE on their history of collaboration, and Yale University Press on their Museum Quality Books series. Remember to follow #UPWeek on Twitter, and read on after the jump for the story of the Turabian Teacher Collaborative’s first two years.
One of the foundational principles of Kate Turabian’s classic writing guides is that research creates a community between writers and readers. Professors Joseph Williams and Gregory Colomb put the principle of a community into action when they collaborated several years ago to adapt Turabian’s guides for a new generation of student researchers. During their writing process, they circulated and reworked each other’s contributions so much that, “by the end of the process, no one could quite remember who had drafted what.”
Channeling the spirit of this “rotational” writing process, the Turabian Teacher Collaborative adds high school teachers and a university press into the mix of colleagues working to bring Turabian’s principles to a new audience. The University of Chicago Press developed this project with University of Iowa English education professors Bonnie Sunstein and Amy Shoultz, after determining that much in Turabian’s Student’s Guide to Writing College Papers aligns with the Common Core State Standards for English Language Arts. Sunstein and Shoultz suggested that the Press begin by inviting high school teachers to test the effectiveness of Turabian’s book, both at helping high schools meet the Common Core standards and at helping students become college ready.
To strategize for the project’s pilot year, participating teachers—from urban, rural, and suburban high schools in California, Illinois, Massachusetts, and Iowa—convened for a workshop at the Press in the summer of 2013. They all left equipped with a set of books and free classroom resources drawn from the book, including topic sheets and ELA Common Core–aligned lesson plans. Following the workshop, this team of teachers integrated these materials into their curricula and exchanged resources and insights on their experiences throughout the year. Later this month, several members of the Turabian Teacher Collaborative will share what they have learned with teachers from across the country at a workshop following the NCTE annual convention in Washington, DC.
And, of course, high school students are now part of the collaboration and its community of researchers, as they envision the needs of readers by engaging in peer review at every step of the writing process. As participating teacher Deb Aldrich of Kennedy High School in Cedar Rapids, Iowa, said of her students’ response to the book: “[They] acted as sounding boards, polite disagree-ers, questioners, cheerleaders, and empathizers. They would come to class and ask if we were meeting in our research groups today, which showed how much they valued participating in a real shared research conversation, not just an imaginary one in their heads. They acted and felt like academic researchers!”
The Press plans to use feedback like this to develop a teachers’ resource guide this year, as well as additional resources for research writing in future high school classrooms. As the collaborative moves into its second year, it is expanding to include high school teachers from across the disciplines who teach research and academic writing skills. Are you one of them? For more information, e-mail email@example.com.
(in the spirit of #UPWeek, this post was collaboratively generated by University of Chicago Press staff members working with the TTC)
To learn more about the TTC project, click here.
Stay tuned for more from #UPWeek’s blog tour!
Today is day two of #UPWeek, which considers the past, present, and future of scholarly publishing through pictures. Among posts dotting the web, you’ll find: a photographic history of Indiana University Press, documentation of 1950s and ’60s print publishing at Stanford University Press, a photo collage from Fordham University Press, a Q & A with art director Martha Sewell and short film of author and illustrator Val Kells at Johns Hopkins University Press, and images of the University Press of Florida through the years. With these surveys in mind, we’re happy to share a few snapshots from our own recent launch of Victoria Tennant’s Irina Baronova and the Ballets Russes de Monte Carlo at Peter Fetterman’s Gallery in Santa Monica, California (including a cameo by Norman Lear). Don’t forget to follow #UPWeek on Twitter to keep up with the AAUP’s celebration of university presses’ blogging culture.
To read more about Irina Baronova and the Ballets Russes de Monte Carlo, click here.
Today is the last day of #UPWeek, and with it goes another successful tour of university press blogs. On that note, Friday’s theme is one of following: What are your must-reads on the internet? Whom do you follow on social media? Which venues and scholars are doing it right? University of Illinois Press tracks the geopolitics of imagination, University of Minnesota Press (hi, Maggie!) author John Hartigan explains the foibles of scholars on social media, University of Nebraska Press delivers another social media primer, NYU Press teaches us Key Words in Cultural Studies, Island Press tracks the interests of its editors, and Columbia University Press talks their University Press Round-Up.
Us? We’re running with the idea that history and progress aren’t synonymously bound. The way forward with media is often the way back or through, or at least a trip to the past demonstrates that the seeds for new forms of mediation are (apologies for this) always already planted. I realize this makes Follow Friday a bit of Throwback Thursday, but here’s a great photo from UCP’s Alan Thomas that has been making the rounds on Twitter of the very first e-book we published. Richard A. Lanham’s The Electronic Word required 2 MB of RAM and a floppy disk reader, yet in its “out-of-timeness,” we can already see the othering of the book-as-object and our desire to store information in as portable (and small) a capacity as possible. Kindle Fire quivers. We keep moving.
For more on #UPWeek, follow the hash-tag on Twitter.