The Chicago Blog

Publicity news from the University of Chicago Press including news tips, press releases, reviews, and intelligent commentary.
1. Terror and Wonder: our free ebook for September


For nearly twenty years now, Blair Kamin of the Chicago Tribune has explored how architecture captures our imagination and engages our deepest emotions. A winner of the Pulitzer Prize for criticism and writer of the widely read Cityscapes blog, Kamin treats his subjects not only as works of art but also as symbols of the cultural and political forces that inspire them. Terror and Wonder gathers the best of Kamin’s writings from the past decade along with new reflections on an era framed by the destruction of the World Trade Center and the opening of the world’s tallest skyscraper.

Assessing ordinary commercial structures as well as head-turning designs by some of the world’s leading architects, Kamin paints a sweeping but finely textured portrait of a tumultuous age torn between the conflicting mandates of architectural spectacle and sustainability. For Kamin, the story of our built environment over the past ten years is, in tangible ways, the story of the decade itself. Terror and Wonder considers how architecture has been central to the main events and crosscurrents in American life since 2001: the devastating and debilitating consequences of 9/11 and Hurricane Katrina; the real estate boom and bust; the use of over-the-top cultural designs as engines of civic renewal; new challenges in saving old buildings; the unlikely rise of energy-saving, green architecture; and growing concern over our nation’s crumbling infrastructure.

A prominent cast of players—including Santiago Calatrava, Frank Gehry, Helmut Jahn, Daniel Libeskind, Barack Obama, Renzo Piano, and Donald Trump—fills the pages of this eye-opening look at the astounding and extraordinary ways that architecture mirrors our values—and shapes our everyday lives.

***

“Blair Kamin, Pulitzer Prize-winning architecture critic for the Chicago Tribune, thoughtfully and provocatively defines the emotional and cultural dimensions of architecture. He is one of the nation’s leading voices for design that uplifts and enhances life as well as the environment. His new book, Terror and Wonder: Architecture in a Tumultuous Age, assembles some of his best writing from the past ten years.”—Huffington Post
Download your free copy of Terror and Wonder here.

2. Chicago 1968, the militarization of police, and Ferguson


John Schultz, author of The Chicago Conspiracy Trial and No One Was Killed: The Democratic National Convention, August 1968, recently spoke with WMNF about the history of police militarization, in light of both recent events in Ferguson, Missouri, and the forty-sixth anniversary (this week) of the 1968 Democratic National Convention in Chicago. Providing historical and social context to the ongoing “debate over whether the nation’s police have become so militarized that they are no longer there to preserve and protect but have adopted an attitude of ‘us’ and ‘them,’” Schultz related his eyewitness accounts of that collision of 22,000 police and members of the National Guard with demonstrators in Chicago to the armed forces that swarmed around mostly peaceful protesters in Ferguson these past few weeks.

The selection below, drawn in part from a larger excerpt from No One Was Killed, relays some of that primary account of what happened in Grant Park nearly half a century ago. The full excerpt can be accessed here.

***

The cop bullhorn bellowed that anyone in the Park, including newsmen, were in violation of the law. Nobody moved. The newsmen did not believe that they were marked men; they thought it was just a way for the Cops to emphasize their point. The media lights were turned on for the confrontation. Near the Stockton Drive embankment, the line of police came up to the Yippies and the two lines stood there, a few steps apart, in a moment of meeting that was almost formal, as if everybody recognized the stupendous seriousness of the game that was about to begin. The kids were yelling: “Parks belong to the people! Pig! Pig! Oink, oink!” In The Walker Report, the police say that they were pelted with rocks the moment the media lights “blinded” them. I was at the point where the final, triggering violence began, and friends of mine were nearby up and down the line, and at this point none of us saw anything thrown. Cops in white shirts, meaning lieutenants or captains, were present. It was the formality of the moment between the two groups, the theatrical and game nature showing itself on a definitive level, that was awesome and terrifying in its implications.

It is legend by now that the final insult that caused the first wedge of cops to break loose upon the Yippies, was “Your mother sucks dirty cock!” Now that’s desperate provocation. The authors of The Walker Report purport to believe that the massive use of obscenities during Convention Week was a major form of provocation, as if it helped to explain “irrational” acts. In the very first sentence of the summary at the beginning of the Report, they say “… the Chicago Police were the targets of mounting provocation by both word and act. Obscene epithets …” etcetera. One wonders where the writers of The Walker Report went to school, were they ever in the Army, what streets do they live on, where do they work? They would also benefit by a trip to a police station at night, even up to the bull-pen, where the naked toilet bowl sits in the center of the room, and they could listen and find out whether the cops heard anything during Convention Week that was unfamiliar to their ears or tongue. It matters more who cusses you, and does he know you well enough to hit home to galvanize you into destructive action. It also matters whether you regard a club on the head as an equivalent response to being called a “mother fucking Fascist pig.”

The kids wouldn’t go away and then the cops began shoving them hard up the Stockton Drive embankment and then hitting with their clubs. “Pigs! Pigs! Pigs! Fascist pig bastards!” A cop behind me—I was immediately behind the cop line facing the Yippies—said to me and a few others, in a sick voice, “Move along, sir,” as if he foresaw everything that would happen in the week to come. I have thought again and again about him and the tone of his voice. “Oink, oink,” came the taunts from the kids. The cops charged. A boy trapped against the trunk of a car by a cop on Stockton Drive had the temerity to hit back with his bare fists and the cop tried to break every bone in his body. “If you’re newsmen,” one kid screamed, “get that man’s number!” I tried but all I saw was his blue shirt—no badge or name tag—and he, hearing the cries, stepped backward up onto the curb as a half-dozen cops crammed around him and carried him off into the melée, and I was carried in another direction. A cop swung and smashed the lens of a media camera. “He got my lens!” The cameraman was amazed and offended. The rest of the week the cops would cram around a fellow cop who was in danger of being identified and carry him away, and they would smash any camera that they saw get an incriminating picture. The cops slowed, crossing the grass toward Clark Street, and the more daring kids sensed the loss of contact, loss of energy, and went back to meet the skirmish line of cops. The cops charged again up to the sidewalk on the edge of the Park.

It was thought that the cops would stop along Clark Street on the edge of the Park. For several minutes, there was a huge, loud jam of traffic and people in Clark Street, horns and voices. “Red Rover, Red Rover, send Daley right over!” Then the cops crossed the street and lined up on the curb on the west side, outside curfew territory. Now they started to make utterly new law as they went along—at the behest of those orders they kept talking about. The crowd on the sidewalk, excited but generally peaceable, included a great many bystanders and Lincoln Park citizens. Now came mass cop violence of unmitigated fury, descriptions of which become redundant. No status or manner of appearance or attitude made one less likely to be clubbed. The Cops did us a great favor by putting us all in the same boat. A few upper middleclass white men said they now had some idea of what it meant to be on the other end of the law in the ghetto.

At the corner of Menomenee and Clark, several straight, young people were sitting on their doorsteps to jeer at the Yippies. The cops beat them, too, and took them by the backs of the necks and jerked them onto the sidewalk. A photographer got a picture of a terrible beating here and a cop smashed his camera and beat the photographer unconscious. I saw a stocky cop spring out of the pavement swinging his club, smashing a media man’s movie camera into two pieces, and the media man walked around in the street holding up the pieces for everybody to see, including other cameras, some of which were also smashed. Cops methodically beat one man, summoned an ambulance that was whirling its light out in the traffic jam, shoved the man into it, and rapped their clubs on the bumper to send it on its way. There were people caught in this charge, who had been in civil rights demonstrations in the South in the early Sixties, who said this was the time that they had feared for their lives.

The first missiles thrown Sunday night at cops were beer-cans, then a few rocks, more rocks, a bottle or two, more bottles. Yippies and New Left kids rolled cars into the side streets to block access for the cop attack patrols. The traffic-jam reached wildly north and south, and everywhere Yippies, working out in the traffic, were getting shocked drivers to honk in sympathy. One kid lofted a beer-can at a patrol car that was moving slowly; he led the car perfectly and the beer-can hit on the trunk and stayed there. The cops stopped the car and looked through their rear window at the beer-can on their trunk. They started to back up toward the corner at Wisconsin from which the can was thrown, but they were only two and the Yippies were many, so they thought better of it and drove away. There were kids picking up rocks and other kids telling them to put the rocks down.

At Clark and Wisconsin, a few of the “leaders”—those who trained parade marshalls and also some of the conventionally known and sought leaders—who had expected a confrontation of sorts in Chicago, were standing on a doorstep with their hands clipped together in front of their crotches as they stared balefully out at the streets, trying to look as uninvolved as possible. “Beautiful, beautiful,” one was saying, but they didn’t know how the thing had been delivered or what was happening. They had even directly advised against violent action, and had been denounced for it. Their leadership was that, in all the play and put-on of publicity before the Convention, they had contributed to the development of a consciousness of a politics of confrontation and social disruption. An anarchist saw his dream come true though he was only a spectator of the dream; the middle-class man saw his nightmare. A radioman, moving up and down the street, apparently a friend of Tom Hayden, stuck his mike up the stairs and asked Hayden to make some comments. Hayden, not at all interested in making a statement, leaned down urgently, chopping with his hand, and said, “Hey, man, turn the mike off, turn the mike off.” Hayden, along with Rubin, was a man the Chicago cops deemed a crucial leader and they would have sent them both to the bottom of the Chicago River, if they had thought they could get away with it. The radioman turned the mike off. Hayden said, “Is it off?” The radioman said yes. Hayden said, “Man, what’s going on down there?” The radioman could only say that what was going on was going on everywhere.

Read more about No One Was Killed: The Democratic National Convention, August 1968 here.

3. Peter Bacon Hales (1950–2014)


University of Chicago Press author, professor emeritus at the University of Illinois at Chicago, dedicated Americanist, photographer, writer, cyclist, and musician Peter Bacon Hales (1950–2014) died earlier this week, near his home in upstate New York. Once a student of the photographers Garry Winogrand and Russell Lee, Hales obtained his MA and PhD from the University of Texas at Austin, and launched an academic career around American art and culture that saw him take on personal and collaborative topics as diverse as the history of urban photography, the Westward Expansion of the United States, the Manhattan Project, Levittown, contemporary art, and the geographical landscapes of our virtual and built worlds. He began teaching at UIC in 1980, and went on to become director of their American Studies Institute. His most recent book, Outside the Gates of Eden: The Dream of America from Hiroshima to Now, was published by the University of Chicago Press earlier this year.

***

From Outside the Gates of Eden:

 

“We live, then, second lives, and third, and fourth—protean lives, threatened by the lingering traces of our mistakes, but also amenable to self-invention and renewal. . . . The cultural landscape [of the future] is hazy:  it could be a desert or a garden, or something in between. It is and will be populated by Americans, or by those infected by the American imagination: a little cynical, skeptical, self-righteous, self-deprecating, impatient, but interested, engaged, argumentative, observant of the perilous beauty of a landscape we can never possess but yearn to be a part of, even as we are restive, impatient to go on. It’s worth waiting around to see how it turns out.”

4. The State of the University Press


Recently, a spate of articles appeared surrounding the future of the university press. Many of these, of course, focused on the roles institutional library sales, e-books, and shifting concerns around tenure play in determining the strictures and limitations to be overcome as scholarly publishing moves forward in an increasingly digital age. Last week, Book Business published a profile on what goes on behind the scenes as discussions about these issues shape, abet, and occasionally undermine the relationships between the university press, its supporting institution, its constituents, and the consumers and scholars to whom it markets its books. Featuring commentary from directors at the University of North Carolina Press, the University of California Press, and Johns Hopkins University Press, the piece also included a conversation with our own director, Garrett Kiely:

From Dan Eldridge’s “The State of the University Presses” at Book Business:

Talk to University of Chicago Press director Garrett Kiely, who also sits on the board of the Association of American University Presses (AAUP), and he’ll tell you that many of the presses that are struggling today — financially or otherwise — are dealing with the same sort of headaches being suffered by their colleagues in the commercial world. And yet there is one major difference: “The commercial imperative,” says Kiely, “has never been a requirement for many of these [university] presses.”

Historically, Kiely explains, an understanding has existed between university presses and their affiliated schools that the presses are publishing primarily to disseminate scholarly information. That’s a valuable service, you might say, that feeds the public good, regardless of profit. “But at the same time,” he adds, “as everything gets tight [regarding] the universities and the amount of money they spend on supporting their presses, those things get looked at very carefully.”

As a result, Kiely says, there’s an increasingly strong push today to align the interests of a press with its university. At the University of Chicago, for instance, both the institution and its press are well known for their strong sociology offerings. But because more and more library budgets today are going toward the scientific fields, a catalog filled with even the strongest of humanities titles isn’t necessarily the best thing for a press’ bottom line.

The shift to digital, in particular, was a pivot point for much of Kiely’s discussion, which went on to consider some of the more successful—as well as awkward—endeavors embraced by the press as part of a publishing culture plainly faced with the need to experiment with new modalities in order to meet the interlinked demands of expanding scholarship and changing technology. Today, the once comfortable terrain of academic publishing is changing with increasing rapidity, which, as the article asserts, may leave “more questions than answers.” As Kiely put it:

“I think the speed with which new ideas can be tested, and either pursued or abandoned is very different than it was five years ago. . . . We’ve found you can very quickly go down the rabbit hole. And then you start wondering, ‘Is there a market for this? Is this really the way we should be going?’”

To read more from “The State of the University Presses,” click here.

 

5. Wikipedia and the Politics of Openness

When you think about Wikipedia, you might not immediately envision it as a locus for a political theory of openness—and that might well be due to a cut-and-paste utopian haze that masks the site’s very real politicking around issues of shared decision-making, administrative organization, and the push for and against transparency. In Wikipedia and the Politics of Openness, forthcoming this December, Nathaniel Tkacz cuts through the glow and establishes how issues integral to the concept of “openness” play themselves out in the day-to-day reality of Wikipedia’s existence. Recently, critic Alan Liu, whose prescient scholarship on the relationship between our literary/historical and technological imaginations has shaped much of the humanities’ turn to new media, endorsed the book via Twitter.

With that in mind, the book’s jacket copy frames Tkacz’s argument:

Few virtues are as celebrated in contemporary culture as openness. Rooted in software culture and carrying more than a whiff of Silicon Valley technical utopianism, openness—of decision-making, data, and organizational structure—is seen as the cure for many problems in politics and business.

 But what does openness mean, and what would a political theory of openness look like? With Wikipedia and the Politics of Openness, Nathaniel Tkacz uses Wikipedia, the most prominent product of open organization, to analyze the theory and politics of openness in practice—and to break its spell. Through discussions of edit wars, article deletion policies, user access levels, and more, Tkacz enables us to see how the key concepts of openness—including collaboration, ad-hocracy, and the splitting of contested projects through “forking”—play out in reality.

The resulting book is the richest critical analysis of openness to date, one that roots media theory in messy reality and thereby helps us move beyond the vaporware promises of digital utopians and take the first steps toward truly understanding what openness does, and does not, have to offer.

Read more about Wikipedia and the Politics of Openness, available December 2014, here.

6. Against Prediction: #Ferguson

 


Photo by: Scott Olson, Getty Images, via Associated Press

From Bernard E. Harcourt’s Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age

***

The ratchet [also] contributes to an exaggerated general perception in the public imagination and among police officers of an association between being African American and being a criminal—between, in Dorothy Roberts’s words, “blackness and criminality.” As she explains,

One of the main tests in American culture for distinguishing law-abiding from lawless people is their race. Many, if not most, Americans believe that Black people are “prone to violence” and make race-based assessments of the danger posed by strangers they encounter. The myth of Black criminality is part of a belief system deeply embedded in American culture that is premised on the superiority of whites and inferiority of Blacks. Stereotypes that originated in slavery are perpetuated today by the media and reinforced by the huge numbers of Blacks under criminal justice supervision. As Jody Armour puts it, “it is unrealistic to dispute the depressing conclusion that, for many Americans, crime has a black face.”

Roberts discusses one extremely revealing symptom of the “black face” of crime, namely, the strong tendency of white victims and eyewitnesses to misidentify suspects in cross-racial situations. Studies show a disproportionate rate of false identifications when the person identifying is white and the person identified is black. In fact, according to Sheri Lynn Johnson, “this expectation is so strong that whites may observe an interracial scene in which a white person is the aggressor, yet remember the black person as the aggressor.” The black face has become the criminal in our collective subconscious. “The unconscious association between Blacks and crime is so powerful that it supersedes reality,” Roberts observes; “it predisposes whites to literally see Black people as criminals. Their skin color marks Blacks as visibly lawless.”

This, in turn, further undermines the ability of African Americans to obtain employment or pursue educational opportunities. It has a delegitimizing effect on the criminal justice system that may encourage disaffected youths to commit crime. It may also erode community-police relations, hampering law enforcement efforts as minority community members become less willing to report crime, to testify, and to convict. The feedback mechanisms, in turn, accelerate the imbalance in the prison population and the growing correlation between race and criminality.

And the costs are deeply personal as well. Dorothy Roberts discusses the personal harm poignantly in a more private voice in her brilliant essay, Race, Vagueness, and the Social Meaning of Order-Maintenance Policing, sharing with the reader a conversation that she had with her sixteen-year-old son, who is African American:

In the middle of writing this Foreword, I had a revealing conversation with my sixteen-year-old son about police and loitering. I told my son that I was discussing the constitutionality of a city ordinance that allowed the police to disperse people talking on the sidewalk if any one of them looked as if he belonged in a gang. My son responded apathetically, “What’s new about that? The police do it all the time, anyway. They don’t like Black kids standing around stores where white people shop, so they tell us to move.” He then casually recounted a couple of instances when he and his friends were ordered by officers to move along when they gathered after school to shoot the breeze on the streets of our integrated community in New Jersey. He seemed resigned to this treatment as a fact of life, just another indignity of growing up Black in America. He was used to being viewed with suspicion: being hassled by police was similar to the way store owners followed him with hawk eyes as he walked through the aisles of neighborhood stores or women clutched their purses as he approached them on the street.

Even my relatively privileged son had become acculturated to one of the salient social norms of contemporary America: Black children, as well as adults, are presumed to be lawless, and that status is enforced by the police. He has learned that as a Black person he cannot expect to be treated with the same dignity and respect accorded his white classmates. Of course, Black teens in inner-city communities are subjected to more routine and brutal forms of police harassment.

To read more about Against Prediction, click here.

 

7. Tom Koch on Ebola and the new epidemic


“Ebola and the new epidemic” by Tom Koch

Mindless but intelligent, viruses and bacteria want what we all want: to survive, evolve, and then, to procreate. That’s been their program since before there were humans. From the first influenza outbreak around 2500 BC to the current Ebola epidemic, we have created the conditions for microbial evolution, hosted their survival, and tried to live with the results.

These are early days for the Ebola epidemic, which was for some years constrained to a few isolated African sites, but has now advanced from its natal place to several countries, with outbreaks elsewhere. Since the first days of influenza, this has always been the viral way. Born in a specific locale, the virus hitches itself to a traveler who brings it to a new and fertile field of humans. The “epidemic curve,” as it is called, starts slowly but then, as the virus spreads and travels, spreads and travels, the numbers mount.

Hippocrates provided a fine description of an influenza pandemic in 500 BC, one that reached Greece from Asia. The Black Death that hastened the end of the Middle Ages traveled with Crusaders and local traders, infecting the then-known world. Cholera (with a mortality rate of over thirty percent) started in India in 1818 and by 1832 had infected Europe and North America.

Since the end of the seventeenth century, we’ve mapped these spreads in towns and villages located in provinces and nations. The first maps were of plague, but in the eighteenth century that scourge was replaced in North American minds by yellow fever, which, in turn, was replaced by the global pandemic of cholera (and then at the end of the century came polio).

In attempting to combat these viral outbreaks, the question is one of scale. Early cases are charted on the streets of a city, the homes of a town. Can they be quarantined and those infected separated? And then, as the epidemic grows, the mapping pulls back to the nations in which those towns are located as travelers, infected but as yet not symptomatic, move from place to place. Those local streets become bus and rail lines that become, as a pandemic begins, airline routes that gird the world.

There are lots of models for us to follow here. In the 1690s, Filippo Arrieta mapped a four-stage containment program that attempted to limit the passage of plague through the province of Bari, Italy, where he marshaled the army to create containment circles.

Indeed, quarantines have been employed, often with little success, since the days of plague. The sooner they are put in place, the better they seem to be. They are not, however, foolproof.

Complacently, we have assumed that our expertise at genetic profiling would permit rapid identification and the speedy production of vaccines or at least curative drugs. We thought we were beyond viral attack. Alas, our microbial friends are faster than that. By the time we’ve genetically typed the virus and found a biochemical to counter it, it will have, most likely, been and gone. Epidemiologists talk about the “epidemic curve” as a natural phenomenon that begins slowly, rises fiercely, and then ends.

We have nobody to blame but ourselves.

Four factors promote the viral and bacterial evolution that results in pandemic diseases and their spread. First, there are the deforestation and man-made ecological changes that upset natural habitats, forcing microbes to seek new homes. Second, urbanization brings people together in dense fields of habitation that become the microbe’s new hosts—when those people live in poverty, the field is even better. Third, trade provides travelers to carry microbes, one way or another, to new places. And, fourth and finally, war always promotes the spread of disease among folk who are poor and stressed.


We have created this perfect context in recent decades and the result has been a fast pace of viral and bacterial evolution to meet the stresses we impose and the opportunities we present as hosts. For their part, diseases must balance between virulence—killing the person quickly—and longevity. The diseases that kill quickly usually modify over time. They need their hosts, or something else, to help them move to new fields of endeavor. New diseases like Ebola are aggressive adolescents seeking the fastest, and thus deadliest, exchanges.

Will it become an “unstoppable” pandemic? Probably not, but we do not know for certain; we don’t know how Ebola will mutate in the face of our plans for resistance.

What we do know is that, as anxiety increases, the niceties of medical protocol and ethics developed over the past fifty years will fade away. There will now be heated discussions surrounding “ethics” and “justice,” as well as practical questions of quarantine and care. Do we try experimental drugs without the normal safety protocol? (The answer will be … yes, sooner if not later.) If something works and there is not enough for all, how do we decide to whom it is to be given first?

For those like me who have tracked diseases through history and mapped their outbreaks in our world, Ebola, or something like it, is what we have feared would come. And when Ebola is contained it will not be the end. We’re in a period of rapid viral and bacterial evolution brought on by globalization and its trade practices. Our microbial friends will, almost surely, continue to take advantage.

***

Tom Koch is a medical geographer and ethicist, and the author of a number of papers in the history of medicine and disease. His most recent book in this field is Disease Maps: Epidemics on the Ground, published by the University of Chicago Press.

 

8. Carl Zimmer on the Ebolapocalypse


Carl Zimmer is one of our most recognizable—and acclaimed—popular science journalists. Not only have his long-standing New York Times column, “Matter,” and his National Geographic blog, The Loom, helped us to digest everything from the oxytocin in our bloodstream to the genetic roots of mental illness in humans and animals, they also have helped to circulate cutting-edge science and global biological concerns to broad audiences.

One of Zimmer’s areas of journalistic expertise is providing context for the latest research on virology, or, as the back cover of his book A Planet of Viruses explains: “How viruses hold sway over our lives and our biosphere, how viruses helped give rise to the first life-forms, how viruses are producing new diseases, how we can harness viruses for our own ends, and how viruses will continue to control our fate for years to come.”

It shouldn’t come as any surprise, then, that amid recent predictions of an Ebolapocalypse, Zimmer stands ready to help us interpret and qualify risk with regard to Ebola and the biotech industry’s push for experimental medications and treatments.


At The Loom, Zimmer shows a strand of the Ebola virus as an otherworldly cul-de-sac against a dappled pink light. As he writes, we still have no antiviral treatment for some of our nastiest viruses, including this one, as “viruses—which cause their own panoply of diseases from the common cold and the flu to AIDS and Ebola—are profoundly different from bacteria, and so they don’t present the same targets for a drug to hit.”

A Planet of Viruses takes this all a step further; in the chapter “Predicting the Next Plague: SARS and Ebola,” Zimmer advocates a cautionary—but not hysterical—approach:

There’s no reason to think that one of these new viruses will wipe out the human race. That may be the impression that movies like The Andromeda Strain give, but the biology of real viruses suggests otherwise. Ebola, for example, is a horrific virus that can cause people to bleed from all their orifices including their eyes. It can sweep from victim to victim, killing almost all its hosts along the way. And yet a typical Ebola outbreak only kills a few dozen people before coming to a halt. The virus is just too good at making people sick, and so it kills victims faster than it can find new ones. Once an Ebola outbreak ends, the virus vanishes for years.

With its profile rising daily, this most recent Ebola outbreak is primed to force us to rethink those assumptions—and to reflect on the commingling of key issues at the intersection of biology, technology, and Big Pharma. As an article in today’s New York Times about a possible experimental medication points out, therapeutic treatment of the virus is already plagued by this overlap:

How quickly the drug could be made on a larger scale will depend to some extent on the tobacco company Reynolds American. It owns the facility in Owensboro, Ky., where the drug is made inside the leaves of tobacco plants. David Howard, a spokesman for Reynolds, said it would take several months to scale up.

Regardless of the course, we’ll look to Zimmer to help us digest what this means in our daily lives—whether we’re assembling a list of novels for the Ebolapocalypse like The Millions, or standing in line at CVS for a pre-emptive vaccination.

Read more about A Planet of Viruses here.

 

9. Malcolm Gladwell profiles On the Run


From a profile of On the Run by Malcolm Gladwell in this week’s New Yorker:

It was simply a fact of American life. He saw the pattern being repeated in New York City during the nineteen-seventies, as the city’s demographics changed. The Lupollos’ gambling operations in Harlem had been taken over by African-Americans. In Brooklyn, the family had been forced to enter into a franchise arrangement with blacks and Puerto Ricans, limiting themselves to providing capital and arranging for police protection. “Things here in Brooklyn aren’t good for us now,” Uncle Phil told Ianni. “We’re moving out, and they’re moving in. I guess it’s their turn now.” In the early seventies, Ianni recruited eight black and Puerto Rican ex-cons—all of whom had gone to prison for organized-crime activities—to be his field assistants, and they came back with a picture of organized crime in Harlem that looked a lot like what had been going on in Little Italy seventy years earlier, only with drugs, rather than bootleg alcohol, as the currency of innovation. The newcomers, he predicted, would climb the ladder to respectability just as their predecessors had done. “It was toward the end of the Lupollo study that I became convinced that organized crime was a functional part of the American social system and should be viewed as one end of a continuum of business enterprises with legitimate business at the other end,” Ianni wrote. Fast-forward two generations and, with any luck, the grandchildren of the loan sharks and the street thugs would be riding horses in Old Westbury. It had happened before. Wouldn’t it happen again?

This is one of the questions at the heart of the sociologist Alice Goffman’s extraordinary new book, “On the Run: Fugitive Life in an American City.” The story she tells, however, is very different.

That story—an ethnography set in West Philadelphia that explores how the War on Drugs turned one neighborhood into a surveillance state—contextualizes the all-too-common toll the presumption of criminality takes on young black men, their families, and their communities. And unlike the story of organized crime in the twentieth century, which saw “respectability” as within reach of one or two generations, Goffman’s fieldwork demonstrates how the “once surveilled, always surveilled” mentality that polices our inner-city neighborhoods engenders a cycle of stigma, suppression, limitation, and control—and its very real human costs. At the same time, as with the shift of turf and contraband that characterized last century’s criminal underworld in New York, we see a pattern enforced demographically; the real question becomes whether or not its constituents have any chance—literally and figuratively—to escape.

Read more about On the Run here.

10. Our free e-book for August: For the Love of It


Wayne C. Booth (1921–2005) was the George M. Pullman Distinguished Service Professor Emeritus in English Language and Literature at the University of Chicago, one of the most renowned literary critics of his generation, and an amateur cellist who came to music later in life.  For the Love of It is a story not only of one intimate struggle between a man and his cello, but also of the larger conflict between a society obsessed with success and individuals who choose challenging hobbies that yield no payoff except the love of it. 

“Will be read with delight by every well-meaning amateur who has ever struggled.… Even general readers will come away with a valuable lesson for living: Never mind the outcome of a possibly vain pursuit; in the passion that is expended lies the glory.”—John von Rhein, Chicago Tribune

“If, in truth, Booth is an amateur player now in his fifth decade of amateuring, he is certainly not an amateur thinker about music and culture. . . . Would that all of us who think and teach and care about music could be so practical and profound at the same time.”—Peter Kountz, New York Times Book Review

“Wayne Booth, the prominent American literary critic, has written the only sustained study of the interior experience of musical amateurism in recent years, For the Love of It. [It] succeeds as a meditation on the tension between the centrality of music in Booth’s life, both inner and social, and its marginality. . . . It causes the reader to acknowledge the heterogeneity of the pleasures involved in making music; the satisfaction in playing well, the pride one takes in learning a difficult piece or passage or technique, the buzz in one’s fingertips and the sense of completeness with the bow when the turn is done just right, the pleasure of playing with others, the comfort of a shared society, the joy of not just hearing, but making, the music, the wonder at the notes lingering in the air.”—Times Literary Supplement
Download your copy here.

11. War’s Waste: Rehabilitation in World War I America


On the one-hundredth anniversary of World War I, it might be especially opportune to consider one of the unspoken inheritances of global warfare: soldiers who return home physically and/or psychologically wounded from battle. With that in mind, this excerpt from War’s Waste: Rehabilitation in World War I America contextualizes the relationship between rehabilitation—as the proper social and cultural response to those injured in battle—and the progressive reformers who pushed for it as a means to “rebuild” the disabled and regenerate the American medical industry.

***

Rehabilitation was thus a way to restore social order after the chaos of war by (re)making men into producers of capital. Since wage earning often defined manhood, rehabilitation was, in essence, a process of making a man manly. Or, as the World War I “Creed of the Disabled Man” put it, the point of rehabilitation was for each disabled veteran to become “a MAN among MEN in spite of his physical handicap.” Relying on the breadwinner ideal of manhood, those in favor of pension reform began to define disability not by a man’s missing limbs or by any other physical incapacity (as the Civil War pension system had done), but rather by his will (or lack thereof) to work. Seen this way, economic dependency—often linked overtly and metaphorically to womanliness—came to be understood as the real handicap that thwarted the full physical recovery of the veteran and the fiscal strength of the nation.

Much of what Progressive reformers knew about rehabilitation they learned from Europe. This was a time, as historian Daniel T. Rodgers tells us, when “American politics was peculiarly open to foreign models and imported ideas.” Germany, France, and Great Britain first introduced rehabilitation as a way to cope, economically, morally, and militarily, with the fact that millions of men had been lost to the war. Both the Allied and Central Powers instituted rehabilitation programs so that injured soldiers could be reused on the front lines and in munitions work in order to meet the military and industrial demands of a totalizing war. Eventually other belligerent nations—Australia, Canada, India, and the United States—adopted programs in rehabilitation, too, in order to help their own war injured recover. Although these countries engaged in a transnational exchange of knowledge, each nation brought its own particular prewar history and culture to bear on the meaning and construction of rehabilitation. Going into the Great War, the United States was known to have the most generous veterans pension system worldwide. This fact alone makes the story of the rise of rehabilitation in the United States unique.

To make rehabilitation a reality, Woodrow Wilson appointed two internationally known and informed Progressive reformers, Judge Julian Mack and Julia Lathrop, to draw up the necessary legislation. Both Chicagoans, Mack and Lathrop moved in the same social and professional circles, networks dictated by the effort to bring about reform at the state and federal level. In July 1917, Wilson tapped Mack to help “work out a new program for compensation and aid  . . . to soldiers,” one that would be “an improvement upon the traditional [Civil War] pension system.” With the help of Lathrop and Samuel Gompers, Mack drafted a complex piece of legislation that replaced the veteran pension system with government life insurance and a provision for the “rehabilitation and re-education of all disabled soldiers.” The War Risk Insurance Act, as it became known, passed Congress on October 6, 1917, without a dissenting vote.

Although rehabilitation had become law, the practicalities of how, where, and by whom it should be administered remained in question. Who should take control of the endeavor? Civilian or military leaders? Moreover, what kind of professionals should be in charge? Educators, social workers, or medical professionals? Neither Mack nor Lathrop considered the hospital to be the obvious choice. The Veterans Administration did not exist in 1917. Nor did its system of hospitals. Even in the civilian sector at the time, very few hospitals engaged in rehabilitative medicine as we have come to know it today. Put simply, the infrastructure and personnel to rehabilitate an army of injured soldiers did not exist at the time that America entered the First World War. Before the Great War, caring for maimed soldiers was largely a private matter, a community matter, a family matter, handled mostly by sisters, mothers, wives, and private charity groups.

The Army Medical Department stepped in quickly to fill the legislative requirements for rehabilitation. Within months of Wilson’s declaration of war, Army Surgeon General William C. Gorgas created the Division of Special Hospitals and Physical Reconstruction, putting a group of Boston-area orthopedic surgeons in charge. Gorgas turned to orthopedic surgeons for two reasons. First, a few of them had already begun experimenting with work and rehabilitation therapy in a handful of the nation’s children’s hospitals. Second, and more important, several orthopedists had already been involved in the rehabilitation effort abroad, assisting their colleagues in Great Britain long before the United States officially became involved in the war.

Dramatic changes took place in the Army Medical Department to accommodate the demand for rehabilitation. Because virtually every type of war wound had become defined as a disability, the Medical Department expanded to include a wide array of medical specialties. Psychiatrists, neurologists, and psychologists oversaw the rehabilitation of soldiers with neurasthenia and the newly designated diagnosis of shell shock. Ophthalmologists took charge of controlling the spread of trachoma and of providing rehabilitative care to soldiers blinded by mortar shells and poison gas. Tuberculosis specialists supervised the reconstruction of men who had acquired the tubercle bacillus during the war. And orthopedists managed fractures, amputations, and all other musculoskeletal injuries.

Rehabilitation legislation also led to the formation of entirely new, female-dominated medical subspecialties, such as occupational and physical therapy. The driving assumption behind rehabilitation was that disabled men needed to be toughened up, lest they become dependent on the state, their communities, and their families. The newly minted physical therapists engaged in this hardening process with zeal, convincing their male commanding officers that women caregivers could be forceful enough to manage, rehabilitate, and make an army of ostensibly emasculated men manly again. To that end, wartime physical therapists directed their amputee patients in “stump pounding” drills, having men with newly amputated legs walk on, thump, and pound their residual limbs. When not acting as drill sergeants, the physical therapists engaged in the arduous task of stretching and massaging limbs and backs, but only if such manual treatment elicited a degree of pain. These women adhered strictly to the “no pain, no gain” philosophy of physical training. To administer a light touch, “feel good” massage would have endangered their professional reputation (they might have been mistaken for prostitutes) while also undermining the process of remasculinization. Male rehabilitation proponents constantly reminded female physical therapists that they needed to deny their innate mothering and nurturing tendencies, for disabled soldiers required a heavy hand, not coddling.

The expansion of new medical personnel devoted to the long-term care of disabled soldiers created an unprecedented demand for hospital space. Soon after the rehabilitation legislation passed in Congress, the US Army Corps of Engineers erected hundreds of patient wards as well as entirely novel treatment areas such as massage rooms, hydrotherapy units, and electrotherapy quarters. Orthopedic appliance shops and “limb laboratories,” where physicians and staff mechanics engineered and repaired prosthetic limbs, also became a regular part of the new rehabilitation hospitals. Less than a year into the war, Walter Reed Hospital, in Washington, DC, emerged as the leading US medical facility for rehabilitation and prosthetic limb innovation, a reputation the facility still enjoys today.

The most awe-inspiring spaces of the new military rehabilitation hospitals were the “curative workshops,” wards that looked more like industrial workplaces than medical clinics. In these hospital workshops, disabled soldiers repaired automobiles, painted signs, operated telegraphs, and engaged in woodworking, all under the oversight of medical professionals who insisted that rehabilitation was at once industrial training and therapeutic agent. Although built in a time of war, a majority of these hospital facilities and personnel became a permanent part of veteran care in both army general hospitals and in the eventual Veterans Administration hospitals for the remainder of the twentieth century. Taking its cue from the military, the post–World War I civilian hospital began to construct and incorporate rehabilitation units into its system of care as well. Rehabilitation was born as a Progressive Era ideal, took shape as a military medical specialty, and eventually became a societal norm in the civilian sector.

To read more about War’s Waste, click here.

 

 

12. Philosophy in a Time of Terror


Giovanna Borradori conceived Philosophy in a Time of Terror: Dialogues with Jürgen Habermas and Jacques Derrida shortly after the attacks of September 11, 2001; through it, she was able to engage in separate interviews with two of the most profound—and mutually antagonistic—philosophers of the era. The resulting work unravels the social and political rhetoric surrounding the nature of “the event,” examines the contexts of good versus evil, and considers the repercussions such acts of terror levy against our assessment of humanity’s potential for vulnerability and dismissal. All of this is, of course, prescient and relevant to ongoing matters today.

Below follows an excerpt published at Berfrois. In it, Jacques Derrida responds to one of Borradori’s questions, which asked whether the initial impression among US citizens of 9/11 “as a major event, one of the most important historical events we will witness in our lifetime, especially for those of us who never lived through a world war,” was justified:

Whether this “impression” is justified or not, it is in itself an event, let us never forget it, especially when it is, though in quite different ways, a properly global effect. The “impression” cannot be dissociated from all the affects, interpretations, and rhetoric that have at once reflected, communicated, and “globalized” it, from everything that also and first of all formed, produced, and made it possible. The “impression” thus resembles “the very thing” that produced it. Even if the so-called “thing” cannot be reduced to it. Even if, therefore, the event itself cannot be reduced to it. The event is made up of the “thing” itself (that which happens or comes) and the impression (itself at once “spontaneous” and “controlled”) that is given, left, or made by the so-called “thing.” We could say that the impression is “informed,” in both senses of the word: a predominant system gave it form, and this form then gets run through an organized information machine (language, communication, rhetoric, image, media, and so on). This informational apparatus is from the very outset political, technical, economic. But we can and, I believe, must (and this duty is at once philosophical and political) distinguish between the supposedly brute fact, the “impression,” and the interpretation. It is of course just about impossible, I realize, to distinguish the “brute” fact from the system that produces the “information” about it. But it is necessary to push the analysis as far as possible. To produce a “major event,” it is, sad to say, not enough, and this has been true for some time now, to cause the deaths of some four thousand people, and especially “civilians,” in just a few seconds by means of so-called advanced technology. Many examples could be given from the world wars (for you specified that this event appears even more important to those who “have never lived through a world war”) but also from after these wars, examples of quasi-instantaneous mass murders that were not recorded, interpreted, felt, and presented as “major events.” They did not give the “impression,” at least not to everyone, of being unforgettable catastrophes.

We must thus ask why this is the case and distinguish between two “impressions.” On the one hand, compassion for the victims and indignation over the killings; our sadness and condemnation should be without limits, unconditional, unimpeachable; they are responding to an undeniable “event,” beyond all simulacra and all possible virtualization; they respond with what might be called the heart and they go straight to the heart of the event. On the other hand, the interpreted, interpretative, informed impression, the conditional evaluation that makes us believe that this is a “major event.” Belief, the phenomenon of credit and of accreditation, constitutes an essential dimension of the evaluation, of the dating, indeed, of the compulsive inflation of which we’ve been speaking. By distinguishing impression from belief, I continue to make as if I were privileging this language of English empiricism, which we would be wrong to resist here. All the philosophical questions remain open, unless they are opening up again in a perhaps new and original way: what is an impression? What is a belief? But especially: what is an event worthy of this name? And a “major” event, that is, one that is actually more of an “event,” more actually an “event,” than ever? An event that would bear witness, in an exemplary or hyperbolic fashion, to the very essence of an event or even to an event beyond essence? For could an event that still conforms to an essence, to a law or to a truth, indeed to a concept of the event, ever be a major event? A major event should be so unforeseeable and irruptive that it disturbs even the horizon of the concept or essence on the basis of which we believe we recognize an event as such. That is why all the “philosophical” questions remain open, perhaps even beyond philosophy itself, as soon as it is a matter of thinking the event.

Read more about Philosophy in a Time of Terror here.

13. In praise of Eva Illouz


Let’s begin with a personal aside: during our sessions, my therapist invokes Eva Illouz more often than any other writer. At first I was largely deaf to this phenomenon, though eventually I acknowledged that excerpts from her writings had come to function as a sort of Greek chorus alongside my own rambling metastasization of my early thirties. After weeks of failing to make the connection, I recognized her as one of our authors, read her book, and spent some hours poking around the corners of the internet digesting interviews and think pieces—later I picked up a few more books, and finally reflected on how and why a sociologist who studies changing emotional patterns under capitalism might elucidate my own benign/not benign driftlessness and failure to thrive.

The conclusion I reached is one that has been rattling around the zeitgeist—I tend to think of these pronouncements of grand-mal cultural tendencies as wheezing parakeets: often they are the result of a clicking sound you can’t quite place, one insistently audible because it’s both so foreign and so obvious.

The background to Illouz’s ideas is a mainstream media that produces this (a now well-circulated blog post at Esquire in praise of the [formerly "tragic"] 42-year-old woman), which requires—yes, requires, even if the initial post can be written off a dozen ways to Sunday as everything from half-baked and harmlessly banal to absurdly patronizing and surreally out-of-tune—the kind of response posed by New Republic senior editor Rebecca Traister. Here is where someone might jump in with Illouz’s writings and point out that not only has Esquire been doing this sort of thing for a while (in praise of the 39-year-old woman was a theme from 2008; 27 was the focus number in 1999), but also another New Republic editor contributed to the literary apoplexy surrounding reviews of poet Patricia Lockwood’s Motherland Fatherland Homelandsexuals that focused more on why the poet served up possibly discomfiting internet-worthy innuendo than on the actual mechanics of her poetry (this is like a restaurant review centering on why someone drooled when they chewed their food).

Illouz’s arguments may seem obvious, but the success of her scholarship depends on the very fact that they aren’t: that capitalism has changed the way we produce and consume our emotional responses; that these responses are further shaped by class and other specific situational factors; and that cultural critique can and should emerge immanently from our own cultural self-understanding.

What does this mean; why do I care; TL;DR? Well, first of all, it attributes the attrition of our emotional environment to cultural factors we continue to produce and consume, rather than to our self-contained neurotic flailing in a vacuum (i.e., as Jessa Crispin points out in a review of Illouz’s Hard-Core Romance: Fifty Shades of Grey, Best-Sellers, and Society, the myth that “it’s your personal chemical imbalance that keeps you depressed, not a very real and unhealthy shift in the way we manage our families, our communities, our cities.”) Hardcore BDSM romantic fantasy lit doesn’t become a bestseller with women because we all want to be in bondage gear ASAP; maybe some of us, I’m sure, but for the rest? It may be that this represents an endemic trend of cultural wanting, in which one might just be begging for help understanding WTF romantic fulfillment means in a world that has produced and underserved us with Miss America, Joe Millionaire, and The Bachelorette (along with the Esquire think piece), in that order. These books sell, in part, because they further the production and consumption of two genres already targeted to a woman-identified demographic: self-help and romantic fantasy. They sell and will continue to sell and will in turn produce new hybrid object-narratives for the buying and selling because the experiential reality of desire in the society we make is anxiety-producing, frequently patronizing, often pantomiming, and requires some of the promised escapism and hoodoo finesse to merely maintain status-quo operations.

Crispin goes on to nail this aspect of Illouz’s thoughts in Hard-Core Romance at the Los Angeles Review of Books:

Illouz refers to women’s mass culture as a self-help culture, and judging from Oprah fiction to women-focused magazines to women-focused talk shows and movies geared toward a female audience, it seems clear she is right. And this realm — where women are meant to work on their relationships, their bodies, their psyches — is where 50 Shades got its start. What’s most interesting about Illouz’s reading of women’s culture is her sense that self-help has been staged against any sort of collective consciousness: although we are encouraged to help ourselves, because we are women, we are not encouraged to help other women. Instead, self-help seems like a kind of masculinized competitiveness, in a different and more anxious mode. It is all about self-improvement, about the attainment of happiness, which comes through individual achievement, not any sort of political or societal improvement.

Illouz, who has been writing on this subject for years, in books from Why Love Hurts: A Sociological Explanation to Saving the Modern Soul: Therapy, Emotions, and the Culture of Self-Help to Oprah Winfrey and the Glamour of Misery, knows this is how you derail movements: by turning societal problems into individual failures. In this mode, the source of inequity turns into psychological inadequacy: it’s your daddy issues that are keeping you from finding a mate, not a generally hostile dating culture and conflicting messages about sex and love; it’s your personal chemical imbalance that keeps you depressed, not a very real and unhealthy shift in the way we manage our families, our communities, our cities.

All that said, if we could help ourselves, we might not be buying anything. And, for better or worse, that would make both E. L. James and my therapist a little lighter in the pockets, though maybe we’d all be at the Radical Feminist Empowered End Sexism Gender Fluidity Love Yourself Yarn In, knitting onesies for the next generation to wear (NB: I truly support the idea of a Radical Feminist Empowered End Sexism Gender Fluidity Love Yourself Yarn In; I am knitting mad swag; I just have doubts surrounding the dexterity of capitalism’s bony fingers).

Read more about Hard-Core Romance here.

Add a Comment
14. The university press and library sales


Last month, the Scholarly Kitchen published a post on the decreasing percentage of overall university press sales represented by academic libraries, coauthored by Rick Anderson and UCP’s Dean Blobaum. The post was actually a response-to-a-response piece, picking up on a discussion first initiated by “University Presses under Fire,” a controversial write-up in the Nation which prognosticated future scenarios for scholarly publishing based on a shifting-if-unpredictable current climate. Anderson, responding in an initial post at the Scholarly Kitchen, furthered questions raised by the Nation:

In other words, there’s no question that university presses face a real and probably existential challenge. But the challenge is deeper than any posed by a changing environment and it is more complicated than any posed by uncertain institutional funding. To a significant degree it lies in the fact that, unlike most publishers, university presses provide a vital, high-demand service to authors and a marginal, low-demand one to most readers.

Needless to say, this generated activity in the comments section, where Anderson eventually posed the following hypothesis:

It’s a commonplace assertion that, contrary to longstanding popular belief, libraries are not in fact the primary customers of university presses [and this assertion was made again in the comments]. . . . While this is true of university press publications generally, it’s probably not true of scholarly monographs specifically, and that the decrease in libraries’ share of university press purchases probably has mainly to do with the larger number of non-scholarly books being published by university presses.

In stepped Blobaum, who offered to test Anderson’s hypothesis with real-numbers data from the University of Chicago Press, and the two vetted 10 scholarly monographs against WorldCat holdings and sales data. The result? “49% of the sales represented by those ten titles could be accounted for by library holdings registered in WorldCat.”
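
For anyone curious about the arithmetic behind that figure, here is a minimal sketch in Python (with invented per-title numbers, since the post reports only the aggregate) of the calculation Anderson and Blobaum describe: sum the WorldCat-registered holdings and the total sales across the ten monographs, then take the ratio.

    # Hypothetical data for ten scholarly monographs; the real per-title
    # figures are UCP's and were not disclosed in the post.
    # Each pair is (copies_sold, worldcat_holdings).
    monographs = [
        (620, 305), (540, 260), (510, 255), (500, 240), (495, 250),
        (470, 230), (460, 225), (450, 220), (480, 235), (475, 230),
    ]

    total_sales = sum(sold for sold, _ in monographs)
    total_holdings = sum(held for _, held in monographs)

    # Share of sales accounted for by library holdings registered in WorldCat.
    print(f"{total_holdings / total_sales:.0%}")  # -> 49% with these invented numbers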

You can read the rest of the post to see how that percentage holds up when expanded to a larger data set (UCP’s 2012 offerings, organized and disclosed by format and subject only). It was enough for Anderson to conclude the following:

Other results of this study confirm what I (and probably most of us) would have assumed to be the case: that annuals, reprints, and new editions sell to libraries in very small numbers (from my several years of experience as a bookseller to libraries, I would have guessed that fewer than 10% of those sales would be represented in WorldCat), and that non-library purchases of trade books would greatly outstrip sales to libraries. Even though these findings are unsurprising, I think it’s worthwhile to have the data.

In light of this, we’ll close with Blobaum’s takeaway:

Again, it’s important to bear in mind that we’re looking at one publisher’s books for one year—and a university press publisher, one which, like other university presses, is able to set prices for its books that do not put them out of reach of individual buyers. For that economy, we owe the support we get from our university, the lift from books we publish for a general or regional readership, the book reviewers who read and like those books, the students who purchase books assigned in courses, and the support of libraries who purchase the work of our authors.

Read the full article here.

Add a Comment
15. Advance praise for The Getaway Car

9780226121819

On our forthcoming The Getaway Car: A Donald Westlake Nonfiction Miscellany, from Kirkus Reviews (read the review in full here):

Westlake (1933–2008), who wrote under his own name and a handful of pseudonyms, was an award-winning writer of crime, mystery and detective novels; short stories; screenplays; and one children’s book. University of Chicago Press promotions director Stahl thinks this collection of Westlake’s nonfiction will please his fans; it’s likely these sharp, disarmingly funny pieces will also create new ones. The editor includes a wide range of writing: interviews, letters, introductions to Westlake’s and others’ work, and even recipes. “May’s Famous Tuna Casserole” appeared in the cookbook A Taste of Murder. May is the “faithful companion” of Westlake’s famous protagonist John Dortmunder, “whose joys are few and travails many.” Another of his culinary joys, apparently, was sautéed sloth. One of the best essays is “Living With a Mystery Writer,” by Westlake’s wife, Abby Adams: “Living with one man is difficult enough; living with a group can be nerve-wracking. I have lived with the consortium which calls itself Donald Westlake for five years now, and I still can’t always be sure, when I get up in the morning, which of the mob I’ll have my coffee with.”

To read more about The Getaway Car (publishing September 2014), click here.

Add a Comment
16. Winnifred Fallers Sullivan on the impossibility of religious freedom

9780226779751

The impossibility of religious freedom

by Winnifred Fallers Sullivan

In the last week the US Supreme Court has decided two religious freedom cases (Burwell v. Hobby Lobby and Wheaton College v. Burwell) in favor of conservative Christian plaintiffs seeking exemptions from the contraceptive coverage mandate of the Affordable Care Act. Liberals have gone nuts, wildly predicting the end of the world as we know it. While I share their distress about the effects of these decisions on women, I want to talk about religion. I believe that it is time for some serious self-reflection on the part of liberals. To the extent that these decisions are about religion (and there are certainly other reasons to criticize the reasoning in these opinions), they reveal the rotten core at the heart of all religious freedom laws. The positions of both liberals and conservatives are affected by this rottenness but I speak here to liberals.

You cannot both celebrate religious freedom and deny it to those whose religion you don’t like. Human history supports the idea that religion, small “r” religion, is a nearly ubiquitous and perhaps necessary part of human culture. Big “R” Religion, on the other hand, the Religion that is protected in constitutions and human rights law under liberal political theory, is not. Big “R” Religion is a modern invention, an invention designed to separate good religion from bad religion, orthodoxy from heresy—an invention whose legal and political use has arguably reached the end of its useful life.

The challenge, then, for American liberals is to explain how they can both be in favor of religious freedom for all and at the same time deny that freedom to Hobby Lobby and Wheaton College. Among other stratagems meant to solve this contradiction, the Court’s dissenters and their supporters have made various arguments to show that what Hobby Lobby and Wheaton College are doing is not, in fact, religion—that they don’t really understand how to be Christians. Real Christians, the dissenters and their supporters say, do not mix religion with business. Nor do real Christians seek to disadvantage others in the exercise of their religious freedom. Those arguments are embarrassing; more than anything else, they reveal the ramshackle structure of current religious freedom jurisprudence in the US. They expose the multiple legal fictions at the heart of any legal protection for religious freedom—legal fictions whose value is exhausted.

The need to delimit what counts as protected religion is a need that is, of course, inherent in any legal regime that purports to protect all sincere religious persons, while insisting on the legal system’s right to deny that protection to those it deems uncivilized, or insufficiently liberal, whether they be polygamist Mormons, Native American peyote users, or conservative Christians with a gendered theology and politics. Such distinctions cannot be made on any principled basis.

In his concurrence in Hobby Lobby, Justice Kennedy writes:

In our constitutional tradition, freedom means that all persons have the right to believe or strive to believe in a divine creator and a divine law. For those who choose this course, free exercise is essential in preserving their own dignity and in striving for a self-definition shaped by their religious precepts. Free exercise in this sense implicates more than just freedom of belief . . . It means, too, the right to express those beliefs and to establish one’s religious (or nonreligious) self-definition in the political, civic, and economic life of our larger community.

High-minded words—words to make Americans proud on this patriotic weekend—but words that, in our constitutional tradition, have usually resulted in religious discrimination at the hands of the majority, not in the acknowledgment of religious freedom for those outside the mainstream. Both the majority and dissenting Justices in these two cases affirm—over and over again—a commitment to religious liberty and to the accommodation of sincere religious objections. Where they disagree is on what counts as an exercise of religion. Their common refusal, together with that of their predecessors, to acknowledge the impossibility of fairly delimiting what counts as religion has produced a thicket of circumlocutions and fictions that cannot, when all is said and done, obscure the absence of any compelling logic to support the laws that purport to protect religious freedom today.

The claims in Hobby Lobby and Wheaton College were brought under the Religious Freedom Restoration Act (RFRA). RFRA, passed overwhelmingly by Congress and signed into law by President Clinton in 1993, states that government may not “substantially burden a person’s exercise of religion” without meeting certain conditions. Justice Alito, writing for the majority in Hobby Lobby, describes RFRA as providing “very broad protection for religious liberty.”

As the majority notes in Hobby Lobby, and as many commentators have rehearsed, RFRA was enacted in response to the Court’s notorious 1990 decision in Employment Division v. Smith, a decision that severely limited the reach of the free exercise clause of the First Amendment. The Smith decision sparked a political movement to reverse that limitation, first with the passage of RFRA; then with a flurry of other federal, state, and local legislation; and finally with the emergence of public interest groups and a specialized bar to advocate for religious freedom at home and abroad. Smith mobilized a large public across the political and religious spectrum to focus on a perceived threat to religion in general. It was, importantly, not just a movement of the right, but one that encompassed groups representing many political and theological persuasions. Religion was given new life by this politics.

A great deal of ink has already been spilled in response to the decisions in Hobby Lobby and Wheaton College. It is important, in my view—particularly for those of us who study religion—to move beyond the culture-wars framing of most commentaries and examine why it seems obvious, even natural, to the justices in the majority and to many others outside the Court that Hobby Lobby is engaged in a protected exercise of religion and that for Hobby Lobby and many others, opposition to the use of contraception is the quintessential sign of the religious. What is the religious phenomenology at work in these cases and how does that religious phenomenology reflect changes to religion in the US? It is the business of religious studies scholars to explain these phenomena, not to decry them.

The exercise of religion, as Justice Ginsburg suggested in her dissent in Hobby Lobby, might more usually be understood to be centered on activities such as “prayer, worship, and the taking of sacraments” by individuals. The government took a similar tack, imagining religion in such conventional terms, when it sought to deal with objections to contraceptive coverage by providing automatic exemptions to “religious employers,” which it defined in the regulations as “churches, their integrated auxiliaries, and conventions or associations of churches,” as well as “the exclusively religious activities of any religious order.” (Hobby Lobby Majority Opinion, slip opinion p.9)

To anyone who studies American religion, these churchy references seem astonishingly outdated: much—perhaps most—American religion today does not happen in churches. Many American Christians have, for a long time, engaged in a kind of DIY religion free from the regulations of church authorities. Their religion is radically disestablished free religion, defined not by bishops and church councils, but by themselves—ordinary Americans reading their Bibles, picking and choosing from among a wide array of religious practices. Indeed, Americans have always been incredibly varied, creative, and entrepreneurial in living out what they take to be their religious obligations—religious obligations that range far beyond the prescriptions of the mainline churches, which seem staid, contained, and tamed to the many who consider their own religious practices, unapproved by traditional religious authorities, to be alive with the spirit. They find their religious community and their religious fields of action in places other than churches—including the marketplace.

Justice Sotomayor claims in her dissent in Wheaton College to have “deep respect for religious faith, for the important and selfless work performed by religious organizations.” Why is the exercise of religion by Hobby Lobby any less deserving of Justice Sotomayor’s, or of the US government’s, respect than the work of the Catholic Hospital Association or the Little Sisters of the Poor? Why should churches and religious orders be obviously and unproblematically exempt, particularly in the aftermath of a series of sexual and financial scandals, while Hobby Lobby is not? Why disdain the representations of the Greens and the Hahns that they consider their businesses to be a religious ministry? Where is it written in the Constitution that only the religious practices of churches or church-related non-profits are entitled to accommodation?

Liberals seem offended by the mixing of religion and profit-making as well as by the obvious misogyny displayed here and elsewhere by a Court that sees the test cases of religious freedom in the protection of a male-only priesthood and the control of women’s reproductive lives.

How did a store become an expression of religion and how did being religious become equated with being conservative on social issues? The politics of religion in the US is a complex story. Religion and business in the US have always been entwined. In the first decades of the country’s existence, as both churches and business worked to institutionalize themselves, they grew up together, many of the same people involved in making the corporate form work for each. Their way of being Christians in the world infused their work as businesses with their Christian piety. By the last third of the nineteenth century, merchants like John Wanamaker saw the department store as a place for Christian action, but the growth of Christian business in the last several decades reveals the ways in which economic activity is increasingly viewed as a field of religious activity. Bethany Moreton’s To Serve God and Walmart: The Making of Christian Free Enterprise, Lake Lambert’s Spirituality, Inc., and Kathryn Lofton’s Oprah: The Gospel of an Icon, all describe this world. This is an old story, of course, as Max Weber explained in The Protestant Ethic and the Spirit of Capitalism. As for gender, Janet Jakobsen and Ann Pellegrini have shown in Love the Sin: Sexual Regulation and the Limits of Religious Tolerance how deeply intertwined are Christian ideas about proper sexual mores and government regulation of the family and of sexuality in the US.

That American religion is involved in business and obsessed with sex is not news. What is surprising is that those who object to this kind of religion continue to hold on to a faith in the idea that religious freedom means protection only for the kind of religion they like, the private, individualized, progressive kind.

The radical nature of RFRA and other post-Smith legislation—including the International Religious Freedom Act (IRFA), the Religious Land Use and Institutionalized Persons Act (RLUIPA), and a host of legislative exemptions from otherwise broadly based legislation—was evident from the beginning. These laws promised a broad deference to religious reasons that had never, in fact, been available under the Supreme Court’s religion clause jurisprudence and that was impossible to implement. They invited a regime under which courts would necessarily have to do the impossible, that is, distinguish an exercise of religion, necessarily dividing good religion from bad religion, all the while denying that that was what they were doing, a regime the Smith Court recognized as unworkable and refused to endorse.

All of this activity, legislative and judicial, has placed a heavy burden on the words religion and religious, words that are constantly repeated in both the majority and dissenting opinions in Hobby Lobby and Wheaton College. The adjective “religious” appears on virtually every page of the more than 100 pages of opinions, modifying a wide range of words. Likewise, the word “religion” seems to be both everywhere and nowhere. Is it really possible to distinguish the religious from the non-religious in these cases? Do we have a shared theory of religion that permits such distinctions to be made? Isn’t the religious always mixed with the political and the cultural and the economic? The constant repetition of the adjective seems necessary only in order to reify a notion about which everyone is, in fact, very uncertain.

As one example, Justice Ginsburg announces that, “Religious organizations exist to foster the interests of persons subscribing to the same religious faith.” It is not clear to whom she refers here. As with the other justices in this case and others, her Delphic pronouncements about religion seem to come from the ether. How does she know this? Few who study religion would agree with this statement. Religious organizations, if indeed such a set can be rationally collected, exist for a wide range of purposes and consist of and cater to a diverse group of people. Justice Sotomayor is sputtering mad about the Wheaton College injunction. She says that, while she does not deny the sincerity of its religious belief, the College failed to make a showing that filing a form requesting an exemption is a substantial enough burden to trigger a RFRA claim. Shifting to an argument about substantiality is an effort to avoid challenging the rationality of their religious belief, but that is exactly what she is doing. They say that filing the form is enough to make them complicit with evil. Who is she to say nay without getting into exactly the theological battle she is trying to avoid when she claims to respect them?

The notion that religion exists and can be regulated without being defined is a fiction at the heart of religious freedom protection. Legal fictions—such as the idea that corporations are persons—are, of course, necessary to law. For legal scholars as diverse as Henry Maine and Lon Fuller, the capacity of legal language to finesse the facts could be understood as making legal flexibility and progress possible. The startling unbelievability of legal fictions can also focus our attention on the limits of legal language in a salutary way. Yet legal fictions can be stretched too far. They can become nothing more than lies.

Religion also specializes in fiction. It is not just the corporation that has fictional legal personality. So does the church. Justice Ginsburg objects to free exercise protection being extended to “artificial entities,” referring to corporations, but religious freedom is all about protecting artificial identities. The church is an imagined artificial entity; so are gods and demons. The church is the body of Christ in orthodox Christian theology; like the sovereign, it is the quintessential legal fiction, as we learn from Ernst H. Kantorowicz in The King’s Two Bodies.

We need fictions to live. But when the church and the state went their separate ways—when the church was disestablished—the intimate articulation of political, legal, and religious fictions lost its logic on a national scale. They no longer recognize one another. The legal and religious fictions of religious freedom have become lies designed to extend the life of the impossible idea that church and state can still work together after disestablishment. There is no neutral place from which to distinguish the religious from the non-religious. There is no shared understanding of what religion, big “R” religion, is. Let’s stop talking about big “R” religion.

What remains, as Clifford Geertz reminds us, is for us to work on creating new fictions together, political, legal, and religious:

The primary question . . . now that nobody is leaving anybody else alone and isn’t ever going to, is not whether everything is going to come seamlessly together or whether, contrariwise, we are all going to persist sequestered in our separate prejudices. It is whether human beings are going to be able . . . to imagine principled lives they can practicably lead. (Local Knowledge p. 234)

Judges cannot do this work.

Thank you to Dianne Avery, Constance Furey, Elizabeth Shakman Hurd, Fred Konefsky, and Barry Sullivan for comments on earlier drafts of this essay.

***

Winnifred Fallers Sullivan is professor and chair of the Department of Religious Studies and affiliate professor of law at Indiana University Bloomington. She is one of the co-organizers of a Luce Foundation–funded project on the politics of religious freedom, and guest editor (with Elizabeth Shakman Hurd) of an extensive TIF discussion series on the same topic. Sullivan is the author of The Impossibility of Religious Freedom (Princeton, 2005), and A Ministry of Presence: Chaplaincy, Spiritual Care and the Law (Chicago, 2014); and coeditor, with Robert A. Yelle and Mateo Taussig-Rubbo, of After Secular Law (Stanford, 2011); with Lori Beaman, of Varieties of Religious Establishment; and, with Elizabeth Shakman Hurd, Saba Mahmood, and Peter Danchin, of Politics of Religious Freedom (forthcoming from Chicago, 2015).

To read more about A Ministry of Presence, click here.

This essay has been republished in its entirety from The Immanent Frame, in conjunction with the Social Science Research Council’s program on Religion and the Public Sphere. The original post can be viewed here.

Add a Comment
17. “Never have empty bedrooms looked so full.”


The Fourth of July will be marked tomorrow, as usual, with barbecues and fireworks and displays of patriotic fervor.

This year, it will also be marked by the publication of a book that honors patriotism—and counts its costs—in a more somber way: Ashley Gilbertson’s Bedrooms of the Fallen. The book presents photographs of the bedrooms of forty soldiers—the number in a platoon—who died while serving in Iraq or Afghanistan. The bedrooms, preserved by the families as memorials in honor of their lost loved ones, are a stark, heartbreaking reminder of the real pain and loss that war brings. As NPR’s The Two-Way put it, “Never have empty bedrooms looked so full.”


{Marine Corporal Christopher G. Scherer, 21, was killed by a sniper on July 21, 2007, in Karmah, Iraq. He was from East Northport, New York. His bedroom was photographed in February 2009.}

A moving essay by Gilbertson tells the story of his work on the project, of how he came to it after photographing the Iraq War, and about the experience of working with grieving families, gaining their trust and working to honor it. As Philip Gourevitch writes in his foreword, “The need to see America’s twenty-first-century war dead, and to make them seen—to give their absence presence—has consumed Ashley Gilbertson for much of the past decade.” With Bedrooms of the Fallen, he has made their loss visible, undeniable.

More images from the book are available on Time‘s Lightbox blog, and you can read Gourevitch’s essay on the New Yorker‘s site. Independence Day finds the United States near the end of its decade-plus engagement in Afghanistan, but even as the men and women serving there come home, thousands of others continue to serve all over the world. To quote Abraham Lincoln, “it is altogether fitting and proper” that we take a moment to honor them, and respect their service, on this holiday.

Add a Comment
18. Excerpt: House of Debt


From House of Debt: How They (and You) Caused the Great Recession, and How We Can Prevent It from Happening Again

by Atif Mian and Amir Sufi

A SCANDAL IN BOHEMIA

Selling recreational vehicles used to be easy in America. As a button worn by Winnebago CEO Bob Olson read, “You can’t take sex, booze, or weekends away from the American people.” But things went horribly wrong in 2008, when sales for Monaco Coach Corporation, a giant in the RV industry, plummeted by almost 30 percent. This left Monaco management with little choice. Craig Wanichek, their spokesman, lamented, “We are sad that the economic environment, obviously outside our control, has forced us to make . . . difficult decisions.”

Monaco was the number-one producer of diesel-powered motor homes. They had a long history in northern Indiana making vehicles that were sold throughout the United States. In 2005, the company sold over 15,000 vehicles and employed about 3,000 people in Wakarusa, Nappanee, and Elkhart Counties in Indiana. In July 2008, 1,430 workers at two Indiana plants of Monaco Coach Corporation were let go. Employees were stunned. Jennifer Eiler, who worked at the plant in Wakarusa County, spoke to a reporter at a restaurant down the road: “I was very shocked. We thought there could be another layoff, but we did not expect this.” Karen Hundt, a bartender at a hotel in Wakarusa, summed up the difficulties faced by laid-off workers: “It’s all these people have done for years. Who’s going to hire them when they are in their 50s? They are just in shock. A lot of it hasn’t hit them yet.”

In 2008 this painful episode played out repeatedly throughout northern Indiana. By the end of the year, the unemployment rate in Elkhart, Indiana, had jumped from 4.9 to 16.2 percent. Almost twenty thousand jobs were lost. And the effects of unemployment were felt in schools and charities throughout the region. Soup kitchens in Elkhart saw twice as many people showing up for free meals, and the Salvation Army saw a jump in demand for food and toys during the Christmas season. About 60 percent of students in the Elkhart public schools system had low-enough family income to qualify for the free-lunch program.

Northern Indiana felt the pain early, but it certainly wasn’t alone. The Great American Recession swept away 8 million jobs between 2007 and 2009. More than 4 million homes were foreclosed. If it weren’t for the Great Recession, the income of the United States in 2012 would have been higher by $2 trillion, around $17,000 per household. The deeper human costs are even more severe. Study after study points to the significant negative psychological effects of unemployment, including depression and even suicide. Workers who are laid off during recessions lose on average three full years of lifetime income potential. Franklin Delano Roosevelt articulated the devastation quite accurately by calling unemployment “the greatest menace to our social order.”

Just like workers at the Monaco plants in Indiana, innocent bystanders losing their jobs during recessions often feel shocked, stunned, and confused. And for good reason. Severe economic contractions are in many ways a mystery. They are almost never instigated by any obvious destruction of the economy’s capacity to produce. In the Great Recession, for example, there was no natural disaster or war that destroyed buildings, machines, or the latest cutting-edge technologies. Workers at Monaco did not suddenly lose the vast knowledge they had acquired over years of training. The economy sputtered, spending collapsed, and millions of jobs were lost. The human costs of severe economic contractions are undoubtedly immense. But there is no obvious reason why they happen.

Intense pain makes people rush to the doctor for answers. Why am I experiencing this pain? What can I do to alleviate it? To feel better, we are willing to take medicine or change our lifestyle. When it comes to economic pain, who do we go to for answers? How do we get well? Unfortunately, people don’t hold economists in the same esteem as doctors. Writing in the 1930s during the Great Depression, John Maynard Keynes criticized his fellow economists for being “unmoved by the lack of correspondence between the results of their theory and the facts of observation.” And as a result, the ordinary man has a “growing unwillingness to accord to economists that measure of respect which he gives to other groups of scientists whose theoretical results are confirmed with observation when they are applied to the facts.”

There has been an explosion in data on economic activity and advancement in the techniques we can use to evaluate them, which gives us a huge advantage over Keynes and his contemporaries. Still, our goal in this book is ambitious. We seek to use data and scientific methods to answer some of the most important questions facing the modern economy: Why do severe recessions happen? Could we have prevented the Great Recession and its consequences? How can we prevent such crises? This book provides answers to these questions based on empirical evidence. Laid-off workers at Monaco, like millions of other Americans who lost their jobs, deserve an evidence-based explanation for why the Great Recession occurred, and what we can do to avoid more of them in the future.

Whodunit?

In “A Scandal in Bohemia,” Sherlock Holmes famously remarks that “it is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” The mystery of economic disasters presents a challenge on par with anything the great detective faced. It is easy for economists to fall prey to theorizing before they have a good understanding of the evidence, but our approach must resemble Sherlock Holmes’s. Let’s begin by collecting as many facts as possible.

Figure 1.1: U.S. Household Debt-to-Income Ratio

When it comes to the Great Recession, one important fact jumps out: the United States witnessed a dramatic rise in household debt between 2000 and 2007—the total amount doubled in these seven years to $14 trillion, and the household debt-to-income ratio skyrocketed from 1.4 to 2.1. To put this in perspective, figure 1.1 shows the U.S. household debt-to-income ratio from 1950 to 2010. Debt rose steadily to 2000, then there was a sharp change.
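
As a quick back-of-the-envelope check on those figures (a sketch only: the income numbers below are implied by the quoted debt totals and ratios, not stated in the text), the same aggregates show household income growing far more slowly than household debt over those seven years.

    # Aggregates quoted above, in trillions of dollars.
    debt_2000, debt_2007 = 7.0, 14.0      # "the total amount doubled ... to $14 trillion"
    ratio_2000, ratio_2007 = 1.4, 2.1     # household debt-to-income ratio

    income_2000 = debt_2000 / ratio_2000  # ~$5.0 trillion, implied
    income_2007 = debt_2007 / ratio_2007  # ~$6.7 trillion, implied

    print(f"debt growth:   {debt_2007 / debt_2000 - 1:+.0%}")      # +100%
    print(f"income growth: {income_2007 / income_2000 - 1:+.0%}")  # +33%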

Using a longer historical pattern (based on the household-debt-to-GDP [gross domestic product] ratio), economist David Beim showed that the increase prior to the Great Recession is matched by only one other episode in the last century of U.S. history: the initial years of the Great Depression. From 1920 to 1929, there was an explosion in both mortgage debt and installment debt for purchasing automobiles and furniture. The data are less precise, but calculations done in 1930 by the economist Charles Persons suggest that outstanding mortgages for urban nonfarm properties tripled from 1920 to 1929. Such a massive increase in mortgage debt even swamps the housing-boom years of 2000–2007.

The rise in installment financing in the 1920s revolutionized the manner in which households purchased durable goods, items like washing machines, cars, and furniture. Martha Olney, a leading expert on the history of consumer credit, explains that “the 1920s mark the crucial turning point in the history of consumer credit.” For the first time in U.S. history, merchants selling durable goods began to assume that a potential buyer walking through their door would use debt to purchase. Society’s attitudes toward borrowing had changed, and purchasing on credit became more acceptable.

With this increased willingness to lend to consumers, household spending in the 1920s rose faster than income. Consumer debt as a percentage of household income more than doubled during the ten years before the Great Depression, and scholars have documented an “unusually large buildup of household liabilities in 1929.” Persons, writing in 1930, was unambiguous in his conclusions regarding debt in the 1920s: “The past decade has witnessed a great volume of credit inflation. Our period of prosperity in part was based on nothing more substantial than debt expansion.” And as households loaded up on debt to purchase new products, they saved less. Olney estimates that the personal savings rate for the United States fell from 7.1 percent between 1898 and 1916 to 4.4 percent from 1922 to 1929.

So one fact we observe is that both the Great Recession and Great Depression were preceded by a large run-up in household debt. There is another striking commonality: both started off with a mysteriously large drop in household spending. Workers at Monaco Coach Corporation understood this well. They were let go in large part because of the sharp decline in motor-home purchases in 2007 and 2008. The pattern was widespread. Purchases of durable goods like autos, furniture, and appliances plummeted early in the Great Recession—before the worst of the financial crisis in September 2008. Auto sales from January to August 2008 were down almost 10 percent compared to 2007, also before the worst part of the recession or financial crisis.

The Great Depression also began with a large drop in household spending. Economic historian Peter Temin holds that “the Depression was severe because the fall in autonomous spending was large and sustained,” and he remarks further that the consumption decline in 1930 was “truly autonomous,” or too big to be explained by falling income and prices. Just as in the Great Recession, the drop in spending that set off the Great Depression was mysteriously large.

The International Evidence

This pattern of large jumps in household debt and drops in spending preceding economic disasters isn’t unique to the United States. Evidence demonstrates that this relation is robust internationally. And looking internationally, we notice something else: the bigger the increase in debt, the harder the fall in spending. A 2010 study of the Great Recession in the sixteen OECD (Organisation for Economic Co-operation and Development) countries by Reuven Glick and Kevin Lansing shows that countries with the largest increase in household debt from 1997 to 2007 were exactly the ones that suffered the largest decline in household spending from 2008 to 2009. The authors find a strong correlation between household debt growth before the downturn and the decline in consumption during the Great Recession. As they note, consumption fell most sharply in Ireland and Denmark, two countries that witnessed enormous increases in household debt in the early 2000s. As striking as the increase in household debt was in the United States from 2000 to 2007, the increase was even larger in Ireland, Denmark, Norway, the United Kingdom, Spain, Portugal, and the Netherlands. And as dramatic as the decline in household spending was in the United States, it was even larger in six of these seven countries (the exception was Portugal).

A study by researchers at the International Monetary Fund (IMF) expands the Glick and Lansing sample to thirty-six countries, bringing in many eastern European and Asian countries, and focuses on data through 2010. Their findings confirm that growth in household debt is one of the best predictors of the decline in household spending during the recession. The basic argument put forward in these studies is simple: If you had known how much household debt had increased in a country prior to the Great Recession, you would have been able to predict exactly which countries would have the most severe decline in spending during the Great Recession.
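
To make that statistical claim concrete, here is a toy illustration in Python (the country values are invented, not Glick and Lansing’s or the IMF’s data) of the kind of cross-country correlation these studies report between pre-crisis household-debt growth and the subsequent fall in spending.

    # Invented values for six hypothetical countries: growth in the household
    # debt-to-income ratio (1997-2007) and the decline in spending (2008-09).
    from statistics import correlation  # Python 3.10+

    debt_growth   = [0.80, 0.65, 0.55, 0.40, 0.30, 0.15]
    spending_drop = [0.12, 0.10, 0.07, 0.05, 0.04, 0.02]

    # Pearson's r near 1 means bigger debt run-ups line up with harder falls.
    print(f"correlation = {correlation(debt_growth, spending_drop):.2f}")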

But is the relation between household-debt growth and recession severity unique to the Great Recession? In 1994, long before the Great Recession, Mervyn King, the former governor of the Bank of England, gave a presidential address to the European Economic Association titled “Debt Deflation: Theory and Evidence.” In the very first line of the abstract, he argued: “In the early 1990s the most severe recessions occurred in those countries which had experienced the largest increase in private debt burdens.” In the address, he documented the relation between the growth in household debt in a given country from 1984 to 1988 and the country’s decline in economic growth from 1989 to 1992. This was analogous to the analysis that Glick and Lansing and the IMF researchers gave twenty years later for the Great Recession. Despite focusing on a completely different recession, King found exactly the same relation: Countries with the largest increase in household-debt burdens—Sweden and the United Kingdom, in particular—experienced the largest decline in growth during the recession.

Another set of economic downturns we can examine are what economists Carmen Reinhart and Kenneth Rogoff call the “big five” postwar banking crises in the developed world: Spain in 1977, Norway in 1987, Finland and Sweden in 1991, and Japan in 1992. These recessions were triggered by asset-price collapses that led to massive losses in the banking sector, and all were especially deep downturns with slow recoveries. Reinhart and Rogoff show that all five episodes were preceded by large run-ups in real-estate prices and large increases in the current-account deficits (the amount borrowed by the country as a whole from foreigners) of the countries.

But Reinhart and Rogoff don’t emphasize the household-debt patterns that preceded the banking crises. To shed some light on the household-debt patterns, Moritz Schularick and Alan Taylor put together an excellent data set that covers all of these episodes except Finland. In the remaining four, the banking crises emphasized by Reinhart and Rogoff were all preceded by large run-ups in private-debt burdens. (By private debt, we mean the debt of households and non-financial firms, instead of the debt of the government or banks.) These banking crises were in a sense also private-debt crises—they were all preceded by large run-ups in private debt, just as with the Great Recession and the Great Depression in the United States. So banking crises and large run-ups in household debt are closely related—their combination catalyzes financial crises, and the groundbreaking research of Reinhart and Rogoff demonstrates that they are associated with the most severe economic downturns. While banking crises may be acute events that capture people’s attention, we must also recognize the run-ups in household debt that precede them.

Which aspect of a financial crisis is more important in determining the severity of a recession: the run-up in private-debt burdens or the banking crisis? Research by Oscar Jorda, Moritz Schularick, and Alan Taylor helps answer this question. They looked at over two hundred recessions in fourteen advanced countries between 1870 and 2008. They begin by confirming the basic Reinhart and Rogoff pattern: Banking-crisis recessions are much more severe than normal recessions. But Jorda, Schularick, and Taylor also find that banking-crisis recessions are preceded by a much larger increase in private debt than other recessions. In fact, the expansion in debt is five times as large before a banking-crisis recession. Also, banking-crisis recessions with low levels of private debt are similar to normal recessions. So, without elevated levels of debt, banking-crisis recessions are unexceptional. They also demonstrate that normal recessions with high private debt are more severe than other normal recessions. Even if there is no banking crisis, elevated levels of private debt make recessions worse. However, they show that the worst recessions include both high private debt and a banking crisis. The conclusion drawn by Jorda, Schularick, and Taylor from their analysis of a huge sample of recessions is direct:

We document, to our knowledge for the first time, that throughout a century or more of modern economic history in advanced countries a close relationship has existed between the build-up of credit during an expansion and the severity of the subsequent recession. . . . [W]e show that the economic costs of financial crises can vary considerably depending on the leverage incurred during the previous expansion phase [our emphasis].

Taken together, both the international and U.S. evidence reveals a strong pattern: Economic disasters are almost always preceded by a large increase in household debt. In fact, the correlation is so robust that it is as close to an empirical law as it gets in macroeconomics. Further, large increases in household debt and economic disasters seem to be linked by collapses in spending.

So an initial look at the evidence suggests a link between household debt, spending, and severe recessions. But the exact relation between the three is not precisely clear. This allows for alternative explanations, and many intelligent and respected economists have looked elsewhere. They argue that household debt is largely a sideshow—not the main attraction when it comes to explaining severe recessions.

The Alternative Views

Those economists who are suspicious of the importance of household debt usually have some alternative in mind. Perhaps the most common is the fundamentals view, according to which severe recessions are caused by some fundamental shock to the economy: a natural disaster, a political coup, or a change in expectations of growth in the future.

But most severe recessions we’ve discussed above were not preceded by some obvious act of nature or political disaster. As a result, the fundamentals view usually blames a change in expectations of growth, in which the run-up in debt before a recession merely reflects optimistic expectations that income or productivity will grow. Perhaps there is some technology that people believe will lead to huge improvements in well-being. Severe recession results when these high expectations are not realized. People lose faith that technology will advance or that incomes will improve, and therefore they spend less. In the fundamentals view, debt still increases before severe recessions. But the correlation is spurious—it is not indicative of a causal relation.

A second explanation is the animal spirits view, in which economic fluctuations are driven by irrational and volatile beliefs. It is similar to the fundamentals view except that these beliefs are not the result of any rational process. For example, during the housing boom before the Great Recession, people may have irrationally thought that house prices would rise forever. Then fickle human nature led to a dramatic revision of beliefs. People became pessimistic and cut back on spending. House prices collapsed, and the economy went into a tailspin because of a self-fulfilling prophecy. People got scared of a downturn, and their fear made the downturn inevitable. Once again, in this view household debt had little to do with the ensuing downturn. In both the fundamentals and animal spirits mind-sets, there is a strong sense of fatalism: large drops in economic activity cannot be predicted or avoided. We simply have to accept them as a natural part of the economic process.

A third hypothesis often put forward is the banking view, which holds that the central problem with the economy is a severely weakened financial sector that has stopped the flow of credit. According to this view, the run-up in debt is not a problem; the problem is that the flow of debt has stopped. If we can just get banks to start lending to households and businesses again, everything will be all right. If we save the banks, we will save the economy. Everything will go back to normal.

The banking view in particular enjoyed an immense amount of support among policy makers during the Great Recession. On September 24, 2008, President George W. Bush expressed his great enthusiasm for it in a hallmark speech outlining his administration’s response. As he saw it, “Financial assets related to home mortgages have lost value during the housing decline, and the banks holding these assets have restricted credit. As a result, our entire economy is in danger. . . . So I propose that the federal government reduce the risk posed by these troubled assets and supply urgently needed money so banks and other financial institutions can avoid collapse and resume lending. . . . This rescue effort . . . is aimed at preserving America’s overall economy.” If we save the banks, he argued, it would help “create jobs” and it “will help our economy grow.” In this view, there’s no such thing as excessive debt—instead, we should encourage banks to lend even more.

* * *

The only way we can address—and perhaps even prevent—economic catastrophes is by understanding their causes. During the Great Recession, disagreement on causes overshadowed the facts that policy makers desperately needed to clean up the mess. We must determine whether there is something more to the link between household debt and severe recessions, or whether the alternative views above are right. The best way to test this is the scientific method: let’s take a close look at the data and see which theory is valid. That is the purpose of this book.

To pin down exactly how household debt affects the economy, we zero in on the United States during the Great Recession. We have a major advantage over economists who lived through prior recessions thanks to the recent explosion in data availability and computing power. We now have microeconomic data on an abundance of outcomes, including borrowing, spending, house prices, and defaults. All of these data are available at the zip-code level for the United States, and some are available even at the individual level. This allows us to examine who had more debt and who cut back on spending—and who lost their jobs.

The Big Picture

As it turns out, we think debt is dangerous. If this is correct, and large increases in household debt really do generate severe recessions, we must fundamentally rethink the financial system. One of the main purposes of financial markets is to help people in the economy share risk. The financial system offers many products that reduce risk: life insurance, a portfolio of stocks, or put options on a major index. Households need a sense of security that they are protected against unforeseen events.

A financial system that thrives on the massive use of debt by households does exactly what we don’t want it to do—it concentrates risk squarely on the debtor. We want the financial system to insure us against shocks like a decline in house prices. But instead, as we will show, it concentrates the losses on home owners. The financial system actually works against us, not for us. For home owners with a mortgage, for example, we will demonstrate how home equity is much riskier than the mortgage held by the bank, something many home owners realize only when house prices collapse.

But it’s not all bad news. If we are correct that excessive reliance on debt is in fact our culprit, it is a problem that potentially can be fixed. We don’t need to view severe recessions and mass unemployment as an inevitable part of the business cycle. We can determine our own economic fate. We hope this book provides an intellectual framework, strongly supported by evidence, that can help us respond to future recessions—and even prevent them. We understand this is an ambitious goal.

But we must pursue it. We strongly believe that recessions are not inevitable—they are not mysterious acts of nature that we must accept. Instead, recessions are a product of a financial system that fosters too much household debt. Economic disasters are man-made, and the right framework can help us understand how to prevent them.

Read more about House of Debt here.

Read related posts on the authors’ House of Debt blog here.

Add a Comment
19. Excerpt: D-Day through French Eyes

9780226136998

As World War II continued to rage, the French in Normandy, though they yearned for liberation, had by late spring 1944 steeled themselves for war, knowing that their homes, their land, and their fellow citizens would have to bear the brunt of any incoming attack. The events of that June 6th—the largest seaborne invasion in history—ultimately led to the restoration of the French Republic and, in a story familiar to many, shifted the tide of the war in favor of the Allied forces. In D-Day through French Eyes, historian Mary Louise Roberts turns those usual stories of D-Day around, taking readers across the Channel to view the invasion through a range of gripping first-person accounts by French citizens throughout the region. As we approach the 70th anniversary of one of the most iconic military events of the twentieth century, we’ll be running an excerpt from the book (today) accompanied by a Q & A with Roberts (tomorrow), to honor, expand upon, and reinvigorate the story we thought we knew.

***

CHAPTER ONE

THE NIGHT OF ALL NIGHTS

For Normans, the invasion began with noise. Just before midnight on Monday night, the fifth of June, hundreds of airplanes could be heard flying south over the Cotentin Peninsula. The constant rumble of plane engines and the distant roar of artillery—these two sounds combined to create what one witness called “a ceaseless storm.” Together they awakened thousands of Normans from their deepest sleep of the night. They rose from their beds, ran outside in their nightclothes, peered at the sky, and tried to figure out what was happening. Is this it? they wondered, overcome with fear and excitement.

The sound of airplanes was by no means a novel phenomenon. In the past months, civilians had grown accustomed to planes flying overhead—hundreds of them—almost every night. Allied bombing of strategic sites throughout northern France had become a common event. But this night was different; something new was happening. The aircraft were flying close to the ground and reaching targets. In response, German machine guns and artillery were firing furiously, contributing to the din. Soon the Norman night was filled with strange sights as well as sounds: the landing of parachutes and gliders, the dancing lights of artillery, the red glow of villages in flames. These sights were terrible, frightening, but also oddly beautiful.

In her memoir, Madame Hamel-Hateau, a schoolteacher in Neuville-au-Plain, near Sainte-Mère-Église, captures the dreamlike magic of the night of June 5–6. Hamel-Hateau lived close to the village school and spoke some English. The paratrooper she meets is a pathfinder sent to illuminate the landing areas for thousands of paratroopers who would soon land in Normandy to begin the invasion. He is one of the very first American servicemen to arrive in France.

In the month of June, the days no longer have an end and the night is really just a long twilight because the darkness is never complete. Around 10:00 p.m. this Monday, the fifth of June, I have just gone to bed next to my mother. We are both sleeping on a daybed that we open up every night in the common room. Since the evacuation of Cherbourg, we have given our bedroom to my grandparents. The daybed faces the window, itself wide open on the night. In this way, from my bed, I am taking a moment to reflect on the end of this beautiful day. With sadness I think of a similar June night in 1940 when my boyfriend, Jean, had left to join the Free French. I had received news that he had landed in North Africa, so perhaps he was now in Italy? Perhaps it will be soon . . . I thought, but then refused to let my mind wander further. It was time to go to sleep.

Abruptly, the noise of airplanes breaks the night’s silence. We have gotten used to that sound. Since there are no military targets here and the railway is more than five miles away, we normally do not pay much attention. But the noise gets louder, and the sky begins to light up and get red. I rise out of bed, and soon the whole family is up as well. We go out into the courtyard. There everything seems calm. The only thing you can hear is the distant murmur of a bombardment in the direction of Quinéville. Yet there seems to be an endless number of planes mysteriously roaming about; their engines create an incessant hum. Then the noise decreases and becomes vague and distant. “It’s just like the last time,” says my mother, “when they had to bomb the blockhouse on the coast.” And we all go back to bed.

Mama goes to sleep right away. But I sit on my bed and continue to study the rectangle of cloudless night carved out by the window. The need to sleep slowly overwhelms me, but my eyes remain wide open. It is in this sort of half sleep that I begin to see fantastic shadows, somber shapes against the clear blackness of the sky. Like big black umbrellas, they rain down on the fields across the way, and then disappear behind the black line of the hedges.

No, I am not dreaming. Grandma was also not sleeping, and saw them from the window of the bedroom. I wake up Mama and my aunt. We hurriedly get dressed and go out into the courtyard. Once again, the sky is filled with a continuous, ever-intensifying hum. The hedgerows are alive with a strange crackling sound. Monsieur Dumont, the neighbor across the street, a widower who lives with his three children, has also come out of his house. He comes toward us and shows me, hanging on the edge of the roof courtyard, a parachute. The Dumont kids follow their father and join us in the school courtyard. But the night has not yet revealed its secret.

An impatient curiosity is stronger than the fear that grips me. I leave the courtyard and make my way onto the road. At the fence of a neighboring field, a man is sitting on the edge of the embankment. He is harnessed with big bags and armed from head to foot: rifle, pistol, and some sort of knife. He makes a sign for me to approach him. In English I ask him if his plane was shot down. He negates that and in a low voice shoots back the incredible news: “It’s the big invasion. . . . Thousands and thousands of paratroopers are landing in this countryside tonight.” His French is excellent. “I am an American soldier, but I speak your language well; my mother is a Frenchwoman of the Basse Pyrénées.” . . . I ask him, “What is going on along the coast? Are there landings? And what about the Germans?” I was babbling; my emotions were overwhelming my thoughts. Ignoring my questions, he asked me about the proximity of the enemy and its relative presence in the area. I reassured him: “There are no Germans here; the closest troops are stationed in Sainte-Mère-Église, almost two kilometers from here.”

The American tells me he would like to look at his map in a place where the light of his electrical torch will not be easily spotted. I propose that he come inside our house. He hesitates because he fears, he says, in the event that the Germans unexpectedly appear, he will put us in danger. I insist and reassure him: “Monsieur Dumont and my old aunt are going to watch the area around the school, one in front and the other in back.” Then the soldier follows us, limping; he explains to me that he sprained his ankle on landing. But he would not let me care for him; there are many things more important. . . . In the classroom, to which Grandmama, Mama, and the Dumont children follow us, he takes off one of his three or four satchels, tears off the sticky little bands that sealed it, and takes out the maps. He spreads one out on a desk; it is a map of the region. He asks me to show him his precise location. He is astonished to discover how far he is from his targets: the railway tracks and the little river called the Merderet bordering the Neuville swamp toward the west. I show him the road to follow in order to arrive there, where he is supposed to meet his comrades. He looks at his watch. Without thinking, I do as well. It is 11:20 p.m. He folds up his map, removes any trace of his presence, and after taking some chocolate out of his pocket which he gives to the children, so flabbergasted they forget to eat, he leaves us. He is perfectly calm and self-controlled, but the hand I shake is a little sweaty and stiff. I wish him luck in a voice that tries to be cheerful. And he adds in English—so that only I can understand—“The days to come are going to be terrible. Good luck, mademoiselle, thank you, I will not forget you for the rest of my life.” And he disappears like a vision in a dream.

Once again, the mystery of the night deepens. We stay outside waiting for who-knows-what, keeping our voices low. Suddenly, there is an extraordinary blaze of light. The horizon in the direction of the sea lights up as if reflecting an immense fire that has been lit over the ocean. The formidable growl of marine artillery can be heard even here, although muffled by and submerged in a multitude of other inchoate sounds. The black silhouettes of airplanes arrive in the clouds and turn around in the sky. One of them passes just above our little school; it puts on its lights and releases . . . what? For an instant, we think it’s a stick of bombs. But we are only starting to throw ourselves on the ground when parachutes open and float down like a mass of bubbles in the clear night. Then they scatter before disappearing in the confusion of the nocturnal countryside.

Another airplane passes over and releases its cargo. At first, the parachutes seem carried by the wake of the plane; then they drop vertiginously downward; finally the silk domes open. The descent gets slower and slower as they approach the ground. Those men whose dangling legs can clearly be seen get there a little more rapidly than those who hold bags of foodstuffs, equipment, ammunition. In a few moments, the sky is nothing more than an immense ballet of parachutes.

The spectacle on the earth is no less extraordinary. From all corners of the countryside shoot bursts of multicolored rockets as if thrown by invisible jugglers. In the fields all around us, big black planes slide silently toward the earth. Like flying Dutchmen, they land as if in a dream. These are the first groups of gliders. Our parachutist had been part of a group of scouts sent to signal the descent and landing zones.

To read more from this excerpt, click here.

For more information about D-Day through French Eyes, click here.

20. Q & A with Mary Louise Roberts


June 6, 2014, marks the 70th anniversary of the Allied invasion of Normandy, France: one of the most iconic moments of World War II, which came at a staggering cost in lives, turned the tide of the war in the Allies’ favor, and led to the restoration of the French Republic. Yesterday, to commemorate the event, we ran an excerpt from historian Mary Louise Roberts’s D-Day through French Eyes: Normandy 1944, which approaches the battle for Normandy from the perspective of French civilians, bearing witness from their homes and in the midst of their everyday lives. Today, we’re following up with a brief Q & A, in which Roberts expands on how our understanding of that single day in history—June 6, 1944—has changed a much larger story.

You can read more from Roberts on revisiting the other side of D-Day’s history at Medium here, and check out more from her book here.

***

On the 70th anniversary of D-Day, how would you say our perspective on the event has shifted since June 6, 1944?

The memory of any important event like D-Day undergoes change over time. If you read a novel such as Joseph Heller’s Catch-22, you’ll see an image of the American GI in Europe that is not flattering. Writers like Heller (and also, arguably, Kurt Vonnegut) used the American GI to register their protest against arbitrary authority in the 1960s—a broad cultural theme of that era. In the 1990s, which marked the fiftieth anniversary of the landings, we became used to thinking about the GIs in a brighter light. Thanks to Tom Brokaw’s notion of the “Greatest Generation,” as well as films such as Saving Private Ryan, we came to remember American soldiers in France almost exclusively as strong, self-sacrificing heroes.

The problem with this image of D-Day is that it ignores other agents who contributed greatly to the victory. Chief among them were the British, Canadian, and French armies, whose enormous efforts in Normandy should never be forgotten. Also overlooked by certain historians (Stephen Ambrose, for example) were the contributions of French civilians. In the days following the landings, these civilians committed small but substantial acts of courage. They guided the Americans through the woods and terrain, and gave them valuable intelligence concerning German military installations. In addition, they sheltered and cared for wounded paratroopers who had landed on the morning of the invasion.

So much of your work engages with reparative, reconstructive, and alternative accounts of history, especially those surrounding sex, gender, and war: what was the impetus to start researching D-Day as seen by the Normans?

My previous book, What Soldiers Do, began with a simple question: What were relations like between the American GIs and French women during the years from 1944 to 1946? To answer that question, I began by looking at the US trench journal Stars and Stripes. In the pages of that journal, I quickly realized two things. First, the American stereotype of the French woman was that she was not only seductive but easy to seduce. And second, the GIs were getting “sold” on the invasion through an old gender narrative: the knight who comes to rescue the damsel in distress. The GIs were fed an image of France as a nation of women awaiting American rescue. If you put those two things together, you have the version of the war presented to the average Joe in Stars and Stripes: if you fight bravely and liberate the women of France, you will be rewarded with kisses, embraces, and possibly more.

Many of the testimonies and first-person accounts by French civilians in D-Day through French Eyes rely on rich sensory details and deeply personal narratives of terror and euphoria: what did it feel like to uncover these writings in the archive?

When I was researching What Soldiers Do, I traveled to many municipal and departmental archives in Normandy and Brittany in order to consult documents. Many of the archivists in these regions had collected unpublished memoirs from the summer of 1944. They were stunning. Not only did they demonstrate the efforts of the French to liberate themselves, but they also presented a newly intimate look at the American soldier that summer as he fought in the Norman bocage. I wanted very much to share these memoirs with the American public, and that is why I wrote D-Day through French Eyes.

What should we take away from D-Day through French Eyes?

The most important takeaway from the book is just how much the French suffered for their freedom. The statistics are sobering: about 3,000 Normans died in the first few days of the invasion—roughly the same as the GI death toll in those days. Before the summer was over, about 19,000 Frenchmen had lost their lives. Hundreds of thousands more watched their homes reduced to rubble, or came back to a hometown that had been completely destroyed. Death—the bodies of soldiers and animals—became an everyday sight for children, as well as for adults.

 

21. Lawrence Summers on House of Debt


From Lawrence H. Summers, former Secretary of the Treasury and president emeritus of Harvard University, in the Financial Times:

“Atif Mian and Amir Sufi’s House of Debt, despite some tough competition, looks likely to be the most important economics book of 2014; it could be the most important book to come out of the 2008 financial crisis and subsequent Great Recession. Its arguments deserve careful attention, and its publication provides an opportunity to reconsider policy choices made in 2009 and 2010 regarding mortgage debt.”

House of Debt takes a complicated premise—unraveling the threads of the 2008 financial crisis from a tangle of Federal Reserve policies, insolvent investment banks, predatory mortgage lenders, and private-label securities—and delivers a clean-cut conclusion: the Great Recession and Great Depression, as well as the current economic malaise in Europe, were caused by a large run-up in household debt followed by a sharp drop in household spending. Recently, in addition to Summers’s endorsement in today’s Financial Times, the book has been profiled in the New York Times, the Wall Street Journal, the Atlantic, and the Economist, among others; Paul Krugman, writing for the NYT, noted that its associated House of Debt blog has “instantly become must reading.”

How do we move forward and break the cycle? With a direct attack on debt, say Mian and Sufi. More aggressive debt forgiveness after the crash helps, but as they illustrate, we can be rid of painful bubble-and-bust episodes only if the financial system moves away from its reliance on inflexible debt contracts.

To follow developments in global policy at the House of Debt blog, click here.

To read more about the book, click here.

 

22. World Ocean(s) Day

This past weekend, on June 8th to be exact, the Ocean Project and the World Ocean Network celebrated World Oceans Day. The event recognizes that there is “one world ocean” connecting the planet, and to this end, it was known as “World Ocean Day” until 2009, when the “s” was added in accordance with a resolution passed by the United Nations General Assembly that officially designated the annual date as “World Oceans Day.” Even this semantic quandary attests to the passion of those who champion and protect our hydrosphere—with that in mind, we’re revisiting The Deep, a project that launched new endeavors in “tidal” acquisitions for the Press, and has led to a remarkable list in the oceanic sciences (with Christie Henry, editorial director of the Sciences and Social Sciences, at the helm).

The Deep explores the deepest realms of the ocean, revealing a cast of more than 200 sometimes terrifying and mostly mesmerizing creatures in crystalline detail, some photographed for the very first time. The website associated with the book features an image gallery, an animated sampler, and beautiful pages, including a profile of the glowing sucker octopus—one of the world’s few bioluminescent octopuses—native to the North Atlantic.


 

In the wake of The Deep (pun intended), Chicago’s recent books that focus on oceans and the life thriving within them include:

Billion-Dollar Fish: The Untold Story of Alaska Pollock by Kevin M. Bailey

Swordfish: A Biography of the Ocean Gladiator by Richard Ellis

Stung!: On Jellyfish Blooms and the Future of the Ocean by Lisa-ann Gershwin

Science on Ice: Four Polar Expeditions by Chris Linder

Seasick: Ocean Change and the Extinction of Life on Earth by Alana Mitchell

Among Giants: A Life with Whales by Charles “Flip” Nicklin

Sharks and People: Exploring Our Relationship with the Most Feared Fish in the Sea by Thomas M. Peschak

Chasing Science at Sea: Racing Hurricanes, Stalking Sharks, and Living Undersea with Ocean Experts by Ellen Prager

Sex, Drugs, and Sea Slime: The Oceans’ Oddest Creatures and Why They Matter by Ellen Prager

Oceans: A Scientific American Reader

To read more about our lists in the biological sciences, click here.

To read more about our lists in ecology and environment, click here.

 

23. The detractor and the Donald


In Terror and Wonder, Pulitzer Prize–winning Chicago Tribune architecture critic Blair Kamin assembled his most memorable writing from the past decade, along with some polemical observations on the changing context of the built environment. Among them are two pieces that have taken on a new life in the past couple of weeks: “The Donald’s Dud: Trump’s Skyscraper, Shortened by the Post-9/11 Fear of Heights, Reaches Only for Mediocrity” and “A Skyscraper of Many Faces: In Trump’s Context-Driven Chicago Skyscraper, Beauty Is in the Eye—and the Vantage Point—of the Beholder.” The first piece decries the original design, leaving little room for ambivalence; the second considers the finished construction and, on the whole, lauds its structure.

Fast forward. Trump’s skyscraper has now been branded unequivocally as part of Trump’s real estate empire, in twenty-foot-tall block letters that spell out his name. Kamin unleashed some sharp criticism of the sign in a Chicago Tribune column last week, laying the blame on a city government whose obscure politicking allows this particular type of self-aggrandizement to continue:

“It’s a lack of sophisticated design guidelines as well as the teeth to enforce them. Trump’s sign isn’t the only offender — it’s just the most egregious — in a city where skyline branding has run amok.”

The response? Well, first:

“It happens to be great for Chicago, because I have the hottest brand in the world,” Trump told the Wall Street Journal.

Then the mayor’s office weighed in:

“The mayor thinks the sign is awful,” Bill McCaffrey, a mayoral spokesman, told the Tribune on Wednesday. “It’s in very poor taste and scars what is otherwise an architecturally accomplished building.”

More came from Kamin:

“Whatever the outcome in the Trump-Emanuel faceoff, Chicago needs to take the opposite tack, discouraging signs along its riverfront lest more Trump-style incursions defile what promises to be a great public space.”

Things escalated on Twitter, where Trump has long been known to broadcast his many opinions. Then Kamin was invited to appear on the Today Show, followed by a live call-in from Trump. It should be noted that Kamin’s three columns for the Tribune (here, here, and here) covering Trump’s skyscraper and the foibles of its branding are as prescient, intelligent, and thorough as the rest of his body of work, for which, again, he won a Pulitzer Prize. That didn’t stop Trump from trying to dismiss him.

Here’s a direct quote from Trump’s phone call:

“This was started by a third-rate architectural critic for the Chicago Tribune, who I thought got fired. He was gone for a long period of time. Most people thought he got fired. All of a sudden he re-emerges, and to get a little publicity, he started this campaign.”

Some last words? Kamin is of course still at the Tribune. During that “long period of time,” he was busy at Harvard University, where he served as a 2012–13 fellow at the Nieman Foundation for Journalism. (See above: Pulitzer.) 

 

24. Excerpt: The Democratic Surround

Where Did All the Fascists Come From?

from The Democratic Surround: Multimedia and American Liberalism from World War II to the Psychedelic Sixties

by Fred Turner

***

On December 3, 1933, a reporter for the New York Times named Shepard Stone tried to answer a question that had begun to puzzle many of his readers: How was it that in a single year, the nation that had brought the world Goethe and Bach, Hegel and Beethoven, had fallen so completely under the sway of a short, mustachioed dictator named Adolf Hitler? To some analysts, the answer was fundamentally social, as Stone acknowledged. Starvation, political chaos, violence in the streets—all had plagued the Weimar Republic that Hitler’s fascist state replaced. But neither Stone nor his editors thought such privations were enough to explain Hitler’s rise. Rather, wrote Stone, “something intangible was necessary to coordinate the resentments and hatreds which these forces engendered.”

That something was propaganda. Above an enormous photograph of a Nazi rally, with floodlit swastika banners towering two stories high and row upon row of helmeted soldiers leaning toward the lights, the article’s headline told its story: “Hitler’s Showmen Weave a Magic Spell: By a Vast Propaganda Aimed at Emotions, Germany’s Trance is Maintained.” For Stone and his editors, fascism was a fundamentally psychological condition. Its victims swayed in time, linked by fellow feeling, unable to reason. In part, they responded to Hitler’s charisma. But they also responded to the power of mass media. Hitler famously “hypnotized” the crowds at mass rallies until they roared with applause. His voice then traveled out from those arenas in radio waves, reaching Germans across the nation and inspiring in them the same hypnotic allegiance. As Stone suggested, Hitler’s personal appeal alone could not have transformed the mindset of the entire populace. Only mass media could have turned a nation famous for its philosophers into a land of unthinking automata: “With coordinated newspaper headlines overpowering him, with radio voices beseeching him, with news reels and feature pictures arousing him, and with politicians and professors philosophizing for him, the individual German has been unable to salvage his identity and has been engulfed in a brown wave. Today few Germans can separate the chaff from the wheat. They are living in a Nazi dream and not in the reality of the world.”

During and after World War II, this belief would drive many intellectuals and artists to imagine pro-democratic alternatives to authoritarian psyches and societies, and to the mass-mediated propaganda that seemed to produce them. But before we can explore those alternatives, we need to revisit the anxieties that made them so important to their makers. In the years leading up to the war, the fear of mass media and mass psychology that animated Stone’s account became ubiquitous among American intellectuals, politicians, and artists. When they gazed across the Atlantic to Hitler’s Germany and, to a lesser extent, Stalin’s Soviet Union and Mussolini’s Italy, American journalists and social scientists saw their longstanding anxieties about the power of mass media harden into a specific fear that newspapers, radio, and film were engines of fascist socialization.

Since the late nineteenth century, writers in Europe and the United States had dreaded the rise of mass industrial society. Such a society fractured the psyches of its members and rendered them vulnerable to collective fits of irrational violence, many feared. Now analysts worried that mass media drew individual citizens into protofascistic relationships with the centers of political and commercial power and with one another. In the one-to-many communication pattern of mass media they saw a model of political dictatorship. In mass media audiences, they saw the shadows of the German masses turning their collective eyes toward a single podium and a single leader. To enter into such a relationship with media, many worried, was to rehearse the psychology of fascism. The rise of National Socialism in Germany demonstrated that such rehearsals could transform one of the most cultured of nations—and perhaps even America itself—into a bastion of authoritarianism.

Could It Happen Here?

In the early 1930s, popular writers tended to see Hitler as an ordinary man who had somehow risen to extraordinary heights. Journalist Dorothy Thompson, who interviewed Hitler in 1931, characteristically described him as “formless, almost faceless, a man whose countenance is a caricature, a man whose framework seems cartilaginous, without bones. He is inconsequent and voluble, ill poised, insecure. He is the very prototype of the Little Man.” How was it that such a man should have acquired such power? she wondered.

As Shepard Stone had pointed out, part of the answer was surely political. In the chaos of the Weimar years, Hitler and his National Socialists promised national rejuvenation. They also threatened violent ends for any who opposed them. Yet these explanations found a comparatively small place in the American popular press and scholarship of the time, where more cultural and characterological explanations often held sway. In 1941, for instance, William McGovern, a professor of political science at Northwestern University, published a representative if long-winded analysis of the origins of National Socialism entitled From Luther to Hitler: The History of Fascist-Nazi Political Philosophy, which traced Nazi ideals back through centuries of German political thought. Somehow Hitler had managed to harvest those ideals and so transform a German cultural trait into a principle of national unity. For McGovern and others, it was not only German politics that had produced National Socialism, but something in the German mindset.

This conclusion presented a problem: If German totalitarianism was rooted in German culture, how could Americans explain the apparent rise of fascism in the United States? Though few remember the fact today, in the late 1930s, uniformed fascists marched down American streets and their voices echoed over the radio airwaves. The Catholic demagogue Father Coughlin, for example—founder of the “Radio League of the Little Flower”—was a ubiquitous presence on American radio for much of the decade. He formed a political party to oppose Roosevelt in 1936, endorsed and helped publish the anti-Semitic tract known as the Protocols of the Elders of Zion, and by 1938 could be heard spewing anti-Semitic and pro-fascist propaganda on the radio to a regular audience of some 3,500,000 listeners. A Gallup poll taken in January 1939 reported that some 67 percent of these listeners agreed with his views.

Alongside Father Coughlin, Americans could track the activities of William Dudley Pelley’s Silver Legion of America—an anti-Semitic paramilitary group formed in 1933 and modeled after Hitler’s brownshirts and Mussolini’s blackshirts. Though Pelley claimed to hear the voices of distant spirits, his group still attracted fifteen thousand members at its peak. Americans could also follow the Crusader White Shirts in Chattanooga, Tennessee; the American National-Socialist Party; and, of course, the Ku Klux Klan. For more than a few Americans in the 1930s, fascists were not merely threats from overseas. They lived next door.

The group that attracted the greatest notice of the American press in this period was the Amerikadeutscher Volksbund. The Bund had been created in 1936, when self-styled “American Führer” Fritz Kuhn, a German-born American citizen, was elected head of a German-American organization known as the Friends of New Germany. At its largest, the Bund probably had no more than twenty-five thousand members, most of them Americans of German extraction. Even so, on the night of February 20, 1939, they managed to bring twenty thousand people to Madison Square Garden for a pro-fascist rally. Though the event ostensibly celebrated George Washington’s birthday, the Garden was hung with anti-Semitic and pro-Nazi banners. Speakers wore uniforms clearly modeled on the military regalia of Nazi Germany. Three thousand uniformed men from the Bund’s pseudo–police force, the Ordnungsdienst, moved among the crowd, spotting and removing hecklers and soliciting donations. Throughout the rally, speakers and audience carefully proclaimed their pro-Americanism. They sang the “Star-Spangled Banner” and pledged “undivided” allegiance to the American flag. But speakers also launched a steady attack on Jews and the Roosevelt administration. One drew out the word “Roosevelt” in such a way that it sounded like “Rosenfeld.” Another tried to convince the audience that Judaism and communism were essentially the same social movement.

Twenty-two thousand Americans rally to support fascism in Madison Square Garden, February 20, 1939. Among the banners was one that read “Stop Jewish Domination of Christian America.” Photograph by FPG. © Getty Images. Used by permission.

Outside the Garden, Mayor Fiorello La Guardia stationed 1,700 policemen to keep order. City leaders feared large and violent counterdemonstrations, but the mayor had refused to prevent the rally, arguing that permitting free speech was precisely what distinguished democratic America from fascist Germany. In the end, police counted approximately ten thousand mostly peaceful demonstrators and observers, some holding signs reading “Smash Anti-Semitism” and “Drive the Nazis Out of New York.” Journalists on the scene believed police estimates to be heavily exaggerated. Even if they were correct, pro-fascist rally-goers outnumbered protesters two to one. To reporters at the time, it seemed entirely plausible that the Bund enjoyed substantial support, at the very least among Americans of German origin, and perhaps among other communities as well.

Even before this rally, the Bund loomed large as an emblem of the threat fascism posed to the United States. On March 27, 1937, for instance, Life published a seven-page spread under the headline, “Like Communism It Masquerades as Americanism.” There on the first page of the piece, Americans could see a Bundist color guard at the Garden wearing imitations of Nazi brownshirt uniforms and standing in front of a massive portrait of George Washington. Another headline in the same feature underlined the visual point: “It Can Happen Here.”

The actual number of fascists in the United States never came anywhere near the critical mass needed to challenge, let alone overthrow, the state. Yet in the late 1930s analysts across much of the political spectrum feared that it soon might. If it did, they reasoned, it would be because of one or both of two social forces. The first was a fascist fifth column inside the United States. In the 1930s, American journalists and politicians believed that Hitler’s Germany was engaging in a massive propaganda campaign inside the United States. Reporters noted that Germany had established active propaganda networks in European nations such as France, Norway, and the Netherlands, and suggested that they were exporting those tactics to American shores. In June of 1940, Life magazine announced, “These Are Signs of Fifth Columns Everywhere,” and published pictures of fascists congregating in South America, Asia, and Long Island. And despite the fact that Hitler’s regime had tried to distance itself from Fritz Kuhn, many Americans assumed that the Bund was as much as anything a front for Nazi interests in the United States.

German-American Bundists parade swastikas and American flags down East 86th Street, New York, October 30, 1939. Photograph from the Library of Congress, Prints and Photographs Division, NYWT&S Collection, LC-USZ62-117148.

The presence of Nazi agitators was only one part of the problem, though. The other was the power of language and of mass communication. Consider the national popularity of two groups that sought to challenge that power: the Institute for Propaganda Analysis and the General Semantics movement. Each presented a view of the individual psyche as vulnerable to irrational impulses and false beliefs. Each also suggested not only that communication could be manipulated by unscrupulous leaders, but that the media of communication—pictures, verbal language, symbols—were themselves naturally deceptive. Both agreed that the technologies of one-to-many communication amplified this power enormously. The individual American mind had become a battleground, and it was their mission to defend individual reason from the predations of fascism, of communication, and, potentially, of the individual’s own unconscious desires.

The Institute for Propaganda Analysis emerged in 1937 out of a class in “Education and Public Opinion” taught by Dr. Clyde Miller at Columbia’s Teachers College. Thanks to a $50,000 grant from Boston businessman Edward A. Filene, Miller, a number of New York–area colleagues, and a board of advisors that included leading sociologists Hadley Cantril, Leonard Doob, and Robert Lynd began creating study materials for a group of high schools in Illinois, New York, and Massachusetts. They also began publishing a monthly newsletter aimed primarily at teachers; it soon had almost six thousand subscribers.

The newsletter offered its readers a detailed training regime designed to help Americans achieve a heightened state of rational alertness. In the Institute’s materials the words and pictures of the mass media were scrims that obscured the motives and actions of distant powers. The source of their power to persuade lay primarily in their ability to stir up the emotions. The Institute implied that Americans could build up a psychological barrier to such manipulation by wrestling with newspaper stories and radio news accounts. An Institute-sponsored guide for discussion group leaders published in 1938 noted that propaganda analysis should proceed in four stages: “1) survey the contents 2) search for evidence of the statements or claims 3) study the propagandist’s motive [and] 4) estimate the content’s persuasive force.” This work could be done alone or in groups, and it was a species of intellectual calisthenics. Much as members might exercise their bodies to ward off disease, so might they also exercise their reason so as to ward off the inflammation of their unconscious desires and its potentially authoritarian consequences.

For the members of the General Semantics movement, the fight against propaganda depended on decoupling symbols and words from their objects of reference. If “semantics” referred to the study of meaning, “general semantics” referred to the more specific and, in the minds of its practitioners, scientific study of language and reference. The term “general semantics” was coined by Polish philosopher and mathematician Alfred Korzybski in the early 1920s. Korzybski had published a series of articles and books in which he argued that human beings’ ability to pass knowledge down through time via language was what made them unique as a species. In 1933 he published an exceptionally influential extension of his early theories, entitled Science and Sanity. At its core, the book argued that much human unhappiness in both the psychological and social realms could be traced to our inability to separate the pictures in our heads and the communicative processes that put them there from material reality itself. To solve this problem, Korzybski offered a course in close scientific reasoning and linguistic analysis. To alleviate the power that symbols and their makers have over us, he argued, human beings needed to parse the terms in which language presented the world to them. Having done so, they could begin to recognize the world as it was and thus to experience some degree of mental health.

General Semantics enjoyed a three-decade vogue among American intellectuals and the general public. In the years immediately before World War II, it seemed to offer new tools with which to confront not only the psychological threats posed by propaganda but a whole panoply of social and psychological ills. In his popular 1938 volume The Tyranny of Words, Stuart Chase catalogued the confusions bred by careless language: “Finally, when a little headway has been made against economic disaster, the peoples of Europe, more civilized than any other living group, prepare solemnly and deliberately to blow one another to molecules. . . . Confusions persist because we have no true picture of the world outside, and so cannot talk to one another about how to stop them.”

To be able to understand the world and change it, Chase argued, Americans needed to break down language itself, to dissolve its terms from their material-world referents, and so distinguish the pictures in their heads from reality. And nothing made the importance of that work clearer than the omnipresence of mass communication, propaganda, and the threat of a second world war. In 1941, linguist and future Senator S. I. Hayakawa’s volume Language in Action brought Chase’s argument and Korzybski’s theories into the public eye. Like Chase, Hayakawa argued that “we live in an environment shaped and partially created by hitherto unparalleled semantic influences: commercialized newspapers, commercialized radio programs, ‘public relations counsels,’ and the propaganda technique of nationalistic madmen.” To survive this onslaught, citizens needed scientific techniques for interpreting and resisting semantic assaults.

They especially needed techniques for disabling their immediate emotional responses to individual symbols. Hayakawa argued that human nervous systems tended to translate flows of experience into static pictures. Without training in General Semantics, they did so automatically. This in turn led quite literally to individual and collective madness. That is, words like “Nazi” and “Jew” conjured instant emotional responses; individuals lost track of the fact that the terms lacked immediate referents and were in fact so general as to be practically meaningless. Moreover, Hayakawa feared that in their rush to emotional judgment, citizens would rush to war as well. The only solution was a deep study of language and, with it, of our own roles in the communication process. As Hayakawa put it, “Men react to meaningless noises, maps of non-existent territories, as if they stood for actualities, and never suspect that there is anything wrong with the process. . . . To cure these evils, we must first go to work on ourselves. . . . [We must] understand how language works, what we are doing when we open these irresponsible mouths of ours, and what it is that happens, or should happen, when we listen or read.”

Read more from this excerpt here.

Read the book’s Introduction, via Fred Turner’s website, here.

For more about The Democratic Surround, click here.

25. The Summer of Hillary Chute


Not a bad summer for Hillary Chute, so far. The University of Chicago’s reigning doyenne of the history of comics and cartooning, Chute earned several nods from Stephen Burt in a recent Artforum piece (from a summer feature on graphic content; see the print issue) for her work on Outside the Box: Interviews with Contemporary Cartoonists, which offers unprecedented access to the life stories and processes of cartooning’s pantheon, including Lynda Barry, Alison Bechdel, Joe Sacco, Art Spiegelman, and Chris Ware.

In that same issue, Chute reviews the work of indie-feminist cult cartoonist Julie Doucet, unsparingly delving into the fantastical materiality of “Heavy Flow,” while placing Doucet at the helm of a movement that usurped the comics form for the purposes of feminist art:

Doucet’s darkly witty comics offer an aesthetic at once loose and dense. Her stylish line is controlled and masterful, while the rich spaces of her frames, with their heavy inking and deep perspective, teem with details and seething objects that seem as if they are about to burst out of the picture. The bodies in her work are simultaneously exuberant and seething. In the classic “Heavy Flow” (collected in Twisted Sisters: A Collection of Bad Girl Art [1991]), the Julie character at the center, menstruating, grows into a Godzilla-like monster, bleeding and crushing buildings in search of Tampax.

Doucet is central to our understanding of comics as a particularly vibrant platform for telling and showing women’s stories. Her work in the 1990s ushered in an era of comics as a feminist art form—a shift we can note throughout the past twenty years, marked by the success of Marjane Satrapi’s Persepolis (2000) and Alison Bechdel’s Fun Home: A Family Tragicomic (2006). Doucet became part of a wide-ranging punk- and Riot Grrrl–inflected cultural uptake—even getting a shout-out in Le Tigre’s 1999 song “Hot Topic,” alongside the likes of VALIE EXPORT and Carolee Schneemann.

If that weren’t enough, pick up the latest issue of Bookforum, which features another review by Chute, this time of Ariel Schrag’s Adam. And if you’re not a subscriber, keep your eyes peeled for the Summer 2014 issue of Critical Inquiry, “Comics and Media,” coedited by Chute and Patrick Jagoda, which should hit mailboxes and newsstands (unfair academic joke, I know, I know) any day now.

To read more about Outside the Box, click here.

 

