Publicity news from the University of Chicago Press, including news tips, press releases, reviews, and intelligent commentary.
Below follows a brief excerpt from “Heat Wave,” Chicago magazine’s excellent, comprehensive oral history of the week of record-breaking temperatures in July 1995 that killed more than 700 people, became one of the nation’s worst disasters, and left a legacy of unanswered questions about how ill-equipped civic, social, and medical responders were to contend with trauma on such a scale.
Mark Cichon, emergency room physician at Chicago Osteopathic Hospital
I remember talking to friends at other hospitals who said, “Man, we’re in the middle of a crisis mode.” It was across the city. Our waiting room and the emergency departments were packed. We were going from one emergency to another, all bunched together, almost like a pit crew. The most severe cases were the patients with asthma who were so far into an attack we couldn’t resuscitate them. I remember a woman in her early 30s. The paramedics had already put a tube into her lungs. We were trying to turn her around, but there was nothing that could be done.
Eric Klinenberg, sociologist and author of the 2002 book Heat Wave: A Social Autopsy of Disaster in Chicago (to the Chicago Tribune in July 2012)
[Fire officials] did not call in additional ambulances and paramedics, even though the wait times for people needing help were long.
Raymond Orozco, commissioner of the Chicago Fire Department (at an Illinois Senate hearing in late July 1995)
Nobody indicated that we needed more personnel or supplies. Our field supervisors told us, “We’re holding our own.” We needed something to trigger the mechanism. Nobody pulled the trigger.
Klinenberg, who offers a line of commentary in the piece, explored those days in depth in his classic work of sociology, Heat Wave: A Social Autopsy of Disaster in Chicago, a second edition of which was just published this past May, including a new preface by Klinenberg that situates climate change at the center of untenable weather events in urban centers and pushes for changes in infrastructure, rather than post-disaster responses. You can read more about the book here.
Bolder. More global. Risk-taking. The home of future stars.
Not a tagline for a well-placed index fund portfolio (thank G-d), but the crux of a piece by Sam Leith for the Guardian on the “crisis in non-fiction publishing”—ostensibly the result of copycat, smart-thinking, point-taking trade fodder that made Malcolm Gladwell not just a columnist, but a brand. As Leith asserts:
We have a flock of books arguing that the internet is either the answer to all our problems or the cause of them; we have scads of books telling us about the importance of mindfulness, or forgetfulness, or distraction, or stress. We have any number about what one recent press release called the “always topical” debate between science and religion. We have a whole subcategory of books that concern themselves with “what it means to be human.”
Enter the university presses. Though Leith acknowledges they’re still capable of producing academic jargon dressed up in always already pantalettes, they are also home to deeper, more complex, and vital trade non-fiction that produces new scholarship and nuanced contributions to the world of ideas, while still targeting their offerings to the general reader. If big-house publishers produce brands, scholarly presses produce the sharp, intelligent, and individualized contributions that later (after, perhaps, some mutation and watering down by the conglomerates) establish their fields. It is especially nice to see Yale, Harvard, Oxford, Princeton, Cambridge, and UCP called out for their high-calibre, serious non-fiction.
More from the Guardian article:
In natural history and popular science, alone, for instance: Hal Whitehead and Luke Rendell’s amazing book The Cultural Lives of Whales and Dolphins or Brooke Borel’s history of the bedbug, Infested, or Caitlin O’Connell’s book on pachyderm behaviour, Elephant Don, or Christian Sardet’s gorgeous book Plankton? All are published by the University of Chicago. Beth Shapiro’s book on the science of de-extinction, How to Clone a Mammoth? Published by Princeton. In biography, Yale – who gave us Sue Prideaux’s award-winning life of Strindberg a couple of years back – have been quietly churning out the superb Jewish Lives series. Theirs is the new biography of Stalin applauded by one reviewer as “the pinnacle of scholarly knowledge on the subject”, and theirs the much-admired new life of Francis Barber, the freed slave named as Dr Johnson’s heir. Here are chewy, interesting subjects treated by writers of real authority but marketed in a popular way. The university presses are turning towards the public because with the big presses not taking these risks, the stuff’s there for the taking.
You can read more about the University of Chicago Press’s biological sciences list here. And the rest of our titles, organized by subject category, here. Follow the #ReadUP hashtag on Twitter for old and new books straddling the line between accessible scholarship and exciting nonfiction.
Carol Kasper, our very own marketing director, was recently honored by the Association of American University Presses (AAUP) with their 2015 Constituency Award. The Constituency Award is unique, in that it involves an open-call nomination process from one’s peers, and focuses not only on individual achievement, but also on the spirit of cooperation and collaboration that marks the measure of integrity and success within the scholarly publishing community.
From the official press release:
The Constituency Award, established in 1991, honors an individual of a member press who has demonstrated active leadership and service, not only in service to the Association but to the scholarly publishing community as a whole. In addition to a term on the Association’s Board of Directors from 2009 to 2011, Kasper has been a member of numerous committees and panels throughout the years, including the Marketing Committee, the Bias-Free Language Task Force, and Midwest Presses Meeting Committees. . . . In addition to her formal service to the Association, and her leadership in the university press and international scholarly publishing worlds, Kasper has hosted numerous Whiting/AAUP Residents over the years. One of the nominating letters added: “Carol has dedicated all this time and energy to the AAUP in her typically quiet, unassuming fashion.”
From University of Chicago Press director Garrett Kiely’s remarks at the award ceremony:
“What makes Carol special and what uniquely qualifies her for this award are the people that Carol has mentored, supported, and trained in her time here in Chicago,” says Garrett Kiely, Director of University of Chicago Press and presenter of the award. “To put it in scholarly journal terms, her ‘impact factor’ has been very high!”
And just to add:
Carol is a phenomenal teacher and mentor—the very best kind, in that the generosity she extends to her colleagues, the fierce integrity with which she makes things happen, the self-determination and cooperation she encourages, and the good humor she doles out all seem effortless, because they are so very much a part of her. Congrats, CK!
From Nandini Ramachandran’s review of The Dead Ladies Project at Public Books:
The Dead Ladies Project is part of a long literary tradition of single ladies having adventures. As a genre, it has had to contend with the collective energies of late capitalism (which tries to convert all adventure into tourism), patriarchy (which tries to make all single women into threatening and/or pathetic monsters), and publishing (which tries to repackage and flatten all women who write into “women writers”). It does, on the whole, remarkably well, perhaps because it’s written by insightful people who have resisted, for an entire century, the call to cynicism. It’s easy, these days, to be jaded about human relationships, to believe that they have been fabricated and marketed and focus-grouped into torpor and that no one remains capable of an authentic emotion. Jessa Crispin, like so many writers before her, flatly refuses to believe that. She insists on the fleeting, transcendental passion, the abjection of unrequited longing, the thrill and terror of waking up in an alien city. She insists, further, that a woman can revel in all that tumult.
(I choose this excerpt as the best teaser for the book, yet a part earlier on, a sort of prelude in which Ramachandran relays the mise-en-scène of the spinster’s myth, that consuming-qua-shrill narrative surrounding a woman with “too much plot”—I feel you.)
Read more about The Dead Ladies Project here.
From an interview between Micah Uetricht and Andrew Hartman, author of A War for the Soul of America: A History of the Culture Wars, at In These Times:
You write about people on the Left realizing that, in addition to restrictive ideas about gender and race, perhaps the whole American project is rotten to the core, and they need a different way to define themselves. And so there increasingly was no unifying project for the Left to feel a part of anymore—while the average American still probably wanted to be a part of that kind of project.
There were some people from the ‘60s onward who saw the whole American project as irredeemable: racist, sexist, imperialist. But for the most part that was a very small minority.
Multiculturalists, the people who Schlesinger was arguing against, just wanted the U.S. to reflect what it actually was: a very multicultural society. People wanted to stop the U.S. from thinking of itself as better than others, reject American exceptionalism. But most of these people weren’t giving up the project of the U.S., they just wanted the project to look different. Derrick Bell, the critical race theorist and law professor who I write about in the book, was arguing that America was irredeemable; can never be anything other than racist. But the majority of say, social movement activists and professors in English departments were not going so far as to say that we need to burn the American project to the ground.
But critics lumped the multiculturalists together with others much farther to the Left. To conservatives, there was no difference between multiculturalists and Afro-Centrists. In their eyes, both were rejecting American ideals.
You argue, pretty provocatively, in your conclusion that the culture wars are largely over.
To me, the logic of the cultural wars seems largely exhausted. The Christian right in many ways is kind of a lost cause. You have an increasing number of conservative religious figures who are arguing the need to withdraw from public culture and create their own autonomous cultural zones, where they can prepare for when the U.S. is once again ready for its ideals.
Many conservative Christians, for example, still believe that homosexuality is not only an abomination in the eyes of God but also a threat to national values. But they are less likely to make that argument publicly and politically; instead their main tactic is “religious freedom.” To me, this is a recognition that they are losing the national battle, and they’re trying to create smaller zones in which they can discriminate in the name of religious freedom.
To read more about A War for the Soul of America, click here.
Earlier this month, the New York Times revisited Paul R. Ehrlich—through both his cult favorite 1968 work The Population Bomb, and as a doomsday-advocating talk show guest, who spent much of the 1970s and years since advancing the notion that it was just a matter of time before the strained resources of our overcrowded planet could no longer support humanity. Though the years since might have seeded us with a kinder, gentler apocalypse, Ehrlich remains (mostly) resolute:
But Dr. Ehrlich, now 83, is not retreating from his bleak prophesies. He would not echo everything that he once wrote, he says. But his intention back then was to raise awareness of a menacing situation, he says, and he accomplished that. He remains convinced that doom lurks around the corner, not some distant prospect for the year 2525 and beyond. What he wrote in the 1960s was comparatively mild, he suggested, telling Retro Report: “My language would be even more apocalyptic today.”
And yet, in a second Times piece, an op-ed, “Paul Ehrlich’s Population Bomb Argument Was Right,” by statistics professor Paul A. Murtaugh, Ehrlich’s ideas are framed less as nostalgia for a time of reasonable doomsday bets, and more as the inevitable catastrophic consequences of human reproduction and environmental degradation, in the age of what the children are calling the Anthropocene:
The more catastrophic consequences of human population growth predicted by Paul Ehrlich have not yet materialized, in part because he did not anticipate the enormous increase in agricultural productivity enabled by fossil fuels. But Ehrlich’s population bomb will inevitably detonate when fossil fuels become too scarce and expensive to sustain a growing population. . . . Ehrlich’s argument that expanding human populations cannot be sustained on an Earth with finite carrying capacity is irrefutable and, indeed, almost tautological. The only uncertainty concerns the timing and severity of the rebalancing that must inevitably occur.
If this seems reasonable, Ehrlich writes about the current state of our environment, along with the continual threats posed by our often careless inhabitancy, in two recent books: Hope on Earth (a conversation with Michael Charles Tobias about catastrophe and morality within the contemporary context) and the forthcoming Killing the Koala and Poisoning the Prairie (a comparative study of how these concerns have been [mis]handled by two global democracies, Australia and the United States, coauthored with Corey J. A. Bradshaw). You can read more about both here.
Congratulations to George Monbiot, author of Feral: Rewilding the Land, the Sea, and Human Life, which was just announced as the winner of the 2015 Orion Book Award for nonfiction, which honors “books that deepen the reader’s connection to the natural world, [and] represent excellence in writing.” In Feral, Monbiot, a journalist, columnist for the Guardian, and environmentalist (see his recent TED talk here), argues for a twenty-first-century movement based upon the concept of rewilding, which seeks to free nature from human intervention and allow ecosystems to resume their natural processes.
From a recent profile of the book at the Orion Blog:
When’s the last time you walked into the woods, or a park, or your garden, and felt unsure of what—or who—you might see? If the answer is “it’s been a while,” you’re not alone. With his intrepid and imaginative new book, Feral: Rewilding the Land, the Sea, and Human Life, journalist George Monbiot has invented a term for this twenty-first-century condition that afflicts so many of us in the developed world: “ecological boredom.” He’s come up with a prescription, too, which involves large-scale reintroductions of keystone species to the landscapes that humans have emptied out and made their own. If this sounds reckless and implausible, it’s not: Monbiot has done his research, and builds a case for how well his surprising list of animal recruits would fit into his home landscape of Britain. From moose and lynx to hippopotamuses and black rhinoceroses, Feral invites readers to imagine a wilder, less stifled and more primal world—one in which we humans can come to recognize our animal natures once again.—Scott Gast
And from the Orion editors’ commendation for the award:
George Monbiot’s well-researched book of narrative storytelling, speculation, and bold imagination is a vote in favor of rewilding not just nature but the human spirit. Feral invites readers to envision a wilder, less stifled and more primal world—one in which we humans can come to recognize our animal selves once again.
To read more about Feral, click here.
Fresh off an embargo of the news, we’re delighted to announce that Megan R. Luke’s Kurt Schwitters: Space, Image, Exile is the recipient of the 2015 Robert Motherwell Prize from the Dedalus Foundation. The Motherwell Prize, accompanied by a $10,000 award, “honors an outstanding publication in the history and criticism of modernism in the arts.” Luke’s book contextualizes, for the first time, the multidisciplinary work produced by one of modernism’s foremost innovators during the last years of his life, both during the Nazi regime and while in exile in Western Europe.
From the official announcement:
The Dedalus Foundation is pleased to announce that Megan R. Luke is the winner of the fourteenth annual Robert Motherwell Book Award, for Kurt Schwitters: Space, Image, Exile, published by The University of Chicago Press. The award, which carries a prize of $10,000, honors an outstanding publication in the history and criticism of modernism in the arts for the year 2014.
German artist Kurt Schwitters (1887–1948) is best known for his pioneering work in fusing collage and abstraction, the two most transformative innovations of twentieth-century art. Considered the father of installation art, Schwitters was also a theorist and a writer whose influence extends from Robert Rauschenberg and Eva Hesse to Thomas Hirschhorn. But while his early experiments in collage and installation from the interwar period have garnered much critical acclaim, his later work has generally been ignored. In the first book to fill this gap, Megan R. Luke tells the fascinating, even moving story of the work produced by the aging, isolated artist under the Nazi regime and during his years in exile.
Combining new biographical material with archival research, Luke surveys Schwitters’s experiments in shaping space and the development of his Merzbau, describing his haphazard studios in Scandinavia and the United Kingdom and the smaller, quieter pieces he created there. She makes a case for the great relevance of Schwitters’s aesthetic concerns to contemporary artists, arguing that his later work provides a guide to new narratives about modernism in the visual arts. His late works, she shows, were born of artistic exchange and shaped by his rootless life after exile, and they offer a new way of thinking about the history of art. Packed with images, Kurt Schwitters completes the narrative of an artist who remains a considerable force today.
Megan R. Luke is assistant professor of art history at the University of Southern California. Her research focuses on the advent of abstraction and collage, the history of photography and art reproduction, and the intersection of avant-garde art and mass culture, particularly early cinema.
To read more about Kurt Schwitters, click here.
When sociologist Donald Levine (1931–2015) passed away this April, the Chicago Tribune asked University of Chicago Press executive editor Douglas Mitchell to offer up some remarks on his decades-long personal and professional relationship with the longtime University of Chicago professor and former dean of the College. With Mitchell’s permission, they follow, in full, after the jump.
Don Levine and I go back a ways. In fact, the Chicago Tribune plays a role in my memories because of a “First Person” feature the Trib did about me on June 22, 1986 (you probably remember these full-page bio-vignettes in the Sunday magazine, usually on or near the back page—inspired no doubt by Studs Terkel’s Working, the reporters would sniff out interesting occupations, usually stuff like pizza delivery guy, parking lot attendant, hotel housekeeper, and the like, but then they got the idea of doing a white-collar type). I told the story of how I had been leafing through old files and found a one-paragraph sketch of a book on precision vs. ambiguity in language, and how the worship of precision actually disrupts understanding and relationships. It turned out to be Donald Levine’s book idea. I was smitten. I called him (we had no contact prior to that even though he was on the faculty; this was back when I was new to the Press in the late ’70s). Conversations with him led me to see that ambiguity has very definite positive functions in poetic and diplomatic and heuristic contexts, and so, chapter by chapter, we worked together for a few years to bring to fruition his The Flight from Ambiguity (1985).
Conversations with him were exciting because he had such a restless mind, curious about everything (a lot of curiosity for intellectuals comes from a wish to convert what they find back into their own terms, their own frameworks, but it’s a constant process of modifying ideas, both for the thinker and the interlocutors). In fact, I sometimes wondered how he managed to settle down his constantly searching gaze enough to get, not just words on paper, but whole arguments and whole books that had coherence; he did have a respect for aphorism, and had an aphoristic turn to his thought. The interest in language came from two sources: his intensive fieldwork in Ethiopia (where discourse had a culturally embedded double track, one that prized ambiguity for its hidden, allegorical meaning: gold, and one for obvious, literal meaning: wax); the other source was the philosopher Richard McKeon, in whom both Don and I took a lifelong dedicated interest, in part because McKeon had found ways to map univocal and polyvocal structures for generating new ideas and arguments and for judging evidence and results. It was McKeon’s vast interdisciplinary work in systematic pluralism that appealed to Don’s interest in multiple possibilities and in liberal arts of discourse.
His work as dean of the College at the University of Chicago was influenced in turn by this vision of plural methods and arts (including martial arts, as you doubtlessly know), and out of that experience with curriculum planning and experimentation came one of his most widely read books, Powers of the Mind: The Reinvention of Liberal Learning in America (2006). At a time when higher education has gotten commodified and corporatized, Don’s book seeks relevance in new formulations and applications of traditional disciplines, and is far more powerful as an antidote to mere utility and a celebration of learning for its own sake than, say, Fareed Zakaria’s new book on education.
Don’s keen interest in philosophy (much of it by way of his auditing of McKeon’s classes and reading McKeon’s work) is at the foundation of his great commitment to the work of Georg Simmel. Don edited a collection of Simmel’s key writings for the Press’s Heritage of Sociology series, and it remains one of the bestselling volumes in that series (up there with Weber, Marx, and Durkheim). He had a lot to do with making North American sociology safe for Simmel; one of his last books was a translation and edition of Simmel’s essays in metaphysics (The View of Life, 2011), precursors in a very special way to Heidegger’s philosophy. And, not least, this reverence for the heritage of ideas and cultural traditions made Don a first-class historian of sociology as a discipline. Another very influential book, thus, was Visions of the Sociological Tradition (1995).
Don’s own powers of the mind were marked by their protean vivacity, as you can tell; and so were his powers of the body (others can speak to his expertise in martial arts, and his workshops for students in aikido, an art of which he was a master). The conjunction of the two, mind and body, came in his powers of the soul: he and I bonded strongly over our mutual love of music, and here again his curiosity led him to go beyond the classical music in which he was an adept (as a violist; he also composed) to jazz, which is my idiom. He would stop by on many occasions to the South Side bar where my jazz band plays on Sunday nights, listening intently, asking for the names of tunes and seeking instruction in motifs, rhythmic riffs, and other aspects of the music. When I had a birthday that happened to fall on a Sunday, year before last, Don somehow found out about it and showed up with balloons, flowers, a card, and his irrepressible self. Ebullience came to him as naturally as Charlie Parker tunes come to jazz musicians.
What Stevie Wonder really meant to sing was “no book launch Saturday within the month of June,” and with that in mind, here are some recent images from those book-related fêtes staged a smidge sooner, during the long green march of spring.
Snapshots from the official book launch for The Big Jones Cookbook: Recipes for Savoring the Heritage of Regional Southern Cooking, featuring Chef (and author) Paul Fehribach, some of his clientele, and a band of University of Chicago Press culinary enthusiasts:
A photograph from the Dublin launch of Gillian O’Brien’s Blood Runs Green: The Murder that Transfixed Gilded Age Chicago (these young readers are actually O’Brien’s nieces and nephew):
And, finally, this photograph from Andrew Hartman’s talk about A War for the Soul of America: A History of the Culture Wars at the In These Times HQ:
To read more about books from Chicago’s most recent list, click here.
Our free e-book for June:
Prospero’s Son: Life, Books, Love and Theater by Seth Lerer
“This book is the record of a struggle between two temperaments, two consciousnesses and almost two epochs.” That’s how Edmund Gosse opened Father and Son, the classic 1907 book about his relationship with his father. Seth Lerer’s Prospero’s Son is, as fits our latter days, altogether more complicated, layered, and multivalent, but at its heart is that same problem: the fraught relationship between fathers and sons.
At the same time, Lerer’s memoir is about the power of books and theater, the excitement of stories in a young man’s life, and the transformative magic of words and performance. A flamboyantly performative father, a teacher and lifelong actor, comes to terms with his life as a gay man. A bookish boy becomes a professor of literature and an acclaimed expert on the very children’s books that set him on his path in the first place. And when that boy grows up, he learns how hard it is to be a father and how much books can, and cannot, instruct him. Throughout these intertwined accounts of changing selves, Lerer returns again and again to stories—the ways they teach us about discovery, deliverance, forgetting, and remembering.
“A child is a man in small letter,” wrote Bishop John Earle in the seventeenth century. “His father hath writ him as his own little story.” With Prospero’s Son, Seth Lerer acknowledges the author of his story while simultaneously reminding us that we all confront the blank page of life on our own, as authors of our lives.
Download your free copy of Prospero’s Son
An excerpt from Edible Memory: The Lure of Heirloom Tomatoes and Other Forgotten Foods by Jennifer A. Jordan
How could anything as perishable as fruits and vegetables become an heirloom? Many things that are heirlooms today were once simple everyday objects. A quilt made of fabric scraps, a wooden bowl used in the last stages of making butter, both become heirlooms only as time increases between now and the era of their everyday use. Likewise, the Montafoner Braunvieh—a tawny, gorgeously crooked-horned cow that roams a handful of pastures and zoos in Europe, a tuft of hair like bangs above her big brown eyes—or the Ossabaw pigs that scurry around on spindly legs at Mount Vernon were not always “heirlooms.” Nor were the piles of multicolored tomatoes that periodically grace the cover of Martha Stewart Living magazine or the food pages of daily newspapers. What happened to change these plants and animals from everyday objects into something rare and precious, imbued with stories of the past? In fact, food has always been an heirloom in the sense of saving seeds, of passing down the food you eat to your children and your children’s children, in a mixture of the genetic code of a given food (a cow, a variety of wheat, a tomato), and also in handing down the techniques of cultivation, preservation, preparation, and even a taste for particular foods. It is only with the rise of industrial agriculture that this practice of treating food as a literal heirloom has disappeared in many parts of the world—and that is precisely when the heirloom label emerges. The chain is broken for many people as they flock to the cities and the number of farmers and gardeners declines. So the concept of an heirloom becomes possible only in the context of the loss of actual heirloom varieties, of increased urbanization and industrialization as fewer people grow their own food, or at least know the people who grow their food. These are global issues, relevant to hunger and security and to cultural memory, community, and place. 
This book addresses one aspect of the much larger spectrum of issues around culture and agricultural biodiversity, focusing on these old seeds and trees.
In some ways heirlooms become possible (as a concept) only because of the industrialization and standardization of agriculture. They went away, there was a cultural and agricultural break, placing temporal and practical distance between current generations and past foods. In the meantime, gardeners and farmers quietly saved seeds for their own use. And then, as I discuss in much greater detail below, these heirloom foods began, tomato by tomato, apple by apple, to return to some degree of popularity.
In the United States, newspaper article after article, activist after activist, describes heirloom varieties as something one’s grandmother might have eaten. The implication is that there has been a significant break—that the current generation and their parents lost touch with these fruits, vegetables, and animals but that their grandparents might not have. “Heirlooms are major-league hot,” a reporter marveled in 1995. “As we become more of a technological society, people are reaching into the garden to get back that simple life, the simple life of their grandparents.” Concepts like “old-fashioned,” “just like Grandma ate,” and even “heirloom” can feel very American. But this is a mythical grandmother. The grandmothers of today’s United States are a diverse crew whose cooking habits are just one of the ways they differ. Gender is also obviously a vital element of the study of food production and consumption. Women are perceived as (and often are) the primary cooks and shoppers, and there are many gendered understandings of our relationships to food. Many people, men and women alike, have little time to cook, despite recent exhortations to engage in more home cooking. My own grandmother (the niece of my great-great-aunt Budder whom I write about in the prologue) smoked cigarettes and drank martinis with gusto, and for her, making Christmas cookies consisted of melting peanut butter and butterscotch chips, stirring in cornflakes, and forming the mixture into little clumps that would harden as they cooled. I loved them as a child, and when I make them today, I am invoking my grandmother just as much as other people may when serving up a platter of ancestral heirloom tomatoes.
In the context of food, however, the word “heirloom” also has a genetic connotation. The object itself is not handed down. Heirloom tomatoes are either eaten or they rot. Old-fashioned breeds of pigs are slaughtered and end up as pork chops; they rarely live a long life like Wilbur in Charlotte’s Web, without the help of a literate spider and a film career. The “heirloom,” then, what is handed down, is the genetic code. Heirloom foods are products of human intervention, ranging from selecting what seeds to save for the next growing season to deciding which tom turkey should father poults with which hen.
The genetic heirloom takes on a physical expression in the form of a pig or a tomato, for example, to which people may then attach all kinds of meanings—not only the physical appetite for the flavor of a particular tomato or pork chop, but also the sense that edible heirlooms connect us to something many people see as more authentic than supermarket fare. Over and over, in conversations and newspaper articles, orchards and public lectures, I have heard people articulating a search for a connection to the past, even as they also sought out appealing flavors, colors, and textures. The appetite for an heirloom food commonly leads, of course, to the destruction of its embodiment—in a Caprese salad, say, or an apple pie—but it is precisely the consumption of its phenotype that ensures the survival of the genetic code that gave rise to it.
A guide to heirloom vegetables describes heirloom status (of tomatoes and other produce) in three ways:
- The variety must be able to reproduce itself from seed [except those propagated through roots or cuttings]. . . .
- The variety must have been introduced more than 50 years ago. Fifty years is, admittedly, an arbitrary cutoff date, and different people use different dates. . . . A few people use an even stricter definition, considering heirlooms to be only those varieties developed and preserved outside the commercial seed trade. . . .
- The variety must have a history of its own.
The term “heirloom” itself generally applies to varieties that are capable of being pollen fertilized and that existed before the 1940s, when industrial farming spread in North America and the variety of species grown commercially was significantly reduced. Generally speaking, an heirloom can reproduce itself from seed, meaning seed saved from the previous year. When growing hybrids, you have to buy new seed each year (for plants that reproduce true to seed; apples, potatoes, and some other fruits and vegetables are preserved and propagated through grafts or cuttings rather than seeds). In other words, if you save the seeds of a hybrid tomato and plant them the next year, you more than likely won’t be pleased with what you get, if you get anything at all. Furthermore, simply because they are “heirloom” tomatoes does not mean they are native. In fact, tomatoes are native not to the United States, but to South and Central America, and many heirloom varieties such as the Caspian Pink were developed in Russia and other far-off places. People also use the term “heirloom” to describe old varieties of roses, ornamental plants, fruit trees (reproduced by grafting rather than from seed), potatoes, and even livestock.
As the US Department of Agriculture’s heirloom vegetable guide explains, “Dating to the early 20th C. and before, many [heirloom varieties] originated during a very different agricultural age—when localized and subsistence-based food economies flourished, when waves of immigrant farmers and gardeners brought cherished seeds and plants to this country, and before seed saving had dwindled to a ‘lost art’ among most North American farmers and gardeners.” Fashions, tastes, and technology changed, but “since the 1970s, an expanding popular movement dedicated to perpetuating and distributing these garden classics has emerged among home gardeners and small-scale growers, with interest and endorsement from scientists, historians, environmentalists, and consumers.” In Germany they speak of alte Sorten, “old varieties,” but this phrasing does not carry the same symbolic, nostalgic weight as the homey word “heirloom.” In French, heirloom varieties may be called légumes oubliés, “forgotten vegetables,” or légumes anciens. Of course, once vegetables are labeled forgotten, they’re not really forgotten anymore. In general, the United States has a different relationship to its past than European countries do. Thus there are regional gardening and cooking traditions in the United States, as well as a particular form of nostalgia that allows the term “heirloom” to apply to fruits, vegetables, and animals in the first place. The idea of an heirloom object can be very homespun. Certainly an heirloom can be something of great monetary value, but it can also be a threadbare quilt, a grandfather’s toolbox, or in my case the worn and mismatched paddles my great-great-aunt used in the last stages of making butter. The word “heirloom” can be a way to preserve biodiversity, but it can also be inaccurate and misused, a label slapped on an overpriced tomato. There is always the danger that dishonest grocers and restaurateurs will exploit the desire for local, seasonal, and heirloom food.
Heirlooms of all sorts are often wrapped up in nostalgic ideas about the past. Patchwork quilts and butter churns evoke not only idyllic images of yesteryear, but often difficult lives circumscribed by poverty and dire necessity as much as by simplicity and self-sufficiency. They speak of times (and, when we think globally, of places) when life may have been (or may still be) not only technologically simpler but also much, much harder. Old-fashioned farm implements in the front yards of rural Wisconsin, or in living history museums, evoke nostalgic feelings. But there’s a reason they’re in museums or front yards and not hitched to a team of horses or in the hands of a farmer, at least in Wisconsin. These are backbreaking tools whose functions have wherever possible been transferred to machines.
Even today, while it may surprise people who pick up a book like this, when I first tell someone about my work, I routinely have to explain what an heirloom tomato is. On a recent trip to a Milwaukee farmers’ market, I heard an older man say to his female companion, “Heirloom tomatoes? Never heard of ’em.” He’s not alone. While some food writers and restaurant reviewers may feel that heirloom tomatoes are yesterday’s news, plenty of consumers are still encountering them for the first time.
Heirloom varieties are just one form of edible memory, but they offer a unique opportunity to understand the powerful ways memory and materiality interact, and how the stories we tell one another about the past shape the world we inhabit. I write about heirlooms not because I think they’re the only way to go, but because they present an intriguing sociological puzzle (How can something as perishable as a tomato become an heirloom?) and because they are the subject of so much activity by so many different people. These efforts, all this work, are also just the latest turn in the twisting path of fruit and vegetable trends, of the relationship of these plants to human communities. This book recounts my search for endangered squashes, nearly forgotten plums, and other rare genes surviving in barnyards, gardens, and orchards, this intertwining of botanical, social, and edible worlds.
I relish the moments I have spent with the old-fashioned farm animals at the Vienna zoo, standing in the stall with the zookeeper to scratch the fluffy head of a newborn lamb or the vast forehead of that speckled black-and-white cow, one of only a few of her breed remaining on the planet, who had just dutifully produced a calf that looked exactly like her. I also relish the meals I’ve prepared from multicolored potatoes or tomatoes; and, given a free Saturday, I can spend hours at farmers’ markets, contemplating what I can do with a bucket of almost overripe peaches (freeze them for my winter oatmeal) or a pile of striped squash (a spectacularly failed attempt at whole wheat squash gnocchi, which may still be lurking in the back of my freezer). And I have my own history of deep attachment to processed spice cake and the unctuous taste of a rare glass of whole milk—a reminder that “edible memory” goes far beyond the relatively narrow confines of heirloom food.
But I am also a sociologist, so in this book, while I am fond of many of the places, people, and foods I discuss, I also aim, ultimately, to tell a sociological story. I did not, like Barbara Kingsolver in Animal, Vegetable, Miracle, try to raise turkeys or can a heroic quantity of heirloom tomatoes. Unlike Michael Pollan in the journey he undertook for The Omnivore’s Dilemma, I did not try to shoot anything or make my own salt. Along the way, however, I did get involved; I immersed myself in these rich landscapes, markets, and texts and in conversations with diverse groups and individuals who often, unknown to anyone else, managed to hold on to vital and beautiful collections of genes in the form of old apple trees or tomato seeds, turnips or taro. I set out not to grow these plants and raise these animals myself, but to talk with and observe the diverse and committed gardeners, farmers, curators, seed savers, animal breeders, and other people who make possible the persistence of these plants and animals on this planet. I set out to understand in particular where these plants have come from, the threats they face, the kinds of places that are created in the attempt to save them, and the stories they tell us about the past and about ourselves, as well as how they figure in the broader patterns of human appetites, trends and fashions, habits and intentions.
The research for this book comprised seven years of observation and analysis. In my efforts to understand how tomatoes became heirlooms and apples became antiques, I set out on multiple journeys, of varying sorts. I drove down Lake Shore Drive to the Green City Market and urban farms and gardens in Chicago, traveled across town in Milwaukee to Growing Power and other urban growers, flew across the Atlantic to Vienna, took a streetcar over the bridges of Stockholm to get to the barnyards and gardens of the Swedish national open-air folk museum, and got lost on the tangle of bridges and highways between Washington, DC, and rural Virginia in search of Thomas Jefferson’s vegetable garden and George Washington’s turkeys. I also took more philosophical journeys: literary and archival travels through the pages of government reports, scholarly periodicals, and popular and scientific books. I traveled through recipe collections and the glossy pages of food magazines, through the digital universe of online databases, and through correspondence with colleagues and informants in far-off places. The collection of these journeys, of this movement through gardens, barnyards, orchards, and markets, as well as thickets of printed and digital information, accounts for the story I tell here.
This book emerged in part from solitary hours in front of the computer, taking notes, with stacks of books at my side, reading newspaper articles and academic journal articles on everything from apple grafting to patent law. I analyzed thousands of newspaper articles, charting the emergence of the term “heirloom” in popular food writing and looking for changes in the quantity and quality of the discussion over time as well as differences and similarities across different kinds of foods. Much of this book is based on the ways heirloom varieties register in public discussions, especially the media, and the ways they get taken up by organizations and individuals, both in and out of the limelight. Blogs and other food writing have also figured centrally in my analysis of the heirloom food movement as markers of popular discussions, and I have relied on hundreds of secondary sources (see the bibliography) for historical information about specific foods. I read encyclopedias and fascinating scholarly and popular books, charting the rise and fall of particular foods and their historical transformations. And I drew on the insights of my colleagues in sociology and neighboring academic disciplines and the ways they think about things like culture, memory, and food.
Occasionally I would take a break and cook one of the recipes I came across, and I also left my desk and set out to visit the farms and gardens, camera and notebook in hand. I scratched the noses of wiry old pigs, walked through fragrant herb gardens, and tasted hard cider and fresh bread, the hems of my jeans coated in mud and my nose sunburned from a long day in an Alpine valley or at a midwestern heirloom seed festival. I spoke formally and informally with gardeners, farmers, and chefs, activists, seed savers, academics, and all kinds of people devoted to food. I visited farms and gardens and living history museums and farmers’ markets, and I attended conferences and public lectures and delivered some of my own to smart crowds full of eager gardeners, eaters, and thinkers. I also spoke with the gardeners of less well-known historical kitchen gardens across Europe and the United States, quiet conversations about their enthusiasm for their work and about their assessments of the changing public perceptions of edible biodiversity over recent decades. Many of these farmers and gardeners became good friends, and our late-night conversations over good meals in my dining room or cheap beer at a rooftop farm in Chicago’s Back of the Yards also came to shape my sociological understanding of these trends. Sifting through the stacks of papers on my desk in the depths of winter, and wandering through gardens, barnyards, and farmers’ markets in the heat of summer, I wanted to see what patterns I might find.
Finding Edible Memory
What I found was something I came to call “edible memory.” And I want to emphasize that I did not expect to find it. Edible memory emerged out of these documents, landscapes, and conversations. This book focuses largely on the contemporary United States, with occasional examples drawn from elsewhere. But the fundamental ideas and questions can help us to think about other times and places as well. For sociologists, the study of human behavior—of what people actually do, and do in large enough numbers to register as visible patterns—is at the heart of our work. Many of us are studying what happens when people are highly motivated, when they are so passionate about something that the passion provokes action. That said, many of us are also deeply interested in the small actions of habit, the little steps we take every day that add up to this big thing called society. What we eat for breakfast, who we spend time with and how, what we buy, even what we ignore—these are all crucial to understanding how and why things are as they are. This book is about the fervent devotees, the people who can’t not plant orchards full of apple trees or spend countless hours saving turnip seeds. But it is also about the ways millions (perhaps even billions) of people make small decisions every day about what to serve their families, about how to feed themselves.
When I began to look in scholarly and popular writing, and in kitchens, gardens, farms, and markets, I saw more and more evidence of edible memory: in the rice described by geographer Judith Carney, in the gardens of Hmong refugees in Minnesota, in the hard-won community gardens of New York’s Lower East Side, and in the appetites and memories of friends and strangers alike. Edible memory appears in the reverberations of African foods in a range of North American culinary traditions, in the efforts to cultivate Native American foods today, in the shifting appetites of immigrant populations and ardently trendy folks in Brooklyn or Portland. It goes far beyond the heirloom, but heirlooms were my way in, a way to narrow, at least temporarily, the scope of the investigation and to explore one particularly potent intersection of food, biodiversity, and tales of past ways of being. Edible memory is a widely applicable concept, and I hope it will resonate well beyond the boundaries of the examples I have included in this book.
Edible memory is also in no way the sole province of elites. Much of what people understand as heirloom food today is expensive and out of reach, justifying the pretensions sometimes assigned to heirloom tomatoes, farmers’ markets, or the pedigreed chicken in the television show Portlandia. Food deserts, double shifts, cumbersome or expensive transportation, and straight-up poverty greatly reduce access to a wide range of foods, heirlooms included. But to assume that edible memory is strictly connected to privilege ignores the vital connections people have to food at a range of locations on the socioeconomic scale. Poverty, and even hunger, does not preclude (and indeed may intensify) the meanings and memories surrounding food. As many researchers have discussed, the various alternative approaches to food— heirlooms, but also farmers’ markets, organic and local foods, and artisanal foods—tend to be expensive, eaten largely by elites—well-off and often white. However, while that may characterize what we might call mainstream alternative, both edible biodiversity and edible memory happen across the socioeconomic spectrum. There are vibrant, successful projects in which people worlds away from expensive restaurants and farmers’ markets grow and eat many of the same kinds of memorable vegetables, in rural backyards, small urban allotments, and school gardens. Chicago alone is home to many farms and gardens supplying food and often employment and other projects in low-income communities, projects like the Chicago Farmworks, Growing Home, Gingko Gardens, or the Chicago location of Growing Power, which is even selling its produce in local Walgreens, trying to improve access to locally grown produce in predominantly low-income and African American neighborhoods. 
The numerous farms and gardens profiled on Natasha Bowens’s blog and multimedia project, The Color of Food, also offer examples across the country of farmers and gardeners with a deep commitment to many of the same foods that find their way into high-priced grocery stores or expensive restaurant dinners.
At the same time, I do not want to argue that edible memory is a universal concept. We can ask where and how it appears and matters, but we should not assume that it is everywhere either present or significant. It is certainly widespread, based on the research I have conducted, but it is not universal. For some people food may be a way to imagine communities, to understand their place in the world and connect to other people, but for others it is simply physical sustenance or transitory pleasure.
To read more about Edible Memory, click here.
Anthony C. Yu (1938–2015)—scholar, translator, teacher—passed away earlier this month, following a brief illness. As the Carl Darling Buck Distinguished Service Professor Emeritus in the Humanities and the Divinity School at the University of Chicago, Yu fused a knowledge of Eastern and Western approaches in his broad-ranging humanistic inquiries. He is perhaps best known for his translation of The Journey to the West, a sixteenth-century Chinese novel about a Tang Dynasty monk who travels to India to obtain sacred texts, which blends folk and institutionalized national religions with comedy, allegory, and the archetypal pilgrim’s tale. Published in four volumes by the University of Chicago Press, Yu’s pathbreaking translation spans the novel’s 100 chapters; an abridged version of the text appeared in 2006 (The Monkey and the Monk), and just recently, in 2012, Yu published a revised edition.
In addition to JttW, Yu’s scholarship explored Chinese, English, and Greek literature, among other fields, as well as the classic texts of comparative religion. He was a member of the American Academy of Arts and Sciences, the American Council of Learned Societies, and Academia Sinica, and served as a board member of the Modern Language Association, as well as a Guggenheim and Mellon Fellow.
From the University of Chicago News obituary:
“Professor Anthony C. Yu was an outstanding scholar, whose work was marked by uncommon erudition, range of reference and interpretive sophistication. He embodied the highest virtues of the University of Chicago, his alma mater and his academic home as a professor for 46 years, with an appointment spanning five departments of the University. Tony was also a person of inimitable elegance, dignity, passion and the highest standards for everything he did,” said Margaret M. Mitchell, the Shailer Mathews Professor of New Testament and Early Christian Literature and dean of the Divinity School.
To read more about The Journey to the West, click here.
Our free e-book for May, Valerie Curtis’s Don’t Look, Don’t Touch, Don’t Eat: The Science behind Revulsion, considers the narrative history and scientific basis behind the psychology of disgust.
Every flu season, sneezing, coughing, and graphic throat-clearing become the day-to-day background noise in every workplace. And coworkers tend to move as far—and as quickly—away from the source of these bodily eruptions as possible. Instinctively, humans recoil from objects that they view as dirty and even struggle to overcome feelings of discomfort once the offending item has been cleaned. These reactions are universal, and although there are cultural and individual variations, by and large we are all disgusted by the same things.
In Don’t Look, Don’t Touch, Don’t Eat, Valerie Curtis builds a strong case for disgust as a “shadow emotion”—less familiar than love or sadness, it nevertheless affects our day-to-day lives. In disgust, biological and sociocultural factors meet in dynamic ways to shape human and animal behavior. Curtis traces the evolutionary role of disgust in disease prevention and hygiene, but also shows that it is much more than a biological mechanism. Human social norms, from good manners to moral behavior, are deeply rooted in our sense of disgust. The disgust reaction informs both our political opinions and our darkest tendencies, such as misogyny and racism. Through a deeper understanding of disgust, Curtis argues, we can take this ubiquitous human emotion and direct it towards useful ends, from combating prejudice to reducing disease in the poorest parts of the world by raising standards of hygiene.
Don’t Look, Don’t Touch, Don’t Eat reveals disgust to be a vital part of what it means to be human and explores how this deep-seated response can be harnessed to improve the world.
To download your free copy (through May 31) of Don’t Look, Don’t Touch, Don’t Eat, click here
Coinciding with the celebration of Cinco de Mayo and for a very limited time, the good folks behind the University of Chicago Spanish–English Dictionary (Sixth Edition) app have dropped the price to $0.99 (usually $4.99). You can see a basic screenshot of the app’s functionality above—from breezing through recent reviews, it seems like the app’s ability to generate word lists, along with its word-by-word notetaking feature, has proven especially popular.
From the App Store description:
The Spanish–English Dictionary app is a precise and practical bilingual application for iPhone® and iPod touch® based on the sixth edition of The University of Chicago Spanish–English Dictionary. Browse or search the full contents to display all instances of a term for fuller understanding of how it is used in both languages. Build your vocabulary by creating Word Lists and testing yourself on terms you need to master with flash cards and multiple choice quizzes. Whether you are preparing for next week’s class or upcoming international travel, this app is the essential on-the-go reference.
You can watch a demo of the app here:
The app is, of course, a companion to the (physical book) sixth edition of the University of Chicago Spanish–English Dictionary, praised by Library Journal as “comprehensive in scope, but simple enough to use for even the most tongue-tied linguist.” Limited time means limited time, so if you’re looking for “an important contribution to update the traditional dictionary to the new digital era,” visit the App Store today.
Brooke Borel’s Infested: How the Bed Bug Infiltrated Our Bedrooms and Took Over the World, a history, is the kind of book that can make you squirm—and not in a way that reassures you about the general asepsis of your mattress, hostel accommodations, luggage, vintage sweater, sexual partner, electrical heating system, duvet cover, trousseau, or recycling bin.
Consider this excerpt from the book, recently posted at Gizmodo, about the plucky bed bug’s resistance to DDT (read more at the link to learn about how it—yes, the insect—was almost drafted in the Vietnam War):
Four years after the Americans and the Brits added DDT to their wartime supply lists, scientists found bed bugs resistant to the insecticide in Pearl Harbor barracks. More resistant bed bugs soon showed up in Japan, Korea, Iran, Israel, French Guiana, and Columbus, Ohio. In 1958 James Busvine of the London School of Hygiene and Tropical Medicine showed DDT resistance in bed bugs as well as cross-resistance to several similar pesticides, including a tenfold increase in resistance to a common organic one called pyrethrin. In 1964 scientists tested bed bugs that had proven resistant five years prior but had not been exposed to any insecticides since. The bugs still defied the DDT.
Soon there was a long list of other insects and arachnids with an increasing immunity to DDT: lice, mosquitoes, house flies, fruit flies, cockroaches, ticks, and the tropical bed bug. In 1969 one entomology professor would write of the trend: “The events of the past 25 years have taught us that virtually any chemical control method we have devised for insects is eventually destined to become obsolete, and that insect control can never be static but must be in a dynamic state of constant evolution.” In other words, in the race between chemical and insect, the insects always pull ahead.
If that doesn’t, er, scratch your itch, check out the video above (produced by the Frank Collective, a rad tribe of Brooklyn-based digital media collaborators), which features Borel teasing “7 Crazy Bed Bug Facts,” and explore the book’s website, a safe space where the “bed bug queen” makes her nest.
To read more about Infested, click here.
screenshot from AP video of Baltimore protests on April 26, 2015
N. B. D. Connolly, assistant professor of history at Johns Hopkins University and author of A World More Concrete: Real Estate and the Remaking of Jim Crow South Florida, on “Black Culture is Not the Problem” for the New York Times:
The problem is not black culture. It is policy and politics, the very things that bind together the history of Ferguson and Baltimore and, for that matter, the rest of America.
Specifically, the problem rests on the continued profitability of racism. Freddie Gray’s exposure to lead paint as a child, his suspected participation in the drug trade, and the relative confinement of black unrest to black communities during this week’s riot are all features of a city and a country that still segregate people along racial lines, to the financial enrichment of landlords, corner store merchants and other vendors selling second-rate goods.
The problem originates in a political culture that has long bound black bodies to questions of property. Yes, I’m referring to slavery.
To read more about A World More Concrete, click here.
An excerpt from Elephant Don: The Politics of a Pachyderm Posse
by Caitlin O’Connell
“Kissing the Ring”
Sitting in our research tower at the water hole, I sipped my tea and enjoyed the late morning view. A couple of lappet-faced vultures climbed a nearby thermal in the white sky. A small dust devil of sand, dry brush, and elephant dung whirled around the pan, scattering a flock of guinea fowl in its path. It appeared to be just another day for all the denizens of Mushara water hole—except the elephants. For them, a storm of epic proportions was brewing.
It was the beginning of the 2005 season at my field site in Etosha National Park, Namibia—just after the rainy period, when more elephants would be coming to Mushara in search of water—and I was focused on sorting out the dynamics of the resident male elephant society. I was determined to see if male elephants operated under different rules here than in other environments and how this male society compared to other male societies in general. Among the many questions I wanted to answer was how ranking was determined and maintained and for how long the dominant bull could hold his position at the top of the hierarchy.
While observing eight members of the local boys’ club arrive for a drink, I immediately noticed that something was amiss—these bulls weren’t quite up to their usual friendly antics. There was an undeniable edge to the mood of the group.
The two youngest bulls, Osh and Vincent Van Gogh, kept shifting their weight back and forth from shoulder to shoulder, seemingly looking for reassurance from their mid- and high-ranking elders. Occasionally, one or the other held its trunk tentatively outward—as if to gain comfort from a ritualized trunk-to-mouth greeting.
The elders completely ignored these gestures, offering none of the usual reassurances such as a trunk-to-mouth in return or an ear over a youngster’s head or rear. Instead, everyone kept an eye on Greg, the most dominant member of the group. And for whatever reason, Greg was in a foul temper. He moved as if ants were crawling under his skin.
Like many other animals, elephants form a strict hierarchy to reduce conflict over scarce resources, such as water, food, and mates. In this desert environment, it made sense that these bulls would form a pecking order to reduce the amount of conflict surrounding access to water, particularly the cleanest water.
At Mushara water hole, the best water comes up from the outflow of an artesian well, which is funneled into a cement trough at a particular point. As clean water is more palatable to the elephant and as access to the best drinking spot is driven by dominance, scoring of rank in most cases is made fairly simple—based on the number of times one bull wins a contest with another by usurping his position at the water hole, by forcing him to move to a less desirable position in terms of water quality, or by changing trajectory away from better-quality water through physical contact or visual cues.
Cynthia Moss and her colleagues had figured out a great deal about dominance in matriarchal family groups. Their long-term studies in Amboseli National Park showed that the top position in the family was passed on to the next oldest and wisest female, rather than to the offspring of the most dominant individual. Females formed extended social networks, with the strongest bonds being found within the family group. Then the network branched out into bond groups, and beyond that into associated groups called clans. Branches of these networks were fluid in nature, with some group members coming together and others spreading out to join more distantly related groups in what had been termed a fission-fusion society.
Not as much research had been done on the social lives of males, outside the work by Joyce Poole and her colleagues in the context of musth and one-on-one contests. I wanted to understand how male relationships were structured after leaving their maternal family groups as teens, when much of their adult lives was spent away from their female family. In my previous field seasons at Mushara, I’d noticed that male elephants formed much larger and more consistent groups than had been reported elsewhere and that, in dry years, lone bulls were not as common here as had been recorded at other research sites.
Bulls of all ages were remarkably affiliative—or friendly—within associated groups at Mushara. This was particularly true of adolescent bulls, which were always touching each other and often maintained body contact for long periods. And it was common to see a gathering of elephant bulls arrive together in one long dusty line of gray boulders that rose from the tree line and slowly morphed into elephants. Most often, they’d leave in a similar manner—just as the family groups of females did.
The dominant bull, Greg, most often at the head of the line, is distinguishable by the two square-shaped notches out of the lower portion of his left ear. But there is something deeper that differentiates him, something that exhibits his character and makes him visible from a long way off. This guy has the confidence of royalty—the way he holds his head, his casual swagger: he is made of kingly stuff. And it is clear that the others acknowledge his royal rank as his position is reinforced every time he struts up to the water hole to drink.
Without fail, when Greg approaches, the other bulls slowly back away, allowing him access to the best, purest water at the head of the trough—the score having been settled at some earlier period, as this deference is triggered without challenge or contest almost every time. The head of the trough is equivalent to the end of the table and is clearly reserved for the top-ranking elephant—the one I can’t help but refer to as the don since his subordinates line up to place their trunks in his mouth as if kissing a Mafioso don’s ring.
As I watched Greg settle in to drink, each bull approached in turn with trunk outstretched, quivering in trepidation, dipping the tip into Greg’s mouth. It was clearly an act of great intent, a symbolic gesture of respect for the highest-ranking male. After performing the ritual, the lesser bulls seemed to relax their shoulders as they shifted to a lower-ranking position within the elephantine equivalent of a social club. Each bull paid his respects and then retreated. It was an event that never failed to impress me—one of those reminders in life that maybe humans are not as special in our social complexity as we sometimes like to think—or at least that other animals may be equally complex. This male culture was steeped in ritual.
Greg takes on Kevin. Both bulls face each other squarely, with ears held out. Greg’s cutout pattern in the left ear makes him very recognizable.
But today, no amount of ritual would placate the don. Greg was clearly agitated. He was shifting his weight from one front foot to the other in jerky movements and spinning his head around to watch his back, as if someone had tapped him on the shoulder in a bar, trying to pick a fight.
The midranking bulls were in a state of upheaval in the presence of their pissed-off don. Each seemed to be demonstrating good relations with key higher-ranking individuals through body contact. Osh leaned against Torn Trunk on his one side, and Dave leaned in from the other, placing his trunk in Torn Trunk’s mouth. The most sought-after connection was with Greg himself, of course, who normally allowed lower-ranking individuals like Tim to drink at the dominant position with him.
Greg, however, was in no mood for the brotherly “back slapping” that ordinarily took place. Tim, as a result, didn’t display the confidence that he generally had in Greg’s presence. He stood cowering at the lowest-ranking position at the trough, sucking his trunk, as if uncertain of how to negotiate his place in the hierarchy without the protection of the don.
Finally, the explanation for all of the chaos strode in on four legs. It was Kevin, the third-ranking bull. His wide-splayed tusks, perfect ears, and bald tail made him easy to identify. And he exhibited the telltale sign of musth, as urine was dribbling from his penis sheath. With shoulders high and head up, he was ready to take Greg on.
A bull entering the hormonal state of musth was supposed to experience a kind of “Popeye effect” that trumped established dominance patterns—even the alpha male wouldn’t risk challenging a bull elephant with the testosterone equivalent of a can of spinach on board. In fact, there are reports of musth bulls having on the order of twenty times the normal amount of testosterone circulating in their blood. That’s a lot of spinach.
Musth manifests itself in a suite of exaggerated aggressive displays, including curling the trunk across the brow with ears waving—presumably to facilitate the wafting of a musthy secretion from glands in the temporal region—all the while dribbling urine. The message is the elephant equivalent of “don’t even think about messing with me ’cause I’m so crazy-mad that I’ll tear your frickin’ head off”—a kind of Dennis Hopper approach to negotiating space.
Musth—a Hindi word derived from the Persian and Urdu word “mast,” meaning intoxicated—was first noted in the Asian elephant. In Sufi philosophy, a mast (pronounced “must”) was someone so overcome with love for God that in their ecstasy they appeared to be disoriented. The testosterone-heightened state of musth is similar to the phenomenon of rutting in antelopes, in which all adult males compete for access to females under the influence of a similar surge of testosterone that lasts throughout a discrete season. During the rutting season, roaring red deer and bugling elk, for example, aggressively fight off other males in rut and do their best to corral and defend their harems in order to mate with as many does as possible.
The curious thing about elephants, however, is that only a few bulls go into musth at any one time throughout the year. This means that there is no discrete season when all bulls are simultaneously vying for mates. The prevailing theory is that this staggering of bulls entering musth allows lower-ranking males to gain a temporary competitive advantage over others of higher rank by becoming so acutely agitated that dominant bulls wouldn’t want to contend with such a challenge, even in the presence of an estrus female who is ready to mate. This serves to spread the wealth in terms of gene pool variation, in that the dominant bull won’t then be the only father in the region.
Given what was known about musth, I fully expected Greg to get the daylights beaten out of him. Everything I had read suggested that when a top-ranking bull went up against a rival that was in musth, the rival would win.
What makes the stakes especially high for elephant bulls is the fact that estrus is so infrequent among elephant cows. Since gestation lasts twenty-two months, and calves are only weaned after two years, estrus cycles are spaced at least four and as many as six years apart. Because of this unusually long interval, relatively few female elephants are ovulating in any one season. The competition for access to cows is stiffer than in most other mammalian societies, where almost all mature females would be available to mate in any one year. To complicate matters, sexually mature bulls don’t live within matriarchal family groups, and elephants range widely in search of water and forage, so finding an estrus female is that much more of a challenge for a bull.
Long-term studies in Amboseli indicated that the more dominant bulls still had an advantage, in that they tended to come into musth when more females were likely to be in estrus. Moreover, these bulls were able to maintain their musth period for a longer time than the younger, less dominant bulls. Although estrus was not supposed to be synchronous in females, more females tended to come into estrus at the end of the wet season, with babies appearing toward the middle of the wet season, twenty-two months later. So being in musth in this prime period was clearly an advantage.
Even if Greg enjoyed the luxury of being in musth during the peak period for estrus females, this was not his season. According to the prevailing theory, in this situation Greg should have backed down to Kevin.
As Kevin sauntered up to the water hole, the rest of the bulls backed away like a crowd avoiding a street fight. Except for Greg. Not only did Greg not back down, he marched clear around the pan with his head held to its fullest height, back arched, heading straight for Kevin. Even more surprising, when Kevin saw Greg approach him with this aggressive posture, he immediately started to back up.
Backing up is rarely a graceful procedure for any animal, and I had certainly never seen an elephant back up so sure-footedly. But there was Kevin, keeping his same even and wide gait, only in the reverse direction—like a four-legged Michael Jackson doing the moonwalk. He walked backward with such purpose and poise that I couldn’t help but feel that I was watching a videotape playing in reverse—that NordicTrack-style gait, fluidly moving in the opposite direction, first the legs on the one side, then on the other, always hind foot first.
Greg stepped up his game a notch as Kevin readied himself in his now fifty-yard retreat, squaring off to face his assailant head on. Greg puffed up like a bruiser and picked up his pace, kicking dust in all directions. Just before reaching Kevin, Greg lifted his head even higher and made a full frontal attack, lunging at the offending beast, thrusting his head forward, ready to come to blows.
In another instant, two mighty heads collided in a dusty clash. Tusks met in an explosive crack, with trunks tucked under bellies to stay clear of the collisions. Greg’s ears were pinched in the horizontal position—an extremely aggressive posture. And using the full weight of his body, he raised his head again and slammed at Kevin with his broken tusks. Dust flew as the musth bull now went in full backward retreat.
Amazingly, this third-ranking bull, doped up with the elephant equivalent of PCP, was getting his hide kicked. That wasn’t supposed to happen.
At first, it looked as if it would be over without much of a fight. Then, Kevin made his move and went from retreat to confrontation and approached Greg, holding his head high. With heads now aligned and only inches apart, the two bulls locked eyes and squared up again, muscles tense. It was like watching two cowboys face off in a western.
There were a lot of false starts, mock charges from inches away, and all manner of insults cast through stiff trunks and arched backs. For a while, these two seemed equally matched, and the fight turned into a stalemate.
But after holding his own for half an hour, Kevin’s strength, or confidence, visibly waned—a change that did not go unnoticed by Greg, who took full advantage of the situation. Aggressively dragging his trunk on the ground as he stomped forward, Greg continued to threaten Kevin with body language until finally the lesser bull was able to put a man-made structure between them, a cement bunker that we used for ground-level observations. Now, the two cowboys seemed more like sumo wrestlers, feet stamping in a sideways dance, thrusting their jaws out at each other in threat.
The two bulls faced each other over the cement bunker and postured back and forth, Greg tossing his trunk across the three-meter divide in frustration, until he was at last able to break the standoff, getting Kevin out in the open again. Without the obstacle between them, Kevin couldn’t turn sideways to retreat, as that would have left his body vulnerable to Greg’s formidable tusks. He eventually walked backward until he was driven out of the clearing, defeated.
In less than an hour, Greg, the dominant bull, had displaced a high-ranking bull in musth. Kevin’s hormonal state not only failed to intimidate Greg, but in fact just the opposite occurred: Kevin’s state appeared to fuel Greg into a fit of violence. Greg would not tolerate a usurpation of his power.
Did Greg have a superpower that somehow trumped musth? Or could he only achieve this feat as the most dominant individual within his bonded band of brothers? Perhaps paying respects to the don was a little more expensive than a kiss of the ring.
To read more about Elephant Don, click here.
By: Kristi McGuire
Blog: The Chicago Blog
“Can We Race Together? An Autopsy”*
by Ellen Berrey
Corporate diversity dialogues are ripe for backlash, the research shows, even without coffee-counter gimmicks.
Corporate executives and university presidents are, yet again, calling for public discussion on race and racial inequality. Revelations about the tech industry’s diversity problem have company officials convening panels on workplace barriers, and, at the University of Oklahoma, spokespeople and students are organizing town-hall sessions in response to a fraternity’s racist chant.
The most provocative of the efforts was Starbucks’ failed Race Together program. In March, the company announced that it would ask baristas to initiate dialogues with customers about America’s most vexing dilemma. Although public outcry shut down those conversations before they even got to “Hello,” Starbucks said it would nonetheless carry on Race Together with forums and special USA Today discussion guides. As someone who has done sociological research on diversity initiatives for the past 15 years, I was intrigued.
For a moment, let’s take this seriously
What would conversations about race have looked like if they played out as Starbucks imagined, given the social science of race? Can companies, in Starbucks’ CEO Howard Schultz’s words, “create a more empathetic and inclusive society—one conversation at a time”? A data-driven autopsy of Starbucks’ ambitions is in order.
Surprisingly, Starbucks turned its sights on the provocative issue of racial inequality—not just feel-good cultural differences (or, thank goodness, the sort of “respectability politics” that, under well-intentioned cover, focus on the moral flaws of black people). Most Americans, especially those of us who are white, are ill-informed on the topic of inequality. We generally do not recognize our personal prejudice. We routinely, and incorrectly, insist that we are colorblind and that racism is a thing of the past, as sociologist Eduardo Bonilla-Silva has documented. When we do try to talk about race, we usually resort to what sociologists Joyce Bell and Doug Hartmann call the “happy talk” of diversity, without a language for discussing who comes out ahead and who gets pushed behind.
Starbucks pulls back the veil on our unconscious
How to take this on? Starbucks opted to tackle the thorny issue of unacknowledged prejudice—the cognitive biases that predispose a person against racial minorities and in favor of white people. The company intended to offer “insight into the divisive role unconscious bias plays in our society and the role empathy can play to bridge those divides.” The conversation guide it distributed the first week described a bias experiment in which lawyers were asked to assess an error-ridden memo. When told that the (fictional) author was white, the lawyers commented “has potential.” When told he was black, they remarked “can’t believe he went to NYU.”
Perhaps this was a promising starting point. Americans prefer psychological explanations; we like to think that terrorism, poverty, obesity, and other social ills are rooted in the individual’s psyche.
A comforting thought: I’m not racist
We also do not want to see ourselves as complicit in the segregation of our communities, workplaces, or friendships. We definitely don’t want the stigma of being “racist.” Even white supremacists resist that label. So if it’s true that we can’t see our own bias, as Starbucks told us, we can take comfort in our innocence.
Starbucks’ description of the bias experiment actually took the conversation where it never seems to venture: to the advantages that white people enjoy. White people get help, forgiveness, and the inside track far more often than do people of color. But Starbucks stopped before pointing the finger at who gives white people these advantages.
The rest of Race Together veered off in a confused direction, mostly bent on educated enlightenment. The conversation guide was a mishmash of racial utopianism (the millennials have it figured out!), demography as destiny (immigration changes everything!), triumph over a troublesome past (progress!), testimonies by people of color (the one white guy is clueless!), statistics, inspired introspection, and social network tallies (“I have ____ friends of a different race”!).
Not your daddy’s diversity training
Companies have been trying to positively address race for decades. Typically, they do so through diversity management within their own workforce. Their stated purpose is to increase the numbers of people of color in the top ranks or improve the corporate culture. Most diversity management strategies, however, are far from effective (unless they make someone responsible for results), as shown by sociologists Alexandra Kalev, Frank Dobbin, and Erin Kelly. Corporate aggrandizement and the façade of legal compliance seem as much the goals as actual change.
Race Together most closely resembled diversity training, which tries to undo managerial stereotyping through educational exchange, but this time the exchange was between capitalists and consumers. And it bucked the typical managerial spin. Usually, the kicker is the business case for diversity: this will boost productivity and profits. Instead, Starbucks made the diversity case for business. Consumption, supposedly, would create inclusion and equity. That would be its own reward. There was no clear connection to its specific business goals, beyond (disgruntled) buzz about the brand.
What were you thinking, Howard Schultz?
Briefly, let’s revisit what made Starbucks’ over-the-counter conversations so offensive. Starbucks was asking low-wage, young, disproportionately minority workers to prompt meaningful exchanges about race with uncaffeinated, mostly white and affluent customers. Even under the best of circumstances, diversity dialogues tend to put the burden of explaining racism on people of color. Here, baristas were supposed to walk the third rail during the morning rush hour without specialized training, much less extra compensation. One sociological term for this is Arlie Hochschild’s “emotional labor.” The employee was required to tactfully manage customers’ feelings. The most likely reaction from coffee drinkers? Microaggressions of avoidance, denial, and eye-rolling.
The alternative, for Starbucks so-called “partners,” was disgruntled defiance. At my local Starbucks, when I asked about these conversations, the manager emphatically said, “We’re not participating.” The barista next to her was blunt: “We think it’s bullshit.”
Swiftly, the company came out with public statements that had the air of faux intention and cover-up, as if to say, “We’re not retreating; we’re merely advancing in the other direction.” Starbucks had promised a year of Race Together, but the collapse of the café stunt made an all-out retreat more likely: one more forum, one more ad, then silence.
This doesn’t work…
Race Together trod treacherous ground. The research shows that diversity training backfires when it attempts to ferret out prejudice. It puts white people on the defensive and creates a backlash against people of color. For committed consumers, Starbucks was messing with arguably the best part of capitalism: that you can give someone money and they give you a thing. For activists, this all smelled wrong (i.e., not how you want your latte). Like co-opted social justice.
… Does anyone in HQ ever ask what works?
Starbucks was wise to shift closer to the traditional role of a coffee house—the so-called Third Place between work and home that Schultz has long exalted. Hopefully, the company looks to proven models for productive conversations on race. Organizations such as the Center for Racial Justice Innovation push forward discussions that recognize racism as systemic, not as isolated individual attitudes and bad behaviors. This helps to avoid what people hate most about diversity trainings: forced discourse about superficial differences (“are you a daytime or nighttime person?”) and the wretched hunt for guilty bad guys.
According to social psychologists, unconscious bias can be minimized when people have positive incentives for interpersonal, cross-racial relationships. Wearing a sports jersey for the same team is impressively effective for getting white people to cooperate with African Americans, as shown in a study led by psychologist Jason Nier. The idea is not to provoke white people’s fear and avoidance of doing wrong. It is to motivate people to try to do what’s right by establishing a shared identity.
Starbucks also needs to wrestle with its goal of “together.” That’s not always the outcome of conversations about race. Political scientist Katherine Cramer Walsh found that participants in civic dialogues on race commonly walk away with a heightened awareness of their differences, not with the unity that meeting organizers hope to foster.
Is it better to abandon ship?
Despite its missteps, Starbucks, in fact, alit on hopeful insights. Individuals can ignite change, and empathy and listening are starting points. The company deserves some applause for taking the risk and for its deliberate focus on inequality. Undoubtedly, working-class, minority millennials could teach the rest of the country something about race (and executives something about company policy).
The truth hurts
But let’s be clear about what Race Together was not. It was not about addressing institutional discrimination. In that scenario, Starbucks would have issued a press release about eliminating patterns of unfair hiring and firing. It would have overhauled a corporate division of labor that channels racial minorities into lower-tier, nonunionized jobs. It might very well have closed stores in gentrifying neighborhoods.
Those solutions start with incisive diagnosis, not personal reflection. (The U.S. Department of Justice did just that when it scrutinized racial profiling in traffic stops and court fines in Ferguson, Missouri.) Those solutions require change in corporate policy.
To make Race Together honest, Starbucks needed to recognize an ugly truth: America’s race problem is not an inability to talk. It is a failure to rectify the unfair disadvantages foisted on people of color and the unearned privileges that white people enjoy. Corporations, in their internal operations, are complicit in these very dynamics. So, too, are long-standing government policies, such as tax deductions of home mortgage interest (white folks are far more likely to own their homes). And white Americans may not want to hear it, but racial inequality is, in large measure, rooted in our collective choices: where we’ll pay property taxes, who we’ll tell about a job lead, what we’ll deem criminal, and even when we’ll smile or scowl. Howard Schultz, are you listening?
*This piece was originally published at the Society Pages, http://www.thesocietypages.org
Ellen Berrey teaches in the Department of Sociology at the University at Buffalo, SUNY, and is an affiliated scholar of the American Bar Foundation. Her book The Enigma of Diversity: The Language of Race and the Limits of Racial Justice will publish in April 2015.
An excerpt from That’s the Way It Is: A History of Television News in America
by Charles L. Ponce de Leon
Few technologies have stirred the utopian imagination like television. Virtually from the moment that research produced the first breakthroughs that made it more than a science fiction fantasy, its promoters began gushing about how it would change the world. Perhaps the most effusive was David Sarnoff. Like the hero of a dime novel, Sarnoff had come to America as a nearly penniless immigrant child, and had risen from lowly office boy to the presidency of RCA, a leading manufacturer of radio receivers and the parent company of the nation’s biggest radio network, NBC. More than anyone else, it was Sarnoff who had recognized the potential of “wireless” as a form of broadcasting—a way of transmitting from a single source to a geographically dispersed audience. Sarnoff had built NBC into a juggernaut, the network with the largest number of affiliates and the most popular programs. He had also become the industry’s loudest cheerleader, touting its contributions to “progress” and the “American Way of Life.” Having blessed the world with the miracle of radio, he promised Americans an even more astounding marvel, a device that would bring them sound and pictures over the air, using the same invisible frequencies.
In countless speeches heralding television’s imminent arrival, Sarnoff rhapsodized about how it would transform American life and encourage global communication and “international solidarity.” “Television will be a mighty window, through which people in all walks of life, rich and poor alike, will be able to see for themselves, not only the small world around us but the larger world of which we are a part,” he proclaimed in 1945, as the Second World War was nearing an end and Sarnoff and RCA eagerly anticipated an increase in public demand for the new technology.
Sarnoff predicted that television would become the American people’s “principal source of entertainment, education and news,” bringing them a wealth of program options. It would increase the public’s appreciation for “high culture” and, when supplemented by universal schooling, enable Americans to attain “the highest general cultural level of any people in the history of the world.” Among the new medium’s “outstanding contributions,” he argued, would be “its ability to bring news and sporting events to the listener while they are occurring,” and build on the news programs that NBC and the other networks had already developed for radio. He saw no conflicts or potential problems. Action-adventure programs, mysteries, soap operas, situation comedies, and variety shows would coexist harmoniously with high-toned drama, ballet, opera, classical music performances, and news and public affairs programs. And they would all be supported by advertising, making it unnecessary for the United States to move to a system of “government control,” as in Europe and the UK. Television in the US would remain “free.”
Yet Sarnoff’s booster rhetoric overlooked some thorny issues. Radio in the US wasn’t really free. It was thoroughly commercialized, and this had a powerful influence on the range of programs available to listeners. To pay for program development, the networks and individual stations “sold” airtime to advertisers. Advertisers, in turn, produced programs—or selected ones created by independent producers—that they hoped would attract listeners. The whole point of “sponsorship” was to reach the public and make them aware of your products, most often through recurrent advertisements. Though owners of radios didn’t have to pay an annual fee for the privilege of listening, as did citizens in other countries, they were forced to endure the commercials that accompanied the majority of programs.
This had significant consequences. As the development of radio made clear, some kinds of programs were more popular than others, and advertisers were naturally more interested in sponsoring ones that were likely to attract large numbers of listeners. These were nearly always entertainment programs, especially shows that drew on formulas that had proven successful in other fields—music and variety shows, comedy, and serial fiction. More off-beat and esoteric programs were sometimes able to find sponsors who backed them for the sake of prestige; from 1937 to 1954, for example, General Motors sponsored live performances by NBC’s acclaimed “Symphony of the Air.” But most cultural, news, and public affairs programs were unsponsored, making them unprofitable for the networks and individual stations. Thus in the bountiful mix envisioned by Sarnoff, certain kinds of broadcasts were more valuable than others. If high culture and news and public affairs programs were to thrive, their presence on network schedules would have to be justified by something other than their contribution to the bottom line.
The most compelling reason was provided by the Federal Communications Commission (FCC). Established after Congress passed the Federal Communications Act in 1934, the FCC was responsible for overseeing the broadcasting industry and the nation’s airwaves, which, at least in theory, belonged to the public. Rather than selling frequencies, which would have violated this principle, the FCC granted individual parties station licenses. These allowed licensees sole possession of a frequency to broadcast to listeners in their community or region. This system allocated a scarce resource—the nation’s limited number of frequencies—and made possession of a license a lucrative asset for businessmen eager to exploit broadcasting’s commercial potential. Licenses granted by the FCC were temporary, and all licensees were required to go through a periodic renewal process. As part of this process, they had to demonstrate to the FCC that at least some of the programs they aired were in the “public interest.” Inspired by a deep suspicion of commercialization, which had spread widely among the public during the early 1900s, the FCC’s public-interest requirement was conceived as a countervailing force that would prevent broadcasting from falling entirely under the sway of market forces. Its champions hoped that it might protect programming that did not pay and ensure that the nation’s airwaves weren’t dominated by the cheap, sensational fare that, reformers feared, would proliferate if broadcasting was unregulated.
In practice, however, the FCC’s oversight of broadcasting proved to be relatively lax. More concerned about NBC’s enormous market power—it controlled two networks of affiliates, NBC Red and NBC Blue—FCC commissioners in the 1930s were unusually sympathetic to the businessmen who owned individual stations and possessed broadcast licenses, and made it quite easy for them to renew their licenses. Stations were allowed to air a bare minimum of public-affairs programming and fill their schedules with the entertainment programs that appealed to listeners and sponsors alike. By interpreting the public-interest requirement so broadly, the FCC encouraged the commercialization of broadcasting and unwittingly tilted the playing field against any programs—including news and public affairs—that could not compete with the entertainment shows that were coming to dominate the medium.
Nevertheless, news and public-affairs programs were able to find a niche on commercial radio. But until the outbreak of the Second World War, it wasn’t a very large or comfortable one, and it was more a result of economic competition than the dictates of the FCC. Occasional news bulletins and regular election returns were broadcast by individual stations and the fledgling networks in the 1920s. They became more frequent in the 1930s, when the networks, chafing at the restrictions placed on them by the newspaper industry, established their own news divisions to supplement the reports they acquired through the newspaper-dominated wire services.
By the mid-1930s, the most impressive radio news division belonged not to Sarnoff’s NBC but to its main rival, CBS. Owned by William S. Paley, the wealthy son of a cigar magnate, CBS was struggling to keep up with NBC, and Paley came to see news as an area where his young network might be able to gain an advantage. A brilliant, visionary businessman, Paley was fascinated by broadcasting and would soon steer CBS ahead of NBC, in part by luring away its biggest stars. His bold initiative to beef up its news division was equally important, giving CBS an identity that clearly distinguished it from its rivals. Under Paley, CBS would become the “Tiffany network,” the home of “quality” as well as crowd-pleasers, a brand that made it irresistible to advertisers.
Paley hired two print journalists, Ed Klauber and Paul White, to run CBS’s news unit. Under their watch, the network increased the frequency of its news reports and launched news-and-commentary programs hosted by Lowell Thomas, H. V. Kaltenborn, and Robert Trout. In 1938, with Europe drifting toward war, CBS expanded these programs and began broadcasting its highly praised World News Roundup; its signature feature was live reports from correspondents stationed in London, Paris, Berlin, and other European capitals. These programs were well received and popular with listeners, prompting NBC and the other networks to follow Paley’s lead.
The outbreak of war sparked a massive increase in news programming on all the networks. It comprised an astonishing 20 percent of the networks’ schedules by 1944. Heightened public interest in news, particularly news about the war, was especially beneficial to CBS, where Klauber and White had built a talented stable of reporters. Led by Edward R. Murrow, they specialized in vivid on-the-spot reporting and developed an appealing style of broadcast journalism, affirming CBS’s leadership in news. By the end of the war, surveys conducted by the Office of Radio Research revealed that radio had become the main source of news for large numbers of Americans, and Murrow and other radio journalists were widely respected by the public. And though network news people knew that their audience and airtime would decrease now that the war was over, they were optimistic about the future and not very keen to jump into the new field of television.
This is ironic, since it was television that was uppermost in the minds of network leaders like Sarnoff and Paley. The television industry had been poised for takeoff as early as 1939, when NBC, CBS, and DuMont, a growing network owned by an ambitious television manufacturer, established experimental stations in New York City and began limited broadcasting to the few thousand households that had purchased the first sets for consumer use. After Pearl Harbor, CBS’s experimental station even developed a pathbreaking news program that used maps and charts to explain the war’s progress to viewers. This experiment came to an abrupt end in 1942, when the enormous shift of public and private resources to military production forced the networks to curtail and eventually shut down their television units, delaying television’s launch for several years.
Meanwhile, other events were shaking up the industry. In 1943, in response to an FCC decree, RCA was forced to sell one of its radio networks—NBC Blue—to the industrialist Edward J. Noble. The sale included all the programs and personalities that were contractually bound to the network, and in 1945 it was rechristened the American Broadcasting Company (ABC). The birth of ABC created another competitor not just in radio, where the Blue network had a loyal following, but in the burgeoning television industry as well. ABC joined NBC, CBS, and DuMont in their effort to persuade local broadcasters—often owners of radio stations who were moving into the new field of television—to become affiliates.
In 1944, the New York City stations owned by NBC, CBS, and DuMont resumed broadcasting, and NBC and CBS in particular launched aggressive campaigns to sign up affiliates in other cities. ABC and DuMont, hamstrung by financial and legal problems, quickly fell behind as most station owners chose NBC or CBS, largely because of their proven track record in radio. But even for the “big two,” building television networks was costly and difficult. Unlike radio programming, which could be fed through ordinary phone lines to affiliates, who then broadcast it over the air in their communities, linking television stations into a network required a more advanced technology, a coaxial cable especially designed for the medium that AT&T, the private, government-regulated telephone monopoly, would have to lay throughout the country. At the end of the war, at the government’s and television industry’s behest, AT&T began work on this project. By the end of the 1940s, most of the East Coast had been linked, and the connection extended to Chicago and much of the Midwest. But it was slow going, and at the dawn of the 1950s, no more than 30 percent of the nation’s population was within reach of network programming. Until a city was linked to the coaxial cable, there was no reason for station owners to sign up with a network; instead, they relied on local talent to produce programs. As a result, the television networks grew more slowly than executives might have wished, and the audience for network programs was restricted by geography until the mid-1950s. An important breakthrough occurred in 1951, when the coaxial cable was extended to the West Coast and made transcontinental broadcasting possible. But until microwave relay stations were built to reach large swaths of rural America, many viewers lacked access to the networks.
Access wasn’t the only problem. The first television sets that rolled off the assembly lines were expensive. RCA’s basic model, the one that Sarnoff envisioned as its “Model T,” cost $385, while top-of-the-line models were more than $2,000. With the average annual salary in the mid-1940s just over $3,000, this was a lot of money, even if consumers were able to buy sets through department-store installment plans. And though the price of TVs would steadily decline, throughout the 1940s the audience for television was restricted by income. Most early adopters were from well-to-do families—or tavern owners who hoped that their investment in television would attract patrons.
Still, the industry expanded dramatically. In 1946, there were approximately 20,000 television sets in the US; by 1948, there were 350,000; and by 1952, there were 15.3 million. Less than 1 percent of American homes had TVs in 1948; a whopping 32 percent did by 1952. The number of stations also multiplied, despite an FCC freeze in the issuing of station licenses from 1948 to 1952. In 1946, there were six stations in only four cities; by 1952, there were 108 stations in sixty-five cities, most of them recipients of licenses issued right before the freeze. When the freeze was lifted and new licenses began to be issued again, there was a mad rush to establish new stations and get on the air. By 1955, almost 500 television stations were operating in the US.
The FCC freeze greatly benefited NBC and CBS. Eighty percent of the markets with TV at the start of the freeze in 1948 had only one or two licensees, and it made sense for them to contract with one or both of the big networks for national programming to supplement locally produced material. Shut out of these markets, ABC and DuMont were forced to secure affiliates in the small number of markets—usually large cities—where stations were more plentiful. By the time the FCC started issuing licenses again, NBC and CBS had established reputations for popular, high-quality programs, and when new markets were opened, it became easier for them to sign up stations with the most desirable frequencies, usually the lowest “channels” on the dial. Meanwhile, ABC languished for much of the 1950s, with the fewest and poorest affiliates, and the struggling DuMont network ceased operations altogether in 1955.
News programs were among the first kinds of broadcasts that aired in the waning years of the war, and virtually everyone in the industry expected them to be part of the program mix as the networks increased programming to fill the broadcast day. News was “an invaluable builder of prestige,” noted Sig Mickelson, who joined CBS as an executive in 1949 and served as head of its news division throughout the 1950s. “It helped create an image that was useful in attracting audiences and stimulating commercial sales, not to mention maintaining favorable government relations. . . . News met the test of ‘public service.’ ” As usual, CBS led the way, inaugurating a fifteen-minute evening news program in 1944. It was broadcast on Thursdays and Fridays at 8:00 PM, the two nights of the week the network was on the air. NBC launched its own short Sunday evening newscast in 1945 as the lead-in to its ninety minutes of programming. Both programs resembled the newsreels that were regularly shown in movie theaters, a mélange of filmed stories with voice-over narration by off-screen announcers.
Considering the limited technology available, this was not surprising. Newsreels offered television news producers the most readily applicable model for a visual presentation of news, and the first people the networks hired to produce news programs were often newsreel veterans. But newsreels relied on 35mm film and were expensive and time-consuming to produce, and they had never been employed for breaking news. Aside from during the war, when they were filled with military stories that employed footage provided by the government, they specialized in fluff, events that were staged and would make the biggest impression on the screen: celebrity weddings, movie premieres, beauty contests, ship launches. In the mid-1940s, recognizing this shortcoming, producers at WCBW, CBS’s wholly owned subsidiary in New York, developed a number of innovative techniques for “visualizing” stories for which they had no film and established the precedent of sending a reporter to cover local stories.
These conventions were well established when the networks, in response to booming sales of television sets, expanded their evening schedules to seven days a week and launched regular weeknight newscasts. NBC’s premiered first, in February 1948. Sponsored by R. J. Reynolds, the makers of Camel cigarettes, it was produced for the network by the Fox Movietone newsreel company and had no on-screen newsreaders. CBS soon followed suit, with the CBS Evening News, in April 1948. Relying on film provided by another newsreel outfit, Telenews, it featured a rotating cast of announcers, including Douglas Edwards, who had only reluctantly agreed to work in television after failing to break into the top tier of the network’s radio correspondents. In the late summer, after CBS president Frank Stanton convinced Edwards of television’s potential, Edwards was installed as the program’s regular on-screen newsreader, its recognizable “face.” DuMont created an evening newscast as well. But its News from Washington, which reached only the handful of stations that were owned by or affiliated with the network, was canceled in less than a year, and DuMont’s subsequent attempt, Camera Headlines, suffered the same fate and was off the air by 1950. ABC’s experience with news was similarly frustrating. Its first newscast, News and Views, began airing in August 1948 and was soon canceled. It didn’t try to broadcast another one until 1952, when it launched an ambitious prime-time news program called ABC All Star News, which combined filmed news reports with man-on-the-street interviews, a technique popularized by local stations. By this time, however, the prime-time schedules of all the networks were full of popular entertainment programs, and All Star News, which failed to attract viewers, was pulled from the air after less than three months.
In February 1949, NBC, eager to make up ground lost to CBS, transformed its weeknight evening newscast into the Camel News Caravan, with John Cameron Swayze, a veteran of NBC’s radio division, as sole on-camera newsreader. Film for the program was acquired from a variety of sources, including foreign and domestic newsreel agencies and freelance stringers. But Swayze’s narration and on-screen presence distinguished the broadcast from its earlier incarnation. He sat at a desk that prominently displayed the Camel logo and presented an overview of the day’s major headlines, sometimes accompanied by film and still photos, but sometimes in the form of a “tell-story”—Swayze on camera reading from a script. In between, he would plug Camels and even occasionally light up, much to his sponsor’s delight. One of the show’s highlights was a whirlwind review of stories for which producers had no visuals, which Swayze would introduce by announcing, “Now let’s go hopscotching the news for headlines!” Swayze was popular with viewers and hosted the broadcast for seven years. He became well known to the public, especially for his nightly sign-off, “That’s the story, folks. Glad we could get together.”
The Camel News Caravan was superficial, and Swayze’s tone undeniably glib, as critics at the time noted. But the assumptions that guided its production did not set particularly high standards. As Reuven Frank, who joined the show as its main writer in 1950 and soon became its producer, recalled, “We assumed that almost everyone who watched us had read a newspaper . . . that our contribution . . . would be pictures. The people at home, knowing what the news was, could see it happen.” Yet over the next few years, especially after William McAndrew became head of NBC’s news division and Frank was installed as the program’s producer, the News Caravan steadily improved. Making good use of the largesse provided by R. J. Reynolds, which more than covered the news department’s rapidly expanding budget, the show increased its use of filmed reports, acquired from foreign sources like the BBC and other European news agencies, the US government and military, and the network’s growing corps of in-house cameramen and technicians. It also came to rely more and more on the network’s staff of reporters, including a young North Carolinian named David Brinkley, and reporters at NBC’s “O-and-Os,” the five television stations that the network owned and operated. In the days before network bureaus, journalists at network O-and-Os were responsible for combing their cities for stories of potential national interest. NBC also employed stringers on whom it relied for material from cities or regions where it had no O-and-Os. Airing at 7:45 PM, right before the network’s lineup of prime-time entertainment programs, the News Caravan became the first widely viewed news program of the television age. Its success gave McAndrew and his staff greater leverage in their efforts to command network resources and put added pressure on their main rival.
The CBS Evening News, broadcast at 7:30, was also very much a work-in-progress. Influenced by the experiments in “visualizing” news that CBS producers had conducted at the network’s flagship New York City O-and-O in the mid-1940s, it was produced by a mix of radio people like Edwards and newcomers from other fields. Most of the radio people, however, were second-stringers. The network’s leading radio personnel, including Murrow and his comrades, had little interest in moving to television. Though this disturbed Paley and his second-in-command, CBS president Frank Stanton, it allowed CBS’s fledgling television news unit to escape from the long shadow of the network’s radio news operation, and it increased the influence of staff committed to the tradition of “visualizing.” With few radio people willing to work on the program, the network was forced to hire new staff from outside the network. These newcomers from the wire services, photojournalism, and news and photographic syndicates brought a lively spirit of innovation to CBS’s nascent television news division. They were impressed by the notion of “visualizing,” and they resolved that TV news ought to be different from radio news, “an amalgam of existing news media, with a substantial infusion of showmanship from the stage and motion pictures.”
The most important new hire was Don Hewitt, an ambitious, energetic twenty-five-year-old who joined the small staff of the CBS Evening News in 1948 and soon became its producer. Despite his age, Hewitt was already an experienced print journalist, and his resume included a stint at ACME News Pictures, a syndicate that provided newspapers with photographs. He was well aware of the power of pictures, and when he joined CBS, he brought a new sensibility and willingness to experiment. Under Hewitt, the Edwards program made rapid strides. Eager to find ways of compensating for television’s technical limitations, Hewitt made extensive use of still photos and created a graphic arts department to produce charts, maps, and captions to illustrate tell-stories. To make Edwards’s delivery more natural and smooth, he introduced a new machine called a TelePrompTer, which replaced the heavy cue cards on which his script had been written. Expanding on the experiments of CBS’s early “visualizers,” Hewitt devised a number of clever devices to provide visuals for stories—for example, using toy soldiers to illustrate battles during the Korean War. He was the principal figure behind the shift to 16mm film, which was easier and less expensive to produce, and the network’s decision to establish its own in-house camera crews. His most significant innovation, however, was the double-projector system that he developed to mix narration and film. This technique, which was copied throughout the industry, made possible a new kind of filmed report that would become the archetypal television news package: a reporter on camera, often at the scene of a story, beginning with a “stand-upper” that introduces the story; then film of other scenes, while the reporter’s words, recorded separately, serve as voice-over narration; finally, at the end, a “wrap-up,” where the reporter appears on camera again.
By the early 1950s, the CBS newscast, now titled Douglas Edwards with the News, was adding viewers and winning plaudits from critics. And it had gained the respect of many of the network’s radio journalists, who now agreed to contribute to the program and other television news shows.
During the 1950s, Don Hewitt (left) was perhaps the most influential producer of television news. He was not only responsible for CBS’s successful evening newscast but also worked on See It Now and other network programs. Douglas Edwards (right) anchored the broadcast from the late 1940s to 1962, when he was replaced by Walter Cronkite. Photo courtesy of CBS/Photofest.
The big networks were not the only innovators. In the late 1940s, with network growth limited and many stations still independent, local stations developed many different kinds of programs, including news shows. WPIX, a New York City station owned by the Daily News, the city’s most popular tabloid, established a daily news program in June 1948. The Telepix Newsreel aired twice a day, at 7:30 PM and 11:00 PM, and specialized in coverage of big local events like fires and plane crashes. Its staff went to great lengths to acquire film of these stories, which it hyped with what would become a standard teaser, “film at eleven.” Like its print cousin, it also featured lots of human-interest stories and man-on-the-street interviews. A Chicago station, WGN, developed a similar program, the Chicagoland Newsreel, which was also successful. The real pioneer was KTLA in Los Angeles. Run by Klaus Landsberg, a brilliant engineer, KTLA established the most technologically sophisticated news program of the era. Employing relatively small, portable cameras and mobile live transmitters, its reporters excelled in covering breaking news stories, and it would remain a trailblazer in the delivery of breaking news throughout the 1950s and 1960s. It was Landsberg, for example, who first conceived of putting a TV camera in a helicopter.
But such programs were the exception. Most local stations offered little more than brief summaries of wire-service headlines, and the expense of film technology led most to emphasize live entertainment programs instead of news. Believing that viewers got their news from local papers and radio stations, television stations saw no need to duplicate their efforts. Not until the 1960s, when new, inexpensive video and microwave technology made local newsgathering economically feasible, did local stations, including network affiliates, expand their news programming.
The television news industry’s first big opportunity to display its potential occurred in 1948, when the networks descended on Philadelphia for the political conventions. The major parties had selected Philadelphia with an eye on the emerging medium of television. Sales were booming, and Philadelphia was on the coaxial cable, which was reaching more and more cities as the weeks and months passed. By the time the Republicans convened in July, it extended from Boston to Richmond, Virginia, with the potential for reaching millions of viewers. Radio journalists had been covering the conventions for two decades, but with lucrative entertainment programs on network schedules, it hadn’t paid to produce “gavel-to-gavel” coverage—just bulletins, wrap-ups, and the acceptance speeches of the nominees. In 1948, however, television was a wide-open field, and with much of the broadcast day open—or devoted to unsponsored programming that cost nothing to preempt—the conventions were a great showcase. In cities where they were broadcast, friends and neighbors gathered in the homes of early adopters, in bars and taverns, even in front of department store display windows, where store managers had carefully arranged TVs to draw the attention of passers-by. Crowds on the sidewalk sometimes overflowed into the street, blocking traffic. “No more effective way could have been found to stimulate receiver sales than these impromptu TV set demonstrations,” suggested Sig Mickelson.
Because of the enormous technical difficulties and a lack of experience, the networks collaborated extensively. All four networks used the same pictures, provided by a common pool of cameras set up to focus on the podium and surrounding area. NBC’s coverage was produced by Life magazine and featured journalists from Henry Luce’s media empire as well as Swayze and network radio stars H. V. Kaltenborn and Richard Harkness. CBS’s starred Murrow, Quincy Howe, and Douglas Edwards, newly installed on the Evening News and soon to be its sole newsreader. ABC relied on the gossip columnist and radio personality Walter Winchell. Lacking its own news staff, DuMont hired the Washington-based political columnist Drew Pearson to provide commentary. Many of these announcers did double duty, providing radio bulletins, too. With cameras still heavy and bulky, there were no roving floor reporters conducting interviews with delegates and candidates; instead, interviews occurred in makeshift studios set up in adjacent rooms off the main convention floor. Accordingly, there was little coverage of anything other than events occurring on the podium, and it was print journalists who provided Americans with the behind-the-scenes drama, particularly at the Democrats’ convention, where Southern delegates, angered by the party’s growing commitment to civil rights, walked out in protest and chose Strom Thurmond to run as the nominee of the hastily organized “Dixiecrats.” The conventions were a hit with viewers. Though there were only about 300,000 sets in the entire US, industry research suggested that as many as 10 million Americans saw at least some convention coverage thanks to group viewing, department store advertising, and special events.
Four years later, when the Republicans and Democrats again gathered for their conventions, this time in Chicago, the networks were better prepared. Besides experience, they brought more nimble and sophisticated equipment. And, thanks to the spread of the coaxial cable, they were in a position to reach a nationwide audience. Excited by the geometric increase in receiver sales, and inspired by access to new markets that seemed to make it possible to double or even triple the number of television households, major manufacturers signed up as sponsors, and advertisements in newspapers urged consumers to buy sets to “see the conventions.” Coverage was much wider and more complete than in 1948. Several main pool cameras with improved zoom capabilities focused on the podium, while each network deployed between twenty and twenty-five cameras on the periphery and at downtown hotels and in mobile units. “Never before,” noted Mickelson, the CBS executive responsible for the event, “had so many television cameras been massed at one event.”
Meanwhile, announcers from each of the networks explained what was occurring and provided analysis and commentary. NBC’s main announcer was Bill Henry, a Los Angeles print journalist. He was assisted by Kaltenborn and Harkness. Henry sat in a tiny studio and watched the proceedings through monitors, and did not appear on camera. CBS’s coverage differed and established a new precedent. Its main announcer, Walter Cronkite, provided essentially the same narration, explanation, and commentary as Henry. But his face appeared on-screen, in a tiny window in the corner of the screen; when there was a lull on the convention floor, the window expanded to fill the entire screen. Cronkite, an experienced wire service correspondent, had just joined CBS after a successful stint at WTOP, its Washington affiliate. Mickelson had been impressed with his ability to explain and ad lib, and he insisted that CBS use Cronkite rather than the far more experienced and well-known Robert Trout. Mickelson conceded that, from his years of radio work, Trout excelled at “creating word pictures.” But, with television, this was a superfluous gift. The cameras delivered the pictures. “What we needed was interpretation of the pictures on the screen. That was Cronkite’s forte.”
When print journalists asked Mickelson on the eve of the conventions what exact role Cronkite would play, he responded by suggesting that his new hire would be the “anchorman,” a term that soon came to refer to newsreaders like Swayze and Edwards as well. Yet in coining this term, Mickelson was referring to the complex process that Don Hewitt had conceived to provide more detailed and up-to-the-minute coverage of the convention. Recognizing that the action was on the floor, and that if TV journalists were to match the efforts of print reporters they needed to be able to report from there as quickly as possible, Hewitt mounted a second camera that could pan the floor and zoom in on floor reporters armed with walkie-talkies and flashlights, which they used to inform Hewitt when they had an interview or report ready to deliver. It worked like clockwork: “They combed through the delegations, talked to both leaders and members, queried them on motivations and prospective actions, and kept relaying information to the editorial desk.” It was then filtered and collated and passed on to Cronkite, who served as the “anchor” of the relay, delivering the latest news and ad-libbing with the poise and self-assurance that he would display at subsequent conventions and during live coverage of space flights and major breaking news. Cronkite’s seemingly effortless ability to provide viewers with useful and interesting information about the proceedings won praise from television critics and boosted CBS’s reputation with viewers.
NBC was not so successful. In keeping with the network’s—and RCA’s—infatuation with technology, it sought to cover events on the convention floor with a new gadget, a small, hand-held, live-television camera that could transmit pictures and needn’t be connected by wire. As Frank recalled, “It could roam the floor . . . showing delegates reacting to speakers and even join a wireless microphone for interviews.” But it regularly malfunctioned and contributed little to NBC’s coverage. More effective and popular were a series of programs that Bill McAndrew developed to provide background. Convention Call was broadcast twice a day during the conventions, before sessions and when they adjourned for breaks. Its hosts encouraged viewers to call in and ask NBC reporters to explain what was occurring, especially rules of procedure. The show sparked a flood of calls that overwhelmed telephone company switchboards and forced NBC to switch to telegrams instead.
Ratings for network coverage of the conventions exceeded expectations. Approximately 60 million viewers saw at least some of the conventions on television, with an estimated audience of 55 million tuning in at their peak. And the conventions inspired viewers to begin watching the evening newscasts and contributed to an increase in their popularity. Television critics praised the networks for their contributions to civic enlightenment. Jack Gould of the New York Times suggested that television had “won its spurs” and was “a welcome addition to the Fourth Estate.”
Conventions, planned in advance at locations well-suited for television’s limited technology, were ideal events for the networks to cover. These were the days before front-loaded primaries made them little more than coronations of nominees determined months beforehand, and the parties were undergoing important changes that were often revealed in angry debates and frantic back-room deliberations. And while print journalists remained the most complete source for such information, television allowed viewers to see it in real time, and its stable of experienced reporters and analysts proved remarkably adept at conveying the drama and explaining the stakes.
To read more about That’s the Way It Is, click here.
An excerpt from Portrait of a Man Known as Il Condottiere by Georges Perec
Madera was heavy. I grabbed him by the armpits and went backwards down the stairs to the laboratory. His feet bounced from tread to tread in a staccato rhythm that matched my own unsteady descent, thumping and banging around the narrow stairwell. Our shadows danced on the walls. Blood was still flowing, all sticky, seeping from the soaking wet towel, rapidly forming drips on the silk lapels, then disappearing into the folds of the jacket, like trails of slightly glinting snot side-tracked by the slightest roughness in the fabric, sometimes accumulating into drops that fell to the floor and exploded into star-shaped stains. I let him slump at the bottom of the stairs, right next to the laboratory door, and then went back up to fetch the razor and to mop up the bloodstains before Otto returned. But Otto came in by the other door at almost the same time as I did. He looked at me uncomprehendingly. I beat a retreat, ran down the stairs, and shut myself in the laboratory. I padlocked the door and jammed the wardrobe up against it. He came down a few minutes later, tried to force the door open, to no avail, then went back upstairs, dragging Madera behind him. I reinforced the door with the easel. He called out to me. He fired at the door twice with his revolver.
You see, maybe you told yourself it would be easy. Nobody in the house, no-one round and about. If Otto hadn’t come back so soon, where would you be? You don’t know, you’re here. In the same laboratory as ever, and nothing’s changed, or almost nothing. Madera is dead. So what? You are still in the same underground studio, it’s just a bit less tidy and a bit less clean. The same light of day seeps through the basement window. The Condottiere, crucified on his easel . . .
He had looked all around. It was the same office—the same glass table-top, the same telephone, the same calendar on its chrome-plated steel base. It still had the stark orderliness and uncluttered iciness of an intentionally cold style, with strictly matching colours—dark green carpet, mauve leather armchairs, light brown wall covering—giving a sense of discreet impersonality with its large metal filing cabinets . . . But all of a sudden the flabby mass of Madera’s body seemed grotesque, like a wrong note, something incoherent, anachronistic . . . He’d slipped off his chair and was lying on his back with his eyes half-closed and his slightly parted lips stuck in an expression of idiotic stupor enhanced by the dull gleam of a gold tooth. Blood streamed from his cut throat in thick spurts and trickled onto the floor, gradually soaking into the carpet, making an ill-defined, blackish stain that grew ever larger around his head, around his face whose whiteness had long seemed rather fishy, a warm, living, animal stain slowly taking possession of the room, as if the walls were already soaked through with it, as if the orderliness and strictness had already been overturned, abolished, pillaged, as if nothing more existed beyond the radiating stain and the obscene and ridiculous heap on the floor, the corpse, fulfilled, multiplied, made infinite . . .
Why? Why had he said that sentence: “I don’t think that’ll be a problem”? He tries to recall the precise tone of Madera’s voice, the timbre that had taken him by surprise the first time he’d heard it, that slight lisp, its faintly hesitant intonation, the almost imperceptible limp in his words, as if he were stumbling—almost tripping—as if he were permanently afraid of making a mistake. I don’t think. What nationality? Spanish? South American? Accent? Put on? Tricky. No. Simpler than that: he rolled his rs in the back of his throat. Or perhaps he was just a bit hoarse? He can see him coming towards him with outstretched hand: “Gaspard—that’s what I should call you, isn’t it?—I’m truly delighted to make your acquaintance.” So what? It didn’t mean much to him. What was he doing here? What did the man want of him? Rufus hadn’t warned him . . .
People always make mistakes. They think things will work out, will go on as per normal. But you never can tell. It’s so easy to delude yourself. What do you want, then? An oil painting? You want a top-of-the-range Renaissance piece? Can do. Why not a Portrait of a Young Man, for instance . . .
A flabby, slightly over-handsome face. His tie. “Rufus has told me a lot about you.” So what? Big deal! You should have paid attention, you should have been wary . . . A man you didn’t know from Adam or Eve . . . But you rushed headlong to accept the opportunity. It was too easy. And now. Well, now . . .
This is where it had got him. He did the sums in his head: all that had been spent setting up the laboratory, including the cost of materials and reproductions—photographs, enlargements, X-ray images, images seen through Wood’s lamp and with side illumination—and the spotlights, the tour of European art galleries, upkeep . . . a fantastic outlay for a farcical conclusion . . . But what was comical about his idiotic incarceration? He was at his desk as if nothing had happened . . . That was yesterday . . . But upstairs there was Madera’s corpse in a puddle of blood . . . and Otto’s heavy footsteps as he paced up and down keeping guard. All that to get to this! Where would he be now if . . . He thinks of the sunny Balearic Islands—it would have taken just a wave of his hand a year and a half before—Geneviève would be at his side . . . the beach, the setting sun . . . a picture postcard scene . . . Is this where it all comes to a full stop?
Now he recalled every move he’d made. He’d just lit a cigarette, he was standing with one hand on the table, with his weight on one hip. He was looking at the Portrait of a Man. Then he’d stubbed out his cigarette quickly and his left hand had swept over the table, stopped, gripped a piece of cloth, and crumpled it tight—an old handkerchief used as a brush-rag. Everything was hazy. He was putting ever more of his weight onto the table without letting the Condottiere out of his sight. Days and days of useless effort? It was as if his weariness had given way to the anger rising in him, step by certain step. He was crushing the fabric in his hand and his nails had scored the wooden table-top. He had pulled himself up, gone to his work bench, rummaged among his tools . . .
A black sheath made of hardened leather. An ebony handle. A shining blade. He had raised it to the light and checked the cutting edge. What had he been thinking of? He’d felt as if there was nothing in the world apart from that anger and that weariness . . . He’d flopped into the armchair, put his head in his hands, with the razor scarcely a few inches from his eyes, set off clearly and sharply by the dangerously smooth surface of the Condottiere’s doublet. A single movement and then curtains . . . One thrust would be enough . . . His arm raised, the glint of the blade . . . a single movement . . . he would approach slowly and the carpet would muffle the sound of his steps, he would steal up on Madera from behind . . .
A quarter of an hour had gone by, maybe. Why did he have an impression of distant gestures? Had he forgotten? Where was he? He’d been upstairs. He’d come back down. Madera was dead. Otto was keeping guard. What now? Otto was going to phone Rufus, Rufus would come. And then? What if Otto couldn’t get hold of Rufus? Where was Rufus? That’s what it all hung on. On this stupid what-if. If Rufus came, he would die, and if Otto didn’t get hold of Rufus, he would live. How much longer? Otto had a weapon. The skylight was too high and too small. Would Otto fall asleep? Does a man on guard need to sleep? . . .
He was going to die. The thought of it comforted him like a promise. He was alive, he was going to be dead. Then what?
Leonardo is dead, Antonello is dead, and I’m not feeling too well myself. A stupid death. A victim of circumstance. Struck down by bad luck, a wrong move, a mistake. Convicted in absentia. By unanimous decision with one abstention—which one?—he was sentenced to die like a rat in a cellar, under a dozen unfeeling eyes—the side lights and X-ray lamps purchased at outrageous prices from the laboratory at the Louvre—sentenced to death for murder by virtue of that good old moral legend of the eye, the tooth and the turn of the wheel—Achilles’ wheel—death is the beginning of the life of the mind—sentenced to die because of a combination of circumstances, an incoherent conjunction of trivial events . . . Across the globe there were wires and submarine cables . . . Hello, Paris, this is Dreux, hold the line, we’re connecting to Dampierre. Hello, Dampierre, Paris calling. You can talk now. Who could have imagined those peaceable operators with their earpieces becoming implacable executioners . . . Hello, Monsieur Koenig, Otto speaking, Madera has just died . . .
In the dark of night the Porsche will leap forward with its headlights spitting fire like dragons. There will be no accident. In the middle of the night they will come and get him . . .
And then? What the hell does it matter to you? They’ll come and get you. Next? Slump into an armchair and stare long and hard into the eyes of the tall joker with the shiv, the ineffable Condottiere, until death overtakes you. Responsible or not responsible? Guilty or not guilty? I’m not guilty, you’ll scream when they drag you up to the guillotine. We’ll soon see about that, says the executioner. And down the blade comes with a clunk. Curtains. Self-evident justice. Isn’t that obvious? Isn’t it normal? Why should there be any other way out?
To read more about Portrait of a Man Known as Il Condottiere, click here.
Hearty congratulations to Alan Shapiro, whose collection of poems Reel to Reel was recently shortlisted for the 2015 Pulitzer Prize in poetry. Shapiro, who teaches at the University of North Carolina at Chapel Hill, has published twelve volumes of poetry, and has previously been nominated for both the National Book Award and the Griffin Prize. The Pulitzer Prize citation commended Reel to Reel’s “finely crafted poems with a composure that cannot conceal the troubled terrain they traverse.” The book, written with Shapiro’s recognizably graceful, abstracting, and subtle minimalism, was one of two finalists, along with Arthur Sze’s Compass Rose; Gregory Pardlo’s Digest won the award.
From the jacket copy for Reel to Reel:
Reel to Reel, Alan Shapiro’s twelfth collection of poetry, moves outward from the intimate spaces of family and romantic life to embrace not only the human realm of politics and culture but also the natural world, and even the outer spaces of the cosmos itself. In language richly nuanced yet accessible, these poems inhabit and explore fundamental questions of existence, such as time, mortality, consciousness, and matter. How did we get here? Why is there something rather than nothing? How do we live fully and lovingly as conscious creatures in an unconscious universe with no ultimate purpose or destination beyond returning to the abyss that spawned us? Shapiro brings his humor, imaginative intensity, characteristic syntactical energy, and generous heart to bear on these ultimate mysteries. In ways few poets have done, he writes from a premodern, primal sense of wonder about our postmodern world.
From “Family Bed,” one of the book’s poems:
My sister first and then my brother woke
Inside the house they dreamed, and so the dream
House, which, in my dream, was the house in which
I found them now, was vanishing as they woke,
Was swallowing itself the way the picture did
Inside the switched off television screen.
It was the nightmare picture of them sleeping
As if alive beside me in the last
Room left to us, the nightmare of the picture
Suddenly collapsing on the screen
Into the tick and crackle of the shriveling
Abyss they were being sucked away into
By having wakened, while I, alone now,
Clung to the screen of sleeping in the not
Yet undreamt bedroom they no longer dreamed.
To read more about Reel to Reel, or to view more of the author’s books published by the University of Chicago Press, click here.
We’ve gone mimetic and we’re not coming back; executive editor Doug Mitchell models our new on-brand lookbook.
We call it Informcore.
In the meantime, here’s a link to what’s on offer for Fall 2015.
Each year, the University of Chicago Press awards the Gordon J. Laing Prize to “the faculty author, editor or translator of a book published in the previous three years that brings the Press the greatest distinction.” Originated in 1963, the Prize was named after a former general editor of the Press, whose commitment to extraordinary scholarship helped establish UCP as one of the country’s premier university presses. Conferred by a vote from the Board of University Publications and celebrated earlier this week, the 2015 Laing Prize was awarded to Mauricio Tenorio-Trillo, professor of history at the University of Chicago, and associate professor at the Centro de Investigación y Docencia Económicas, Mexico City, for his book I Speak of the City: Mexico City at the Turn of the Twentieth Century.
University of Chicago President Robert J. Zimmer presented the award at a ceremony earlier this week. From the Press’s official citation:
From art to city planning, from epidemiology to poetry, I Speak of the City challenges the conventional wisdom about Mexico City, investigating the city and the turn-of-the-century world to which it belonged. By engaging with the rise of modernism and the cultural experiences of such personalities as Hart Crane, Mina Loy and Diego Rivera, I Speak of the City will find an enthusiastic audience across the disciplines.
While accepting the award, Tenorio-Trillo noted his fear that the book would never find a publisher:
His colleague, Prof. Emilio Kouri, told him to try the University of Chicago Press. “He said they do not normally publish Latin American history, but they publish what you do: history and thinking,” said Tenorio-Trillo. And so the manuscript was sent to Press Executive Editor Douglas Mitchell to review.
“My books in Spanish sometimes are catalogued as history, sometimes as essays, closer to literature. I was truly surprised to learn of this very prestigious prize. I do not know if my work has finally reached the maturity to deserve such a prize or if I have luckily arrived to the intellectual milieu where the idiosyncratic nature of my work is considered a true intellectual contribution. With or without prizes, it’s been a privilege to work here and to collaborate with the University of Chicago Press,” he added.
In addition to the Laing Prize, I Speak of the City was awarded the Spiro Kostof Book Award from the Society of Architectural Historians and the Bolton-Johnson Prize Honorable Mention Award from the American Historical Association.
To read more about the book, click here.
Donald L. Levine (1931–2015), the Peter B. Ritzma Professor Emeritus of Sociology at the University of Chicago (where he served as dean of the College from 1982 to 1987), passed away earlier this month at the age of 83, following a long illness.
Among his significant contributions to the field of sociology were five volumes (The Flight from Ambiguity, Greater Ethiopia, Powers of the Mind, Wax and Gold, and Visions of the Sociological Tradition), an edited collection (Georg Simmel on Individuality and Social Forms), and a translation (Simmel’s The View of Life), all published by the University of Chicago Press.
As chronicled in memoriam by Susie Allen for UChicagoNews:
Over his long career, Levine published several works that are now considered landmarks of sociology. His “masterpiece,” according to former student Charles Camic, was Visions of the Sociological Tradition, published by the University of Chicago Press in 1995.
In that book, Levine traced the intellectual genealogy of the social sciences and argued that different traditions of social thought could productively inform one another. “It’s a brilliant analysis of theories and intellectual traditions, but also a very thoughtful effort to bring them into intellectual dialogue with one another,” said Camic, PhD’79, now a professor of sociology at Northwestern University. “The beauty with which it’s argued and the depth of his knowledge about these different intellectual traditions are astounding.”
Levine was also influential in promoting the work of German sociologist Georg Simmel and translated several of Simmel’s works into English. “He brought Simmel to awareness in the U.S.,” said Douglas Mitchell, a longtime editor at the University of Chicago Press, who worked with Levine throughout his career.
Executive editor T. David Brent noted, “I thought that if immortality were a possibility it would be conferred upon Don.”
To read more about Levine’s work, click here.