Viewing Blog: The Chicago Blog, Most Recent at Top
Results 26 - 50 of 1,886

Publicity news from the University of Chicago Press including news tips, press releases, reviews, and intelligent commentary.
26. Excerpt: Edible Memory


An excerpt from Edible Memory: The Lure of Heirloom Tomatoes and Other Forgotten Foods by Jennifer A. Jordan

***

“Making Heirlooms”

How could anything as perishable as fruits and vegetables become an heirloom? Many things that are heirlooms today were once simple everyday objects. A quilt made of fabric scraps, a wooden bowl used in the last stages of making butter, both become heirlooms only as time increases between now and the era of their everyday use. Likewise, the Montafoner Braunvieh—a tawny, gorgeously crooked-horned cow that roams a handful of pastures and zoos in Europe, a tuft of hair like bangs above her big brown eyes—or the Ossabaw pigs that scurry around on spindly legs at Mount Vernon were not always “heirlooms.” Nor were the piles of multicolored tomatoes that periodically grace the cover of Martha Stewart Living magazine or the food pages of daily newspapers. What happened to change these plants and animals from everyday objects into something rare and precious, imbued with stories of the past? In fact, food has always been an heirloom in the sense of saving seeds, of passing down the food you eat to your children and your children’s children, in a mixture of the genetic code of a given food (a cow, a variety of wheat, a tomato), and also in handing down the techniques of cultivation, preservation, preparation, and even a taste for particular foods. It is only with the rise of industrial agriculture that this practice of treating food as a literal heirloom has disappeared in many parts of the world—and that is precisely when the heirloom label emerges. The chain is broken for many people as they flock to the cities and the number of farmers and gardeners declines. So the concept of an heirloom becomes possible only in the context of the loss of actual heirloom varieties, of increased urbanization and industrialization as fewer people grow their own food, or at least know the people who grow their food. These are global issues, relevant to hunger and security and to cultural memory, community, and place. This book addresses one aspect of the much larger spectrum of issues around culture and agricultural biodiversity, focusing on these old seeds and trees.

In some ways heirlooms become possible (as a concept) only because of the industrialization and standardization of agriculture. They went away, there was a cultural and agricultural break, placing temporal and practical distance between current generations and past foods. In the meantime, gardeners and farmers quietly saved seeds for their own use. And then, as I discuss in much greater detail below, these heirloom foods began, tomato by tomato, apple by apple, to return to some degree of popularity.

In the United States, newspaper article after article, activist after activist, describes heirloom varieties as something one’s grandmother might have eaten. The implication is that there has been a significant break—that the current generation and their parents lost touch with these fruits, vegetables, and animals but that their grandparents might not have. “Heirlooms are major-league hot,” a reporter marveled in 1995. “As we become more of a technological society, people are reaching into the garden to get back that simple life, the simple life of their grandparents.” Concepts like “old-fashioned,” “just like Grandma ate,” and even “heirloom” can feel very American. But this is a mythical grandmother. The grandmothers of today’s United States are a diverse crew whose cooking habits are just one of the ways they differ. Gender is also obviously a vital element of the study of food production and consumption. Women are perceived as (and often are) the primary cooks and shoppers, and there are many gendered understandings of our relationships to food. Many people, men and women alike, have little time to cook, despite recent exhortations to engage in more home cooking. My own grandmother (the niece of my great-great-aunt Budder whom I write about in the prologue) smoked cigarettes and drank martinis with gusto, and for her, making Christmas cookies consisted of melting peanut butter and butterscotch chips, stirring in cornflakes, and forming the mixture into little clumps that would harden as they cooled. I loved them as a child, and when I make them today, I am invoking my grandmother just as much as other people may when serving up a platter of ancestral heirloom tomatoes.

In the context of food, however, the word “heirloom” also has a genetic connotation. The object itself is not handed down. Heirloom tomatoes are either eaten or they rot. Old-fashioned breeds of pigs are slaughtered and end up as pork chops; they rarely live a long life like Wilbur in Charlotte’s Web, without the help of a literate spider and a film career. The “heirloom,” then, what is handed down, is the genetic code. Heirloom foods are products of human intervention, ranging from selecting what seeds to save for the next growing season to deciding which tom turkey should father poults with which hen.

The genetic heirloom takes on a physical expression in the form of a pig or a tomato, for example, to which people may then attach all kinds of meanings—not only the physical appetite for the flavor of a particular tomato or pork chop, but also the sense that edible heirlooms connect us to something many people see as more authentic than supermarket fare. Over and over, in conversations and newspaper articles, orchards and public lectures, I have heard people articulating a search for a connection to the past, even as they also sought out appealing flavors, colors, and textures. The appetite for an heirloom food commonly leads, of course, to the destruction of its embodiment—in a Caprese salad, say, or an apple pie—but it is precisely the consumption of its phenotype that ensures the survival of the genetic code that gave rise to it.

A guide to heirloom vegetables describes heirloom status (of tomatoes and other produce) in three ways:

  1. The variety must be able to reproduce itself from seed [except those propagated through roots or cuttings]. . . .
  2. The variety must have been introduced more than 50 years ago. Fifty years is, admittedly, an arbitrary cutoff date, and different people use different dates. . . . A few people use an even stricter definition, considering heirlooms to be only those varieties developed and preserved outside the commercial seed trade. . . .
  3. The variety must have a history of its own.

The term “heirloom” itself generally applies to varieties that are capable of being pollen fertilized and that existed before the 1940s, when industrial farming spread in North America and the variety of species grown commercially was significantly reduced. Generally speaking, an heirloom can reproduce itself from seed, meaning seed saved from the previous year. When growing hybrids, you have to buy new seed each year (for plants that reproduce true to seed; apples, potatoes, and some other fruits and vegetables are preserved and propagated through grafts or cuttings rather than seeds). In other words, if you save the seeds of a hybrid tomato and plant them the next year, you more than likely won’t be pleased with what you get, if you get anything at all. Furthermore, simply because they are “heirloom” tomatoes does not mean they are native. In fact, tomatoes are native not to the United States, but to South and Central America, and many heirloom varieties such as the Caspian Pink were developed in Russia and other far-off places. People also use the term “heirloom” to describe old varieties of roses, ornamental plants, fruit trees (reproduced by grafting rather than from seed), potatoes, and even livestock.

As the US Department of Agriculture’s heirloom vegetable guide explains, “Dating to the early 20th C. and before, many [heirloom varieties] originated during a very different agricultural age—when localized and subsistence-based food economies flourished, when waves of immigrant farmers and gardeners brought cherished seeds and plants to this country, and before seed saving had dwindled to a ‘lost art’ among most North American farmers and gardeners.” Fashions, tastes, and technology changed, but “since the 1970s, an expanding popular movement dedicated to perpetuating and distributing these garden classics has emerged among home gardeners and small-scale growers, with interest and endorsement from scientists, historians, environmentalists, and consumers.” In Germany they speak of alte Sorten, “old varieties,” but this phrasing does not carry the same symbolic, nostalgic weight as the homey word “heirloom.” In French heirloom varieties may be called légumes oubliés, “forgotten vegetables,” or légumes anciennes. Of course, once vegetables are labeled forgotten, they’re not really forgotten anymore. In general, the United States has a different relationship to its past than European countries do. Thus there are regional gardening and cooking traditions in the United States, as well as a particular form of nostalgia that allows the term “heirloom” to apply to fruits, vegetables, and animals in the first place. The idea of an heirloom object can be very homespun. Certainly an heirloom can be something of great monetary value, but it can also be a threadbare quilt, a grandfather’s toolbox, or in my case the worn and mismatched paddles my great-great-aunt used in the last stages of making butter. The word “heirloom” can be a way to preserve biodiversity, but it can also be inaccurate and misused, a label slapped on an overpriced tomato. There is always the danger that dishonest grocers and restaurateurs will exploit the desire for local, seasonal, and heirloom food.

Heirlooms of all sorts are often wrapped up in nostalgic ideas about the past. Patchwork quilts and butter churns evoke not only idyllic images of yesteryear, but often difficult lives circumscribed by poverty and dire necessity as much as by simplicity and self-sufficiency. They speak of times (and, when we think globally, of places) when life may have been (or may still be) not only technologically simpler but also much, much harder. Old-fashioned farm implements in the front yards of rural Wisconsin, or in living history museums, evoke nostalgic feelings. But there’s a reason they’re in museums or front yards and not hitched to a team of horses or in the hands of a farmer, at least in Wisconsin. These are backbreaking tools whose functions have wherever possible been transferred to machines.

Even today, while it may surprise people who pick up a book like this, when I first tell someone about my work, I routinely have to explain what an heirloom tomato is. On a recent trip to a Milwaukee farmers’ market, I heard an older man say to his female companion, “Heirloom tomatoes? Never heard of ’em.” He’s not alone. While some food writers and restaurant reviewers may feel that heirloom tomatoes are yesterday’s news, plenty of consumers are still encountering them for the first time.

Heirloom varieties are just one form of edible memory, but they offer a unique opportunity to understand the powerful ways memory and materiality interact, and how the stories we tell one another about the past shape the world we inhabit. I write about heirlooms not because I think they’re the only way to go, but because they present an intriguing sociological puzzle (How can something as perishable as a tomato become an heirloom?) and because they are the subject of so much activity by so many different people. These efforts, all this work, are also just the latest turn in the twisting path of fruit and vegetable trends, of the relationship of these plants to human communities. This book recounts my search for endangered squashes, nearly forgotten plums, and other rare genes surviving in barnyards, gardens, and orchards, this intertwining of botanical, social, and edible worlds.

Investigating Heirlooms

I relish the moments I have spent with the old-fashioned farm animals at the Vienna zoo, standing in the stall with the zookeeper to scratch the fluffy head of a newborn lamb or the vast forehead of that speckled black-and-white cow, one of only a few of her breed remaining on the planet, who had just dutifully produced a calf that looked exactly like her. I also relish the meals I’ve prepared from multicolored potatoes or tomatoes; and, given a free Saturday, I can spend hours at farmers’ markets, contemplating what I can do with a bucket of almost overripe peaches (freeze them for my winter oatmeal) or a pile of striped squash (a spectacularly failed attempt at whole wheat squash gnocchi, which may still be lurking in the back of my freezer). And I have my own history of deep attachment to processed spice cake and the unctuous taste of a rare glass of whole milk—a reminder that “edible memory” goes far beyond the relatively narrow confines of heirloom food.

But I am also a sociologist, so in this book, while I am fond of many of the places, people, and foods I discuss, I also aim, ultimately, to tell a sociological story. I did not, like Barbara Kingsolver in Animal, Vegetable, Miracle, try to raise turkeys or can a heroic quantity of heirloom tomatoes. Unlike Michael Pollan in the journey he undertook for The Omnivore’s Dilemma, I did not try to shoot anything or make my own salt. Along the way, however, I did get involved; I immersed myself in these rich landscapes, markets, and texts and in conversations with diverse groups and individuals who often, unknown to anyone else, managed to hold on to vital and beautiful collections of genes in the form of old apple trees or tomato seeds, turnips or taro. I set out not to grow these plants and raise these animals myself, but to talk with and observe the diverse and committed gardeners, farmers, curators, seed savers, animal breeders, and other people who make possible the persistence of these plants and animals on this planet. I set out to understand in particular where these plants have come from, the threats they face, the kinds of places that are created in the attempt to save them, and the stories they tell us about the past and about ourselves, as well as how they figure in the broader patterns of human appetites, trends and fashions, habits and intentions.

The research for this book comprised seven years of observation and analysis. In my efforts to understand how tomatoes became heirlooms and apples became antiques, I set out on multiple journeys, of varying sorts. I drove down Lake Shore Drive to the Green City Market and urban farms and gardens in Chicago, traveled across town in Milwaukee to Growing Power and other urban growers, flew across the Atlantic to Vienna, took a streetcar over the bridges of Stockholm to get to the barnyards and gardens of the Swedish national open-air folk museum, and got lost on the tangle of bridges and highways between Washington, DC, and rural Virginia in search of Thomas Jefferson’s vegetable garden and George Washington’s turkeys. I also took more philosophical journeys: literary and archival travels through the pages of government reports, scholarly periodicals, and popular and scientific books. I traveled through recipe collections and the glossy pages of food magazines, through the digital universe of online databases, and through correspondence with colleagues and informants in far-off places. The collection of these journeys, of this movement through gardens, barnyards, orchards, and markets, as well as thickets of printed and digital information, accounts for the story I tell here.

This book emerged in part from solitary hours in front of the computer, taking notes, with stacks of books at my side, reading newspaper articles and academic journal articles on everything from apple grafting to patent law. I analyzed thousands of newspaper articles, charting the emergence of the term “heirloom” in popular food writing and looking for changes in the quantity and quality of the discussion over time as well as differences and similarities across different kinds of foods. Much of this book is based on the ways heirloom varieties register in public discussions, especially the media, and the ways they get taken up by organizations and individuals, both in and out of the limelight. Blogs and other food writing have also figured centrally in my analysis of the heirloom food movement as markers of popular discussions, and I have relied on hundreds of secondary sources (see the bibliography) for historical information about specific foods. I read encyclopedias and fascinating scholarly and popular books, charting the rise and fall of particular foods and their historical transformations. And I drew on the insights of my colleagues in sociology and neighboring academic disciplines and the ways they think about things like culture, memory, and food.

Occasionally I would take a break and cook one of the recipes I came across, and I also left my desk and set out to visit the farms and gardens, camera and notebook in hand. I scratched the noses of wiry old pigs, walked through fragrant herb gardens, and tasted hard cider and fresh bread, the hems of my jeans coated in mud and my nose sunburned from a long day in an Alpine valley or at a midwestern heirloom seed festival. I spoke formally and informally with gardeners, farmers, and chefs, activists, seed savers, academics, and all kinds of people devoted to food. I visited farms and gardens and living history museums and farmers’ markets, and I attended conferences and public lectures and delivered some of my own to smart crowds full of eager gardeners, eaters, and thinkers. I also spoke with the gardeners of less well-known historical kitchen gardens across Europe and the United States, quiet conversations about their enthusiasm for their work and about their assessments of the changing public perceptions of edible biodiversity over recent decades. Many of these farmers and gardeners became good friends, and our late-night conversations over good meals in my dining room or cheap beer at a rooftop farm in Chicago’s Back of the Yards also came to shape my sociological understanding of these trends. Sifting through the stacks of papers on my desk in the depths of winter, and wandering through gardens, barnyards, and farmers’ markets in the heat of summer, I wanted to see what patterns I might find.

Finding Edible Memory

What I found was something I came to call “edible memory.” And I want to emphasize that I did not expect to find it. Edible memory emerged out of these documents, landscapes, and conversations. This book focuses largely on the contemporary United States, with occasional examples drawn from elsewhere. But the fundamental ideas and questions can help us to think about other times and places as well. For sociologists, the study of human behavior—of what people actually do, and do in large enough numbers to register as visible patterns—is at the heart of our work. Many of us are studying what happens when people are highly motivated, when they are so passionate about something that the passion provokes action. That said, many of us are also deeply interested in the small actions of habit, the little steps we take every day that add up to this big thing called society. What we eat for breakfast, who we spend time with and how, what we buy, even what we ignore—these are all crucial to understanding how and why things are as they are. This book is about the fervent devotees, the people who can’t not plant orchards full of apple trees or spend countless hours saving turnip seeds. But it is also about the ways millions (perhaps even billions) of people make small decisions every day about what to serve their families, about how to feed themselves.

When I began to look in scholarly and popular writing, and in kitchens, gardens, farms, and markets, I saw more and more evidence of edible memory: in the rice described by geographer Judith Carney, in the gardens of Hmong refugees in Minnesota, in the hard-won community gardens of New York’s Lower East Side, and in the appetites and memories of friends and strangers alike. Edible memory appears in the reverberations of African foods in a range of North American culinary traditions, in the efforts to cultivate Native American foods today, in the shifting appetites of immigrant populations and ardently trendy folks in Brooklyn or Portland. It goes far beyond the heirloom, but heirlooms were my way in, a way to narrow, at least temporarily, the scope of the investigation and to explore one particularly potent intersection of food, biodiversity, and tales of past ways of being. Edible memory is a widely applicable concept, and I hope it will resonate well beyond the boundaries of the examples I have included in this book.

Edible memory is also in no way the sole province of elites. Much of what people understand as heirloom food today is expensive and out of reach, justifying the pretensions sometimes assigned to heirloom tomatoes, farmers’ markets, or the pedigreed chicken in the television show Portlandia. Food deserts, double shifts, cumbersome or expensive transportation, and straight-up poverty greatly reduce access to a wide range of foods, heirlooms included. But to assume that edible memory is strictly connected to privilege ignores the vital connections people have to food at a range of locations on the socioeconomic scale. Poverty, and even hunger, does not preclude (and indeed may intensify) the meanings and memories surrounding food. As many researchers have discussed, the various alternative approaches to food— heirlooms, but also farmers’ markets, organic and local foods, and artisanal foods—tend to be expensive, eaten largely by elites—well-off and often white. However, while that may characterize what we might call mainstream alternative, both edible biodiversity and edible memory happen across the socioeconomic spectrum. There are vibrant, successful projects in which people worlds away from expensive restaurants and farmers’ markets grow and eat many of the same kinds of memorable vegetables, in rural backyards, small urban allotments, and school gardens. Chicago alone is home to many farms and gardens supplying food and often employment and other projects in low-income communities, projects like the Chicago Farmworks, Growing Home, Gingko Gardens, or the Chicago location of Growing Power, which is even selling its produce in local Walgreens, trying to improve access to locally grown produce in predominantly low-income and African American neighborhoods. The numerous farms and gardens profiled on Natasha Bowen’s blog and multimedia project, The Color of Food, also offer examples across the country of farmers and gardeners with a deep commitment to many of the same foods that find their way into high-priced grocery stores or expensive restaurant dinners.

At the same time, I do not want to argue that edible memory is a universal concept. We can ask where and how it appears and matters, but we should not assume that it is everywhere either present or significant. It is certainly widespread, based on the research I have conducted, but it is not universal. For some people food may be a way to imagine communities, to understand their place in the world and connect to other people, but for others it is simply physical sustenance or transitory pleasure.

To read more about Edible Memory, click here.

27. Anthony C. Yu (1938–2015)


Anthony C. Yu (1938–2015)—scholar, translator, teacher—passed away earlier this month, following a brief illness. As the Carl Darling Buck Distinguished Service Professor Emeritus in the Humanities and the Divinity School at the University of Chicago, Yu fused a knowledge of Eastern and Western approaches in his broad-ranging humanistic inquiries. He is perhaps best known for his translation of The Journey to the West, a sixteenth-century Chinese novel about a Tang Dynasty monk who travels to India to obtain sacred texts, which blends folk and institutionalized national religions with comedy, allegory, and the archetypal pilgrim’s tale. Published in four volumes by the University of Chicago Press, Yu’s pathbreaking translation spans the novel’s 100 chapters; an abridged version of the text appeared in 2006 (The Monkey and the Monk), and in 2012 Yu published a revised edition.

In addition to The Journey to the West, Yu’s scholarship explored Chinese, English, and Greek literature, among other fields, as well as the classic texts of comparative religion. He was a member of the American Academy of Arts and Sciences, the American Council of Learned Societies, and Academia Sinica, and served as a board member of the Modern Language Association, as well as a Guggenheim and Mellon Fellow.

From the University of Chicago News obituary:

“Professor Anthony C. Yu was an outstanding scholar, whose work was marked by uncommon erudition, range of reference and interpretive sophistication. He embodied the highest virtues of the University of Chicago, his alma mater and his academic home as a professor for 46 years, with an appointment spanning five departments of the University. Tony was also a person of inimitable elegance, dignity, passion and the highest standards for everything he did,” said Margaret M. Mitchell, the Shailer Mathews Professor of New Testament and Early Christian Literature and dean of the Divinity School.

To read more about The Journey to the West, click here.

 

28. Free e-book for May: Don’t Look, Don’t Touch, Don’t Eat


Our free e-book for May, Valerie Curtis’s Don’t Look, Don’t Touch, Don’t Eat: The Science behind Revulsion, considers the narrative history and scientific basis behind the psychology of disgust.

***

Every flu season, sneezing, coughing, and graphic throat-clearing become the day-to-day background noise in every workplace. And coworkers tend to move as far—and as quickly—away from the source of these bodily eruptions as possible. Instinctively, humans recoil from objects that they view as dirty and even struggle to overcome feelings of discomfort once the offending item has been cleaned. These reactions are universal, and although there are cultural and individual variations, by and large we are all disgusted by the same things.

In Don’t Look, Don’t Touch, Don’t Eat, Valerie Curtis builds a strong case for disgust as a “shadow emotion”—less familiar than love or sadness, it nevertheless affects our day-to-day lives. In disgust, biological and sociocultural factors meet in dynamic ways to shape human and animal behavior. Curtis traces the evolutionary role of disgust in disease prevention and hygiene, but also shows that it is much more than a biological mechanism. Human social norms, from good manners to moral behavior, are deeply rooted in our sense of disgust. The disgust reaction informs both our political opinions and our darkest tendencies, such as misogyny and racism. Through a deeper understanding of disgust, Curtis argues, we can take this ubiquitous human emotion and direct it towards useful ends, from combating prejudice to reducing disease in the poorest parts of the world by raising standards of hygiene.

Don’t Look, Don’t Touch, Don’t Eat reveals disgust to be a vital part of what it means to be human and explores how this deep-seated response can be harnessed to improve the world.

***

To download your free copy (through May 31) of Don’t Look, Don’t Touch, Don’t Eat, click here.

29. University of Chicago Spanish–English Dictionary app [SALE]


Coinciding with the celebration of Cinco de Mayo and for a very limited time, the good folks behind the University of Chicago Spanish–English Dictionary (Sixth Edition) app have dropped the price to $0.99 (usually $4.99). You can see a basic screenshot of the app’s functionality above—from breezing through recent reviews, it seems the app’s ability to generate word lists, along with its word-by-word notetaking feature, has proven especially popular.

From the App Store description:

The Spanish–English Dictionary app is a precise and practical bilingual application for iPhone® and iPod touch® based on the sixth edition of The University of Chicago Spanish–English Dictionary. Browse or search the full contents to display all instances of a term for fuller understanding of how it is used in both languages. Build your vocabulary by creating Word Lists and testing yourself on terms you need to master with flash cards and multiple choice quizzes. Whether you are preparing for next week’s class or upcoming international travel, this app is the essential on-the-go reference.

You can watch a demo of the app here:

 

The app is, of course, a companion to the (physical book) sixth edition of the University of Chicago Spanish–English Dictionary, praised by Library Journal as “comprehensive in scope, but simple enough to use for even the most tongue-tied linguist.” Limited time means limited time, so if you’re looking for “an important contribution to update the traditional dictionary to the new digital era,” visit the App Store today.

30. INFESTEDBOOK.COM

Brooke Borel’s Infested: How the Bed Bug Infiltrated Our Bedrooms and Took Over the World, a history, is the kind of book that can make you squirm—and not in a way that reassures you about the general asepsis of your mattress, hostel accommodations, luggage, vintage sweater, sexual partner, electrical heating system, duvet cover, trousseau, or recycling bin.

Consider this excerpt from the book, recently posted at Gizmodo, about the plucky bed bug’s resistance to DDT (read more at the link to learn about how it—yes, the insect—was almost drafted in the Vietnam War):

Four years after the Americans and the Brits added DDT to their wartime supply lists, scientists found bed bugs resistant to the insecticide in Pearl Harbor barracks. More resistant bed bugs soon showed up in Japan, Korea, Iran, Israel, French Guiana, and Columbus, Ohio. In 1958 James Busvine of the London School of Hygiene and Tropical Medicine showed DDT resistance in bed bugs as well as cross-resistance to several similar pesticides, including a tenfold increase in resistance to a common organic one called pyrethrin. In 1964 scientists tested bed bugs that had proven resistant five years prior but had not been exposed to any insecticides since. The bugs still defied the DDT.

Soon there was a long list of other insects and arachnids with an increasing immunity to DDT: lice, mosquitoes, house flies, fruit flies, cockroaches, ticks, and the tropical bed bug. In 1969 one entomology professor would write of the trend: “The events of the past 25 years have taught us that virtually any chemical control method we have devised for insects is eventually destined to become obsolete, and that insect control can never be static but must be in a dynamic state of constant evolution.” In other words, in the race between chemical and insect, the insects always pull ahead.

If that doesn’t, er, scratch your itch, check out the video above (produced by the Frank Collective, a rad tribe of Brooklyn-based digital media collaborators), which features Borel teasing “7 Crazy Bed Bug Facts,” and explore the book’s website, a safe space where the “bed bug queen” makes her nest.

To read more about Infested, click here.

31. N. B. D. Connolly on “Black Culture is Not the Problem”


screenshot from AP video of Baltimore protests on April 26, 2015

N. B. D. Connolly, assistant professor of history at Johns Hopkins University and author of A World More Concrete: Real Estate and the Remaking of Jim Crow South Florida, on “Black Culture is Not the Problem” for the New York Times:

The problem is not black culture. It is policy and politics, the very things that bind together the history of Ferguson and Baltimore and, for that matter, the rest of America.

Specifically, the problem rests on the continued profitability of racism. Freddie Gray’s exposure to lead paint as a child, his suspected participation in the drug trade, and the relative confinement of black unrest to black communities during this week’s riot are all features of a city and a country that still segregate people along racial lines, to the financial enrichment of landlords, corner store merchants and other vendors selling second-rate goods.

The problem originates in a political culture that has long bound black bodies to questions of property. Yes, I’m referring to slavery.

To read more about A World More Concrete, click here.

32. Excerpt: Elephant Don


An excerpt from Elephant Don: The Politics of a Pachyderm Posse by Caitlin O’Connell

“Kissing the Ring”

Sitting in our research tower at the water hole, I sipped my tea and enjoyed the late morning view. A couple of lappet-faced vultures climbed a nearby thermal in the white sky. A small dust devil of sand, dry brush, and elephant dung whirled around the pan, scattering a flock of guinea fowl in its path. It appeared to be just another day for all the denizens of Mushara water hole—except the elephants. For them, a storm of epic proportions was brewing.

It was the beginning of the 2005 season at my field site in Etosha National Park, Namibia—just after the rainy period, when more elephants would be coming to Mushara in search of water—and I was focused on sorting out the dynamics of the resident male elephant society. I was determined to see if male elephants operated under different rules here than in other environments and how this male society compared to other male societies in general. Among the many questions I wanted to answer was how ranking was determined and maintained and for how long the dominant bull could hold his position at the top of the hierarchy.

While observing eight members of the local boys’ club arrive for a drink, I immediately noticed that something was amiss—these bulls weren’t quite up to their usual friendly antics. There was an undeniable edge to the mood of the group.

The two youngest bulls, Osh and Vincent Van Gogh, kept shifting their weight back and forth from shoulder to shoulder, seemingly looking for reassurance from their mid- and high-ranking elders. Occasionally, one or the other held its trunk tentatively outward—as if to gain comfort from a ritualized trunk-to-mouth greeting.

The elders completely ignored these gestures, offering none of the usual reassurances such as a trunk-to-mouth in return or an ear over a youngster’s head or rear. Instead, everyone kept an eye on Greg, the most dominant member of the group. And for whatever reason, Greg was in a foul temper. He moved as if ants were crawling under his skin.

Like many other animals, elephants form a strict hierarchy to reduce conflict over scarce resources, such as water, food, and mates. In this desert environment, it made sense that these bulls would form a pecking order to reduce the amount of conflict surrounding access to water, particularly the cleanest water.

At Mushara water hole, the best water comes up from the outflow of an artesian well, which is funneled into a cement trough at a particular point. As clean water is more palatable to the elephant and as access to the best drinking spot is driven by dominance, scoring of rank in most cases is made fairly simple—based on the number of times one bull wins a contest with another by usurping his position at the water hole, by forcing him to move to a less desirable position in terms of water quality, or by changing trajectory away from better-quality water through physical contact or visual cues.

Cynthia Moss and her colleagues had figured out a great deal about dominance in matriarchal family groups. Their long-term studies in Amboseli National Park showed that the top position in the family was passed on to the next oldest and wisest female, rather than to the offspring of the most dominant individual. Females formed extended social networks, with the strongest bonds being found within the family group. Then the network branched out into bond groups, and beyond that into associated groups called clans. Branches of these networks were fluid in nature, with some group members coming together and others spreading out to join more distantly related groups in what had been termed a fission-fusion society.

Not as much research had been done on the social lives of males, outside the work by Joyce Poole and her colleagues in the context of musth and one-on-one contests. I wanted to understand how male relationships were structured after bulls left their maternal family groups as teens, when much of their adult lives was spent away from their female family. In my previous field seasons at Mushara, I’d noticed that male elephants formed much larger and more consistent groups than had been reported elsewhere and that, in dry years, lone bulls were not as common here as had been recorded in other research sites.

Bulls of all ages were remarkably affiliative—or friendly—within associated groups at Mushara. This was particularly true of adolescent bulls, which were always touching each other and often maintained body contact for long periods. And it was common to see a gathering of elephant bulls arrive together in one long dusty line of gray boulders that rose from the tree line and slowly morphed into elephants. Most often, they’d leave in a similar manner—just as the family groups of females did.

The dominant bull, Greg, most often at the head of the line, is distinguishable by the two square-shaped notches out of the lower portion of his left ear. But there is something deeper that differentiates him, something that exhibits his character and makes him visible from a long way off. This guy has the confidence of royalty—the way he holds his head, his casual swagger: he is made of kingly stuff. And it is clear that the others acknowledge his royal rank as his position is reinforced every time he struts up to the water hole to drink.

Without fail, when Greg approaches, the other bulls slowly back away, allowing him access to the best, purest water at the head of the trough—the score having been settled at some earlier period, as this deference is triggered without challenge or contest almost every time. The head of the trough is equivalent to the end of the table and is clearly reserved for the top-ranking elephant—the one I can’t help but refer to as the don since his subordinates line up to place their trunks in his mouth as if kissing a Mafioso don’s ring.

As I watched Greg settle in to drink, each bull approached in turn with trunk outstretched, quivering in trepidation, dipping the tip into Greg’s mouth. It was clearly an act of great intent, a symbolic gesture of respect for the highest-ranking male. After performing the ritual, the lesser bulls seemed to relax their shoulders as they shifted to a lower-ranking position within the elephantine equivalent of a social club. Each bull paid his respects and then retreated. It was an event that never failed to impress me—one of those reminders in life that maybe humans are not as special in our social complexity as we sometimes like to think—or at least that other animals may be equally complex. This male culture was steeped in ritual.

Greg takes on Kevin. Both bulls face each other squarely, with ears held out. The cutout pattern in Greg’s left ear makes him very recognizable.

But today, no amount of ritual would placate the don. Greg was clearly agitated. He was shifting his weight from one front foot to the other in jerky movements and spinning his head around to watch his back, as if someone had tapped him on the shoulder in a bar, trying to pick a fight.

The midranking bulls were in a state of upheaval in the presence of their pissed-off don. Each seemed to be demonstrating good relations with key higher-ranking individuals through body contact. Osh leaned against Torn Trunk on his one side, and Dave leaned in from the other, placing his trunk in Torn Trunk’s mouth. The most sought-after connection was with Greg himself, of course, who normally allowed lower-ranking individuals like Tim to drink at the dominant position with him.

Greg, however, was in no mood for the brotherly “back slapping” that ordinarily took place. Tim, as a result, didn’t display the confidence that he generally had in Greg’s presence. He stood cowering at the lowest-ranking position at the trough, sucking his trunk, as if uncertain of how to negotiate his place in the hierarchy without the protection of the don.

Finally, the explanation for all of the chaos strode in on four legs. It was Kevin, the third-ranking bull. His wide-splayed tusks, perfect ears, and bald tail made him easy to identify. And he exhibited the telltale sign of musth, as urine was dribbling from his penis sheath. With shoulders high and head up, he was ready to take Greg on.

A bull entering the hormonal state of musth was supposed to experience a kind of “Popeye effect” that trumped established dominance patterns—even the alpha male wouldn’t risk challenging a bull elephant with the testosterone equivalent of a can of spinach on board. In fact, there are reports of musth bulls having on the order of twenty times the normal amount of testosterone circulating in their blood. That’s a lot of spinach.

Musth manifests itself in a suite of exaggerated aggressive displays, including curling the trunk across the brow with ears waving—presumably to facilitate the wafting of a musthy secretion from glands in the temporal region—all the while dribbling urine. The message is the elephant equivalent of “don’t even think about messing with me ’cause I’m so crazy-mad that I’ll tear your frickin’ head off”—a kind of Dennis Hopper approach to negotiating space.

Musth—a Hindi word derived from the Persian and Urdu word “mast,” meaning intoxicated—was first noted in the Asian elephant. In Sufi philosophy, a mast (pronounced “must”) was someone so overcome with love for God that in their ecstasy they appeared to be disoriented. The testosterone-heightened state of musth is similar to the phenomenon of rutting in antelopes, in which all adult males compete for access to females under the influence of a similar surge of testosterone that lasts throughout a discrete season. During the rutting season, roaring red deer and bugling elk, for example, aggressively fight off other males in rut and do their best to corral and defend their harems in order to mate with as many does as possible.

The curious thing about elephants, however, is that only a few bulls go into musth at any one time throughout the year. This means that there is no discrete season when all bulls are simultaneously vying for mates. The prevailing theory is that this staggering of bulls entering musth allows lower-ranking males to gain a temporary competitive advantage over others of higher rank by becoming so acutely agitated that dominant bulls wouldn’t want to contend with such a challenge, even in the presence of an estrus female who is ready to mate. This serves to spread the wealth in terms of gene pool variation, in that the dominant bull won’t then be the only father in the region.

Given what was known about musth, I fully expected Greg to get the daylights beaten out of him. Everything I had read suggested that when a top-ranking bull went up against a rival that was in musth, the rival would win.

What makes the stakes especially high for elephant bulls is the fact that estrus is so infrequent among elephant cows. Since gestation lasts twenty-two months, and calves are only weaned after two years, estrus cycles are spaced at least four and as many as six years apart. Because of this unusually long interval, relatively few female elephants are ovulating in any one season. The competition for access to cows is stiffer than in most other mammalian societies, where almost all mature females would be available to mate in any one year. To complicate matters, sexually mature bulls don’t live within matriarchal family groups and elephants range widely in search of water and forage, so finding an estrus female is that much more of a challenge for a bull.

Long-term studies in Amboseli indicated that the more dominant bulls still had an advantage, in that they tended to come into musth when more females were likely to be in estrus. Moreover, these bulls were able to maintain their musth period for a longer time than the younger, less dominant bulls. Although estrus was not supposed to be synchronous in females, more females tended to come into estrus at the end of the wet season, with babies appearing toward the middle of the wet season, twenty-two months later. So being in musth in this prime period was clearly an advantage.

Even if Greg enjoyed the luxury of being in musth during the peak period for estrus females, this was not his season. According to the prevailing theory, in this situation Greg would back down to Kevin.

As Kevin sauntered up to the water hole, the rest of the bulls backed away like a crowd avoiding a street fight. Except for Greg. Not only did Greg not back down, he marched clear around the pan with his head held to its fullest height, back arched, heading straight for Kevin. Even more surprising, when Kevin saw Greg approach him with this aggressive posture, he immediately started to back up.

Backing up is rarely a graceful procedure for any animal, and I had certainly never seen an elephant back up so sure-footedly. But there was Kevin, keeping his same even and wide gait, only in the reverse direction—like a four-legged Michael Jackson doing the moon walk. He walked backward with such purpose and poise that I couldn’t help but feel that I was watching a videotape playing in reverse—that Nordic-track style gait, fluidly moving in the opposite direction, first the legs on the one side, then on the other, always hind foot first.

Greg stepped up his game a notch as Kevin readied himself in his now fifty-yard retreat, squaring off to face his assailant head on. Greg puffed up like a bruiser and picked up his pace, kicking dust in all directions. Just before reaching Kevin, Greg lifted his head even higher and made a full frontal attack, lunging at the offending beast, thrusting his head forward, ready to come to blows.

In another instant, two mighty heads collided in a dusty clash. Tusks met in an explosive crack, with trunks tucked under bellies to stay clear of the collisions. Greg’s ears were pinched in the horizontal position—an extremely aggressive posture. And using the full weight of his body, he raised his head again and slammed at Kevin with his broken tusks. Dust flew as the musth bull now went in full backward retreat.

Amazingly, this third-ranking bull, doped up with the elephant equivalent of PCP, was getting his hide kicked. That wasn’t supposed to happen.

At first, it looked as if it would be over without much of a fight. Then, Kevin made his move and went from retreat to confrontation and approached Greg, holding his head high. With heads now aligned and only inches apart, the two bulls locked eyes and squared up again, muscles tense. It was like watching two cowboys face off in a western.

There were a lot of false starts, mock charges from inches away, and all manner of insults cast through stiff trunks and arched backs. For a while, these two seemed equally matched, and the fight turned into a stalemate.

But after holding his own for half an hour, Kevin’s strength, or confidence, visibly waned—a change that did not go unnoticed by Greg, who took full advantage of the situation. Aggressively dragging his trunk on the ground as he stomped forward, Greg continued to threaten Kevin with body language until finally the lesser bull was able to put a man-made structure between them, a cement bunker that we used for ground-level observations. Now, the two cowboys seemed more like sumo wrestlers, feet stamping in a sideways dance, thrusting their jaws out at each other in threat.

The two bulls faced each other over the cement bunker and postured back and forth, Greg tossing his trunk across the three-meter divide in frustration, until he was at last able to break the standoff, getting Kevin out in the open again. Without the obstacle between them, Kevin couldn’t turn sideways to retreat, as that would have left his body vulnerable to Greg’s formidable tusks. He eventually walked backward until he was driven out of the clearing, defeated.

In less than an hour, Greg, the dominant bull, displaced a high-ranking bull in musth. Kevin’s hormonal state not only failed to intimidate Greg but in fact just the opposite occurred: Kevin’s state appeared to fuel Greg into a fit of violence. Greg would not tolerate a usurpation of his power.

Did Greg have a superpower that somehow trumped musth? Or could he only achieve this feat as the most dominant individual within his bonded band of brothers? Perhaps paying respects to the don was a little more expensive than a kiss of the ring.

***

To read more about Elephant Don, click here.

33. Donald N. Levine (1931–2015)


Donald N. Levine (1931–2015), the Peter B. Ritzma Professor Emeritus of Sociology at the University of Chicago (where he served as dean of the College from 1982 to 1987), passed away earlier this month at the age of 83, following a long illness.

Among his significant contributions to the field of sociology were five volumes (The Flight from Ambiguity, Greater Ethiopia, Powers of the Mind, Wax and Gold, and Visions of the Sociological Tradition), an edited collection (Georg Simmel on Individuality and Social Forms), and a translation (Simmel’s The View of Life), all published by the University of Chicago Press.

As chronicled in memoriam by Susie Allen for UChicagoNews:

Over his long career, Levine published several works that are now considered landmarks of sociology. His “masterpiece,” according to former student Charles Camic, was Visions of the Sociological Tradition, published by the University of Chicago Press in 1995.

In that book, Levine traced the intellectual genealogy of the social sciences and argued that different traditions of social thought could productively inform one another. “It’s a brilliant analysis of theories and intellectual traditions, but also a very thoughtful effort to bring them into intellectual dialogue with one another,” said Camic, PhD’79, now a professor of sociology at Northwestern University. “The beauty with which it’s argued and the depth of his knowledge about these different intellectual traditions are astounding.”

Levine was also influential in promoting the work of German sociologist Georg Simmel and translated several of Simmel’s works into English. “He brought Simmel to awareness in the U.S.,” said Douglas Mitchell, a longtime editor at the University of Chicago Press, who worked with Levine throughout his career.

Executive editor T. David Brent noted, “I thought that if immortality were a possibility it would be conferred upon Don.”

To read more about Levine’s work, click here.

34. 2015 Laing Prize


Each year, the University of Chicago Press awards the Gordon J. Laing Prize to “the faculty author, editor or translator of a book published in the previous three years that brings the Press the greatest distinction.” Established in 1963, the Prize was named after a former general editor of the Press, whose commitment to extraordinary scholarship helped establish UCP as one of the country’s premier university presses. Conferred by a vote of the Board of University Publications and celebrated earlier this week, the 2015 Laing Prize was awarded to Mauricio Tenorio-Trillo, professor of history at the University of Chicago and associate professor at the Centro de Investigación y Docencia Económicas, Mexico City, for his book I Speak of the City: Mexico City at the Turn of the Twentieth Century.

University of Chicago President Robert J. Zimmer presented the award at a ceremony earlier this week. From the Press’s official citation:

From art to city planning, from epidemiology to poetry, I Speak of the City challenges the conventional wisdom about Mexico City, investigating the city and the turn-of-the-century world to which it belonged. By engaging with the rise of modernism and the cultural experiences of such personalities as Hart Crane, Mina Loy and Diego Rivera, I Speak of the City will find an enthusiastic audience across the disciplines.

While accepting the award, Tenorio-Trillo noted his fear that the book would never find a publisher:

His colleague, Prof. Emilio Kouri, told him to try the University of Chicago Press. “He said they do not normally publish Latin American history, but they publish what you do: history and thinking,” said Tenorio-Trillo. And so the manuscript was sent to Press Executive Editor Douglas Mitchell to review.

“My books in Spanish sometimes are catalogued as history, sometimes as essays, closer to literature. I was truly surprised to learn of this very prestigious prize. I do not know if my work has finally reached the maturity to deserve such a prize or if I have luckily arrived to the intellectual milieu where the idiosyncratic nature of my work is considered a true intellectual contribution. With or without prizes, it’s been a privilege to work here and to collaborate with the University of Chicago Press,” he added.

In addition to the Laing Prize, I Speak of the City was awarded the Spiro Kostof Book Award from the Society of Architectural Historians and the Bolton-Johnson Prize Honorable Mention Award from the American Historical Association.

To read more about the book, click here.

35. Our Fall 2015 catalog has arrived


We’ve gone mimetic and we’re not coming back; executive editor Doug Mitchell models our new on-brand lookbook.

We call it Informcore.

In the meantime, here’s a link to what’s on offer for Fall 2015.

36. Alan Shapiro: Pulitzer Prize finalist


Hearty congratulations to Alan Shapiro, whose collection of poems Reel to Reel was recently shortlisted for the 2015 Pulitzer Prize in poetry. Shapiro, who teaches at the University of North Carolina at Chapel Hill, has published twelve volumes of poetry, and has previously been nominated for both the National Book Award and the Griffin Prize. The Pulitzer Prize citation commended Reel to Reel’s “finely crafted poems with a composure that cannot conceal the troubled terrain they traverse.” The book, written with Shapiro’s recognizably graceful, abstracting, and subtle minimalism, was one of two finalists, along with Arthur Sze’s Compass Rose; Gregory Pardlo’s Digest won the award.

From the jacket copy for Reel to Reel:

Reel to Reel, Alan Shapiro’s twelfth collection of poetry, moves outward from the intimate spaces of family and romantic life to embrace not only the human realm of politics and culture but also the natural world, and even the outer spaces of the cosmos itself. In language richly nuanced yet accessible, these poems inhabit and explore fundamental questions of existence, such as time, mortality, consciousness, and matter. How did we get here? Why is there something rather than nothing? How do we live fully and lovingly as conscious creatures in an unconscious universe with no ultimate purpose or destination beyond returning to the abyss that spawned us? Shapiro brings his humor, imaginative intensity, characteristic syntactical energy, and generous heart to bear on these ultimate mysteries. In ways few poets have done, he writes from a premodern, primal sense of wonder about our postmodern world.

“Family Bed,” one of the book’s poems:

My sister first and then my brother woke
Inside the house they dreamed, and so the dream
House, which, in my dream, was the house in which
I found them now, was vanishing as they woke,
Was swallowing itself the way the picture did
Inside the switched off television screen.
It was the nightmare picture of them sleeping
As if alive beside me in the last
Room left to us, the nightmare of the picture
Suddenly collapsing on the screen
Into the tick and crackle of the shriveling
Abyss they were being sucked away into
By having wakened, while I, alone now,
Clung to the screen of sleeping in the not
Yet undreamt bedroom they no longer dreamed.

To read more about Reel to Reel, or to view more of the author’s books published by the University of Chicago Press, click here.

37. Excerpt: Portrait of a Man Known as Il Condottiere

9780226054254

An excerpt from Portrait of a Man Known as Il Condottiere by Georges Perec

***

Madera was heavy. I grabbed him by the armpits and went backwards down the stairs to the laboratory. His feet bounced from tread to tread in a staccato rhythm that matched my own unsteady descent, thumping and banging around the narrow stairwell. Our shadows danced on the walls. Blood was still flowing, all sticky, seeping from the soaking wet towel, rapidly forming drips on the silk lapels, then disappearing into the folds of the jacket, like trails of slightly glinting snot side-tracked by the slightest roughness in the fabric, sometimes accumulating into drops that fell to the floor and exploded into star-shaped stains. I let him slump at the bottom of the stairs, right next to the laboratory door, and then went back up to fetch the razor and to mop up the bloodstains before Otto returned. But Otto came in by the other door at almost the same time as I did. He looked at me uncomprehendingly. I beat a retreat, ran down the stairs, and shut myself in the laboratory. I padlocked the door and jammed the wardrobe up against it. He came down a few minutes later, tried to force the door open, to no avail, then went back upstairs, dragging Madera behind him. I reinforced the door with the easel. He called out to me. He fired at the door twice with his revolver.

You see, maybe you told yourself it would be easy. Nobody in the house, no-one round and about. If Otto hadn’t come back so soon, where would you be? You don’t know, you’re here. In the same laboratory as ever, and nothing’s changed, or almost nothing. Madera is dead. So what? You are still in the same underground studio, it’s just a bit less tidy and a bit less clean. The same light of day seeps through the basement window. The Condottiere, crucified on his easel . . .

He had looked all around. It was the same office—the same glass table-top, the same telephone, the same calendar on its chrome-plated steel base. It still had the stark orderliness and uncluttered iciness of an intentionally cold style, with strictly matching colours—dark green carpet, mauve leather armchairs, light brown wall covering—giving a sense of discreet impersonality with its large metal filing cabinets . . . But all of a sudden the flabby mass of Madera’s body seemed grotesque, like a wrong note, something incoherent, anachronistic . . . He’d slipped off his chair and was lying on his back with his eyes half-closed and his slightly parted lips stuck in an expression of idiotic stupor enhanced by the dull gleam of a gold tooth. Blood streamed from his cut throat in thick spurts and trickled onto the floor, gradually soaking into the carpet, making an ill-defined, blackish stain that grew ever larger around his head, around his face whose whiteness had long seemed rather fishy, a warm, living, animal stain slowly taking possession of the room, as if the walls were already soaked through with it, as if the orderliness and strictness had already been overturned, abolished, pillaged, as if nothing more existed beyond the radiating stain and the obscene and ridiculous heap on the floor, the corpse, fulfilled, multiplied, made infinite . . .

Why? Why had he said that sentence: “I don’t think that’ll be a problem”? He tries to recall the precise tone of Madera’s voice, the timbre that had taken him by surprise the first time he’d heard it, that slight lisp, its faintly hesitant intonation, the almost imperceptible limp in his words, as if he were stumbling—almost tripping—as if he were permanently afraid of making a mistake. I don’t think. What nationality? Spanish? South American? Accent? Put on? Tricky. No. Simpler than that: he rolled his rs in the back of his throat. Or perhaps he was just a bit hoarse? He can see him coming towards him with outstretched hand: “Gaspard—that’s what I should call you, isn’t it?—I’m truly delighted to make your acquaintance.” So what? It didn’t mean much to him. What was he doing here? What did the man want of him? Rufus hadn’t warned him . . .

People always make mistakes. They think things will work out, will go on as per normal. But you never can tell. It’s so easy to delude yourself. What do you want, then? An oil painting? You want a top-of-the-range Renaissance piece? Can do. Why not a Portrait of a Young Man, for instance . . .

A flabby, slightly over-handsome face. His tie. “Rufus has told me a lot about you.” So what? Big deal! You should have paid attention, you should have been wary . . . A man you didn’t know from Adam or Eve . . . But you rushed headlong to accept the opportunity. It was too easy. And now. Well, now . . .

This is where it had got him. He did the sums in his head: all that had been spent setting up the laboratory, including the cost of materials and reproductions—photographs, enlargements, X-ray images, images seen through Wood’s lamp and with side-illumination—and the spotlights, the tour of European art galleries, upkeep . . . a fantastic outlay for a farcical conclusion . . . But what was comical about his idiotic incarceration? He was at his desk as if nothing had happened . . . That was yesterday . . . But upstairs there was Madera’s corpse in a puddle of blood . . . and Otto’s heavy footsteps as he paced up and down keeping guard. All that to get to this! Where would he be now if . . . He thinks of the sunny Balearic Islands—it would have taken just a wave of his hand a year and a half before—Geneviève would be at his side . . . the beach, the setting sun . . . a picture postcard scene . . . Is this where it all comes to a full stop?

Now he recalled every move he’d made. He’d just lit a cigarette, he was standing with one hand on the table, with his weight on one hip. He was looking at the Portrait of a Man. Then he’d stubbed out his cigarette quickly and his left hand had swept over the table, stopped, gripped a piece of cloth, and crumpled it tight—an old handkerchief used as a brush-rag. Everything was hazy. He was putting ever more of his weight onto the table without letting the Condottiere out of his sight. Days and days of useless effort? It was as if his weariness had given way to the anger rising in him, step by certain step. He was crushing the fabric in his hand and his nails had scored the wooden table-top. He had pulled himself up, gone to his work bench, rummaged among his tools . . .

A black sheath made of hardened leather. An ebony handle. A shining blade. He had raised it to the light and checked the cutting edge. What had he been thinking of? He’d felt as if there was nothing in the world apart from that anger and that weariness . . . He’d flopped into the armchair, put his head in his hands, with the razor scarcely a few inches from his eyes, set off clearly and sharply by the dangerously smooth surface of the Condottiere’s doublet. A single movement and then curtains . . . One thrust would be enough . . . His arm raised, the glint of the blade . . . a single movement . . . he would approach slowly and the carpet would muffle the sound of his steps, he would steal up on Madera from behind . . .

A quarter of an hour had gone by, maybe. Why did he have an impression of distant gestures? Had he forgotten? Where was he? He’d been upstairs. He’d come back down. Madera was dead. Otto was keeping guard. What now? Otto was going to phone Rufus, Rufus would come. And then? What if Otto couldn’t get hold of Rufus? Where was Rufus? That’s what it all hung on. On this stupid what-if. If Rufus came, he would die, and if Otto didn’t get hold of Rufus, he would live. How much longer? Otto had a weapon. The skylight was too high and too small. Would Otto fall asleep? Does a man on guard need to sleep? . . .

He was going to die. The thought of it comforted him like a promise. He was alive, he was going to be dead. Then what?

Leonardo is dead, Antonello is dead, and I’m not feeling too well myself. A stupid death. A victim of circumstance. Struck down by bad luck, a wrong move, a mistake. Convicted in absentia. By unanimous decision with one abstention—which one?—he was sentenced to die like a rat in a cellar, under a dozen unfeeling eyes—the side lights and X-ray lamps purchased at outrageous prices from the laboratory at the Louvre—sentenced to death for murder by virtue of that good old moral legend of the eye, the tooth and the turn of the wheel—Achilles’ wheel—death is the beginning of the life of the mind—sentenced to die because of a combination of circumstances, an incoherent conjunction of trivial events . . . Across the globe there were wires and submarine cables . . . Hello, Paris, this is Dreux, hold the line, we’re connecting to Dampierre. Hello, Dampierre, Paris calling. You can talk now. Who could have imagined those peaceable operators with their earpieces becoming implacable executioners . . . Hello, Monsieur Koenig, Otto speaking, Madera has just died . . .

In the dark of night the Porsche will leap forward with its headlights spitting fire like dragons. There will be no accident. In the middle of the night they will come and get him . . .

And then? What the hell does it matter to you? They’ll come and get you. Next? Slump into an armchair and stare long and hard, until death overtakes you, into the eyes of the tall joker with the shiv, the ineffable Condottiere. Responsible or not responsible? Guilty or not guilty? I’m not guilty, you’ll scream when they drag you up to the guillotine. We’ll soon see about that, says the executioner. And down the blade comes with a clunk. Curtains. Self-evident justice. Isn’t that obvious? Isn’t it normal? Why should there be any other way out?

To read more about Portrait of a Man Known as Il Condottiere, click here.

38. Excerpt: That’s the Way It Is

9780226472454

An excerpt from That’s the Way It Is: A History of Television News in America by Charles L. Ponce de Leon

***

“Beginnings”

Few technologies have stirred the utopian imagination like television. Virtually from the moment that research produced the first breakthroughs that made it more than a science fiction fantasy, its promoters began gushing about how it would change the world. Perhaps the most effusive was David Sarnoff. Like the hero of a dime novel, Sarnoff had come to America as a nearly penniless immigrant child, and had risen from lowly office boy to the presidency of RCA, a leading manufacturer of radio receivers and the parent company of the nation’s biggest radio network, NBC. More than anyone else, it was Sarnoff who had recognized the potential of “wireless” as a form of broadcasting—a way of transmitting from a single source to a geographically dispersed audience. Sarnoff had built NBC into a juggernaut, the network with the largest number of affiliates and the most popular programs. He had also become the industry’s loudest cheerleader, touting its contributions to “progress” and the “American Way of Life.” Having blessed the world with the miracle of radio, he promised Americans an even more astounding marvel, a device that would bring them sound and pictures over the air, using the same invisible frequencies.

In countless speeches heralding television’s imminent arrival, Sarnoff rhapsodized about how it would transform American life and encourage global communication and “international solidarity.” “Television will be a mighty window, through which people in all walks of life, rich and poor alike, will be able to see for themselves, not only the small world around us but the larger world of which we are a part,” he proclaimed in 1945, as the Second World War was nearing an end and Sarnoff and RCA eagerly anticipated an increase in public demand for the new technology.

Sarnoff predicted that television would become the American people’s “principal source of entertainment, education and news,” bringing them a wealth of program options. It would increase the public’s appreciation for “high culture” and, when supplemented by universal schooling, enable Americans to attain “the highest general cultural level of any people in the history of the world.” Among the new medium’s “outstanding contributions,” he argued, would be “its ability to bring news and sporting events to the listener while they are occurring,” and build on the news programs that NBC and the other networks had already developed for radio. He saw no conflicts or potential problems. Action-adventure programs, mysteries, soap operas, situation comedies, and variety shows would coexist harmoniously with high-toned drama, ballet, opera, classical music performances, and news and public affairs programs. And they would all be supported by advertising, making it unnecessary for the United States to move to a system of “government control,” as in Europe and the UK. Television in the US would remain “free.”

Yet Sarnoff ’s booster rhetoric overlooked some thorny issues. Radio in the US wasn’t really free. It was thoroughly commercialized, and this had a powerful influence on the range of programs available to listeners. To pay for program development, the networks and individual stations “sold” airtime to advertisers. Advertisers, in turn, produced programs—or selected ones created by independent producers—that they hoped would attract listeners. The whole point of “sponsorship” was to reach the public and make them aware of your products, most often through recurrent advertisements. Though owners of radios didn’t have to pay an annual fee for the privilege of listening, as did citizens in other countries, they were forced to endure the commercials that accompanied the majority of programs.

This had significant consequences. As the development of radio made clear, some kinds of programs were more popular than others, and advertisers were naturally more interested in sponsoring ones that were likely to attract large numbers of listeners. These were nearly always entertainment programs, especially shows that drew on formulas that had proven successful in other fields—music and variety shows, comedy, and serial fiction. More off-beat and esoteric programs were sometimes able to find sponsors who backed them for the sake of prestige; from 1937 to 1954, for example, General Motors sponsored live performances by NBC’s acclaimed “Symphony of the Air.” But most cultural, news, and public affairs programs were unsponsored, making them unprofitable for the networks and individual stations. Thus in the bountiful mix envisioned by Sarnoff, certain kinds of broadcasts were more valuable than others. If high culture and news and public affairs programs were to thrive, their presence on network schedules would have to be justified by something other than their contribution to the bottom line.

The most compelling reason was provided by the Federal Communications Commission (FCC). Established after Congress passed the Federal Communications Act in 1934, the FCC was responsible for overseeing the broadcasting industry and the nation’s airwaves, which, at least in theory, belonged to the public. Rather than selling frequencies, which would have violated this principle, the FCC granted individual parties station licenses. These allowed licensees sole possession of a frequency to broadcast to listeners in their community or region. This system allocated a scarce resource—the nation’s limited number of frequencies—and made possession of a license a lucrative asset for businessmen eager to exploit broadcasting’s commercial potential. Licenses granted by the FCC were temporary, and all licensees were required to go through a periodic renewal process. As part of this process, they had to demonstrate to the FCC that at least some of the programs they aired were in the “public interest.” Inspired by a deep suspicion of commercialization, which had spread widely among the public during the early 1900s, the FCC’s public-interest requirement was conceived as a countervailing force that would prevent broadcasting from falling entirely under the sway of market forces. Its champions hoped that it might protect programming that did not pay and ensure that the nation’s airwaves weren’t dominated by the cheap, sensational fare that, reformers feared, would proliferate if broadcasting was unregulated.

In practice, however, the FCC’s oversight of broadcasting proved to be relatively lax. More concerned about NBC’s enormous market power—it controlled two networks of affiliates, NBC Red and NBC Blue—FCC commissioners in the 1930s were unusually sympathetic to the businessmen who owned individual stations and possessed broadcast licenses and made it quite easy for them to renew their licenses. They were allowed to air a bare minimum of public-affairs programming and fill their schedules with the entertainment programs that appealed to listeners and sponsors alike. By interpreting the public-interest requirement so broadly, the FCC encouraged the commercialization of broadcasting and unwittingly tilted the playing field against any programs—including news and public affairs—that could not compete with the entertainment shows that were coming to dominate the medium.

Nevertheless, news and public-affairs programs were able to find a niche on commercial radio. But until the outbreak of the Second World War, it wasn’t a very large or comfortable one, and it was more a result of economic competition than the dictates of the FCC. Occasional news bulletins and regular election returns were broadcast by individual stations and the fledgling networks in the 1920s. They became more frequent in the 1930s, when the networks, chafing at the restrictions placed on them by the newspaper industry, established their own news divisions to supplement the reports they acquired through the newspaper-dominated wire services.

By the mid-1930s, the most impressive radio news division belonged not to Sarnoff ’s NBC but its main rival, CBS. Owned by William S. Paley, the wealthy son of a cigar magnate, CBS was struggling to keep up with NBC, and Paley came to see news as an area where his young network might be able to gain an advantage. A brilliant, visionary businessman, Paley was fascinated by broadcasting and would soon steer CBS ahead of NBC, in part by luring away its biggest stars. His bold initiative to beef up its news division was equally important, giving CBS an identity that clearly distinguished it from its rivals. Under Paley, CBS would become the “Tiffany network,” the home of “quality” as well as crowd-pleasers, a brand that made it irresistible to advertisers.

Paley hired two print journalists, Ed Klauber and Paul White, to run CBS’s news unit. Under their watch, the network increased the frequency of its news reports and launched news-and-commentary programs hosted by Lowell Thomas, H. V. Kaltenborn, and Robert Trout. In 1938, with Europe drifting toward war, CBS expanded these programs and began broadcasting its highly praised World News Roundup; its signature feature was live reports from correspondents stationed in London, Paris, Berlin, and other European capitals. These programs were well received and popular with listeners, prompting NBC and the other networks to follow Paley’s lead.

The outbreak of war sparked a massive increase in news programming on all the networks. It comprised an astonishing 20 percent of the networks’ schedules by 1944. Heightened public interest in news, particularly news about the war, was especially beneficial to CBS, where Klauber and White had built a talented stable of reporters. Led by Edward R. Murrow, they specialized in vivid on-the-spot reporting and developed an appealing style of broadcast journalism, affirming CBS’s leadership in news. By the end of the war, surveys conducted by the Office of Radio Research revealed that radio had become the main source of news for large numbers of Americans, and Murrow and other radio journalists were widely respected by the public. And though network news people knew that their audience and airtime would decrease now that the war was over, they were optimistic about the future and not very keen to jump into the new field of television.

This is ironic, since it was television that was uppermost in the minds of network leaders like Sarnoff and Paley. The television industry had been poised for takeoff as early as 1939, when NBC, CBS, and DuMont, a growing network owned by an ambitious television manufacturer, established experimental stations in New York City and began limited broadcasting to the few thousand households that had purchased the first sets for consumer use. After Pearl Harbor, CBS’s experimental station even developed a pathbreaking news program that used maps and charts to explain the war’s progress to viewers. This experiment came to an abrupt end in 1942, when the enormous shift of public and private resources to military production forced the networks to curtail and eventually shut down their television units, delaying television’s launch for several years.

Meanwhile, other events were shaking up the industry. In 1943, in response to an FCC decree, RCA was forced to sell one of its radio networks—NBC Blue—to the industrialist Edward J. Noble. The sale included all the programs and personalities that were contractually bound to the network, and in 1945 it was rechristened the American Broadcasting Company (ABC). The birth of ABC created another competitor not just in radio, where the Blue network had a loyal following, but in the burgeoning television industry as well. ABC joined NBC, CBS, and DuMont in their effort to persuade local broadcasters—often owners of radio stations who were moving into the new field of television—to become affiliates.

In 1944, the New York City stations owned by NBC, CBS, and DuMont resumed broadcasting, and NBC and CBS in particular launched aggressive campaigns to sign up affiliates in other cities. ABC and DuMont, hamstrung by financial and legal problems, quickly fell behind as most station owners chose NBC or CBS, largely because of their proven track record in radio. But even for the “big two,” building television networks was costly and difficult. Unlike radio programming, which could be fed through ordinary phone lines to affiliates, who then broadcast them over the air in their communities, linking television stations into a network required a more advanced technology, a coaxial cable especially designed for the medium that AT&T, the private, government-regulated telephone monopoly, would have to lay throughout the country. At the end of the war, at the government’s and television industry’s behest, AT&T began work on this project. By the end of the 1940s, most of the East Coast had been linked, and the connection extended to Chicago and much of the Midwest. But it was slow going, and at the dawn of the 1950s, no more than 30 percent of the nation’s population was within reach of network programming. Until a city was linked to the coaxial cable, there was no reason for station owners to sign up with a network; instead, they relied on local talent to produce programs. As a result, the television networks grew more slowly than executives might have wished, and the audience for network programs was restricted by geography until the mid-1950s. An important breakthrough occurred in 1951, when the coaxial cable was extended to the West Coast and made transcontinental broadcasting possible. But until microwave relay stations were built to reach large swaths of rural America, many viewers lacked access to the networks.

Access wasn’t the only problem. The first television sets that rolled off the assembly lines were expensive. RCA’s basic model, the one that Sarnoff envisioned as its “Model T,” cost $385, while top-of-the-line models were more than $2,000. With the average annual salary in the mid-1940s just over $3,000, this was a lot of money, even if consumers were able to buy sets through department-store installment plans. And though the price of TVs would steadily decline, throughout the 1940s the audience for television was restricted by income. Most early adopters were from well-to-do families—or tavern owners who hoped that their investment in television would attract patrons.

Still, the industry expanded dramatically. In 1946, there were approximately 20,000 television sets in the US; by 1948, there were 350,000; and by 1952, there were 15.3 million. Less than 1 percent of American homes had TVs in 1948; a whopping 32 percent did by 1952. The number of stations also multiplied, despite an FCC freeze in the issuing of station licenses from 1948 to 1952. In 1946, there were six stations in only four cities; by 1952, there were 108 stations in sixty-five cities, most of them recipients of licenses issued right before the freeze. When the freeze was lifted and new licenses began to be issued again, there was a mad rush to establish new stations and get on the air. By 1955, almost 500 television stations were operating in the US.

The FCC freeze greatly benefited NBC and CBS. Eighty percent of the markets with TV at the start of the freeze in 1948 had only one or two licensees, and it made sense for them to contract with one or both of the big networks for national programming to supplement locally produced material. Shut out of these markets, ABC and DuMont were forced to secure affiliates in the small number of markets—usually large cities—where stations were more plentiful. By the time the FCC started issuing licenses again, NBC and CBS had established reputations for popular, high-quality programs, and when new markets were opened, it became easier for them to sign up stations with the most desirable frequencies, usually the lowest “channels” on the dial. Meanwhile, ABC languished for much of the 1950s, with the fewest and poorest affiliates, and the struggling DuMont network ceased operations altogether in 1955.

News programs were among the first kinds of broadcasts that aired in the waning years of the war, and virtually everyone in the industry expected them to be part of the program mix as the networks increased programming to fill the broadcast day. News was “an invaluable builder of prestige,” noted Sig Mickelson, who joined CBS as an executive in 1949 and served as head of its news division throughout the 1950s. “It helped create an image that was useful in attracting audiences and stimulating commercial sales, not to mention maintaining favorable government relations. . . . News met the test of ‘public service.’ ” As usual, CBS led the way, inaugurating a fifteen-minute evening news program in 1944. It was broadcast on Thursdays and Fridays at 8:00 PM, the two nights of the week the network was on the air. NBC launched its own short Sunday evening newscast in 1945 as the lead-in to its ninety minutes of programming. Both programs resembled the newsreels that were regularly shown in movie theaters, a mélange of filmed stories with voice-over narration by off-screen announcers.

Considering the limited technology available, this was not surprising. Newsreels offered television news producers the most readily applicable model for a visual presentation of news, and the first people the networks hired to produce news programs were often newsreel veterans. But newsreels relied on 35mm film and were expensive and time-consuming to produce, and they had never been employed for breaking news. Aside from during the war, when they were filled with military stories that employed footage provided by the government, they specialized in fluff, events that were staged and would make the biggest impression on the screen: celebrity weddings, movie premieres, beauty contests, ship launches. In the mid-1940s, recognizing this shortcoming, producers at WCBW, CBS’s wholly owned subsidiary in New York, developed a number of innovative techniques for “visualizing” stories for which they had no film and established the precedent of sending a reporter to cover local stories.

These conventions were well established when the networks, in response to booming sales of television sets, expanded their evening schedules to seven days a week and launched regular weeknight newscasts. NBC’s premiered first, in February 1948. Sponsored by R. J. Reynolds, the makers of Camel cigarettes, it was produced for the network by the Fox Movietone newsreel company and had no on-screen newsreaders. CBS soon followed suit, with the CBS Evening News, in April 1948. Relying on film provided by another newsreel outfit, Telenews, it featured a rotating cast of announcers, including Douglas Edwards, who had only reluctantly agreed to work in television after failing to break into the top tier of the network’s radio correspondents. In the late summer, after CBS president Frank Stanton convinced Edwards of television’s potential, Edwards was installed as the program’s regular on-screen newsreader, its recognizable “face.” DuMont created an evening newscast as well. But its News from Washington, which reached only the handful of stations that were owned by or affiliated with the network, was canceled in less than a year, and DuMont’s subsequent attempt, Camera Headlines, suffered the same fate and was off the air by 1950. ABC’s experience with news was similarly frustrating. Its first newscast, News and Views, began airing in August 1948 and was soon canceled. It didn’t try to broadcast another one until 1952, when it launched an ambitious prime-time news program called ABC All Star News, which combined filmed news reports with man-on-the-street interviews, a technique popularized by local stations. By this time, however, the prime-time schedules of all the networks were full of popular entertainment programs, and All Star News, which failed to attract viewers, was pulled from the air after less than three months.

In February 1949, NBC, eager to make up ground lost to CBS, transformed its weeknight evening newscast into the Camel News Caravan, with John Cameron Swayze, a veteran of NBC’s radio division, as sole on-camera newsreader. Film for the program was acquired from a variety of sources, including foreign and domestic newsreel agencies and freelance stringers. But Swayze’s narration and on-screen presence distinguished the broadcast from its earlier incarnation. He sat at a desk that prominently displayed the Camel logo and presented an overview of the day’s major headlines, sometimes accompanied by film and still photos, but sometimes in the form of a “tell-story”—Swayze on camera reading from a script. In between, he would plug Camels and even occasionally light up, much to his sponsor’s delight. One of the show’s highlights was a whirlwind review of stories for which producers had no visuals, which Swayze would introduce by announcing, “Now let’s go hopscotching the news for headlines!” Swayze was popular with viewers and hosted the broadcast for seven years. He became well known to the public, especially for his nightly sign-off, “That’s the story, folks. Glad we could get together.”

The Camel News Caravan was superficial, and Swayze’s tone undeniably glib, as critics at the time noted. But the assumption that guided its production did not set particularly high standards. As Reuven Frank, who joined the show as its main writer in 1950 and soon became its producer, recalled, “We assumed that almost everyone who watched us had read a newspaper . . . that our contribution . . . would be pictures. The people at home, knowing what the news was, could see it happen.” Yet over the next few years, especially after William McAndrew became head of NBC’s news division and Frank was installed as the program’s producer, the News Caravan steadily improved. Making good use of the largesse provided by R. J. Reynolds, which more than covered the news department’s rapidly expanding budget, the show increased its use of filmed reports, acquired from foreign sources like the BBC and other European news agencies, the US government and military, and the network’s growing corps of in-house cameramen and technicians. It also came to rely more and more on the network’s staff of reporters, including a young North Carolinian named David Brinkley, and reporters at NBC’s “O-and-Os,” the five television stations that the network owned and operated. In the days before network bureaus, journalists at network O-and-Os were responsible for combing their cities for stories of potential national interest. NBC also employed stringers on whom it relied for material from cities or regions where it had no O-and-Os. Airing at 7:45 PM, right before the network’s lineup of prime-time entertainment programs, the News Caravan became the first widely viewed news program of the television age. Its success gave McAndrew and his staff greater leverage in their efforts to command network resources and put added pressure on their main rival.

The CBS Evening News, broadcast at 7:30, was also very much a work-in-progress. Influenced by the experiments in “visualizing” news that CBS producers had conducted at the network’s flagship New York City O-and-O in the mid-1940s, it was produced by a mix of radio people like Edwards and newcomers from other fields. Most of the radio people, however, were second-stringers. The network’s leading radio personnel, including Murrow and his comrades, had little interest in moving to television. Though this disturbed Paley and his second-in-command, CBS president Frank Stanton, it allowed CBS’s fledgling television news unit to escape from the long shadow of the network’s radio news operation, and it increased the influence of staff committed to the tradition of “visualizing.” With few radio people willing to work on the program, the network was forced to hire new staff from outside the network. These newcomers from the wire services, photojournalism, and news and photographic syndicates brought a lively spirit of innovation to CBS’s nascent television news division. They were impressed by the notion of “visualizing,” and they resolved that TV news ought to be different from radio news, “an amalgam of existing news media, with a substantial infusion of showmanship from the stage and motion pictures.”

The most important new hire was Don Hewitt, an ambitious, energetic twenty-five-year-old who joined the small staff of the CBS Evening News in 1948 and soon became its producer. Despite his age, Hewitt was already an experienced print journalist, and his resume included a stint at ACME News Pictures, a syndicate that provided newspapers with photographs. He was well aware of the power of pictures, and when he joined CBS, he brought a new sensibility and willingness to experiment. Under Hewitt, the Edwards program made rapid strides. Eager to find ways of compensating for television’s technical limitations, Hewitt made extensive use of still photos and created a graphic arts department to produce charts, maps, and captions to illustrate tell-stories. To make Edwards’s delivery more natural and smooth, he introduced a new machine called a TelePrompTer, which replaced the heavy cue cards on which his script had been written. Expanding on the experiments of CBS’s early “visualizers,” Hewitt devised a number of clever devices to provide visuals for stories—for example, using toy soldiers to illustrate battles during the Korean War. He was the principal figure behind the shift to 16mm film, which was easier and less expensive to produce, and the network’s decision to establish its own in-house camera crews. His most significant innovation, however, was the double-projector system that he developed to mix narration and film. This technique, which was copied throughout the industry, made possible a new kind of filmed report that would become the archetypal television news package: a reporter on camera, often at the scene of a story, beginning with a “stand-upper” that introduces the story; then film of other scenes, while the reporter’s words, recorded separately, serve as voice-over narration; finally, at the end, a “wrap-up,” where the reporter appears on camera again. By the early 1950s, the CBS newscast, now titled Douglas Edwards with the News, was adding viewers and winning plaudits from critics. And it had gained the respect of many of the network’s radio journalists, who now agreed to contribute to the program and other television news shows.

During the 1950s, Don Hewitt (left) was perhaps the most influential producer of television news. He was not only responsible for CBS’s successful evening newscast but also worked on See It Now and other network programs. Douglas Edwards (right) anchored the broadcast from the late 1940s to 1962, when he was replaced by Walter Cronkite. Photo courtesy of CBS/Photofest.

The big networks were not the only innovators. In the late 1940s, with network growth limited and many stations still independent, local stations developed many different kinds of programs, including news shows. WPIX, a New York City station owned by the Daily News, the city’s most popular tabloid, established a daily news program in June 1948. The Telepix Newsreel aired twice a day, at 7:30 PM and 11:00 PM, and specialized in coverage of big local events like fires and plane crashes. Its staff went to great lengths to acquire film of these stories, which it hyped with what would become a standard teaser, “film at eleven.” Like its print cousin, it also featured lots of human-interest stories and man-on-the-street interviews. A Chicago station, WGN, developed a similar program, the Chicagoland Newsreel, which was also successful. The real pioneer was KTLA in Los Angeles. Run by Klaus Landsberg, a brilliant engineer, KTLA established the most technologically sophisticated news program of the era. Employing relatively small, portable cameras and mobile live transmitters, its reporters excelled in covering breaking news stories, and it would remain a trailblazer in the delivery of breaking news throughout the 1950s and 1960s. It was Landsberg, for example, who first conceived of putting a TV camera in a helicopter.

But such programs were the exception. Most local stations offered little more than brief summaries of wire-service headlines, and the expense of film technology led most to emphasize live entertainment programs instead of news. Believing that viewers got their news from local papers and radio stations, television stations saw no need to duplicate their efforts. Not until the 1960s, when new, inexpensive video and microwave technology made local newsgathering economically feasible, did local stations, including network affiliates, expand their news programming.

The television news industry’s first big opportunity to display its potential occurred in 1948, when the networks descended on Philadelphia for the political conventions. The major parties had selected Philadelphia with an eye on the emerging medium of television. Sales were booming, and Philadelphia was on the coaxial cable, which was reaching more and more cities as the weeks and months passed. By the time the Republicans convened in July, it extended from Boston to Richmond, Virginia, with the potential for reaching millions of viewers. Radio journalists had been covering the conventions for two decades, but with lucrative entertainment programs on network schedules, it hadn’t paid to produce “gavel-to-gavel” coverage—just bulletins, wrap-ups, and the acceptance speeches of the nominees. In 1948, however, television was a wide-open field, and with much of the broadcast day open—or devoted to unsponsored programming that cost nothing to preempt—the conventions were a great showcase. In cities where they were broadcast, friends and neighbors gathered in the homes of early adopters, in bars and taverns, even in front of department store display windows, where store managers had carefully arranged TVs to draw the attention of passers-by. Crowds on the sidewalk sometimes overflowed into the street, blocking traffic. “No more effective way could have been found to stimulate receiver sales than these impromptu TV set demonstrations,” suggested Sig Mickelson.

Because of the enormous technical difficulties and a lack of experience, the networks collaborated extensively. All four networks used the same pictures, provided by a common pool of cameras set up to focus on the podium and surrounding area. NBC’s coverage was produced by Life magazine and featured journalists from Henry Luce’s media empire as well as Swayze and network radio stars H. V. Kaltenborn and Richard Harkness. CBS’s starred Murrow, Quincy Howe, and Douglas Edwards, newly installed on the Evening News and soon to be its sole newsreader. ABC relied on the gossip columnist and radio personality Walter Winchell. Lacking its own news staff, DuMont hired the Washington-based political columnist Drew Pearson to provide commentary. Many of these announcers did double duty, providing radio bulletins, too. With cameras still heavy and bulky, there were no roving floor reporters conducting interviews with delegates and candidates; instead, interviews occurred in makeshift studios set up in adjacent rooms off the main convention floor. Accordingly, there was little coverage of anything other than events occurring on the podium, and it was print journalists who provided Americans with the behind-the-scenes drama, particularly at the Democrats’ convention, where Southern delegates, angered by the party’s growing commitment to civil rights, walked out in protest and chose Strom Thurmond to run as the nominee of the hastily organized “Dixiecrats.” The conventions were a hit with viewers. Though there were only about 300,000 sets in the entire US, industry research suggested that as many as 10 million Americans saw at least some convention coverage thanks to group viewing and department store advertising and special events.

Four years later, when the Republicans and Democrats again gathered for their conventions, this time in Chicago, the networks were better prepared. Besides experience, they brought more nimble and sophisticated equipment. And, thanks to the spread of the coaxial cable, they were in a position to reach a nationwide audience. Excited by the geometric increase in receiver sales, and inspired by access to new markets that seemed to make it possible to double or even triple the number of television households, major manufacturers signed up as sponsors, and advertisements in newspapers urged consumers to buy sets to “see the conventions.” Coverage was much wider and more complete than in 1948. Several main pool cameras with improved zoom capabilities focused on the podium, while each network deployed between twenty and twenty-five cameras on the periphery and at downtown hotels and in mobile units. “Never before,” noted Mickelson, the CBS executive responsible for the event, “had so many television cameras been massed at one event.”

Meanwhile, announcers from each of the networks explained what was occurring and provided analysis and commentary. NBC’s main announcer was Bill Henry, a Los Angeles print journalist. He was assisted by Kaltenborn and Harkness. Henry sat in a tiny studio and watched the proceedings through monitors, and did not appear on camera. CBS’s coverage differed and established a new precedent. Its main announcer, Walter Cronkite, provided essentially the same narration, explanation, and commentary as Henry. But his face appeared on-screen, in a tiny window in the corner of the screen; when there was a lull on the convention floor, the window expanded to fill the entire screen. Cronkite, an experienced wire service correspondent, had just joined CBS after a successful stint at WTOP, its Washington affiliate. Mickelson had been impressed with his ability to explain and ad lib, and he insisted that CBS use Cronkite rather than the far more experienced and well-known Robert Trout. Mickelson conceded that, from his years of radio work, Trout excelled at “creating word pictures.” But, with television, this was a superfluous gift. The cameras delivered the pictures. “What we needed was interpretation of the pictures on the screen. That was Cronkite’s forte.”

When print journalists asked Mickelson on the eve of the conventions what exact role Cronkite would play, he responded by suggesting that his new hire would be the “anchorman,” a term that soon came to refer to newsreaders like Swayze and Edwards as well. Yet in coining this term, Mickelson was referring to the complex process that Don Hewitt had conceived to provide more detailed and up-to-the-minute coverage of the convention. Recognizing that the action was on the floor, and that if TV journalists were to match the efforts of print reporters they needed to be able to report from there as quickly as possible, Hewitt mounted a second camera that could pan the floor and zoom in on floor reporters armed with walkie-talkies and flashlights, which they used to inform Hewitt when they had an interview or report ready to deliver. It worked like clockwork: “They combed through the delegations, talked to both leaders and members, queried them on motivations and prospective actions, and kept relaying information to the editorial desk.” It was then filtered and collated and passed on to Cronkite, who served as the “anchor” of the relay, delivering the latest news and ad-libbing with the poise and self-assurance that he would display at subsequent conventions and during live coverage of space flights and major breaking news. Cronkite’s seemingly effortless ability to provide viewers with useful and interesting information about the proceedings won praise from television critics and boosted CBS’s reputation with viewers.

NBC was not so successful. In keeping with the network’s—and RCA’s—infatuation with technology, it sought to cover events on the convention floor with a new gadget, a small, hand-held, live-television camera that could transmit pictures and needn’t be connected by wire. As Frank recalled, “It could roam the floor . . . showing delegates reacting to speakers and even join a wireless microphone for interviews.” But it regularly malfunctioned and contributed little to NBC’s coverage. More effective and popular were a series of programs that Bill McAndrew developed to provide background. Convention Call was broadcast twice a day during the conventions, before sessions and when they adjourned for breaks. Its hosts encouraged viewers to call in and ask NBC reporters to explain what was occurring, especially rules of procedure. The show sparked a flood of calls that overwhelmed telephone company switchboards and forced NBC to switch to telegrams instead.

Ratings for network coverage of the conventions exceeded expectations. Approximately 60 million viewers saw at least some of the conventions on television, with an estimated audience of 55 million tuning in at their peak. And the conventions inspired viewers to begin watching the evening newscasts and contributed to an increase in their popularity. Television critics praised the networks for their contributions to civic enlightenment. Jack Gould of the New York Times suggested that television had “won its spurs” and was “a welcome addition to the Fourth Estate.”

Conventions, planned in advance at locations well-suited for television’s limited technology, were ideal events for the networks to cover. These were the days before front-loaded primaries made them little more than coronations of nominees determined months beforehand, and the parties were undergoing important changes that were often revealed in angry debates and frantic back-room deliberations. And while print journalists remained the most complete source for such information, television allowed viewers to see it in real time, and its stable of experienced reporters and analysts proved remarkably adept at conveying the drama and explaining the stakes.

To read more about That’s the Way It Is, click here.

39. Can We Race Together? An Autopsy

9780226246239

“Can We Race Together? An Autopsy”*

by Ellen Berrey

***

Corporate diversity dialogues are ripe for backlash, the research shows, even without coffee counter gimmicks.

Corporate executives and university presidents are, yet again, calling for public discussion on race and racial inequality. Revelations about the tech industry’s diversity problem have company officials convening panels on workplace barriers, and, at the University of Oklahoma, spokespeople and students are organizing town-hall sessions in response to a fraternity’s racist chant.

The most provocative of the efforts was Starbucks’ failed Race Together program. In March, the company announced that it would ask baristas to initiate dialogues with customers about America’s most vexing dilemma. Although public outcry shut down those conversations before they even got to “Hello,” Starbucks said it would nonetheless carry on Race Together with forums and special USA Today discussion guides. As someone who has done sociological research on diversity initiatives for the past 15 years, I was intrigued.

 For a moment, let’s take this seriously

What would conversations about race have looked like if they played out as Starbucks imagined, given the social science of race? Can companies, in Starbucks CEO Howard Schultz’s words, “create a more empathetic and inclusive society—one conversation at a time”? A data-driven autopsy of Starbucks’ ambitions is in order.

Surprisingly, Starbucks turned its sights on the provocative issue of racial inequality—not just feel-good cultural differences (or, thank goodness, the sort of “respectability politics” that, under well-intentioned cover, focus on the moral flaws of black people). Most Americans, especially those of us who are white, are ill-informed on the topic of inequality. We generally do not recognize our personal prejudice. We routinely, and incorrectly, insist that we are colorblind and that racism is a thing of the past, as sociologist Eduardo Bonilla-Silva has documented. When we do try to talk about race, we usually resort to what sociologists Joyce Bell and Doug Hartmann call the “happy talk” of diversity, without a language for discussing who comes out ahead and who gets pushed behind.

Starbucks pulls back the veil on our unconscious

How to take this on? Starbucks opted to tackle the thorny issue of unacknowledged prejudice—the cognitive biases that predispose a person against racial minorities and in favor of white people. The company intended to offer “insight into the divisive role unconscious bias plays in our society and the role empathy can play to bridge those divides.” The conversation guide it distributed the first week described a bias experiment in which lawyers were asked to assess an error-ridden memo. When told that the (fictional) author was white, the lawyers commented “has potential.” When told he was black, they remarked “can’t believe he went to NYU.”

Perhaps this was a promising starting point. Americans prefer psychological explanations; we like to think that terrorism, poverty, obesity, and other social ills are rooted in the individual’s psyche.

 A comforting thought: I’m not racist

We also do not want to see ourselves as complicit in the segregation of our communities, workplaces, or friendships. We definitely don’t want the stigma of being “racist.” Even white supremacists resist that label. So if it’s true that we can’t see our own bias, as Starbucks told us, we can take comfort in our innocence.

Starbucks’ description of the bias experiment actually took the conversation where it never seems to venture: to the advantages that white people enjoy. White people get help, forgiveness, and the inside track far more often than do people of color. But Starbucks stopped before pointing the finger at who gives white people these advantages.

The rest of Race Together veered off in a confused direction, mostly bent on educated enlightenment. The conversation guide was a mishmash of racial utopianism (the millennials have it figured out!), demography as destiny (immigration changes everything!), triumph over a troublesome past (progress!), testimonies by people of color (the one white guy is clueless!), statistics, inspired introspection, and social network tallies (“I have ____ friends of a different race”!).

Not your daddy’s diversity training

Companies have been trying to positively address race for decades. Typically, they do so through diversity management within their own workforce. Their stated purpose is to increase the numbers of people of color in the top ranks or improve the corporate culture. Most diversity management strategies, however, are far from effective (unless they make someone responsible for results), as shown by sociologists Alexandra Kalev, Frank Dobbin, and Erin Kelly. Corporate aggrandizement and the façade of legal compliance seem as much the goals as actual change.

Race Together most closely resembled diversity training, which tries to undo managerial stereotyping through educational exchange, but this time the exchange was between capitalists and consumers. And it bucked the typical managerial spin. Usually, the kicker is the business case for diversity: this will boost productivity and profits. Instead, Starbucks made the diversity case for business. Consumption, supposedly, would create inclusion and equity. That would be its own reward. There was no clear connection to its specific business goals, beyond (disgruntled) buzz about the brand.

What were you thinking, Howard Schultz?

Briefly, let’s revisit what made Starbucks’ over-the-counter conversations so offensive. Starbucks was asking low-wage, young, disproportionately minority workers to prompt meaningful exchanges about race with uncaffeinated, mostly white and affluent customers. Even under the best of circumstances, diversity dialogues tend to put the burden of explaining racism on people of color. Here, baristas were supposed to walk the third rail during the morning rush hour without specialized training, much less extra compensation. One sociological term for this is Arlie Hochschild’s “emotional labor.” The employee was required to tactfully manage customers’ feelings. The most likely reaction from coffee drinkers? Microaggressions of avoidance, denial, and eye-rolling.

The alternative, for Starbucks’ so-called “partners,” was disgruntled defiance. At my local Starbucks, when I asked about these conversations, the manager emphatically said, “We’re not participating.” The barista next to her was blunt: “We think it’s bullshit.”

Swiftly, the company came out with public statements that had the air of faux intention and cover-up, as if to say, “We’re not retreating; we’re merely advancing in the other direction.” Starbucks had promised a year of Race Together, but the collapse of the café stunt made an all-out retreat more likely: one more forum, one more ad, then silence.

 This doesn’t work…

Race Together trod treacherous ground. The research shows that diversity training backfires when it attempts to ferret out prejudice. It puts white people on the defensive and creates a backlash against people of color. For committed consumers, Starbucks was messing with the unequivocally best part about capitalism: that you can give someone money and they give you a thing. For activists, this all smelled wrong (i.e., not how you want your latté). Like co-opted social justice.

… Does anyone in HQ ever ask what works?

Starbucks was wise to shift closer to the traditional role of a coffee house—the so-called Third Place between work and home that Schultz has long exalted. Hopefully, the company looks to proven models for productive conversations on race. Organizations such as the Center for Racial Justice Innovation push forward discussions that recognize racism as systemic, not as isolated individual attitudes and bad behaviors. This helps to avoid what people hate most about diversity trainings: forced discourse about superficial differences (“are you a daytime or nighttime person?”) and the wretched hunt for guilty bad guys.

According to social psychologists, unconscious bias can be minimized when people have positive incentives for interpersonal, cross-racial relationships. Wearing a sports jersey for the same team is impressively effective for getting white people to cooperate with African Americans, as shown in a study led by psychologist Jason Nier. The idea is to not provoke white people's fear and avoidance of doing wrong. It is to motivate people to try to do what's right by establishing a shared identity.

Starbucks also needs to wrestle with its goal of “together.” That’s not always the outcome of conversations about race. Political scientist Katherine Cramer Walsh found that participants in civic dialogues on race commonly walk away with a heightened awareness of their differences, not with the unity that meeting organizers hope to foster.

 Is it better to abandon ship?

Despite its missteps, Starbucks, in fact, alit on hopeful insights. Individuals can ignite change, and empathy and listening are starting points. The company deserves some applause for taking the risk and for its deliberate focus on inequality. Undoubtedly, working-class, minority millennials could teach the rest of the country something about race (and executives something about company policy).

The truth hurts

But let’s be clear about what Race Together was not. It was not about addressing institutional discrimination. In that scenario, Starbucks would have issued a press release about eliminating patterns of unfair hiring and firing. It would have overhauled a corporate division of labor that channels racial minorities into lower-tier, nonunionized jobs. It might very well have closed stores in gentrifying neighborhoods.

Those solutions start with incisive diagnosis, not personal reflection. (The U.S. Department of Justice did just that when it scrutinized racial profiling in traffic stops and court fines in Ferguson, Missouri.) Those solutions require change in corporate policy.

To make Race Together honest, Starbucks needed to recognize an ugly truth: America's race problem is not an inability to talk. It is a failure to rectify the unfair disadvantages foisted on people of color and the unearned privileges that white people enjoy. Corporations, in their internal operations, are complicit in these very dynamics. So, too, are long-standing government policies, such as tax deductions of home mortgage interest (white folks are far more likely to own their homes). And white Americans may not want to hear it, but racial inequality is, in large measure, rooted in our collective choices: where we'll pay property taxes, who we'll tell about a job lead, what we'll deem criminal, and even when we'll smile or scowl. Howard Schultz, are you listening?

*This piece was originally published at the Society Pages, http://www.thesocietypages.org

***

Ellen Berrey teaches in the Department of Sociology at the University at Buffalo, SUNY, and is an affiliated scholar of the American Bar Foundation. Her book The Enigma of Diversity: The Language of Race and the Limits of Racial Justice will publish in April 2015.

Add a Comment
40. Excerpt: Paying with Their Bodies

9780226210094

An excerpt from Paying with Their Bodies: American War and the Problem of the Disabled Veteran by John M. Kinder

***

Thomas H. Graham

On August 30, 1862, Thomas H. Graham, an eighteen-year-old Union private from rural Michigan, was gut-shot at the Second Battle of Bull Run near Manassas Junction, Virginia. One of 10,000 Union casualties in the three-day battle, Graham had little chance of survival. Penetrating gunshot wounds to the abdomen were among the deadliest injuries of the Civil War, killing 87 percent of patients—either from the initial trauma or the inevitable infection. Quickly evacuated, he was sent by ambulance to Washington, DC, where he was admitted to Judiciary Square Hospital the next day. Physicians took great interest in Graham’s case, and over the following nine months, the young man endured numerous operations to suture his wounds. Deemed fully disabled, he was eventually discharged from service on June 6, 1863.

But Graham’s injuries never healed completely. His colon remained perforated, and he had open sinuses just above his left leg where a conoidal musket ball had entered and exited his body. As Dr. R. C. Hutton, Graham’s pension examiner, reported shortly after the Civil War’s end, “From each of these sinuses there is constantly escaping an unhealthy sanious discharge, together with the faecal contents of the bowels. Occasionally kernels of corn, apple seeds, and other indigestible articles have passed through the stomach and been ejected through these several sinuses.” Broad-shouldered and physically strong, Graham attempted to make a living as a day laborer and later as a teacher, covering his open wounds with a bandage. By the early 1870s, however, he bore “a sallow, sickly countenance” and could no longer hold a job, dress his injuries, or even stand on his own two feet. Most pitiful of all, the putrid odor from his “artificial anus” made him a social pariah. Regarding Graham’s case as “utterly hopeless,” Hutton concluded, “he would have died long ago from utter detestation of his condition, were it not for his indomitable pluck and patriotism.” Within a few months, Graham was dead, but hundreds of thousands lived on, altering the United States’ response to disabled veterans for decades to come.

 Arthur Guy Empey

For American readers during World War I, no contemporary account offered a more compelling portrait of life on the Western Front than Arthur Guy Empey’s autobiography, “Over the Top,” by an American Soldier Who Went (1917). Disgusted by his own country’s refusal to enter the Great War, Empey had joined the British army in 1916, eventually serving with the Royal Fusiliers in northwestern France. Invalided out of service a year later, the former New Jersey National Guardsman became an instant celebrity, electrifying US audiences with his tales from the front lines. Despite nearly dying on several occasions, Empey looked back on his time in the trenches with profound nostalgia. “War is not a pink tea,” he reflected, “but in a worthwhile cause like ours, mud, rats, cooties, shells, wounds, or death itself, are far outweighed by the deep sense of satisfaction felt by the man who does his bit.”

Beneath the surface of Empey's rollicking narrative, however, was a far more disturbing story. For all of the author's giddy enthusiasm, Empey made little effort to hide the Great War's insatiable consumption of soldiers' bodies, including his own. During a nighttime raid on a German trench, Empey was shot in the face at close range, the bullet smashing his cheekbones just below his left eye. As he staggered back toward his own lines, he discovered the body of an English soldier hanging on a coil of barbed wire: "I put my hand on his head, the top of which had been blown off by a bomb. My fingers sank into the hole. I pulled my hand back full of blood and brains, then I went crazy with fear and horror and rushed along the wire until I came to our lane." Before reaching shelter, Empey was wounded twice more in the left shoulder, the second time causing him to black out. He awoke to find himself choking on his own blood, a "big flap from the wound in my cheek . . . hanging over my mouth." Empey spent the next thirty-six hours in no man's land waiting for help.

As he recuperated in England, Empey's mood swung between exhilaration and deep depression: "The wound in my face had almost healed and I was a horrible-looking sight—the left cheek twisted into a knot, the eye pulled down, and my mouth pointing in a north by northwest direction. I was very down-hearted and could imagine myself during the rest of my life being shunned by all on account of the repulsive scar." Although reconstructive surgery did much to restore his prewar appearance, Empey never recovered entirely. Like hundreds of thousands of Americans who followed him, he was forever marked by his experiences on the Western Front.

Elsie Ferguson in “Hero Land”

In the immediate afterglow of World War I, Americans welcomed home the latest generation of wounded warriors as national heroes—men whose bodies bore the scars of Allied victory. Among the scores of prominent supporters was Elsie Ferguson, a Broadway actress and film star renowned for her maternal beauty and patrician demeanor.

During the war years, "The Aristocrat of the Silent Screen" had been an outspoken champion of the Allied effort, raising hundreds of thousands of dollars for liberty bonds through her stage performances and public rallies. After the Armistice, she regularly visited injured troops at Debarkation Hospital No. 5, one of nine makeshift convalescent facilities established in New York City in the winter of 1918. Nicknamed "Hero Land," No. 5 was housed in the lavish, nine-story Grand Central Palace, and was temporary home to more than 3,000 sick and wounded soldiers recently returned from European battlefields.

Unlike Ferguson's usual cohort, who reserved their heroics for the big screen, the patients at No. 5 did not resemble matinee idols—far from it. Veterans of Château-Thierry, Belleau Wood, and the Meuse-Argonne, many of the men were prematurely aged by disease and loss of limb. Others endured constant pain, their bodies wracked with the lingering effects of shrapnel and poisonous gas.

Like most observers in the early days of the Armistice, Ferguson was optimistic about such men's prospects for recovery. Chronicling her visits in Motion Picture Magazine, she reported that the residents of Hero Land received the finest care imaginable. Besides regular excursions to hot spots throughout the city, convalescing vets enjoyed in-house film screenings and stage shows, and the hospital storeroom (staffed by attractive Red Cross volunteers) was literally overflowing with cigarettes, chocolate bars, and "all the good things waiting to give comfort and pleasure to the men who withheld nothing in their giving to their country." The patients themselves were upbeat to a man and, in Ferguson's view, seemed to harbor no ill will about their injuries. Reflecting upon a young marine from Minnesota, now missing an arm and too weak to leave his bed, she echoed the sentiments of many postwar Americans: "The world loves these fighting men and a uniform is a sure passport to anything they want."

Still, the actress cautioned her readers against expecting too much, too soon. The road to readjustment was a long one, and Ferguson warned that the United States would never be a “healed nation” until its disabled doughboys were back on their feet.

Sunday at the Hippodrome

On the afternoon of Sunday, March 24, 1919, more than 5,000 spectators crowded into New York City's Hippodrome Theater to attend the culmination of the International Conference on Rehabilitation of the Disabled. The purpose of the conference was to foster an exchange of ideas about the rehabilitation of wounded and disabled soldiers in the wake of World War I. Earlier sessions, held the week before at Carnegie Hall and the Waldorf-Astoria, had been attended primarily by specialists in the field, among them representatives from the US Army Office of the Surgeon General, the French Ministry of War, the British Ministries of Pensions and Labor, and the Canadian Department of Soldiers' Civil Re-Establishment. But the final day was meant for a different audience. Part vaudeville, part civic revival, it was organized to raise mass support for the rehabilitation movement and to honor the men whose bodies bore the scars of the Allied victory.

The afternoon’s program opened with the debut performance of the People’s Liberty Chorus, a hastily organized vocal group whose female members were dressed as Red Cross nurses and arranged in a white rectangle across the back of the theater’s massive stage. As they belted out patriotic anthems, an American flag and other patriotic symbols flashed in colored lights above their heads. Between songs, the event’s host, former New York governor Charles Evans Hughes, introduced inspirational speakers, among them publisher Douglas C. McMurtrie, the foremost advocate of soldiers’ rehabilitation in the United States. In his own address, Hughes paid homage to the men and women working to reconstruct the bodies and lives of America’s war-wounded. He also extended a warm greeting to the more than 1,000 disabled soldiers and sailors in the audience, many transported to the theater in Red Cross ambulances from nearby hospitals and convalescent centers.

The high point of the afternoon’s proceedings came near the program’s end, when a small group of disabled men took the stage. Lewis Young, a bilateral arm amputee, thrilled the onlookers by lighting a cigarette and catching a ball with tools strapped to his shoulders. Charles Bennington, a professional dancer with one leg amputated above the knee, danced the “buck and wing” on his wooden peg, kicking his prosthetic high above his head. The last to address the crowd was Charles Dowling, already something of a celebrity for triumphing over his physical impairments. At the age of fourteen, Dowling had been caught in a Minnesota blizzard. The frostbite in his extremities was so severe that he eventually lost both legs and one arm to the surgeon’s saw. Now a bank president, Republican state congressman, and married father of three, he offered a message of hope to his newly disabled comrades:

I have found that you do not need hands and feet, but you do need courage and character. You must play the game like a thoroughbred. . . . You have been handicapped by the Hun, who could not win the fight. For most of you it will prove to be God’s greatest blessing, for few begin to think until they find themselves up against a stone wall.

Dowling stood before them as living proof that with hard work and careful preparation even the most severely disabled man could achieve lasting success. Furthermore, he chided the nondisabled in the audience not to “coddle” or “spoon-feed” America’s wounded warriors: “Don’t treat these boys like babies. Treat them like what they have proved themselves to be—men.”

The Sweet Bill

On December 15, 1919, representatives of the American Legion, the United States' largest organization of Great War veterans, gathered in Washington, DC, for the first skirmish in a decades-long campaign to expand federal benefits for disabled veterans. They had been invited by the head of the War Risk Insurance Bureau, R. G. Cholmeley-Jones, to take part in a three-day conference on reforming veterans' legislation. Foremost on the Legionnaires' agenda was the immediate passage of the Sweet Bill, a measure that would raise the base compensation rate for war-disabled veterans from $30 to $80 a month. Submitted by Representative Burton E. Sweet (R-IA) three months earlier, the bill had passed by a wide margin in the House but had yet to reach the Senate floor. Some members of the Senate Appropriations Committee were put off by the high cost of the legislation (upward of $80 million a year); others felt that the country had more pressing concerns—such as the fate of the League of Nations—than disabled veterans' relief. Meanwhile, as one veteran-friendly journalist lamented, war-injured doughboys languished in a kind of legislative limbo: "Men with two or more limbs gone, both eyes shot out, virulent tuberculosis and gas cases—these are the kind of men who have suffered from congress [sic] inaction."

After an opening day of mixed results, the Legionnaires reconvened on the Hill the following afternoon to press individual lawmakers about the urgency of the problem. That evening, leading members of Congress hosted the lobbyists at a dinner party in the Capitol basement. Before the meal began, Legionnaire H. H. Raegge, a single-leg amputee from Texas, caught a streetcar to nearby Walter Reed Hospital and returned with a group of convalescing vets. The men waited as the statesmen praised the Legionnaires' stalwart patriotism; then Thomas W. Miller, the chairman of the Legion's Legislative Committee, rose from his seat and introduced the evening's surprise guests. "These men are only twenty minutes away from your Capitol, Mr. Chairman [Indiana Republican senator James Eli Watson], and twenty minutes away from your offices, Mr. Cholmeley-Jones," Miller announced to the audience. "Every man has suffered—actually suffered—not only from his wounds, but in his spirit, which is a condition this great Nation's Government ought to change." Over the next three hours, the men from Walter Reed testified about the low morale of convalescing veterans, the "abuses" suffered at the hands of the hospital officials, and their relentless struggle to make ends meet. By the time it was over, according to one eyewitness, the lawmakers were reduced to tears. Within forty-eight hours, the Sweet Bill—substantially amended according to the Legion's recommendations—sailed through the Senate, and on Christmas Eve, Woodrow Wilson signed it into law.

For the newly formed American Legion, the Sweet Bill's passage represented more than a legislative victory. It marked the debut of the famed "Legion Lobby," whose skillful deployment of sentimentality and hard-knuckle politics has made it one of the most influential (and feared) pressure groups in US history. No less important, the story of the Sweet Bill became one of the group's founding myths, retold—often with new details and rhetorical flourish—at veterans' reunions throughout the following decades. Its message was self-aggrandizing, but it also had an element of truth: in the face of legislative gridlock, the American Legion was the best friend a disabled veteran could have.

Forget-Me-Not Day

On the morning of Saturday, December 17, 1921, an army of high school girls, society women, and recently disabled veterans assembled for one of the largest fund-raising campaigns since the end of World War I. The group's mission was to sell millions of handcrafted, crepe-paper forget-me-nots to be worn in remembrance of disabled veterans. Where the artificial blooms were unavailable, volunteers peddled sketches of the pale blue flowers or cardboard tags with the phrase "I Did Not Forget" printed on the front. The sales drive was the brainchild of the Disabled American Veterans of the World War (DAV), and proceeds went toward funding assorted relief programs for permanently injured doughboys. Event supporters hoped high turnout would put to rest any doubt about the nation's appreciation of disabled veterans and their families. As Governor Albert C. Ritchie told his Maryland residents on the eve of the flower drive: "Let us organize our gratitude so that in a year's time there will not be a single disabled soldier who can point an accusing finger at us."

Over the next decade, National Forget-Me-Not Day became a minor holiday in the United States. In 1922, patients from Washington, DC, hospitals presented a bouquet of forget-me-nots to first lady Florence Kling Harding, at the time recovering from a major illness. Her husband wore one of the little flowers pinned to his lapel, as did the entire White House staff. That same year, Broadway impresario George M. Cohan orchestrated massive Forget-Me-Not Day concerts in New York City. As bands played patriotic tunes, stage actresses worked the crowds, smiling, flirting, and raking in coins by the bucketful. According to press reports at the time, the flower sales were meant to perform a double duty for disabled vets. Pinned to a suit jacket or dress, a forget-me-not bloom provided a "visible tribute" to the bodily sacrifices of the nation's fighting men. As the manufacture of remembrance flowers evolved into a cottage industry for indigent vets, the sales drive acquired an additional motive: to turn a "community liability" into a "community asset."

Although press accounts of Forget-Me-Not Day reassured readers that “Americans Never Forget,” many disabled veterans remained skeptical.

From the holiday’s inception, the DAV tended to frame Forget-Me-Not Day in antagonistic terms, using the occasion to vent its frustration with the federal government, critics of veterans’ policies, and a forgetful public. Posters from the first sales drive featured an anonymous amputee on crutches, coupled with the accusation “Did you call it charity when they gave their legs, arms and eyes?” As triumphal memories of the Great War waned, moreover, Forget-Me-Not Day slogans turned increasingly hostile. “They can’t believe the nation is grateful if they are allowed to go hungry,” sneered one DAV slogan, two years before the start of the Great Depression. Another characterized the relationship between civilians and disabled vets as one of perpetual indebtedness: “You can never give them as much as they gave you.”

 James M. Kirwin

On November 26, 1939, three months after the start of World War II in Europe, James M. Kirwin, pastor of the St. James Catholic Church in Port Arthur, Texas, devoted his weekly newspaper column to one of the most haunting figures of the World War era: the "basket case." Originating as British army slang in World War I, the term referred to quadruple amputees, men so horrifically mangled in combat they had to be carried around in wicker baskets. Campfire stories about basket cases and other "living corpses" had circulated widely during the Great War's immediate aftermath. And Kirwin, a staunch isolationist fearful of US involvement in World War II, was eager to revive them as object lessons in the perils of military adventurism. "The basket case is helpless, but not useless," the preacher explained. "He can tell us what war is. He can tell us that if the United States sends troops to Europe, your son, your brother, father, husband, or sweetheart, may also be a basket case." In Kirwin's mind, mutilated soldiers were not heroes to be venerated; they were monstrosities, hideous reminders of why the United States should avoid overseas war-making at all costs. Facing an upsurge in pro-war sentiment, the reverend implored his readers to take the lessons of the basket case to heart: "We must not add to war's carnage and barbarity by drenching foreign fields with American blood. . . . Looking at the basket case, we know that for civilization's sake, we dare not, MUST NOT."

Harold Russell

The most famous disabled veteran of the "Good War" era never saw action overseas. On June 6, 1944, Harold Russell was serving as an Army instructor at Camp Mackall, North Carolina, when a defective explosive blew off both of his hands. Sent to Walter Reed Medical Center, Russell despaired at the thought of spending the rest of his days a cripple. As he recounted in his 1981 autobiography, The Best Years of My Life, "For a disabled veteran in 1944, 'rehabilitation' was not a realistic prospect. For all I knew, I was better off dead, and I had plenty of time to figure out if I was right." Not long after his arrival, his mood brightened after watching Meet Charlie McGonegal, an Army documentary about a rehabilitated veteran of World War I. Inspired by McGonegal's example—"I watched the film in awe," he recalled—Russell went on to star in his own Army rehabilitation film, and was eventually tapped by director William Wyler to act alongside Fredric March and Dana Andrews in The Best Years of Our Lives (1946), a Hollywood melodrama about three veterans attempting to pick up their lives after World War II.

Russell played Homer Parrish, a former high school quarterback turned sailor who lost his hands during an attack at sea. Much of Russell’s section of the film follows Homer’s anxieties about burdening his family—and especially his fiancée, Wilma—with his disability. In the film’s most poignant scene, Homer engages in a form of striptease, removing his articulated metal hooks and baring his naked stumps to Wilma—and, it turns out, to the largest ticket-buying audience since the release of Gone with the Wind. Even as Homer decries his own helplessness—“I’m as dependent as a baby,” he protests—Wilma tucks him into bed and pledges her everlasting love and fidelity. In the film’s finale, the young couple is married; however, there is little to suggest that Homer’s struggles are over.

Though Russell worried what disabled veterans would make of the film, The Best Years of Our Lives was a critical and box-office smash. For his portrayal of Homer Parrish, Russell would win not one but two Academy Awards (one for Best Supporting Actor and the other for "bringing aid and comfort to disabled veterans"). He would spend the next few decades working with American Veterans (AMVETS) and other veterans' groups to change public perceptions of disabled veterans. "Tragically, if somebody said 'physically disabled' in 1951," he later observed, "too many Americans thought only of street beggars. We DAV's were determined to change that." In 1961, he was appointed as vice chairman of the President's Committee on Employment of the Handicapped, succeeding to the chairman's role three years later.

A decade before his death in 2002, Russell returned to the public spotlight when he was forced to sell one of his Oscars to pay for his wife’s medical bills.

Tammy Duckworth

At first glance, Ladda Tammy Duckworth bears little resemblance to the popular stereotype of a wounded hero. The daughter of a Thai immigrant and an ex-Marine, the "self-described girlie girl" joined the ROTC in the early 1990s. Earning a master's degree in international affairs at George Washington University, she enlisted in the Illinois National Guard with the sole purpose of becoming a helicopter pilot, one of the few combat careers open to women at the time. Life in uniform was far from easy. Dubbed "mommy platoon leader," she endured routine verbal abuse from her male cohort. As she recalled to reporter Adam Weinstein, the men in her unit "knew that I was hypersensitive about wanting to be one of the guys, that I wanted to be—pardon my language—a swinging dick, just like everyone else, so they just poked. And I let them, that's the dumb thing." She persisted all the same, eventually coming to command more than forty troops.

On November 12, 2004, the thirty-six-year-old Duckworth was copiloting a Black Hawk helicopter in the skies above Iraq when a rocket-propelled grenade exploded just below her feet. The blast tore off her lower legs and she lost consciousness as her copilot struggled to guide the chopper to the ground. Duckworth awoke in a Baghdad field hospital, her right arm shattered and her body dangerously low on blood. Once stabilized, she followed the aerial trajectory of thousands of severely injured Iraq War soldiers—first to Germany and then on to Walter Reed Medical Center (in her words, the "amputee petting zoo"), where she became an instant celebrity and a prized photo-companion for visiting politicians.

Duckworth has since devoted her career to public service and veterans' advocacy. Narrowly defeated in her bid for Congress in 2006, she headed the Illinois Veterans Bureau from 2006 to 2009 and later went on to serve in the Obama administration as assistant secretary of the Department of Veterans Affairs. At the VA, she "boosted services for homeless vets and created an Office of Online Communications, staffing it with respected military bloggers to help with troops' day-to-day questions." Yet politics was never far from her heart, and in November 2012, Duckworth defeated Tea Party incumbent Joe Walsh to become the first female disabled veteran to serve in the House of Representatives.

Balanced atop her high-tech prostheses (one colored red, white, and blue; the other, a camouflage green), Duckworth might easily be caricatured as a supercrip, a term disability scholars use to describe inspirational figures who by sheer force of will manage to "triumph" over their disabilities and achieve extraordinary success. Indeed, it's easy to be awed by her remarkable determination, both before and after her injury. However, as just one of tens of thousands of disabled veterans of Afghanistan and Iraq, Duckworth is far less remarkable than many of us would like to believe. Nearly a century after Great War evangelists predicted the end of war-produced disability, she is a public reminder that the goal of safe, let alone disability-free, combat remains as elusive as ever.

To read more about Paying with Their Bodies, click here.

Add a Comment
41. Excerpt: A Significant Life

9780226235677

 

“A Meaningful Life”

An excerpt from A Significant Life: Human Meaning in a Silent Universe by Todd May

***

Let us start with a question. What does it mean to ask about the meaningfulness of life? It seems a simple question, but there are many ways to inflect it. We might ask, "What is the meaning of life?" Or we could ask it in the plural: "What are the meanings of life?" If we put the question either of these ways, we seem to be asking for a something or somethings, a what that gives a human life its meaningfulness. The universe is thought or hoped to contain something—a meaning—that is the point of our being alive. If the universe contains a meaning, then the task for us becomes one of discovery. It is built into the universe, part of its structure. In the image that some philosophers like to use, it is part of the "furniture" of the universe.

When we say that the meaning of life is independent of us—that is, independent of what any of us happens to believe about it—we do not need to believe that there would be a meaning to our lives even if none of us were around to live it. We only need to believe that whatever meaning there is to our lives, it is not in any way up to us what it is. What makes our lives meaningful, whether it arises at the same time as we do or not, does not arise as part of us.

The idea that something exists independent of us and that it is our task to discover it is how Camus thought of the meaning of life. If our lives are to be meaningful, it can only be because the universe contains a meaning that we can discern. And it is the failure not only to have discerned it but to have any prospect of discerning it that causes him to despair. The silence of the universe, the silence that affronts human nature's need for meaning, is that of the universe regarding meaning itself.

The universe, after all, is not silent about everything. It has yielded many of its workings to our inquiry. In many ways, the universe seems loquacious, and perhaps increasingly so. There are scientists who believe that physics may be on the cusp of articulating a unified theory of the universe. This unified theory would give us a complete account of its structure. But nowhere in this theory is there glimpsed a meaning that would satisfy our need for one. This is because either such a meaning does not exist or, if it does, it eludes our ability to recognize it.

The idea that the universe is meaningful precisely because it contains a meaning independent of us is not foreign to the history of philosophy. It is also not foreign to our own more everyday way of thinking. It has a long history, a history as long as the history of philosophy itself, and indeed probably longer. One form this way of thinking has taken is that of the ancient philosopher Aristotle.

For my own part, I long detested Aristotle's thought, what little I knew of it. For me, Aristotle was just a set of sometimes disjointed writings that I somehow had to get through in order to pass my qualifying exams in graduate school. It wasn't until a number of years into my teaching career that a student persuaded me to read him again. In particular, he insisted, the Nicomachean Ethics would speak to me. I doubted this, but I respected the student, so one semester I decided to incorporate a large part of the Ethics into a course I was teaching on moral theory. Teaching a philosopher is often a way to develop sympathy for him or her. It forces one to take up the thinker's perspective. Before embarking on the course, I recalled the words of the great historian of science Thomas Kuhn, who once said that he came to realize that he did not understand a thinker until he could see the world through that thinker's eyes. In fact, he said that he realized this after reading Aristotle's Physics. I figured that if anything would do the trick with Aristotle, teaching his Ethics would be it.

It did the trick.

Not only do I now find myself teaching the Ethics on a regular basis; once, in a moment of hubris, I even signed up to teach a senior-level seminar on Aristotle's general philosophy. In doing so, I told my students that I would try to defend every aspect of his thought, even the most obsolete aspects of his physics and biology. This forced me and the students to take his thought seriously as a synoptic vision of human life and the universe in which it unfolds.

Aristotle’s ethics, his view of a human life as a trajectory arcing from birth to death and his attempt to comprehend what the trajectory of a good human life would be, has left its mark on my own view of meaningfulness. His attempt to bring together the various elements of a life—reason, desire, the need for food and shelter—into a coherent whole displays a wisdom rarely found even among the most enlightened minds in the history of philosophy. It stands out particularly against the background of more recent developments in philosophy, which often concern themselves less with wisdom and more with specialized problems and the interpretations of other thinkers.

Aristotle talks not in terms of meaning, but of the good. So the ethical question for Aristotle is, What is the good aimed at by human beings? Or, to put it in more Aristotelean terms, What is the human telos? It is, in the Greek term, eudaemonia. Eudaemonia literally means "good" (eu) "spirit" (daemon). The term is commonly translated as "happiness." However, happiness as we use the word does not seem to capture much of what Aristotle portrays as a good human life. For Aristotle, eudaemonia is a way of living, a way of carrying out the trajectory of one's life. A more recent and perhaps better translation is "flourishing." Flourishing may seem a bit more technical than happiness, or perhaps a bit more dated, but that is one advantage it possesses. Rather than carrying our own assumptions into the reading of the term, it serves as a cipher. Its meaning can be determined by what Aristotle says about a good life rather than by what we already think about happiness.

Flourishing is the human telos. It is what being human is structured to aim at. Not all humans achieve a flourishing life. In fact, Aristotle thinks that a very flourishing life is rare. It is not difficult to see why. In order to flourish, one must have a reasonably strong mental and physical constitution, be nourished by the right conditions when one is young, be willing to cultivate one's virtue as one matures, and not face overwhelming tragedy during one's life. Many of us can attain to some degree of flourishing over the course of our lives, but a truly flourishing life: that is seldom achieved.

What is it to flourish, to trace a path in accordance with the good for human beings? The Nicomachean Ethics is a fairly long book. The English translation runs to several hundred pages. There are discussions of justice, friendship, desire, contemplation, politics, and the soul, all of which figure in detailing the aspects of flourishing. But Aristotle’s general definition of the good for human beings is concise: “The human good turns out to be the soul’s activity that expresses virtue.” The good life, the flourishing life, is an ongoing activity. And that activity expresses the character of the person living it, her virtue.

For Aristotle, the good life is not merely a state. One doesn't arrive at a good life. The telos of a human life is not an end result, where one becomes something and then spends the rest of one's life in that condition that one becomes. It is not like nirvana, an exiting of the trials of human existence into a state where they no longer disturb one's inner calm. It is, instead, active and engaged with the world. It is an ongoing expression of who one is. This does not mean that there is no inner peace. A person whose life is virtuous, Aristotle tells us, experiences more pleasure than a person whose life is not, and is unlikely to be undone by the tribulations of human existence. And a virtuous person, because he has more perspective, will certainly possess an inner calm that is not entirely foreign to the idea of nirvana. However, a good life is not simply the possession of that calm. It is one's very way of being in the world.

To be virtuous, to have any virtue to express, requires us to mold ourselves in a certain way. It requires us to fashion ourselves into virtuous people. Human beings are structured with the capacity to be virtuous. But most of us never get there. We are lazy, or we do not have the right models to follow, or else we do have the right models to follow but don't recognize them, or some combination of these failures and perhaps others. A human being, unless she is severely damaged by her genetic constitution or early profound trauma, can become virtuous, whether or not she really does. But to do so takes work, the kind of work that molds one's character into someone whose behavior consistently expresses virtue. Most of us are only partly up to the task.

What is this virtue that a good life expresses? For Aristotle, virtues are in the plural. Moreover, they come in two types, corresponding to two aspects of the human soul. The human soul has three parts. There is the vegetative or nutritive part: the part that breathes, sleeps, keeps the organism running biologically. Then there is desire. It is directed toward the world, wanting to have or to do or to be certain things. But desire is not blind. It is responsive to reason, which is the third part of the soul. And because desire is responsive to reason, there are virtues associated with desire, just as there are virtues associated with reason. There are virtues of character and virtues of thought.

The vegetative or nutritive part of the soul cannot have its own virtues, because it is immune to reason. Unlike desire, it cannot be controlled or directed or channeled. To be capable of virtue is to be capable of developing it. It is not to be already endowed with it. This requires that we can both recognize the virtue to be developed and develop ourselves in accordance with it. And to do that, we must be able to apply reason. I can apply reason to my desire to vent anger on my child when he has failed to recognize the need to share his toys with his little sister. In fact, I can do more than this. I can notice the anger when it begins to appear in inappropriate situations, reflect on its inappropriateness, lay the anger aside, and eventually mold myself into the kind of person who doesn't get angry when there is no need for it. With anger I can do this, but not with breathing.

The virtues of character include, among others, sincerity, temperance, courage, good temper, and modesty. For Aristotle, all of these virtues are matters of finding the right mean between extremes. Good temper, for example, is the mean between spiritlessness and irascibility. It is the mean I try to develop when I learn to refrain from getting angry in situations that do not call for it, as with my child. If I never got angry at all, though, that would not display good character any more than a readiness to vent would. There are situations that call for anger: when my child is older and does something knowingly cruel to another, or when my country acts callously toward its most vulnerable citizens. Virtues of character are matters of balance. We reflect on our desires, asking which among them to develop and when. Sometimes we need to learn restraint; sometimes, alternatively, we need to elicit expression. We are all (almost all) born with the ability to do this. What we need are models to show us the way and a willingness to work on ourselves.

Virtues of thought, in contrast to virtues of character, are matters of reason alone: understanding, wisdom, and intelligence, for instance. The goal of virtues of thought is to come to understand ourselves and our world. Like the virtues of character, they are active. And like the development of virtues of character, the development of virtues of thought is not a means to an end. The goal of these virtues is not simply to gain knowledge. It is to remain engaged intellectually with the world. As Aristotle tells us, “Wisdom produces happiness [or flourishing], not in the way that medical science produces health but in the way that health produces health.”

This point is easy to miss in our world. In contrast to when I attended college, many of my students are encouraged to think of their time in a university as nothing more than job training. It is not that previous generations were not encouraged to think in these terms. But there were other terms as well, terms concerning what might, perhaps a bit quaintly, be called "the life of the mind." In 1998, the New York Times reported that "in the [annual UCLA] survey taken at the start of the fall semester, 74.9 percent of freshmen chose being well off as an essential goal and 40.8 percent chose developing a philosophy. In 1968, the numbers were reversed, with 40.8 percent selecting financial security and 82.5 percent citing the importance of developing a philosophy." The threat the humanities face at many universities, from foreign languages to history to philosophy, signals a leery or even dismissive attitude toward a view of the university as helping students to "develop a philosophy." Aristotle insists that a good life is not one where our mental capacities are taken to be means to whatever ends are sought by our desires. It is instead a life in which the exercise of our mental capacities is an end in itself. In fact, although we need not follow him this far, for Aristotle contemplation is the highest good that a life can achieve. It is the good he associates with the gods.

What might a good life look like, a life that Aristotle envisions as the good life for human beings? How might we picture it?

We need not think of someone with almost superhuman capacities. A good person is not someone larger than life. Even less should we think of someone entirely altruistic, who dedicates her life to the good of others. That is a more Christian conception of a good life. It is foreign to Aristotle’s world, where a good life involves a dedication to self-cultivation. Last, we should not light upon famous people as examples of those who lead good lives. It may be that there are good lives among the famous. But a good life is not one that seeks fame, so whether a good person is famous or popular would be irrelevant to her. For this reason, we might expect fewer good lives among the famous, since public recognition often alights upon those who chase after it.

Instead, a good life is likely to be found among one's peers, but not among many of them. It would be among those who take up their lives seriously as a task, a task of a particular sort. They see themselves as material to be molded, even disciplined. Their discipline is dedicated to making them act and react in the proper ways to the conditions in which they find themselves. This discipline is not blind, however. It is not a military kind of discipline, where the goal is mindless conformity. It is a more reflective discipline, one where an understanding of the world and a desire to act well in it are combined to yield a person that we might call, in the contemporary sense of the word, wise.

It would be a mistake to picture the good life as overly reflective, though, and for two reasons. First, a good life, in keeping with the Greek ideal, requires sound mind and sound body. Cultivation of character is not inconsistent with cultivation of physical health. In fact, if recent studies are to be believed, good physical health contributes to a healthy mind. It is, of course, not the sole contributor. We cannot assume that because someone is athletic, he is a paragon of good character. On the contrary, the list of examples that would tell against this assumption would make for long and depressing reading. There is a mean to athleticism as there is a mean to the virtues. But the person lost to reflection, the person who mistakes himself for an ethereal substance or sees his body merely as an encumbrance to thought, is not leading a good human life. Even if, for Aristotle, contemplation is the highest good, it can only be sustained over a long period by the gods. The good human life is an embodied one.

Second, if a person cultivates herself rightly, then over time there should be less of a need for continued discipline. A good person will learn to act well automatically. It will become part of her nature. Someone who is flourishing, confronted with a choice between helping others where it would benefit herself and helping them in spite of the lack of benefit, would not even take the possibility of benefit as a relevant factor in the situation. It would remain in the background, never rising to the level of a consideration worth taking into account. The fact that she would benefit just wouldn’t matter.

It is not surprising, then, that for Aristotle the person who does not think of acting poorly, for whom it is not even a possibility, is leading a better life than someone who is tempted to evil but struggles, even successfully, with herself to overcome it. The latter person has not cultivated herself in the right way. She may be strong in battle but she is weak in character. This is one of the reasons Aristotle says that a good life is more pleasurable to the one living it than a bad one. Someone who has become virtuous is at peace with herself. She knows who she is and what she must do and does not wrestle with herself to do it. Rather, she takes pleasure in not having to wrestle with herself.

It might be thought that the good life would be solitary or overly self-involved. But for Aristotle this is not so. In fact, what he calls true friendship is possible only among the virtuous. The weak will always want something from a friend: encouragement, support, entertainment, flattery, a sense of one’s own significance. It is only when these needs are left behind that one can care for another for the sake of that other. True friendship and the companionship that comes with it are not the offspring of need; they are the progeny of strength. They arise when the question between friends is not what each can receive from the other but what each can offer.

The flourishing life depicted by Aristotle is certainly an attractive one. It is attractive both inside and out, to oneself and to others. To be in such control of one’s life and to have such a sense of direction must be a rewarding experience to the person living it. As Aristotle says, it is the most pleasurable life. And from the other side, there is much good that such a life brings to others. This good is given freely, as an excess or overflow of one’s own resources, rather than as an investment in future gain. It is a life that is lived well and does good.

But is it a meaningful life?

In order to answer that question, we must know something about what makes a life meaningful. If we were to ask Aristotle whether this life is meaningful, he would undoubtedly answer yes. The reason for this returns us to the framework of his thought. Everything has its good, its telos. To live according to one's telos is to be who or what one should be. It is to find one's place in the universe. For Aristotle, in contrast to Camus, the universe is not silent. It is capable of telling us our role and place. Or better, since the universe does not actually tell it to us, whisper it in our ear as it were, it allows us to find it. What we need to do is reflect upon the universe and upon human beings, and to notice the important facts about our human nature and abilities. Once we know these, we can figure out what the universe intends for us. That is what the Nicomachean Ethics does. When Camus seeks in vain for meaningfulness, he is seeking precisely what Aristotle thinks is always there, inscribed in the nature of things, part of the furniture of the universe.

The problem for us is that we are not Aristotle, or one of his contemporaries. We do not share the framework of his time. The universe is not ordered in such a way that everything has its telos. The cosmos is not for us as rational a place as he thought. Perhaps it can confer meaning on what we do. But even if it can, it will not be by means of allocating to everything its role in a judiciously organized whole.

***

To read more about A Significant Life, click here.

Add a Comment
42. Free e-book for April: Hybrid

9780226437132

Just in time for your ur-garden, our free e-book for April is Noel Kingsbury's Hybrid: The History and Science of Plant Breeding.

***

Disheartened by the shrink-wrapped, Styrofoam-packed state of contemporary supermarket fruits and vegetables, many shoppers hark back to a more innocent time, to visions of succulent red tomatoes plucked straight from the vine, gleaming orange carrots pulled from loamy brown soil, swirling heads of green lettuce basking in the sun.

With Hybrid, Noel Kingsbury reveals that even those imaginary perfect foods are themselves far from anything that could properly be called natural; rather, they represent the end of a millennia-long history of selective breeding and hybridization. Starting his story at the birth of agriculture, Kingsbury traces the history of human attempts to make plants more reliable, productive, and nutritious—a story that owes as much to accident and error as to innovation and experiment. Drawing on historical and scientific accounts, as well as a rich trove of anecdotes, Kingsbury shows how scientists, amateur breeders, and countless anonymous farmers and gardeners slowly caused the evolutionary pressures of nature to be supplanted by those of human needs—and thus led us from sparse wild grasses to succulent corn cobs, and from mealy, white wild carrots to the juicy vegetables we enjoy today. At the same time, Kingsbury reminds us that contemporary controversies over the Green Revolution and genetically modified crops are not new; plant breeding has always had a political dimension.

A powerful reminder of the complicated and ever-evolving relationship between humans and the natural world, Hybrid will give readers a thoughtful new perspective on—and a renewed appreciation of—the cereal crops, vegetables, fruits, and flowers that are central to our way of life.

***
Download your copy of Hybrid, here.

Add a Comment
43. Excerpt: Southern Provisions

9780226141114

An excerpt from Southern Provisions: The Creation and Revival of a Cuisine by David S. Shields

***

Rebooting a Cuisine

“I want to bring back Carolina Gold rice. I want there to be authentic Lowcountry cuisine again. Not the local branch of southern cooking incorporated.” That was Glenn Roberts in 2003 during the waning hours of a conference in Charleston exploring “The Cuisines of the Lowcountry and the Caribbean.”

When Jeffrey Pilcher, Nathalie Dupree, Marion Sullivan, Robert Lukey, and I brainstormed this meeting into shape over 2002, we paid scant attention to the word cuisine. I’m sure we all thought that it meant something like “a repertoire of refined dishes that inspired respect among the broad public interested in food.” We probably chose “cuisines” rather than “foodways” or “cookery” for the title because its associations with artistry would give it more splendor in the eyes of the two institutions—the College of Charleston and Johnson & Wales University—footing the administrative costs of the event. Our foremost concern was to bring three communities of people into conversation: culinary historians, chefs, and provisioners (i.e., farmers and fishermen) who produced the food cooked along the southern Atlantic coast and in the West Indies. Theorizing cuisine operated as a pretext.

Glenn Roberts numbered among the producers. The CEO of Anson Mills, he presided over the American company most deeply involved with growing, processing, and selling landrace grains to chefs. I knew him only by reputation. He grew and milled the most ancient and storied grains on the planet—antique strains of wheat, oats, spelt, rye, barley, farro, and corn—so that culinary professionals could make use of the deepest traditional flavor chords in cookery: porridges, breads, and alcoholic beverages. Given Roberts’s fascination with grains, expanding the scope of cultivars to include Carolina’s famous rice showed intellectual consistency. Yet I had always pegged him as a preservationist rather than a restorationist. He asked me, point-blank, whether I wished to participate in the effort to restore authentic Lowcountry cuisine.

Roberts pronounced cuisine with a peculiar inflection, suggesting that it was something that was and could be but that in 2003 did not exist in this part of the South. I knew in a crude way what he meant. Rice had been the glory of the southern coastal table, yet rice had not been commercially cultivated in the region since a hurricane breached the dykes and salted the soil of Carolina’s last commercial plantation in 1911. (Isolated planters on the Combahee River kept local stocks going until the Great Depression, and several families grew it for personal use until World War II, yet Carolina Gold rice disappeared on local grocers’ shelves in 1912.)

When Louisa Stoney and a network of Charleston’s grandes dames gathered their Carolina Rice Cook Book in 1901, the vast majority of ingredients were locally sourced. When John Martin Taylor compiled his Hoppin’ John’s Lowcountry Cooking in 1992, the local unavailability of traditional ingredients and a forgetfulness about the region’s foodways gave the volume a shock value, recalling the greatness of a tradition while alerting readers to its tenuous hold on the eating habits of the people.

Glenn Roberts had grown up tasting the remnants of the rice kitchen, his mother having mastered in her girlhood the art of Geechee black skillet cooking. In his younger days, Roberts worked on oyster boats, labored in fields, and cooked in Charleston restaurants, so when he turned to growing grain in the 1990s, he had a peculiar perspective on what he wished for: he knew he wanted to taste the terroir of the Lowcountry in the food. Because conventional agriculture had saturated the fields of coastal Carolina with pesticides, herbicides, and chemical fertilizers, he knew he had to restore the soil as well as bring Carolina Gold, and other crops, back into cultivation.

I told Roberts that I would help, blurting the promise before understanding the dimensions of what he proposed. Having witnessed the resurgence in Creole cooking in New Orleans and the efflorescence of Cajun cooking in the 1980s, and having read John Folse’s pioneering histories of Louisiana’s culinary traditions, I entertained romantic visions of lost foodways being restored and local communities being revitalized. My default opinions resembled those of an increasing body of persons, that fast food was aesthetically impoverished, that grocery preparations (snacks, cereals, and spreads) had sugared and salted themselves to a brutal lowest common denominator of taste, and that industrial agriculture was ensuring indifferent produce by masking local qualities of soil with chemical supplementations. When I said “yes,” I didn’t realize that good intentions are a kind of stupidity in the absence of an attuned intuition of the problems at hand. When Roberts asked whether I would like to restore a cuisine, my thoughts gravitated toward the payoffs on the consumption end of things: no insta-grits made of GMO corn in my shrimp and grits; no farm-raised South American tiger shrimp. In short, something we all knew around here would be improved.

It never occurred to me that the losses in Lowcountry food had been so great that we all don’t know jack about the splendor that was, even with the aid of historical savants such as “Hoppin’ John” Taylor. Nor did I realize that traditional cuisines cannot be understood simply by reading old cookbooks; you can’t simply re-create recipes and—voilà! Roberts, being a grower and miller, had fronted the problem: cuisines had to be understood from the production side, from the farming, not just the cooking or eating. If the ingredients are mediocre, there will be no revelation on the tongue. There is only one pathway to understanding how the old planters created rice that excited the gastronomes of Paris—the path leading into the dustiest, least-used stacks in the archive, those holding century-and-a-half-old agricultural journals, the most neglected body of early American writings.

In retrospect, I understand why Roberts approached me and not some chef with a penchant for antiquarian study or some champion of southern cooking. While I was interested in culinary history, it was not my interest but my method that drew Roberts. He must’ve known at the time that I create histories of subjects that have not been explored; that I write “total histories” using only primary sources, finding, reading, and analyzing every extant source of information. He needed someone who could navigate the dusty archive of American farming, a scholar who could reconstruct how cuisine came to be from the ground up. He found me in 2003.

At first, questions tugged in too many directions. When renovating a cuisine, what is it, exactly, that is being restored? An aesthetic of plant breeding? A farming system? A set of kitchen practices? A gastronomic philosophy? We decided not to exclude questions at the outset, but to pursue anything that might serve the goals of bringing back soil, restoring cultivars, and renovating traditional modes of food processing. The understandings being sought had to speak to a practice of growing and kitchen creation. We should not, we all agreed, approach cuisine as an ideal, a theoretical construction, or a utopian possibility.

Our starting point was a working definition of that word I had used so inattentively in the title of the conference: cuisine. What is a cuisine? How does it differ from diet, cookery, or food? Some traditions of reflection on these questions were helpful. Jean-François Revel’s insistence in Culture and Cuisine that cuisines are regional, not national, because of the enduring distinctiveness of local ingredients, meshed with the agricultural preoccupations of our project. Sidney Mintz usefully observed that a population “eats that cuisine with sufficient frequency to consider themselves experts on it. They all believe, and care that they believe, that they know what it consists of, how it is made, and how it should taste. In short, a genuine cuisine has common social roots.” The important point here is consciousness. Cuisine becomes a signature of community and, as such, becomes a source of pride, a focus of debate, and a means of projecting an identity in other places to other people.

There is, of course, a commercial dimension to this. If a locale becomes famous for its butter (as northern New York did in the nineteenth century) or cod (as New England did in the eighteenth century), a premium is paid in the market for those items from those places. The self-consciousness about ingredients gives rise to an artistry in their handling, a sense of tact from long experience of taste, and a desire among both household and professional cooks to satisfy the popular demand for dishes by improving their taste and harmonizing their accompaniments at the table.

One hallmark of the maturity of a locale’s culinary artistry is its discretion when incorporating non-local ingredients with the products of a region’s field, forest, and waters. Towns and cities with their markets and groceries invariably served as places where the melding of the world’s commodities with a region’s produce took place. Cuisines have two faces: a cosmopolitan face, prepared by professional cooks; and a common face, prepared by household cooks. In the modern world, a cuisine is at least bimodal in constitution, with an urbane style and a country vernacular style. At times, these stylistic differences become so pronounced that they describe two distinct foodways—the difference between Creole and Cajun food and their disparate histories, for example. More frequently, an urban center creates its style by elaborating the bounty of the surrounding countryside—the case of Baltimore and the Tidewater comes to mind.

With a picture of cuisine in hand, Roberts and I debated how to proceed in our understanding. In 2004 the Carolina Gold Rice Foundation was formed with the express purpose of advancing the cultivation of landrace grains and insuring the repatriation of Carolina Gold. Dr. Merle Shepard of Clemson University (head of the Clemson Coastal Experimental Station at Charleston), Dr. Richard Schulze (who planted the first late twentieth-century crops of Carolina Gold on his wetlands near Savannah), Campbell Coxe (the most experienced commercial rice farmer in the Carolinas), Max E. Hill (historian and planter), and Mack Rhodes and Charles Duell (whose Middleton Place showcased the historical importance of rice on the Lowcountry landscape) formed the original nucleus of the enterprise.

It took two and a half years before we knew enough to reformulate our concept of cuisine and historically contextualize the Carolina Rice Kitchen well enough to map our starting point for the work of replenishment—a reboot of Lowcountry cuisine. The key insights were as follows: The enduring distinctiveness of local ingredients arose from very distinct sets of historical circumstances and a confluence of English, French Huguenot, West African, and Native American foodways. What is grown where, when, and for what occurred for very particular reasons. A soil crisis in the early nineteenth century particularly shaped the Lowcountry cuisine that would come, distinguishing it from food produced and prepared elsewhere.

The landraces of rice, wheat, oats, rye, and corn that were brought into agriculture in the coastal Southeast were, during the eighteenth century, planted as cash crops, those same fields being replanted season after season, refreshed only with manuring until the early nineteenth century. Then the boom in long staple Sea Island cotton, a very “exhausting” plant, pushed Lowcountry soil into crisis. (A similar crisis related to tobacco culture and soil erosion because of faulty plowing methods afflicted Maryland, Virginia, and North Carolina.) The soil crisis led to the depopulation of agricultural lands as enterprising sons went westward seeking newly cleared land, causing a decline in production, followed by rising farm debt and social distress. The South began to echo with lamentations and warnings proclaimed by a generation of agrarian prophets—John Taylor of Caroline County in Virginia, George W. Jeffreys of North Carolina, Nicholas Herbemont of South Carolina, and Thomas Spalding of Georgia. Their message: Unless the soil is saved; unless crop rotations that build nutrition in soil be instituted; unless agriculture be diversified—then the long-cultivated portions of the South will become a wasteland. In response to the crisis in the 1820s, planters formed associations; they published agricultural journals to exchange information; they read; they planted new crops and employed new techniques of plowing and tilling; they rotated, intercropped, and fallowed fields. The age of experiment began in American agriculture with a vengeance.

The Southern Agriculturist magazine (founded 1828) operated as the engine of changes in the Lowcountry. In its pages, a host of planter-contributors published rotations they had developed for rice, theories of geoponics (soil nourishment), alternatives to monoculture, and descriptions of the world of horticultural options. Just as Judge Jesse Buel in Albany, New York, systematized the northern dairy farm into a self-reliant entity with livestock, pastures, fields, orchard, garden, and dairy interacting for optimum benefit, southern experimentalists conceived of the model plantation. A generation of literate rice planters—Robert F. W. Allston, J. Bryan, Calvin Emmons, James Ferguson, William Hunter, Roswell King, Charles Munnerlyn, Thomas Pinckney, and Hugh Rose—contributed to the conversation, overseen by William Washington, chair of the Committee on Experiments of the South Carolina Agricultural Society. Regularizing the crop rotations, diversifying cultivars, and rationalizing plantation operations gave rise to the distinctive set of ingredients that coalesced into what came to be called the Carolina Rice Kitchen, the cuisine of the Lowcountry.

Now, in order to reconstruct the food production of the Lowcountry, one needs a picture of how the plantations and farms worked internally with respect to local markets, in connection with regional markets, and in terms of commodity trade. One has to know how the field crops, kitchen garden, flower and herb garden, livestock pen, dairy, and kitchen cooperated. Within the matrix of uses, any plant or animal that could be employed in multiple ways would be more widely raised in a locality and more often cycled into cultivation. The sweet potato, for instance, performed many tasks on the plantation: It served as winter feed for livestock, its leaves as fodder; it formed one of the staple foods for slaves; it sold well as a local-market commodity for the home table; and its allelopathic (growth-inhibiting) chemistry made it useful in weed suppression. Our first understandings of locality came by tracing the multiple transits of individual plants through farms, markets, kitchens, and seed brokerages.

After the 1840s, when experiments stabilized into conventions on Lowcountry plantations, certain items became fixtures in the fields. Besides the sweet potato, one found benne (low-oil West African sesame), corn, colewort/kale/collards, field peas, peanuts, and, late in the 1850s, sorghum. Each one of these plant types would undergo intensive breeding trials, creating new varieties that (a) performed more good for the soil and welfare of the rotation’s other crops; (b) attracted more purchasers at the market; (c) tasted better to the breeder or his livestock; (d) grew more productively than other varieties; and (e) proved more resistant to drought, disease, and infestation than other varieties.

From 1800 to the Civil War, the number of vegetables, the varieties of a given vegetable, the number of fruit trees, the number of ornamental flowers, and the numbers of cattle, pigs, sheep, goat, and fowl breeds all multiplied prodigiously in the United States, in general, and the Lowcountry, in particular. The seedsman, the orchardist, the livestock breeder, the horticulturist—experimentalists who maintained model farms, nurseries, and breeding herds—became fixtures of the agricultural scene and drove innovation. One such figure was J. V. Jones of Burke County, Georgia, a breeder of field peas in the 1840s and ’50s. In the colonial era, field peas (cowpeas) grew in the garden patches of African slaves, along with okra, benne, watermelon, and guinea squash. Like those other West African plants, their cultivation was taken up by white planters. At first, they grew field peas as fodder for livestock because the pea inspired great desire among hogs, cattle, and horses. (Hence the popular name cowpea.) Early in the nineteenth century, growers noticed that it improved soils strained by “exhausting plants.” With applications as a green manure, a table pea, and livestock feed, the field pea inspired experiments in breeding with the ends of making it less chalky tasting, more productive, and less prone to mildew when being dried to pea hay. Jones reported on his trials. He grew every sort of pea he could obtain, crossing varieties in the hopes of breeding a pea with superior traits.

  1. Blue Pea, hardy and prolific. A crop of this pea can be matured in less than 60 days from date of planting the seed. Valuable.
  2. Lady, matures with No. 1. Not so prolific and hardy. A delicious table pea.
  3. Rice, most valuable table variety known, and should be grown universally wherever the pea can make a habitation.
  4. Relief, another valuable table kind, with brown pods.
  5. Flint Crowder, very profitable.
  6. Flesh, very profitable.
  7. Sugar, very profitable.
  8. Grey, very profitable. More so than 5, 6, 7. [Tory Pea]
  9. Early Spotted, brown hulls or pods.
  10. Early Locust, brown hulls, valuable.
  11. Late Locust, purple hulls, not profitable.
  12. Black Eyes, valuable for stock.
  13. Early Black Spotted, matures with nos. 1, 2, and 3.
  14. Goat, so called, I presume, from its spots. Very valuable, and a hard kind to shell.
  15. Small Black, very valuable, lies on the field all winter with the power of reproduction.
  16. Large Black Crowder, the largest pea known, and produces great and luxuriant vines. A splendid variety.
  17. Brown Spotted, equal to nos. 6, 7, 8 and 14.
  18. Claret Spotted, equal to nos. 6, 7, 8 and 14.
  19. Large Spotted, equal to nos. 6, 7, 8 and 14.
  20. Jones Little Claret Crowder. It is my opinion a greater quantity in pounds and bushels can be grown per acre of this pea, than any other grain with the knowledge of man. Matures with nos. 1, 2, 3, 9 and 13, and one of the most valuable.
  21. Jones Black Hull, prolific and profitable.
  22. Jones Yellow Hay, valuable for hay only.
  23. Jones no. 1, new and very valuable; originated in the last 2 years.
  24. Chickasaw, its value is as yet unknown. Ignorance has abused it.
  25. Shinney or Java, this is the Prince of Peas.

The list dramatizes the complex of qualities that bear on the judgments of plant breeders—flavor, profitability, feed potential, processability, ability to self-seed, productivity, and utility as hay. And it suggests the genius of agriculture in the age of experiment—the creation of a myriad of tastes and uses.

At this juncture, we confront a problem of culinary history. If one writes the history of taste as it is usually written, using the cookbook authors and chefs as the spokespersons for developments, one will not register the multiple taste options that pea breeders created. Recipes with gnomic reticence call for field peas (or cowpeas). One would not know, for example, that the Shinney pea, the large white lady pea, or the small white rice pea would be most suitable for this or that dish. It is only in the agricultural literature that we learn that the Sea Island red pea was the traditional pea used in rice stews, or that the red Tory pea with molasses and a ham hock made a dish rivaling Boston baked beans.

Growers drove taste innovation in American grains, legumes, and vegetables during the age of experiment. And their views about texture, quality, and application were expressed in seed catalogs, agricultural journals, and horticultural handbooks. If one wishes to understand what was distinctive about regional cookery in the United States, the cookbook supplies but a partial apprehension at best. New England’s plenitude of squashes, to take another example, is best comprehended by reading James J. H. Gregory’s Squashes: How to Grow Them (1867), not Mrs. N. Orr’s De Witt’s Connecticut Cook Book, and Housekeeper’s Assistant (1871). In the pages of the 1869 annual report of the Massachusetts Board of Agriculture, we encounter the expert observation, “As a general rule, the Turban and Hubbard are too grainy in texture to enter the structure of that grand Yankee luxury, a squash pie. For this the Marrow [autumnal marrow squash] excels, and this, I hold, is now the proper sphere of this squash; it is now a pie squash.” No cookbook contains so trenchant an assessment, and when the marrow squash receives mention, it is suggested merely as a milder-flavored alternative for pumpkin pie.

Wendell Berry’s maxim that “eating is an agricultural act” finds support in nineteenth-century agricultural letters. The aesthetics of planting, breeding, and eating formed a whole sense of the ends of agriculture. No cookbook would tell you why a farmer chose a clay pea to intercrop with white flint corn, or a lady pea, or a black Crowder, but a reader of the agricultural press would know that the clay pea would be plowed under with the corn to fertilize a field (a practice on some rice fields every fourth year), that the lady pea would be harvested for human consumption, and that the black Crowder would be cut for cattle feed. Only by reading a pea savant like J. V. Jones would one know that a black-eyed pea was regarded as “valuable for stock” but too common tasting to recommend it for the supper table.

When the question that guides one’s reading is which pea or peas should be planted today to build the nitrogen level of the soil and complement the grains and vegetables of Lowcountry cuisines, the multiplicity of varieties suggests an answer. That J. V. Jones grew at least four of his own creations, as well as twenty-one other reputable types, indicates that one should grow several sorts of field peas, with each sort targeted to a desired end. The instincts of southern seed savers such as Dr. David Bradshaw, Bill Best, and John Coykendall were correct—to preserve the richness of southern pea culture, one had to keep multiple strains of cowpea viable. Glenn Roberts and the Carolina Gold Rice Foundation have concentrated on two categories of peas—those favored in rice dishes and those known for soil replenishment. The culinary peas are the Sea Island red pea, known for traditional dishes such as reezy peezy, red pea soup, and red pea gravy; and the rice pea, cooked as an edible pod pea, for most hoppin’ John recipes and for the most refined version of field peas with butter. For soil building, iron and clay peas have been a mainstay of warm-zone agriculture since the second half of the nineteenth century.

It should be clear by this juncture that this inquiry differs from the projects most frequently encountered in food history. Here, the value of a cultivar or dish does not reside in its being a heritage marker, a survival from an originating culture previous to its uses in southern planting and cooking. The Native American origins of a Chickasaw plum, the African origins of okra, the Swedish origins of the rutabaga don’t much matter for our purposes. This is not to discount the worth of the sort of etiological food genealogies that Gary Nabhan performs with the foods of Native peoples, that Karen Hess performed with the cooking of Jewish conversos, or that Jessica Harris and others perform in their explorations of the food of the African diaspora, but the hallmark of the experimental age was change in what was grown—importation, alteration, ramification, improvement, and repurposing. The parched and boiled peanuts/pindars of West Africa were used for oil production and peanut butter. Sorghum, or imphee grass, employed in beer brewing and making flat breads in West Africa and Natal became in the hands of American experimentalists a sugar-producing plant. That said, the expropriations and experimental transformations did not entirely supplant traditional uses. The work of agronomist George Washington Carver at the Tuskegee Agricultural Experiment Station commands particular notice because it combines its novel recommendations for industrial and commercial uses of plants as lubricants, blacking, and toothpaste, with a thoroughgoing recovery of the repertoire of Deep South African American sweet potato, cowpea, and peanut cookery in an effort to present the maximum utility of the ingredients.

While part of this study does depend on the work that Joyce E. Chaplin and Max Edelson have published on the engagement of southern planters with science, it departs from the literature concerned with agricultural reform in the South. Because this exploration proceeds from the factum brutum of an achieved regional cuisine produced as the result of agricultural innovations, market evolutions, and kitchen creativity, it stands somewhat at odds with that literature, which argues the ineffectuality of agricultural reform. Works in this tradition—Charles G. Steffen’s “In Search of the Good Overseer” or William M. Mathew’s Edmund Ruffin and the Crisis of Slavery in the Old South—argue that what passed for innovation in farming was a charade, and that soil restoration and crop diversification were fitful at best. When a forkful of hominy made from the white flint corn perfected in the 1830s on the Sea Islands melts on one’s tongue, there is little doubting that something splendid has been achieved.

The sorts of experiments that produced white flint corn, the rice pea, and the long-grain form of Carolina Gold rice did not cease with the Civil War. Indeed, with the armistice, the scope and intensity of experimentation increased as the economies of the coast rearranged from staple production to truck farming. The reliance on agricultural improvement would culminate in the formation of the network of agricultural experimental stations in the wake of the Hatch Act of 1887. One finding of our research has been that the fullness of Lowcountry agriculture and the efflorescence of Lowcountry cuisine came about during the Reconstruction era, and its heyday continued into the second decade of the twentieth century.

The Lowcountry was in no way exceptional in its embrace of experiments and improvement or insular in its view of what should be grown. In the 1830s, when Carolina horticulturists read about the success that northern growers had with Russian strains of rhubarb, several persons attempted with modest success to grow it in kitchen gardens. Readers of Alexander von Humboldt’s accounts of the commodities of South America experimented with Peruvian quinoa in grain rotations. Because agricultural letters and print mediated the conversations of the experimentalists, and because regional journals reprinted extensively from other journals from other places, a curiosity about the best variety of vegetables, fruits, and berries grown anywhere regularly led many to secure seed from northern brokers (only the Landreth Seed Company of Pennsylvania maintained staff in the Lowcountry), or from seedsmen in England, France, and Germany. Planters regularly sought new sweet potato varieties from Central and South America, new citrus fruit from Asia, and melons wherever they might be had.

Because of the cosmopolitan sourcing of things grown, the idea of a regional agriculture growing organically out of the indigenous productions of a geographically delimited zone becomes questionable. (The case of the harvest of game animals and fish is different.) There is, of course, a kind of provocative poetry to reminding persons, as Gary Nabhan has done, that portions of the Southeast once regarded the American chestnut as a staple, and to food-mapping an area as “Chestnut Nation,” yet it has little resonance for a population that has never tasted an American chestnut in their lifetime. Rather, region makes sense only as a geography mapped by consciousness—by a community’s attestation in naming, argumentation, and sometimes attempts at legal delimitation of a place.

We can see the inflection of territory with consciousness in the history of the name “Lowcountry.” It emerges as “low country” in the work of early nineteenth-century geographers and geologists who were attempting to characterize the topography of the states and territories of the young nation. In 1812 Jedidiah Morse uses “low country” in the American Universal Gazetteer to designate the coastal mainland of North Carolina, South Carolina, and Georgia. Originally, the Sea Islands were viewed as a separate topography. “The sea coast,” he writes, “is bordered with a fine chain of islands, between which and the shore there is a very convenient navigation. The main land is naturally divided into the Lower and Upper country. The low country extends 80 or 100 miles from the coast, and is covered with extensive forests of pitch pine, called pine barrens, interspersed with swamps and marshes of rich soil.” Geologist Elisha Mitchell took up the characterization in his 1828 article, “On the Character and Origin of the Low Country of North Carolina,” defining the region east of the Pee Dee River to the Atlantic coast by a stratigraphy of sand and clay layers as the low country. Within a generation, the designation had entered into the usage of the population as a way of characterizing a distinctive way of growing practiced on coastal lands. Wilmot Gibbs, a wheat farmer in Chester County in the South Carolina midlands, observed in a report to the US Patent Office: “The sweet potatoes do better, much better on sandy soil, and though not to be compared in quantity and quality with the lowcountry sweet potatoes, yet yield a fair crop.” Two words became one word. And when culture—agriculture—inflected the understanding of region, the boundaries of the map altered. The northern boundary of rice growing and the northern range of the cabbage palmetto were just north of Wilmington, North Carolina. The northern bound of USDA Plant Hardiness Zone 8 in the Cape Fear River drainage became the cultural terminus of the Lowcountry. Agriculturally, the farming on the Sea Islands differed little from that on the mainland, so they became assimilated into the cultural Lowcountry. And since the Sea Islands extended to Amelia Island, Florida, the Lowcountry extended into east Florida. What remained indistinct and subject to debate was the interior bound of the Lowcountry. Was the St. Johns River region in Florida assimilated into it, or not? Did it end where tidal flow became negligible upriver on the major coastal estuaries? Perceptual regions that do not evolve into legislated territories, such as the French wine regions, should be treated with a recognition of their mutable shape.

Cuisines are regional to the extent that the ingredients the region supplies to the kitchen are distinctive, not seen as a signature of another place. Consequently, Lowcountry cuisine must be understood comparatively, contrasting its features with those of other perceived styles, such as “southern cooking” or “tidewater cuisine” or “New Orleans Creole cooking” or “American school cooking” or “cosmopolitan hotel gastronomy.” The comparisons will take place, however, acknowledging that all of these styles share a deep grammar. A common store of ancient landrace grains (wheat, spelt, rye, barley, oats, corn, rice, millet, farro), the oil seeds and fruits (sesame, sunflower, rapeseed, linseed, olive), the livestock, the root vegetables, the fruit trees, the garden vegetables, the nuts, the berries, the game, and the fowls—all these supply a broad canvas against which the novel syncretisms and breeders’ creations emerge. It is easy to overstate the peculiarity of a region’s farming or food.

One of the hallmarks of the age of experiment was openness to new plants from other parts of the world. There was nothing of the culinary purism that drove the expulsion of “ignoble grapes” from France in the 1930s. Nor was there the kind of nationalist food security fixation that drives the current Plant Protection and Quarantine (PPQ) protocols of the USDA. In that era, before crop monocultures made vast stretches of American countryside an uninterrupted banquet for viruses, disease organisms, and insect pests, nightmares of continental pestilence did not roil agronomists. The desire to plant a healthier, tastier, more productive sweet potato had planters working their connections in the West Indies and South America for new varieties. Periodically, an imported variety—a cross between old cultivated varieties, a cross between a traditional and an imported variety, or a sport of an old or new variety—proved something so splendid that it became a classic, a brand, a market variety, a seed catalog–illustrated plant. Examples of these include the Carolina African peanut, the Bradford watermelon, the Georgia pumpkin yam, the Hanson lettuce, Sea Island white flint corn, the Virginia peanut, the Carolina Long Gold rice, the Charleston Wakefield cabbage, and the Dancy tangerine. That something from a foreign clime might be acculturated, becoming central to an American regional cuisine, was more usual than not.

With the rise of the commercial seedsmen, naming of vegetable varieties became chaotic. Northern breeders rebranded the popular white-fleshed Hayman sweet potato, first brought from the West Indies into North Carolina in 1854, as the “Southern Queen sweet potato” in the hope of securing the big southern market, or as the “West Indian White.” Whether a seedsman tweaked a strain or not, it appeared in the catalogs as new and improved. Only with the aid of the skeptical field-trial reporters working the experimental stations of the 1890s can one see that the number of horticultural and pomological novelties named as being available for purchase substantially exceeds the number of varieties that actually exist.

A number of plant varieties enjoyed a sufficient following to resist the yearly tide of “new and improved” alternatives. They survived over decades, supported by devotees or retained by experimental stations and commercial breeders as breeding stock. Of Jones’s list of cowpeas, for instance, the blue, the lady, the rice, the flint Crowder, the claret, the small black, the black-eyed, and Shinney peas still exist in twenty-first-century fields, and two remain in commercial cultivation: the lady and the Crowder.

In order to bring back the surviving old varieties important in traditional Lowcountry cuisine yet no longer commercially farmed, Dr. Merle Shepard, Glenn Roberts, and I sought them in germplasm banks and through the networks of growers and seed savers. Some important items seem irrevocably lost: the Neunan’s strawberry and the Hoffman seedling strawberry, both massively cultivated during the truck-farming era in the decades following the Civil War. The Ravenscroft watermelon has perished. Because of the premium placed on taste in nineteenth-century plant and fruit breeding, we believed the repatriation of old strains to be important. Yet we by no means believed that skill at plant breeding suddenly ceased in 1900. Rather, the aesthetics of breeding changed so that cold tolerance, productivity, quick maturity, disease resistance, transportability, and slow decay often trumped taste in the list of desiderata. The recent revelation that the commercial tomato’s roundness and redness were genetically accomplished at the expense of certain of the alleles governing taste quality is only the most conspicuous instance of the subordination of flavor in recent breeding aesthetics.

We have reversed the priority—asserting the primacy of taste over other qualities in a plant. We cherish plants that in the eyes of industrial farmers may seem inefficient, underproductive, or vulnerable to disease and depredation because they offer more to the kitchen, to the tongue, and to the imagination. The simple fact that a plant is an heirloom does not make it pertinent for our purposes. It had to have had traction agriculturally and culinarily. It had to retain its vaunted flavor. Glenn Roberts sought with particular avidity the old landrace grains because their flavors provided the fundamental notes comprising the harmonics of Western food, both bread and alcohol. The more ancient, the better. I sought benne, peanuts, sieva beans, asparagus, peppers, squashes, and root vegetables. Our conviction has been—and is—that the quality of the ingredients will determine the vitality of Lowcountry cuisine.

While the repertoire of dishes created in Lowcountry cuisine interested us greatly, and while we studied the half-dozen nineteenth-century cookbooks, the several dozen manuscript recipe collections, and the newspaper recipe literature with the greatest attention, we realized that our project was not the culinary equivalent of Civil War reenactment, a kind of temporary evacuation of the present for some vision of the past. Rather, we wanted to revive the ingredients that had made that food so memorable and make the tastes available again, so the best cooks of this moment could combine them to invoke or invent a cooking rich with this place. Roberts was too marked by his Californian youth, I by formative years in Japan, Shepard by his long engagement with Asian food culture, and Campbell Coxe by his late twentieth-century business mentality, to yearn for some antebellum never-never land of big house banqueting. What did move us, however, was the taste of rice. We all could savor the faint hazelnut delicacy, the luxurious melting wholesomeness of Carolina Gold. And we all wondered at those tales of Charleston hotel chefs of the Reconstruction era who could identify the stretch of river where a plate of gold rice had been nourished. They could, they claimed, taste the water and the soil in the rice.

The quality of ingredients depends upon the quality of the soil, and this book is not, to my regret, a recovery of the lost art of soil building. Though we have unearthed, with the aid of Dr. Stephen Spratt, a substantial body of information about crop rotations and their effects, and though certain of these traditional rotations have been followed in growing rice, benne, corn, beans, wheat, oats, et cetera, we can’t point to a particular method of treating soil that we could attest as having been sufficient and sustainable in its fertility in all cases. While individual planters hit upon soil-building solutions for their complex of holdings, particularly in the Sea Islands and in the Pee Dee River basin, these were often vast operations employing swamp muck, rather than dung, as a manure. Even planter-savants, such as John Couper and Thomas Spalding, felt they had not optimized the growing potential of their lands. Planters who farmed land that had suffered fertility decline and were bringing it back to viability often felt dissatisfaction because its productivity could not match the newly cleared lands in Alabama, Louisiana, Texas, and Mississippi. Lowcountry planters were undersold by producers to the west. Hence, coastal planters heeded the promises of the great advocates of manure—Edmund Ruffin’s call to crush fossilized limestone and spread calcareous manures on fields, or Alexander von Humboldt’s scientific case for Peruvian guano—as the answer to amplifying yield per acre. Those who could afford it became guano addicts. Slowly, southern planters became habituated to the idea that in order to yield, a field needed some sort of chemical supplementation. It was then a short step to industrially produced chemical fertilizers.

What we now know to be irrefutably true, after a decade of Glenn Roberts’s field work, is that grain and vegetables grown in soil that has never been subjected to the chemical supplementations of conventional agriculture, or raised in fields cleansed of the chemicals by repeated organic grow-outs, possess greater depth and distinct local inflections of flavor. Tongues taste terroir. This is a truth confirmed by the work of other cuisine restorationists in other areas—I think particularly of Dan Barber’s work at Stone Barns Center in northern New York and John Coykendall’s work in Tennessee.

Our conviction that enhancing the quality of the flavors a region produces is the goal of our agricultural work gives our efforts a clarity of purpose that enables sure decision making at the local level. We realize, of course, the human and animal health benefits from consuming food free of toxins and chemical additives. We know that the preservation of the soil and the treatment of water resources in a non-exploitative way constitute a kind of virtue. But without the aesthetic focus on flavor, the ethical treatment of resources will hardly succeed. When pleasure coincides with virtue, the prospect of an enduring change in the production and treatment of food takes on solidity.

Since its organization a decade ago, the Carolina Gold Rice Foundation has published material on rice culture and the cultivation of landrace grains. By 2010 it became apparent that the information we had gleaned and the practical experience we had gained in plant repatriations had reached a threshold permitting a more public presentation of our historical sense of this regional cuisine, its original conditions of production, and observations on its preparation. After substantial conversation about the shape of this study with Roberts, Shepard, Bernard L. Herman, John T. Edge, Nathalie Dupree, Sean Brock, Linton Hopkins, Jim Kibler, and Marcie Cohen Ferris, I determined that it should not resort to the conventional chronological, academic organization of the subject, nor should it rely on the specialized languages of botany, agronomy, or nutrition. My desire in writing Southern Provisions was to treat the subject so that a reader could trace the connections between plants, plantations, growers, seed brokers, markets, vendors, cooks, and consumers. The focus of attention had to alter, following the transit of food from field to market, from garden to table. The entire landscape of the Lowcountry had to be included, from the Wilmington peanut patches to the truck farms of the Charleston Neck, from the cane fields of the Georgia Sea Islands to the citrus groves of Amelia Island, Florida. For comparison’s sake, there had to be moments when attention turned to food of the South generally, to the West Indies, and to the United States more generally.

In current books charting alternatives to conventional agriculture, there has been a strong and understandable tendency to announce crisis. This was also the common tactic of writers at the beginning of the age of experimentation in the 1810s and ’20s. Yet here, curiosity and pleasure, the quest to understand a rich world of taste, direct our inquiry more than fear and trepidation.

***

To read more about Southern Provisions, click here.

44. Free e-book for March: Freud’s Couch, Scott’s Buttocks, Brontë’s Grave

9780226301310

Our free e-book for March is Freud’s Couch, Scott’s Buttocks, Brontë’s Grave by Simon Goldhill. Read more and download your copy below.

***

The Victorian era was the high point of literary tourism. Writers such as Charles Dickens, George Eliot, and Sir Walter Scott became celebrities, and readers trekked far and wide for a glimpse of the places where their heroes wrote and thought, walked and talked. Even Shakespeare was roped in, as Victorian entrepreneurs transformed quiet Stratford-upon-Avon into a combination shrine and tourist trap.

Stratford continues to lure the tourists today, as do many other sites of literary pilgrimage throughout Britain. And our modern age could have no better guide to such places than Simon Goldhill. In Freud’s Couch, Scott’s Buttocks, Brontë’s Grave, Goldhill makes a pilgrimage to Sir Walter Scott’s baronial mansion, Wordsworth’s cottage in the Lake District, the Brontë parsonage, Shakespeare’s birthplace, and Freud’s office in Hampstead. Traveling, as much as possible, by methods available to Victorians—and gamely negotiating distractions ranging from broken bicycles to a flock of giggling Japanese schoolgirls—he tries to discern what our forebears were looking for at these sites, as well as what they have to say to the modern mind. What does it matter that Emily Brontë’s hidden passions burned in this specific room? What does it mean, especially now that his fame has faded, that Scott self-consciously built an extravagant castle suitable for Ivanhoe—and star-struck tourists visited it while he was still living there? Or that Freud’s meticulous recreation of his Vienna office is now a meticulously preserved museum of itself? Or that Shakespeare’s birthplace features student actors declaiming snippets of his plays . . . in the garden of a house where he almost certainly never wrote a single line?

Goldhill brings to these inquiries his trademark wry humor and a lifetime’s engagement with literature. The result is a travel book like no other, a reminder that even today, the writing life still has the power to inspire.

To download a copy, click here.

 

45. Blood Runs Green: Your nineteenth-century Chicago true crime novel

9780226248950

Below follows a well-contextualized teaser, or a clue (depending on your penchant for genre), from Sharon Wheeler’s full-length review of Blood Runs Green: The Murder that Transfixed Gilded Age Chicago at Inside Higher Ed.

Blood Runs Green is that rarer beast—academic research in the guise of a true crime account. But it leaps off the page like the best fictional murder mystery. Mind you, any author presenting these characters to a publisher under the banner of a novel would probably be sent away to rein in their over-fertile imagination. As Gillian O’Brien says: “The story had everything an editor could want: conspiracy, theft, dynamite, betrayal, and murder.”

So this is far more than just a racy account of a murder in 1880s Chicago, a city built by the Irish, so the boast goes (by the late 1880s, 17 per cent of its population was Irish or Irish-American). At the book’s core is the story of Irish immigrants in the US, and the fight for Irish independence through the secret republican society Clan na Gael. In England, and running parallel to events in America, is the saga of Charles Stewart Parnell, a British MP and leading figure in the Home Rule movement.

Who got bumped off is an easy one to answer: Patrick Cronin, a Chicago doctor, Clan na Gael supporter, and a chap renowned for belting out God Save Ireland at fundraising events. Whodunnit? Ah, well, now you’re asking.

To read more about Blood Runs Green, click here.

46. The AACM at 50

OLDAACM

2015 marks the 50th anniversary of the Association for the Advancement of Creative Musicians, Inc. (AACM), founded on Chicago’s South Side by musicians Muhal Richard Abrams (pianist/composer), Jodie Christian (pianist), Steve McCall (drummer), and Phil Cohran (composer).

A recent piece in the New York Times by Nate Chinen offers a baseline summary of their achievements:

Over the half-century of its existence, the association has been one of this country’s great engines of experimental art, producing work with an irreducible breadth of scope and style. By now the organization’s significance derives not only from the example of its first wave—including Mr. Abrams, still formidable at 84—but also from an influence on countless uncompromising artists, many of whom are not even members of its chapters in Chicago and New York.

The AACM is legendary, well beyond—but also emphatically intertwined with—its Chicago origins. With an aim to “provide an atmosphere conducive to the development of its member artists and to continue the AACM legacy of providing leadership and vision for the development of creative music,” the AACM turned jazz on its head, rolled it sideways, stood it upright again, and then leaned on it with a combination of effortless grace and righteous pressure during the second half of the twentieth century and beyond.

Among the events organized around the anniversary are Free at First (currently on view at Chicago’s DuSable Museum of African American History, and running through September 6, 2015) and the forthcoming exhibition The Freedom Principle: Experiments in Art and Music, 1965 to Now, at the MCA Chicago (opening in mid-July), which builds around the aesthetics championed by the association and their legacy.

This YouTube playlist should woo you pretty hard: http://bit.ly/1EHQMid

9780226476964

Our own connection to the AACM, worth every plug one can work in, is George E. Lewis’s definitive history A Power Stronger than Itself: The AACM and American Experimental Music (one of my favorite non-fiction books we’ve published). Lewis, who joined the AACM in 1971 when he was still a teenager, chronicles the group’s communal history via the twin channels of jazz and experimental cultural production, from the AACM’s founding in 1965 to the present. Personal, political, filled with archival details—as well as theory, criticism, and reportage—the book is a must-read jazz ethnography for anyone interested in the trajectory of AACM’s importance and influence, which as the NYT’s piece notes, began from a place of “originality and self-determination,” and landed somewhere that, if nothing else, in the words of Jason Moran, the Kennedy Center’s artistic director for jazz, “shifted the cultural landscape.”

To read more about A Power Stronger than Itself, click here.

47. Excerpt: Invisible by Philip Ball

9780226238890
Recipes for Invisibility, an excerpt
by Philip Ball
***

 “Occult Forces”

Around 1680 the English writer John Aubrey recorded a spell of invisibility that seems plucked from a (particularly grim) fairy tale. On a Wednesday morning before sunrise, one must bury the severed head of a man who has committed suicide, along with seven black beans. Water the beans for seven days with good brandy, after which a spirit will appear to tend the beans and the buried head. The next day the beans will sprout, and you must persuade a small girl to pick and shell them. One of these beans, placed in the mouth, will make you invisible.

This was tried, Aubrey says, by two Jewish merchants in London, who couldn’t acquire the head of a suicide victim and so used instead that of a poor cat killed ritualistically. They planted it with the beans in the garden of a gentleman named Wyld Clark, with his permission. Aubrey’s deadpan relish at the bathetic outcome suggests he was sceptical all along – for he explains that Clark’s rooster dug up the beans and ate them without consequence.

Despite the risk of such prosaic setbacks, the magical texts of the Middle Ages and the early Enlightenment exude confidence in their prescriptions, however bizarre they might be. Of course the magic will work, if you are bold enough to take the chance. This was not merely a sales pitch. The efficacy of magic was universally believed in those days. The common folk feared it and yearned for it, the clergy condemned it, and the intellectuals and philosophers, and a good many charlatans and tricksters, hinted that they knew how to do it.

It is among these fanciful recipes that the quest begins for the origins of invisibility as both a theoretical possibility and a practical technology in the real world. Making things invisible was a kind of magic – but what exactly did that mean?

Historians are confronted with the puzzle of why the tradition of magic lasted so long and laid roots so deep, when it is manifestly impotent. Some of that tenacity is understandable enough. The persistence of magical medicines, for example, isn’t so much of a mystery given that in earlier ages there were no more effective alternatives and that medical cause and effect has always been difficult to establish – people do sometimes get better, and who is to say why? Alchemy, meanwhile, could be sustained by trickery, although that does not solely or even primarily account for its longevity as a practical art: alchemists made much else besides gold and even their gold-making recipes could sometimes change the appearance of metals in ways that might have suggested they were on the right track. As for astrology, its persistence even today testifies in part to how readily it can be placed beyond the reach of any attempts at falsification.

But how do you fake invisibility? Either you can see something or someone, or you can’t.

Well, one might think so. But that isn’t the case at all. Magicians have always possessed the power of invisibility. What has changed is the story they tell about how it is done. What has changed far less, however, is our reasons for wishing it to be done and our willingness to believe that it can be. In this respect, invisibility supplies one of the most eloquent testimonies to our changing view of magic – not, as some rationalists might insist, a change from credulous acceptance to hard-headed dismissal, but something far more interesting.

Let’s begin with some recipes. Here is a small selection from what was doubtless once a much more diverse set of options, many of which are now lost. It should give you some intimation of what was required.

John Aubrey provides another prescription, somewhat tamer than the previous one and allegedly from a Rosicrucian source (we’ll see why later):

Take on Midsummer night, at xii [midnight], Astrologically, when all the Planets are above the earth, a Serpent, and kill him, and skinne him: and dry it in the shade, and bring it to a powder. Hold it in your hand and you will be invisible.

If it is black cats you want, look to the notorious Grand Grimoire. Like many magical books, this is a fabrication of the eighteenth century (or perhaps even later), validated by an ostentatious pseudo-history. The author is said to be one ‘Alibeck the Egyptian’, who allegedly wrote the following recipe in 1522:

Take a black cat, and a new pot, a mirror, a lighter, coal and tinder. Gather water from a fountain at the strike of midnight. Then you light your fire, and put the cat in the pot. Hold the cover with your left hand without moving or looking behind you, no matter what noises you may hear. After having made it boil 24 hours, put the boiled cat on a new dish. Take the meat and throw it over your left shoulder, saying these words: “accipe quod tibi do, et nihil ampliùs.” [Accept my offering, and don’t delay.] Then put the bones one by one under the teeth on the left side, while looking at yourself in the mirror; and if they do not work, throw them away, repeating the same words each time until you find the right bone; and as soon as you cannot see yourself any more in the mirror, withdraw, moving backwards, while saying: “Pater, in manus tuas commendo spiritum meum.” [Father, into your hands I commend my spirit.] This bone you must keep.

Sometimes it was necessary to summon the help of demons, which was always a matter fraught with danger. A medieval manual of demonic magic tells the magician to go to a field and inscribe a circle on the ground, fumigate it and sprinkle it, and himself, with holy water while reciting Psalm 51:7 (‘Cleanse me with hyssop, and I shall be clean . . .’). He then conjures several demons and commands them in God’s name to do his bidding by bringing him a cap of invisibility. One of them will fetch this item and exchange it for a white robe. If the magician does not return to the same place in three days, retrieve his robe, and burn it, he will drop dead within a week. In other words, this sort of invisibility was both heretical and hazardous. That is perhaps why instructions for invisibility in an otherwise somewhat quotidian fifteenth-century book of household management from Wolfsthurn Castle in the Tyrol have been mutilated by a censorious reader.

Demons are, after all, what you might expect to find in a magical grimoire. The Grimorium Verum (True Grimoire) is another eighteenth-century fake attributed to Alibeck the Egyptian; it was alternatively called the Secret of Secrets, an all-purpose title alluding to an encyclopaedic Arabic treatise popular in the Middle Ages. ‘Secrets’ of course hints alluringly at forbidden lore, although in fact the word was often also used simply to refer to any specialized knowledge or skill, not necessarily something intended to be kept hidden. This grimoire says that invisibility can be achieved simply by reciting a Latin prayer – largely just a list of the names of demons whose help is being invoked, and a good indication as to why magic spells came to be regarded as a string of nonsense words:

Athal, Bathel, Nothe, Jhoram, Asey, Cleyungit, Gabellin, Semeney, Mencheno, Bal, Labenenten, Nero, Meclap, Helateroy, Palcin, Timgimiel, Plegas, Peneme, Fruora, Hean, Ha, Ararna, Avira, Ayla, Seye, Peremies, Seney, Levesso, Huay, Baruchalù, Acuth, Tural, Buchard, Caratim, per misericordiam abibit ergo mortale perficiat qua hoc opus ut invisibiliter ire possim . . .

. . . and so on. The prescription continues in a rather freewheeling invocation using characters written in bat’s blood, before calling on yet more demonic ‘masters of invisibility’ to ‘perform this work as you all know how, that this experiment may make me invisible in such wise that no one may see me’.

A magic book was scarcely complete without a spell of invisibility. One of the most notorious grimoires of the Middle Ages, called the Picatrix and based on a tenth-century Arabic work, gives the following recipe.* You take a rabbit on the ‘24th night of the Arabian month’, behead it facing the moon, call upon the ‘angelic spirit’ Salmaquil, and then mix the blood of the rabbit with its bile. (Bury the body well – if it is exposed to sunlight, the spirit of the Moon will kill you.) To make yourself invisible, anoint your face with this blood and bile at nighttime, and ‘you will make yourself totally hidden from the sight of others, and in this way you will be able to achieve whatever you desire’.

‘Whatever you desire’ was probably something bad, because that was usually the way with invisibility. A popular trick in the eighteenth century, known as the Hand of Glory, involved obtaining (don’t ask how) the hand of an executed criminal and preserving it chemically, then setting light to a finger or inserting a burning candle between the fingers. With this talisman you could enter a building unseen and take what you liked, either because you are invisible or because everyone inside is put to sleep.

These recipes seem to demand a tiresome attention to materials and details. But really, as attested in The Book of Abramelin (said to be a system of magic that the Egyptian mage Abramelin taught to a German Jew in the fifteenth century), it was quite simple to make yourself invisible. You need only write down a ‘magic square’ – a small grid in which numbers (or in Abramelin’s case, twelve symbols representing demons) form particular patterns – and place it under your cap. Other grimoires made the trick sound equally straightforward, albeit messy: one should carry the heart of a bat, a black hen, or a frog under the right arm.

Perhaps most evocative of all were accounts of how to make a ring of invisibility, popularly called a Ring of Gyges. The twentieth-century French historian Emile Grillot de Givry explained in his anthology of occult lore how this might be accomplished:

The ring must be made of fixed mercury; it must be set with a little stone to be found in a lapwing’s nest, and round the stone must be engraved the words, “Jésus passant ✠ par le milieu d’eux ✠ s’en allait.” You must put the ring on your finger, and if you look at yourself in a mirror and cannot see the ring it is a sure sign that it has been successfully manufactured.

Fixed mercury is an ill-defined alchemical material in which the liquid metal is rendered solid by mixing it with other substances. It might refer to the chemical reaction of mercury with sulphur to make the blackish-red sulphide, for example, or the formation of an amalgam of mercury with gold. The biblical reference is to the alleged invisibility of Christ mentioned in Luke 4:30 (‘Jesus passed through the midst of them’) and John 8:59 (see page 155). And the lapwing’s stone is a kind of mineral – of which, more below. Invisibility is switched on or off at will by rotating the ring so that this stone sits facing outward or inward (towards the palm), just as Gyges rotated the collet.

Several other recipes in magical texts repeat the advice to check in a mirror that the magic has worked. That way, one could avoid embarrassment of the kind suffered by a Spaniard who, in 1582, decided to use invisibility magic in his attempt to assassinate the Prince of Orange. Since his spells could not make clothes invisible, he had to strip naked, in which state he arrived at the palace and strolled casually through the gates, unaware that he was perfectly visible to the guards. They followed the outlandish intruder until the purpose of his mission became plain, whereupon they seized him and flogged him.

Some prescriptions combined the alchemical preparation of rings with a necromantic invocation of spirits. One, appearing in an eighteenth-century French manuscript, explains how, if the name of the demon Tonucho is written on parchment and placed beneath a yellow stone set into a gold band while reciting an appropriate incantation, the demon is trapped in the ring and can be impelled to do one’s bidding.

Other recipes seem to refer to different qualities of invisibility. One might be unable to see an object not because it has vanished as though perfectly transparent, but because it lies hidden by darkness or mist, so that the ‘cloaking’ is apparent but what it cloaks is obscured. Or one might be dazzled by a play of light (see page 25), or experience some other confusion of the senses. There is no single view of what invisibility consists of, or where it resides. These ambiguities recur throughout the history of the invisible.

Partly for this reason, it might seem hard to discern any pattern in these prescriptions – any common themes or ingredients that might provide a clue to their real meaning. Some of them sound like the cartoon sorcery of wizards stirring bubbling cauldrons. Others are satanic, or else high-minded and allegorical, or merely deluded or fraudulent. They mix pious dedications to God with blasphemous entreaties to uncouthly named demons. That diversity is precisely what makes the tradition of magic so difficult to grasp: one is constantly wondering if it is a serious intellectual enterprise, a smokescreen for charlatans, or the credulous superstition of folk belief. The truth is that magic in the Western world was all of these things and for that very reason has been able to permeate culture at so many different levels and to leave traces in the most unlikely of places: in theoretical physics and pulp novels, the cults of modern mystics and the glamorous veils of cinema. The ever-present theme of invisibility allows us to follow these currents from their source.

*Appearing hard on the heels of an unrelated discussion of the Chaldean city of Adocentyn, it betrays the cut-and-paste nature of many such compendia.

“Making Magic”

Many of the recipes for invisibility from the early Renaissance onward therefore betray an ambiguous credo. They are often odd, sometimes ridiculous, and yet there are indications that they are not mere mumbo-jumbo dreamed up by lunatics or charlatans, but hint at a possible rationale within the system of natural magic.

It’s no surprise, for example, that eyes feature prominently among the ingredients. From a modern perspective the association might seem facile: you grind up an eyeball and therefore people can’t see you. But to an adept of natural magic there would have been a sound causative principle at work, operating through the occult network of correspondences: an eye for an eye, you might say. A medieval collection of Greek magical works from the fourth century AD known as the Cyranides contains some particularly grotesque recipes of this sort for ointments of invisibility. One involves grinding together the fat or eye of an owl, a ball of beetle dung and perfumed olive oil, and then anointing the entire body while reciting a selection of unlikely names. Another uses instead ‘the eye of an ape or of a man who had a violent death’, along with roses and sesame oil. An eighteenth-century text spuriously associated with Albertus Magnus (he was a favourite source of magical lore even in his own times) instructs the magician to ‘pierce the right eye of a bat, and carry it with you and you will be invisible’. One of the cruellest prescriptions instructs the magician to cut out the eyes of a live owl and bury them in a secret place.

A fifteenth-century Greek manuscript offers a more explicitly optical theme than Aubrey’s head-grown beans, stipulating that fava beans are imbued with invisibility magic when placed in the eye sockets of a human skull. Even though one must again call upon a pantheon of fantastically named demons, the principle attested here has a more naturalistic flavour: ‘As the eyes of the dead do not see the living, so these beans may also have the power of invisibility.’

Within the magic tradition of correspondences, certain plants and minerals were associated with invisibility. For example, the dust on brown patches of mature fern leaves was said to be a charm of invisibility: unlike other plants, they appeared to possess neither flowers nor seeds, but could nevertheless be found surrounded by their progeny.

The classical stone of invisibility was the heliotrope (sun-turner), also called bloodstone: a form of green or yellow quartz (chalcedony) flecked with streaks of a red mineral that is either iron oxide or red jasper. The name alludes to the stone’s tendency to reflect and disperse light, itself a sign of special optical powers. In his Natural History, Pliny says that magicians assert that the heliotrope can make a person invisible, although he scoffs at the suggestion:

In the use of this stone, also, we have a most glaring illustration of the impudent effrontery of the adepts in magic, for they say that, if it is combined with the plant heliotropium, and certain incantations are then repeated over it, it will render the person invisible who carries it about him.

The plant mentioned here, bearing the same name as the mineral, is a genus of the borage family, the flowers of which were thought to turn to face the sun. How a mineral is ‘combined’ with a plant isn’t clear, but the real point is that the two substances are again bound by a system of occult correspondence.

Agrippa repeated Pliny’s claim in the sixteenth century, minus the scepticism:

There is also another vertue of it [the bloodstone] more wonderfull, and that is upon the eyes of men, whose sight it doth so dim, and dazel, that it doth not suffer him that carries it to see it, & this it doth not do without the help of the Hearb of the same name, which also is called Heliotropium.

It is more explicit here that the magic works by dazzlement: the person wearing a heliotrope is ‘invisible’ because the light it reflects befuddles the senses. That is why kings wear bright jewels, explained Anselm Boetius, physician to the Holy Roman Emperor Rudolf II in 1609: they wish to mask their features in brilliance. This use of gems that sparkle, reflect and disperse light to confuse and blind the onlooker is attributed by Ben Jonson to the Rosicrucians, who were often popularly associated with magical powers of invisibility (see pages 32–3). In his poem The Underwood, Jonson writes of

The Chimera of the Rosie-Crosse,
Their signs, their seales, their hermetique rings;
Their jemme of riches, and bright stone that brings
Invisibilitie, and strength, and tongues.

The bishop Francis Godwin indicates in his fantastical fiction The Man in the Moone (1634), an early vision of space travel, that invisibility jewels were commonly deemed to exist, while implying that their corrupting temptations made them subject to divine prohibition. Godwin’s space-voyaging hero Domingo Gonsales asks the inhabitants of the Moon

whether they had not any kind of Jewell or other means to make a man invisible, which mee thought had beene a thing of great and extraordinary use . . . They answered that if it were a thing faisible, yet they assured themselves that God would not suffer it to be revealed to us creatures subject to so many imperfections, being a thing so apt to be abused to ill purposes.

Other dazzling gemstones were awarded the same ‘virtue’, chief among them the opal. This is a form of silica that refracts and reflects light to produce rainbow iridescence, indeed called opalescence.

Whether opal derives from the Greek opollos, ‘seeing’ – the root of ‘optical’ – is disputed, but opal’s streaked appearance certainly resembles the iris of the eye, and it has long been associated with the evil eye. In the thirteenth-century Book of Secrets, yet again falsely attributed to Albertus Magnus, the mineral is given the Greek name for eye (ophthalmos) and is said to cause invisibility by bedazzlement:

Take the stone Ophthalmus, and wrap it in the leaf of the Laurel, or Bay tree; and it is called Lapis Obtalmicus, whose colour is not named, for it is of many colours. And it is of such virtue, that it blindeth the sights of them that stand about. Constantius [probably Constantine the Great] carrying this in his hand, was made invisible by it.

It isn’t hard to recognize this as a variant of Pliny’s recipe, complete with cognate herb. In fact it isn’t entirely clear that this Ophthalmus really is opal, since elsewhere in the Book of Secrets that mineral is called Quiritia and isn’t associated with invisibility. This reflects the way that the book was, like so many medieval handbooks and encyclopedias, patched together from a variety of sources.

Remember the ‘stone from the lapwing’s nest’ mentioned by Grillot de Givry? His source was probably an eighteenth-century text called the Petit Albert – a fabrication, with the grand full title of Marvelous Secrets of Natural and Qabalistic Magic, attributed to a ‘Little Albert’ and obviously trading once more on the authority of the ‘Great Albert’ (Magnus). The occult revivalist Arthur Waite gave the full account of this recipe from the Petit Albert in his Book of Ceremonial Magic (1913), which asserts that the bird plays a further role in the affair:

Having placed the ring on a palette-shaped plate of fixed mercury, compose the perfume of mercury, and thrice expose the ring to the odour thereof; wrap it in a small piece of taffeta corresponding to the colour of the planet, carry it to the peewit’s [lapwing’s] nest from which the stone was obtained, let it remain there for nine days, and when removed, fumigate it precisely as before. Then preserve it most carefully in a small box, made also of fixed mercury, and use it when required.

Now we can get some notion of what natural magic had become by the time the Petit Albert was cobbled together. It sounds straightforward enough, but who is going to do all this? Where will you find the lapwing’s nest with a stone in it in the first place? What is this mysterious ‘perfume of mercury’? Will you take the ring back and put it in the nest for nine days and will it still be there later if you do? The spell has become so intricate, so obscure and vexing, that no one will try it. The same character is evident in a nineteenth-century Greek manuscript called the Bernardakean Magical Codex, in which Aubrey’s instructions for growing beans with a severed head are elaborated beyond all hope of success: you need to bury a black cat’s head under an ant hill, water it with human blood brought every day for forty days from a barber (those were the days when barbers still doubled as blood-letters), and check to see if one of the beans has the power of invisibility by looking into a new mirror in which no one has previously looked. If the spell doesn’t work (and the need to check each bean shows that this is always a possibility), it isn’t because the magic is ineffectual but because you must have done something wrong somewhere along the way. In which case, will you find another black cat and begin over? Unlikely; instead, the aspiring magician would buy these books of ‘secrets’, study their prescriptions and incantations and thereby become an adept in a magical circle: someone who possesses powerful secrets, but does not, perhaps, place much store in actually putting them to use. Magical books thus acquired the same talismanic function as a great deal of the academic literature today: to be read, learnt, cited, but never used.

To read more about Invisible, click here.


Add a Comment
48. Excerpt: Who Freed the Slaves?

9780226178202

An excerpt from Who Freed the Slaves?: The Fight over the Thirteenth Amendment by Leonard L. Richards

***

Prologue

WEDNESDAY, JUNE 15, 1864

James Ashley never forgot the moment. After hours of debate, Schuyler Colfax, the Speaker of the House of Representatives, had finally gaveled the 159 House members to take their seats and get ready to vote.

Most of the members were waving a fan of some sort, but none of the fans did much good. Heat and humidity had turned the nation’s capital into a sauna. Equally bad was the stench that emanated from Washington’s back alleys, nearby swamps, and the twenty-one hospitals in and about the city, which now housed over twenty thousand wounded and dying soldiers. Worse yet was the news from the front lines. According to some reports, the Union army had lost seven thousand men in less than thirty minutes at Cold Harbor. The commanding general, Ulysses S. Grant, had been deemed a “fumbling butcher.”

Nearly everyone around Ashley was impatient, cranky, and miserable. But Ashley was especially downcast. It was his job to get Senate Joint Resolution Number 16, a constitutional amendment to outlaw slavery in the United States, through the House of Representatives, and he didn’t have the votes.

The need for the amendment was obvious. Of the nation’s four million slaves at the outset of the war, no more than five hundred thousand were now free, and, to his disgust, many white Americans intended to have them reenslaved once the war was over. The Supreme Court, moreover, was still in the hands of Chief Justice Roger B. Taney and other staunch proponents of property rights in slaves and states’ rights. If they ever got the chance, they seemed certain not only to strike down much of Lincoln’s Emancipation Proclamation but also to hold that under the Constitution only the states where slavery existed had the legal power to outlaw it.

Six months earlier, in December 1863, when Ashley and his fellow Republicans had proposed the amendment, he had been more upbeat. He knew that getting the House to abolish slavery, which in his mind was the root cause of the war, was not going to be easy. It required a two-thirds vote. But he had thought that Republicans in both the Senate and the House might somehow muster the necessary two-thirds majority. No longer did they have to worry about the united opposition of fifteen slave states. Eleven of the fifteen were out of the Union, including South Carolina and Mississippi, the two with the highest percentage of slaves, and Virginia, the one with the largest House delegation. In addition, the war was in its thirty-third month. Hundreds of thousands of Northern men had been killed on the battlefield. The one-day bloodbath at Antietam was now etched into the memory of every one of his Toledo constituents as well as every member of Congress. So, too, was the three-day battle at Gettysburg.

If Republicans held firm, all they needed to push the amendment through the House was a handful of votes from their opponents, either from the border slave state representatives who had remained in the Union or from free state Democrats. It was his job to get those votes. He was the bill’s floor manager.

Back in December, Ashley had been the first House member to propose such an amendment. Although few of his colleagues realized it, he had been toying with the idea for nearly a decade. He had made a similar proposal in September 1856, when it didn’t have a chance of passing.

He was a political novice at the time, just twenty-nine years old, and known mainly for being big and burly, six feet tall and starting to spread around the middle, with a wild mane of curly hair and a loud, resonating voice. He had just gotten established in Toledo politics. He had moved there three years earlier from the town of Portsmouth, in southern Ohio, largely because he had just gotten married and was in deep trouble for helping slaves flee across the Ohio River. He was not yet a Congressman. Nor was he running for office. He was just campaigning for the Republican Party’s first presidential candidate, John C. Frémont, and Richard Mott, a House member who was up for reelection. In doing so, he gave a stump speech at a grove near Montpelier, Ohio.


James M. Ashley, congressman from Ohio. Brady-Handy Photograph Collection, Library of Congress (LC-BH824-5303).

The speech lasted two hours. In most respects, it was a typical Republican stump speech. It was mainly a collection of stories, many from his youth, living and working along the Ohio River. Running through it were several themes that tied the stories together and foreshadowed the rest of his career. In touting the two candidates, he blamed the nation’s troubles on a conspiracy of slaveholders and Northern men with Southern principles, or as he called them “slave barons” and “doughfaces.” These men, he claimed, had deliberately misconstrued the Bible, misinterpreted the Constitution, and gained complete control of the federal government. “For nearly half a century,” he told his listeners, some two hundred thousand slave barons had “ruled the nation, morally and politically, including a majority of the Northern States, with a rod of iron.” And before “the advancing march of these slave barons,” the “great body of Northern public men” had “bowed down . . . with their hands on their mouths and mouths in the dust, with an abasement as servile as that of a vanquished, spiritless people, before their conquerors.”

Across the North, many Republican spokesmen were saying much the same thing. What made Ashley’s speech unusual was that he made no attempt to hide his radicalism. He made it clear to the crowd at Montpelier that he would do almost anything to destroy slavery and the men who profited from it. He had learned to hate slavery and the slave barons during his boyhood, traveling with his father, a Campbellite preacher, through Kentucky and western Virginia, and later working as a cabin boy on the Ohio River. Never would he forget how traumatized he had been as a nine-year-old seeing for the first time slaves in chains being driven down a road to the Deep South, whipping posts on which black men had been beaten, and boys his own age being sold away from their mothers. Nor would he ever forget the white man who wouldn’t let his cattle drink from a stream in which his father was baptizing slaves. How, he had wondered, could his father still justify slavery? Certainly, it didn’t square with the teachings of Christ or what his mother was teaching him back home.

Ashley also made it clear to the crowd at Montpelier that he had violated the Fugitive Slave Law more times than he could count. He had actually begun helping slaves flee bondage in 1839, when he was just fifteen years old, and he had continued doing so after the Fugitive Slave Act of 1850 made the penalties much stiffer. To avoid prosecution, he and his wife had fled southern Ohio in 1851. Would he now mend his ways? “Never!” he told his audience. The law was a gross violation of the teachings of Christ, and for that reason he had never obeyed it and with “God’s help . . . never shall.”

What, then, should his listeners do? The first step was to join him in supporting John C. Frémont for president and Richard Mott for another term in Congress. Another was to join him in never obeying the “infamous fugitive-slave law”—the most “unholy” of the laws that these slave barons and their Northern sycophants had passed. And perhaps still another, he suggested, was to join him in pushing for a constitutional amendment outlawing “the crime of American slavery” if that should become “necessary.”

The last suggestion, in 1856, was clearly fanciful. Nearly half the states were slave states. Thus getting two-thirds of the House, much less two-thirds of the Senate, to support an amendment outlawing slavery was next to impossible. Ashley knew that. Perhaps some in his audience, especially those who cheered the loudest, thought otherwise. But not Ashley. Although still a political neophyte, he knew the rules of the game. He was also good with numbers, always had been, and always would be. Nonetheless, he told his audience to put it on their “to do” list.

Five years later, in December 1861, Ashley added to the list. By then he was no longer a political neophyte. He had been twice elected to Congress. Eleven states had seceded from the Union, and the Civil War was in its eighth month. As chairman of the House Committee on Territories, he proposed that the eleven states no longer be treated as states. Instead they should be treated as “territories” under the control of Congress, and Congress should impose on them certain conditions before they were allowed to regain statehood. More specifically, Congress should abolish slavery in these territories, confiscate all rebel lands, distribute the confiscated lands in plots of 160 acres or fewer to loyal citizens of any color, disfranchise the rebel leaders, and establish new governments with universal adult male suffrage. Did that mean, asked one skeptic, that black men were to receive land? And the right to vote? Yes, it did. And if such measures were enacted, said Ashley, he felt certain that the slave barons would be forever stripped of their power.

Ashley’s goal was clear. The 1850 census, from which Ashley and most Republicans drew their numbers, had indicated that just a few Southern families had the lion’s share of the South’s wealth. Especially potent were the truly big slaveholders—families with over one hundred slaves. There were 105 such family heads in Virginia, 181 in Georgia, 279 in Mississippi, 312 in Alabama, 363 in South Carolina, and 460 in Louisiana. With respect to landholdings, there were 371 family heads in Louisiana with more than one thousand acres, 481 in Mississippi, 482 in South Carolina, 641 in Virginia, 696 in Alabama, and 902 in Georgia.

In Ashley’s view, virtually all these wealth holders were rebels, and the Congress should go after all their assets. Strip them of their slaves. Strip them of their land. Strip them of their right to hold office. Halfhearted measures, he contended, would lead only to halfhearted results. Taking away a slave baron’s slaves undoubtedly would hobble him, but it wouldn’t destroy him. With his vast landholdings, he would soon be back in power. And with the right to hold office, he would have not only economic power but also political power. And with the end of the three-fifths clause, the clause in the Constitution that counted slaves as only three-fifths of a free person when it came to tabulating seats in Congress and electoral votes, the South would have more power than ever before.

When Ashley made this proposal in December 1861, everyone on his committee told him it was much too radical ever to get through Congress. He knew that. But he also knew that there were men in Congress who agreed with him, including four of the seven men on his committee, several dozen in the House, maybe a half-dozen in the Senate, and even some notables such as Representative Thaddeus Stevens of Pennsylvania and Senator Ben Wade of Ohio.

The trouble was the opposition. It was formidable. Not only did it include the “Peace” Democrats, men who seemingly wanted peace at any price, men whom Ashley regarded as traitors, but also “War” Democrats, men such as General George McClellan, General Don Carlos Buell, and General Henry Halleck, men who were leading the nation’s troops. Also certain to oppose him were the border state Unionists, especially the Kentuckians, and most important of all, Abraham Lincoln. Against such opposition, all Ashley and the other radicals could do was push, prod, and hope to get maybe a piece or two of the total package enacted.

Two years later, in December 1863, Ashley thought it was indeed “necessary” to strike a deathblow against slavery. He also thought it was possible to get a few pieces of his 1861 package into law. So, just after the House opened for its winter session, he introduced two measures. One was a reconstruction bill that followed, at least at first glance, what Lincoln had called for in his annual message. Like Lincoln, Ashley proposed that a seceded state be let back into the Union when only 10 percent of its 1860 voters took an oath of loyalty.

Had he suddenly become a moderate? A conservative? Not quite. To Lincoln’s famous 10 percent plan, Ashley added two provisions. One would take away the right to vote and to hold office from all those who had fought against the Union or held an office in a rebel state. That was a significant chunk of the population. The other would give the right to vote to all adult black males. That was an even bigger chunk of the population, especially in South Carolina and Mississippi.

The other measure that Ashley proposed that December was the constitutional amendment that outlawed slavery. A few days later, Representative James F. Wilson of Iowa made a similar proposal. The wording differed, but the intent was the same. The Constitution had to be amended, contended Wilson, not only to eradicate slavery but also to stop slaveholders and their supporters from launching a program of reenslavement once the war was over. Then, several weeks later, Senator John Henderson of Missouri and Senator Charles Sumner of Massachusetts introduced similar amendments. Sumner’s was the more radical. The Massachusetts senator not only wanted to end slavery. He also wanted to end racial inequality.

The Senate Judiciary Committee then took charge. They ignored Sumner’s cry for racial justice and worked out the bill’s final language. The wording was clear and simple: “Neither slavery nor involuntary servitude, except as a punishment for crime, whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”

On April 8, 1864, the committee’s wording came before the Senate for a final vote. Although a few empty seats could be found in the men’s gallery, the women’s gallery was packed, mainly by church women who had organized a massive petition drive calling on Congress to abolish slavery. Congress for the most part had ignored their hard work. But to the women’s delight, thirty-eight senators now voted for the amendment, six against, giving the proposed amendment eight votes more than what was needed to meet the two-thirds requirement.

All thirty Republicans in attendance voted aye. The no votes came from two free state Democrats, Thomas A. Hendricks of Indiana and James McDougall of California, and four slave state senators: Garrett Davis and Lazarus W. Powell of Kentucky and George R. Riddle and Willard Saulsbury of Delaware. Especially irate was Saulsbury. A strong proponent of reenslavement, he made sure that the women knew that he regarded them with contempt. In a booming voice, he told them on leaving the Senate floor that all was lost and that there was no longer any chance of ever restoring the eleven Confederate states to the Union.

Now, nine weeks later, the measure was before the House. And its floor manager, James Ashley, expected the worst. He kept a close count. And, as the members voted, he realized that he was well short of the required two-thirds. Of the eighty Republicans who were in attendance, seventy-nine eventually cast aye votes and one abstained. Of the seventeen slave state representatives in attendance, eleven voted aye and six nay. But of the sixty-two free state Democrats, only four voted for the amendment while fifty-eight voted nay. As a result, the final vote was going to be ninety-four to sixty-four. That was eleven shy of the necessary two-thirds majority.

The outcome was even worse than Ashley had anticipated. “Educated in the political school of Jefferson,” he later recalled, “I was absolutely amazed at the solid Democratic vote against the amendment on the 15th of June. To me it looked as if the golden hour had come, when the Democratic party could, without apology, and without regret, emancipate itself from the fatal dogmas of Calhoun, and reaffirm the doctrines of Jefferson. It had always seemed to me that the great men in the Democratic party had shown a broader spirit in favor of human liberty than their political opponents, and until the domination of Mr. Calhoun and his States-rights disciples, this was undoubtedly true.”

Despite the solid Democratic vote against the resolution, there was still one way that Ashley could save the amendment from certain congressional death. And that was to take advantage of a House rule that allowed a member to bring a defeated measure up for reconsideration if he intended to change his vote. To make use of this rule, however, Ashley had to change his vote before the clerk announced the final tally. He had voted aye along with his fellow Republicans. He now had to get into the “no” column. That he did. The final vote thus became ninety-three to sixty-five.

Two weeks later, Representative William Steele Holman, Democrat of Indiana, asked Ashley when he planned to call for reconsideration. Ashley told him not now but maybe after the next election. The trick, he said, was to find enough men in Holman’s party who were “naturally inclined to favor the amendment, and strong enough to meet and repel the fierce partisan attacks which were certain to be made upon them.”

Holman, Ashley knew, would not be one of them. Although the Indiana Democrat had once been a staunch supporter of the war effort, he opposed the destruction of slavery. Not only had he just voted against the amendment—he had vehemently denounced it. Holman, as Ashley viewed him, was thus one of the “devil’s disciples.” He was beyond redemption. And with this in mind, Ashley set about to find at least eleven additional House members who would stand their ground against men like Holman.

To read more about Who Freed the Slaves?, click here.

Add a Comment
49. Facebook’s A Year of Books drafts The Structure of Scientific Revolutions

9780226458120

In his sixth pick for the social network’s online book club (“A Year of Books”), Facebook founder Mark Zuckerberg recently drafted Thomas Kuhn’s The Structure of Scientific Revolutions, a 52-year-old book that remains one of the most often cited academic works of all time and one of UCP’s crowning gems of twentieth-century scholarly publishing. Following in the footsteps of the previous pick, Pixar founder Ed Catmull’s Creativity, Inc., Structure will be the subject of a Facebook thread with open commenting for the next two weeks, in line with the methodology of “A Year of Books.” If you’re thinking about reading along, the 50th Anniversary edition includes a compelling Introduction by Ian Hacking that situates the book’s legacy, both in terms of its contribution to a scientific vernacular (“paradigm shifting”) and its value as a scholarly publication of mass appeal (“paradigm shifting”).

Or, in Zuckerberg’s own words:

It’s a history of science book that explores the question of whether science and technology make consistent forward progress or whether progress comes in bursts related to other social forces. I tend to think that science is a consistent force for good in the world. I think we’d all be better off if we invested more in science and acted on the results of research. I’m excited to explore this theme further.

And from the Guardian:

“Before Kuhn, the normal view was that science simply needed men of genius (they were always men) to clear away the clouds of superstition, and the truth of nature would be revealed,” [David Papineau, professor of philosophy at King’s College London] said. “Kuhn showed it is much more interesting than that. Scientific research requires a rich network of prior assumptions (Kuhn reshaped the term ‘paradigm’ to stand for these), and changing such assumptions can be traumatic, and is always resisted by established interests (thus the need for scientific ‘revolutions’).”

Kuhn showed, said Papineau, that “scientists are normal humans, with prejudices and personal agendas in their research, and that the path to scientific advances runs through a complex social terrain”.

“We look at science quite differently post-Kuhn,” he added.

To read more about Structure, click here.

To read an excerpt from Ian Hacking’s Introduction to the 50th Anniversary edition, click here.

Add a Comment
50. Excerpt: The Territories of Science and Religion

9780226184487

Introduction

An excerpt from The Territories of Science and Religion by Peter Harrison

***

The History of “Religion”

In the section of his monumental Summa theologiae that is devoted to a discussion of the virtues of justice and prudence, the thirteenth-century Dominican priest Thomas Aquinas (1225–74) investigates, in his characteristically methodical and insightful way, the nature of religion. Along with North African Church Father Augustine of Hippo (354–430), Aquinas is probably the most influential Christian writer outside of the biblical authors. From the outset it is clear that for Aquinas religion (religio) is a virtue—not, incidentally, one of the preeminent theological virtues, but nonetheless an important moral virtue related to justice. He explains that in its primary sense religio refers to interior acts of devotion and prayer, and that this interior dimension is more important than any outward expressions of this virtue. Aquinas acknowledges that a range of outward behaviors are associated with religio—vows, tithes, offerings, and so on—but he regards these as secondary. As I think is immediately obvious, this notion of religion is rather different from the one with which we are now familiar. There is no sense in which religio refers to systems of propositional beliefs, and no sense of different religions (plural). Between Thomas’s time and our own, religion has been transformed from a human virtue into a generic something, typically constituted by sets of beliefs and practices. It has also become the most common way of characterizing attitudes, beliefs, and practices concerned with the sacred or supernatural.

Aquinas’s understanding of religio was by no means peculiar to him. Before the seventeenth century, the word “religion” and its cognates were used relatively infrequently. Equivalents of the term are virtually nonexistent in the canonical documents of the Western religions—the Hebrew Bible, the New Testament, and the Qur’an. When the term was used in the premodern West, it did not refer to discrete sets of beliefs and practices, but rather to something more like “inner piety,” as we have seen in the case of Aquinas, or “worship.” As a virtue associated with justice, moreover, religio was understood on the Aristotelian model of the virtues as the ideal middle point between two extremes—in this case, irreligion and superstition.

The vocabulary of “true religion” that we encounter in the writings of some of the Church Fathers offers an instructive example. “The true religion” is suggestive of a system of beliefs that is distinguished from other such systems that are false. But careful examination of the content of these expressions reveals that early discussions about true and false religion were typically concerned not with belief, but rather worship and whether or not worship is properly directed. Tertullian (ca. 160–ca. 220) was the first Christian thinker to produce substantial writings in Latin and was also probably the first to use the expression “true religion.” But in describing Christianity as “true religion of the true god,” he is referring to genuine worship directed toward a real (rather than fictitious) God. Another erudite North African Christian writer, Lactantius (ca. 240–ca. 320), gives the first book of his Divine Institutes the title “De falsa religione.” Again, however, his purpose is not to demonstrate the falsity of pagan beliefs, but to show that “the religious ceremonies of the [pagan] gods are false,” which is just to say that the objects of pagan worship are false gods. His positive project, an account of true religion, was “to teach in what manner or by what sacrifice God must be worshipped.” Such rightly directed worship was for Lactantius “the duty of man, and in that one object the sum of all things and the whole course of a happy life consists.”

Jerome’s choice of religio for his translation of the relatively uncommon Greek threskeia in James 1:27 similarly associates the word with cult and worship. In the English of the King James version the verse is rendered: “Pure and undefiled religion [threskeia] before God the Father is this, To visit the fatherless and widows in their affliction, and to keep himself unspotted from the world.” The import of this passage is that the “religion” of the Christians is a form of worship that consists in charitable acts rather than rituals. Here the contrast is between religion that is “vain” (vana) and that which is “pure and undefiled” (religio munda et inmaculata). In the Middle Ages this came to be regarded as equivalent to a distinction between true and false religion. The Distinctiones Abel of Peter the Chanter (d. 1197), one of the most prominent of the twelfth-century theologians at the University of Paris, makes direct reference to the passage from James, distinguishing religion that is pure and true (munda et vera) from that which is vain and false (vana et falsa). His pupil, the scholastic Radulfus Ardens, also spoke of “true religion” in this context, concluding that it consists in “the fear and love of God, and the keeping of his commandments.” Here again there is no sense of true and false doctrinal content.

Perhaps the most conspicuous use of the expression “true religion” among the Church Fathers came in the title of De vera religione (On True Religion), written by the great doctor of the Latin Church, Augustine of Hippo. In this early work Augustine follows Tertullian and Lactantius in describing true religion as rightly directed worship. As he was to relate in the Retractions: “I argued at great length and in many ways that true religion means the worship of the one true God.” It will come as no surprise that Augustine here suggests that “true religion is found only in the Catholic Church.” But intriguingly when writing the Retractions he was to state that while Christian religion is a form of true religion, it is not to be identified as the true religion. This, he reasoned, was because true religion had existed since the beginning of history and hence before the inception of Christianity. Augustine addressed the issue of true and false religion again in a short work, Six Questions in Answer to the Pagans, written between 406 and 412 and appended to a letter sent to Deogratias, a priest at Carthage. Here he rehearses the familiar stance that true and false religion relates to the object of worship: “What the true religion reprehends in the superstitious practices of the pagans is that sacrifice is offered to false gods and wicked demons.” But again he goes on to explain that diverse cultic forms might all be legitimate expressions of true religion, and that the outward forms of true religion might vary in different times and places: “it makes no difference that people worship with different ceremonies in accord with the different requirements of times and places, if what is worshipped is holy.” A variety of different cultural forms of worship might thus be motivated by a common underlying “religion”: “different rites are celebrated in different peoples bound together by one and the same religion.” If true religion could exist outside the established forms of Catholic worship, conversely, some of those who exhibited the outward forms of Catholic religion might lack “the invisible and spiritual virtue of religion.”

This general understanding of religion as an inner disposition persisted into the Renaissance. The humanist philosopher and Platonist Marsilio Ficino (1433–99) thus writes of “christian religion,” which is evidenced in lives oriented toward truth and goodness. “All religion,” he wrote, in tones reminiscent of Augustine, “has something good in it; as long as it is directed towards God, the creator of all things, it is true Christian religion.” What Ficino seems to have in mind here is the idea that Christian religion is a Christlike piety, with “Christian” referring to the person of Christ, rather than to a system of religion—“the Christian religion.” Augustine’s suggestion that true and false religion might be displayed by Christians was also reprised by the Protestant Reformer Ulrich Zwingli, who wrote in 1525 of “true and false religion as displayed by Christians.”

It is worth mentioning at this point that, unlike English, Latin has no article—no “a” or “the.” Accordingly, when rendering expressions such as “vera religio” or “christiana religio” into English, translators had to decide on the basis of context whether to add an article or not. As we have seen, such decisions can make a crucial difference, for the connotations of “true religion” and “christian religion” are rather different from those of “the true religion” and “the Christian religion.” The former can mean something like “genuine piety” and “Christlike piety” and are thus consistent with the idea of religion as an interior quality. Addition of the definite article, however, is suggestive of a system of belief. The translation history of Protestant Reformer John Calvin’s classic Institutio Christianae Religionis (1536) gives a good indication both of the importance of the definite article and of changing understandings of religion in the seventeenth century. Calvin’s work was intended as a manual for the inculcation of Christian piety, although this fact is disguised by the modern practice of rendering the title in English as The Institutes of the Christian Religion. The title page of the first English edition by Thomas Norton bears the more faithful “The Institution of Christian religion” (1561). The definite article is placed before “Christian” in the 1762 Glasgow edition: “The Institution of the Christian religion.” And the now familiar “Institutes” appears for the first time in John Allen’s 1813 edition: “The Institutes of the Christian religion.” The modern rendering is suggestive of an entity “the Christian religion” that is constituted by its propositional contents—“the institutes.” These connotations were completely absent from the original title. Calvin himself confirms this by declaring in the preface his intention “to furnish a kind of rudiments, by which those who feel some interest in religion might be trained to true godliness.”

With the increasing frequency of the expressions “religion” and “the religions” from the sixteenth century onward we witness the beginning of the objectification of what was once an interior disposition. Whereas for Aquinas it was the “interior” acts of religion that held primacy, the balance now shifted decisively in favor of the exterior. This was a significant new development, the making of religion into a systematic and generic entity. The appearance of this new conception of religion was a precondition for a relationship between science and religion. While the causes of this objectification are various, the Protestant Reformation and the rise of experimental natural philosophy were key factors, as we shall see in chapter 4.

The History of “Science”

It is instructive at this point to return to Thomas Aquinas, because when we consider what he has to say on the notion of science (scientia) we find an intriguing parallel to his remarks on religion. In an extended treatment of the virtues in the Summa theologiae, Aquinas observes that science (scientia) is a habit of mind or an “intellectual virtue.” The parallel with religio, then, lies in the fact that we are now used to thinking of both religion and science as systems of beliefs and practices, rather than conceiving of them primarily as personal qualities. And for us today the question of their relationship is largely determined by their respective doctrinal content and the methods through which that content is arrived at. For Aquinas, however, both religio and scientia were, in the first place, personal attributes.

We are also accustomed to think of virtues as belonging entirely within the sphere of morality. But again, for Aquinas, a virtue is understood more generally as a “habit” that perfects the powers that individuals possess. This conviction—that human beings have natural powers that move them toward particular ends—was related to a general approach associated with the Greek philosopher Aristotle (384–322 BC), who had taught that all natural things are moved by intrinsic tendencies toward certain goals (tele). For Aristotle, this teleological movement was directed to the perfection of the entity, or to the perfection of the species to which it belonged. As it turns out, one of the natural tendencies of human beings was a movement toward knowledge. As Aristotle famously wrote in the opening lines of the Metaphysics, “all men by nature desire to know.” In this scheme of things, our intellectual powers are naturally directed toward the end of knowledge, and they are assisted in their movement toward knowledge by acquired intellectual virtues.

One of the great revolutions of Western thought took place in the twelfth and thirteenth centuries, when much Greek learning, including the work of Aristotle, was rediscovered. Aquinas played a pivotal role in this recovery of ancient wisdom, making Aristotle one of his chief conversation partners. He was by no means a slavish adherent of Aristotelian doctrines, but nonetheless accepted the Greek philosopher’s premise that the intellectual virtues perfect our intellectual powers. Aquinas identified three such virtues—understanding (intellectus), science (scientia), and wisdom (sapientia). Briefly, understanding was to do with grasping first principles, science with the derivation of truths from those first principles, and wisdom with the grasp of the highest causes, including the first cause, God. To make progress in science, then, was not to add to a body of systematic knowledge about the world, but was to become more adept at drawing “scientific” conclusions from general premises. “Science” thus understood was a mental habit that was gradually acquired through the rehearsal of logical demonstrations. In Thomas’s words: “science can increase in itself by addition; thus when anyone learns several conclusions of geometry, the same specific habit of science increases in that man.”

These connotations of scientia were well known in the Renaissance and persisted until at least the end of the seventeenth century. The English physician John Securis wrote in 1566 that “science is a habit” and “a disposition to do any thing confirmed and had by long study, exercise, and use.” Scientia is subsequently defined in Thomas Holyoake’s Dictionary (1676) as, properly speaking, the act of the knower, and, secondarily, the thing known. This entry also stresses the classical and scholastic idea of science as “a habit of knowledge got by demonstration.” French philosopher René Descartes (1596–1650) retained some of these generic, cognitive connotations when he defined scientia as “the skill to solve every problem.”

Yet, according to Aquinas, scientia, like the other intellectual virtues, was not solely concerned with rational and speculative considerations. In a significant departure from Aristotle, who had set out the basic rationale for an ethics based on virtue, Aquinas sought to integrate the intellectual virtues into a framework that included the supernatural virtues (faith, hope, and charity), “the seven gifts of the spirit,” and the nine “fruits of the spirit.” While the various relations are complicated, particularly when beatitudes and vices are added to the equation, the upshot of it all is a considerable overlap of the intellectual and moral spheres. As philosopher Eleonore Stump has written, for Aquinas “all true excellence of intellect—wisdom, understanding and scientia—is possible only in connection with moral excellence as well.” By the same token, on Aquinas’s understanding, moral transgressions will have negative consequences for the capacity of the intellect to render correct judgments: “Carnal vices result in a certain culpable ignorance and mental dullness; and these in turn get in the way of understanding and scientia.” Scientia, then, was not only a personal quality, but also one that had a significant moral component.

The parallels between the virtues of religio and scientia, it must be conceded, are by no means exact. While in the Middle Ages there were no plural religions (or at least no plural religions understood as discrete sets of doctrines), there were undeniably sciences (scientiae), thought of as distinct and systematic bodies of knowledge. The intellectual virtue scientia thus bore a particular relation to formal knowledge. On a strict definition, and following a standard reading of Aristotle’s Posterior Analytics, a body of knowledge was regarded as scientific in the event that it had been arrived at through a process of logical demonstration. But in practice the label “science” was extended to many forms of knowledge. The canonical divisions of knowledge in the Middle Ages—what we now know as the seven “liberal arts” (grammar, logic, rhetoric, arithmetic, astronomy, music, geometry)—were then known as the liberal sciences. The other common way of dividing intellectual territory derived from Aristotle’s classification of theoretical or speculative philosophy. In his discussion of the division and methods of the sciences, Aquinas noted that the standard classification of the seven liberal sciences did not include the Aristotelian disciplines of natural philosophy, mathematics, and theology. Accordingly, he argued that the label “science” should be given to these activities, too. Robert Kilwardby (ca. 1215–79), successively regent at the University of Oxford and archbishop of Canterbury, extended the label even further in his work on the origin of the sciences, identifying forty distinct scientiae.

The English word “science” had similar connotations. As was the case with the Latin scientia, the English term commonly referred to the subjects making up the seven liberal arts. In catalogs of English books published between 1475 and 1700 we encounter the natural and moral sciences, the sciences of physick (medicine), of surgery, of logic and mathematics. Broader applications of the term include accounting, architecture, geography, sailing, surveying, defense, music, and pleading in court. Less familiarly, we also encounter works on the science of angels, the science of flattery, and in one notable instance, the science of drinking, drolly designated by the author the “eighth liberal science.” At nineteenth-century Oxford “science” still referred to elements of the philosophy curriculum. The idiosyncrasies of English usage at the University of Oxford notwithstanding, the now familiar meaning of the English expression dates from the nineteenth century, when “science” began to refer almost exclusively to the natural and physical sciences.

Returning to the comparison with medieval religio, what we can say is that in the Middle Ages both notions have a significant interior dimension, and that what happens in the early modern period is that the balance between the interior and exterior begins to tip in favor of the latter. Over the course of the sixteenth and seventeenth centuries we will witness the beginning of a process in which the idea of religion and science as virtues or habits of mind begins to be overshadowed by the modern, systematic entities “science” and “religion.” In the case of scientia, then, the interior qualities that characterized the intellectual virtue of scientia are transferred to methods and doctrines. The entry for “science” in the 1771 Encyclopaedia Britannica thus reads, in its entirety: “SCIENCE, in philosophy, denotes any doctrine, deduced from self-evident and certain principles, by a regular demonstration.” The logical rigor that had once been primarily a personal characteristic now resides primarily in the corresponding body of knowledge.

The other significant difference between the virtues of religio and scientia lies in the relation of the interior and exterior elements. In the case of religio, the acts of worship are secondary in the sense that they are motivated by an inner piety. In the case of scientia, it is the rehearsal of the processes of demonstration that strengthens the relevant mental habit. Crucially, because the primary goal is the augmentation of mental habits, gained through familiarity with systematic bodies of knowledge (“the sciences”), the emphasis was less on the production of scientific knowledge than on the rehearsal of the scientific knowledge that already existed. Again, as noted earlier, this was because the “growth” of science was understood as taking place within the mind of the individual. In the present, of course, whatever vestiges of the scientific habitus remain in the mind of the modern scientist are directed toward the production of new scientific knowledge. In so far as they exist at all—and for the most part they have been projected outward onto experimental protocols—they are a means and not the end. Overstating the matter somewhat, in the Middle Ages scientific knowledge was an instrument for the inculcation of scientific habits of mind; now scientific habits of mind are cultivated primarily as an instrument for the production of scientific knowledge.

The atrophy of the virtues of scientia and religio, and the increasing emphasis on their exterior manifestations in the sixteenth and seventeenth centuries, will be discussed in more detail in chapter 4. But looking ahead we can say that in the physical realm virtues and powers were removed from natural objects and replaced by a notion of external law. The order of things will now be understood in terms of laws of nature—a conception that makes its first appearance in the seventeenth century—and these laws will take the place of those inherent tendencies within things that strive for their perfection. In the moral sphere, a similar development takes place, and human virtues will be subordinated to an idea of divinely imposed laws—in this instance, moral laws. The virtues—moral and intellectual—will be understood in terms of their capacity to produce the relevant behaviors or bodies of knowledge. What drives both of these shifts is the rejection of an Aristotelian and scholastic teleology, and the subsequent demise of the classical understanding of virtue will underpin the early modern transformation of the ideas of scientia and religio.

Science and Religion?

It should by now be clear that the question of the relationship between science (scientia) and religion (religio) in the Middle Ages was very different from the modern question of the relationship between science and religion. Were the question put to Thomas Aquinas, he may have said something like this: Science is an intellectual habit; religion, like the other virtues, is a moral habit. There would then have been no question of conflict or agreement between science and religion because they were not the kinds of things that admitted those sorts of relations. When the question is posed in our own era, very different answers are forthcoming, for the issue of science and religion is now generally assumed to be about specific knowledge claims or, less often, about the respective processes by which knowledge is generated in these two enterprises. Between Thomas’s time and our own, religio has been transformed from a human virtue into a generic something typically constituted by sets of beliefs and practices. Scientia has followed a similar course, for although it had always referred both to a form of knowledge and a habit of mind, the interior dimension has now almost entirely disappeared. During the sixteenth and seventeenth centuries, both religion and science were literally turned inside out.

Admittedly, there would have been another way of posing this question in the Middle Ages. In focusing on religio and scientia I have considered the two concepts that are the closest linguistically to our modern “religion” and “science.” But there may be other ancient and medieval precedents of our modern notions “religion” and “science” that have less obvious linguistic connections. It might be argued, for example, that two other systematic activities lie more squarely in the genealogical ancestry of our two objects of interest, and they are theology and natural philosophy. A better way to frame the central question, it could then be suggested, would be to inquire about theology (which looks very much like a body of religious knowledge expressed propositionally) and natural philosophy (which was the name given to the systematic study of nature up until the modern period), and their relationship.

There is no doubt that these two notions are directly relevant to our discussion, but I have avoided mention of them up until now, first, because I have not wished to pull apart too many concepts at once and, second, because we will be encountering these two ideas and the question of how they fit into the trajectory of our modern notions of science and religion in subsequent chapters. For now, however, it is worth briefly noting that the term “theology” was not much used by Christian thinkers before the thirteenth century. The word theologia appears for the first time in Plato (ca. 428–348 BC), and it is Aristotle who uses it in a formal sense to refer to the most elevated of the speculative sciences. Partly because of this, for the Church Fathers “theology” was often understood as referring to pagan discourse about the gods. Christian writers were more concerned with the interpretation of scripture than with “theology,” and the expression “sacred doctrine” (sacra doctrina) reflects their understanding of the content of scripture. When the term does come into use in the later Middle Ages, there were two different senses of “theology”—one a speculative science as described by Aristotle, the other the teaching of the Christian scriptures.

Famously, the scholastic philosophers inquired as to whether theology (in the sense of sacra doctrina) was a science. This is not the place for an extended discussion of that commonplace, but the question does suggest one possible relation between science and theology—that theology is a species of the genus “science.” Needless to say, this is almost completely disanalogous to any modern relationship between science and religion as we now understand them. Even so, this question affords us the opportunity to revisit the relationship between virtues and the bodies of knowledge that they were associated with. In so far as theology was regarded as a science, it was understood in light of the virtue of scientia outlined above. In other words, theology was also understood to be, in part, a mental habit. When Aquinas asks whether sacred doctrine is one science, his affirmative answer refers to the fact that there is a single faculty or habit involved. His contemporary, the Franciscan theologian Bonaventure (1221–74), was to say that theological science was a habit that had as its chief end “that we become good.” The “subtle doctor,” John Duns Scotus (ca. 1265–1308), later wrote that the “science” of theology perfects the intellect and promotes the love of God: “The intellect perfected by the habit of theology apprehends God as one who should be loved.” While these three thinkers differed from each other significantly in how they conceptualized the goals of theology, what they shared was a common conviction that theology was, to use a current expression somewhat out of context, habit forming.

As for “natural philosophy” (physica, physiologia), historians of science have argued for some years now that this is the closest ancient and medieval analogue to modern science, although they have become increasingly sensitive to the differences between the two activities. Typically, these differences have been thought to lie in the subject matter of natural philosophy, which traditionally included such topics as God and the soul, but excluded mathematics and natural history. On both counts natural philosophy looks different from modern science. What has been less well understood, however, are the implications of the fact that natural philosophy was an integral part of philosophy. These implications are related to the fact that philosophy, as practiced in the past, was less about affirming certain doctrines or propositions than it was about pursuing a particular kind of life. Thus natural philosophy was thought to serve general philosophical goals that were themselves oriented toward securing the good life. These features of natural philosophy will be discussed in more detail in the chapter that follows. For now, however, my suggestion is that moving our attention to the alternative categories of theology and natural philosophy will not yield a substantially different view of the kinds of historical transitions that I am seeking to elucidate.

To read more about The Territories of Science and Religion, click here.

Add a Comment
