Viewing: Blog Posts Tagged with: History and Philosophy of Science, Most Recent at Top
Results 1 - 25 of 34
1. August excerpt: The Restless Clock


“William Harvey’s Restless Clock”*

Against this passivity, however, there were those who struggled to hold matter, feeling, and will together: to keep the machinery not just alive but active, life-like. These holdouts accordingly had something very different in mind when they talked about the “animal-machine.” William Harvey, whom we have already seen comparing the heart to a pump or other kind of hydraulic machinery, also invoked automata to describe the process of animal generation. Observing the development of a chick embryo, Harvey noted that a great many things happened in a certain order “in the same way as we see one wheel moving another in automata, and other pieces of mechanism.” But, Harvey wrote, adopting Aristotle’s view, the parts of the mechanism were not moving in the sense of “changing their places,” pushing one another like the gears of a clock set in motion by the clockmaker winding the spring. Rather, the parts were remaining in place, but transforming in qualities, “in hardness, softness, colour, &c.” It was a mechanism made of changing parts.

This was an idea to which Harvey returned regularly. Animals, he surmised, were like automata whose parts were perpetually transforming: expanding and contracting in response to heat and cold, imagination and sensation and ideas. These changes took place as a succession of connected developments that were also, somehow, all occurring at once. Similarly, Harvey wrote with regard to the heart that its consecutive action of auricles and ventricles was like “in a piece of machinery, in which, though one wheel gives motion to another, yet all the wheels seem to move simultaneously.” Geared mechanisms represented constellations of motions that seemed at once sequential and simultaneous, a congress of mutual causes and effects.

The first appearance of life, as Harvey described it, seemed to happen both all at once and as a sequence of events. Harvey wrote of seeing the chick first as a “little cloud,” and then, “in the midst of the cloudlet in question,” the heart appeared as a tiny bloody point, like the point of a pin, so small that it disappeared altogether during contraction, then reappeared again “so that betwixt the visible and the invisible, betwixt being and not being, as it were, it gave by its pulses a kind of representation of the commencement of life.” A gathering cloud and, in the midst, a barely perceptible movement between being and not being: the origin of life. Harvey invoked clockwork and firearm mechanisms to model a defining feature of this cloudy pulse that was the beginning of life: causes and effects happening all at once, together.

***

In addition to geared mechanisms and firearms, Harvey invoked another analogy that would become commonplace by the end of the century—we have seen Descartes and his followers invoke it—the analogy between an animal body and a church organ. Muscles, Harvey suggested, worked like “play on the organ, virginals.” Under James I, English churches had resumed the use of organs in services, so they were once again a feature of the landscape and available as a source of models for living systems. The organ signified to Harvey something more like what it meant in the ancient and medieval tradition of animal machinery, rather than the intricate sequence of contrived movements of parts that it later came to signify. Harvey wrote that the muscles performed their actions by “harmony and rhythm,” a kind of “silent music.” Mind, he said, was the “master of the choir”: “mind sets the mass in motion.”

The particular ways in which Harvey invoked artificial mechanisms make it difficult to classify him, as historians have been inclined to do, either as a “mechanist” or otherwise, the problem being that the meaning of “mechanism” and related terms was very much in flux. Lecturing at the College of Physicians in London in April 1616, Harvey told his anatomy and surgery students that anatomy was “philosophical, medical and mechanical.” But what did he mean, and what did his students understand, by “mechanical”?

In part, he likely meant that there was no need to invoke ethereal or celestial substances in explaining physiological phenomena, because the mundane elements seemed to transcend their own limits when they acted. The “air and water, the winds and the ocean” could “waft navies to either India and round this globe.” The terrestrial elements could also “grind, bake, dig, pump, saw timber, sustain fire, support some things, overwhelm others.” Fire could cook, heat, soften, harden, melt, sublime, transform, set in motion, and produce iron itself. The compass pointing north, the clock indicating the hours, all were accomplished simply by means of the ordinary elements, each of which “exceeded its own proper powers in action.” This was a form of mechanism that was not reductive, but really the reverse: a rising of mechanical parts to new powers, which could conceivably include the power to produce life.

Similarly, Harvey elsewhere defined “mechanics” as “that which overcame things by which Nature is overcome.” His examples were things having “little power of movement” in themselves that were nonetheless able to move great weights, such as a pulley. Mechanics, understood in this way, could include natural phenomena that overcame the usual course of nature, not just artificial ones. Harvey again mentioned the muscles. When he said that the muscles worked mechanically in this instance he meant that the muscles, like artificial devices such as a pulley, overcame the usual course of nature and moved great weights without themselves being weighty.

Motion, relatedly, was a term with various meanings, as Harvey himself emphasized. He noted many different kinds of local movement: the movement of a night-blooming tree and that of a heliotrope; the movements caused by a magnet and those caused by a rubbed piece of jet. In what were likely some notes for a treatise on the physiology of movement, he jotted down any form of local movement that came to mind, such as the presumably peristaltic and undeniably graphic “shit by degrees not by squirts.” He identified too, as a distinct form of movement, a kind of controlled escalation, as “in going forward, mounting up, with the consent of the intellect in a state of emotion.”

Harvey drew upon another form of casual motion to resolve another critical mystery in the generation of life: how did the sperm act upon the egg once it was no longer in contact with the egg? Like the apparently simultaneous occurrence of causally connected events, this quandary seemed to pose a problem for a properly “mechanical” anatomy. Invoking Aristotle, Harvey proposed that embryos arose from a kind of contagion, “a vital virus,” with which the sperm infected the egg. But after the initial moment of contact, once the contaminating element had disappeared and become “a nonentity,” Harvey wondered, how did the process continue? “How, I ask, does a nonentity act?” How could something no longer extant continue to act on a material entity? The process seemed to involve, too, a kind of action at a distance: “How does a thing which is not in contact fashion another thing like itself?”

Aristotle had invoked “automatic puppets” to explain precisely this seeming mystery. He had surmised that the initial contact at conception set off a succession of linked motions that constituted the development of the embryo. According to this model, as Harvey explained, the seed formed the fetus “by motion” transmitted through a kind of automatic mechanism. Harvey rejected this explanation along with a whole host of other traditional explanations by analogy: to clocks, to kingdoms governed by the mandates of their sovereigns, and to instruments used to produce works of art. All, he thought, were insufficient.

In their place, Harvey proposed a different analogy: one between the uterus and the brain. The two, he observed, were strikingly similar in structure, and a mechanical anatomy should correlate structures with physiological functions: “Where the same structure exists,” Harvey reasoned, there must be “the same function implanted.” The uterus, when ready to conceive, strongly resembled the “ventricles” of the brain, and the functions of each were called “conceptions.” Perhaps, then, these were essentially the same sort of process.

Harvey taught his anatomy and surgery students that the brain was a kind of workshop, a “manufactory.” Brains produced works of art by bringing an immaterial idea or form to matter. Perhaps a uterus produced an embryo in the same way, by means of a “plastic art” capable of bringing an idea or form to flesh. The form of an embryo existed in the uterus of the mother just as the form of a house existed in the brain of the builder. This would solve the apparent problems of action at a distance and nonentities acting upon material entities. The moment of insemination endowed the uterus with an ability to conceive embryos in the same way that education endowed the brain with the ability to conceive ideas. Once the seed disappeared, it no longer needed to act: the uterus itself took over the task of fashioning the embryo.

The idea that the uterus functioned like a brain, actively fashioning an embryo the way a brain fleshes out an idea, was for Harvey not only within the bounds of the “mechanical,” but a model that could actually rescue mechanism by eliminating the need for action at a distance.

*This excerpt has been adapted (without endnotes) from The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick        by Jessica Riskin (2016).

***

To read more about The Restless Clock, click here.

2. Publishers Weekly on Amitav Ghosh’s The Great Derangement


Though perhaps best known in the United States for his fiction, Bengali writer Amitav Ghosh has previously published several acclaimed works of non-fiction. His latest book The Great Derangement: Climate Change and the Unthinkable tackles an inescapably global theme: the violent wrath global warming will inflict on our civilization and generations to come, and the duty of fiction—as the cultural form most capable of imagining alternative futures and insisting another world is possible—to take action.

From a recent starred review in Publishers Weekly:

In his first work of long-form nonfiction in over 20 years, celebrated novelist Ghosh (Flood of Fire) addresses “perhaps the most important question ever to confront culture”: how can writers, scholars, and policy makers combat the collective inability to grasp the dangers of today’s climate crisis? Ghosh’s choice of genre is hardly incidental; among the chief sources of the “imaginative and cultural failure that lies at the heart of the climate crisis,” he argues, is the resistance of modern linguistic and narrative traditions—particularly the 20th-century novel—to events so cataclysmic and heretofore improbable that they exceed the purview of serious literary fiction. Ghosh ascribes this “Great Derangement” not only to modernity’s emphasis on this “calculus of probability” but also to notions of empire, capitalism, and democratic freedom. Asia in particular is “conceptually critical to every aspect of global warming,” Ghosh attests, outlining the continent’s role in engendering, conceptualizing, and mitigating ecological disasters in language that both thoroughly convinces the reader and runs refreshingly counter to prevailing Eurocentric climate discourse. In this concise and utterly enlightening volume, Ghosh urges the public to find new artistic and political frameworks to understand and reduce the effects of human-caused climate change, sharing his own visionary perspective as a novelist, scholar, and citizen of our imperiled world.
To read more about The Great Derangement, click here.

3. Jessica Riskin on The Restless Clock

Jessica Riskin’s The Restless Clock: A History of the Centuries-Long Argument Over What Makes Living Things Tick explores the history of a particular principle—that the life sciences should not ascribe agency to natural phenomena—and traces it all the way back to the seventeenth century and the automata of early modern Europe. At the same time, the book tells the story of dissenters to this precept, whose own compelling model cast living things not as passive but as active, self-making machines, in an attempt to naturalize agency rather than outsourcing it to theology’s “divine engineer.” In a recent video trailer for the book, Riskin explains the nuances of both sides’ arguments and accounts for nearly 300 years’ worth of approaches to nature and design, tracing questions of science and agency through Descartes, Leibniz, Lamarck, Darwin, and others.

From a review at Times Higher Ed:

The Restless Clock is a sweeping survey of the search for answers to the mystery of life. It begins with medieval automata – muttering mechanical Christs, devils rolling their eyes, cherubs “deliberately” aiming water jets at unsuspecting visitors who, in a still-mystical and religious era, half-believe that these contraptions are alive. Then come the Enlightenment android-builders and philosophers, Romantic poet-scientists, evolutionists, roboticists, geneticists, molecular biologists and more: a brilliant cast of thousands fills this encyclopedic account of the competing ideas that shaped the sciences of life and artificial intelligence.

A profile at The Human Evolution Blog:

To understand this unspoken arrangement between science and theology, you must first consider that the founding model of modern science, established during the Scientific Revolution of the seventeenth century, assumed and indeed relied upon the existence of a supernatural God. The founders of modern science, including people such as René Descartes, Isaac Newton and Robert Boyle, described the world as a machine, like a great clock, whose parts were made of inert matter, moving only when set in motion by some external (divine) force.

These thinkers insisted that one could not explain the movements of the great clock of nature by ascribing desires or tendencies or willful actions to its parts. That was against the rules. They banished any form of agency – purposeful or willful action – from nature’s machinery and from natural science. In so doing, they gave a monopoly on agency to an external god, leaving behind a fundamentally passive natural world. Henceforth, science would describe the passive machinery of nature, while questions of meaning, purpose and agency would be the province of theology.

And a piece at Library Journal:

The work of luminaries such as René Descartes, Gottfried Wilhelm Leibniz, Immanuel Kant, Jean-Baptiste Lamarck, and Charles Darwin is discussed, as well as that of contemporaries including Daniel Dennett, Richard Dawkins, and Stephen Jay Gould. But there are also the lesser knowns: the clockmakers, court mechanics, artisans, and their fantastic assortment of gadgets, automata, and androids that stood as models for the nascent life sciences. Riskin’s accounts of these automata will come as a revelation to many readers, as she traces their history from late medieval, early Renaissance clock- and organ-driven devils and muttering Christs in churches to the robots of the post-World War II era. Fascinating on many levels, this book is accessible enough for a science-minded lay audience yet useful for students and scholars.

To read more about The Restless Clock, click here.

4. Michael Riordan on United Technologies


Michael Riordan, coauthor of Tunnel Visions: The Rise and Fall of the Superconducting Supercollider, penned a recent op-ed for the New York Times on United Technologies and its subsidiary, the air-conditioning equipment maker Carrier Corporation, which plans “to transfer its Indianapolis plant’s manufacturing operations and about 1,400 jobs to Monterrey, Mexico.” Read a brief excerpt below, in which the author begins to untangle a web of corporate (mis)behavior, taxpayer investment, government policy, job exports—and their consequences.

***

The transfers of domestic manufacturing jobs to Mexico and Asia have benefited Americans by bringing cheaper consumer goods to our shores and stores. But when the victims of these moves can find only lower-wage jobs at Target or Walmart, and residents of these blighted cities have much less money to spend, is that a fair distribution of the savings and costs?

Recognizing this complex phenomenon, I can begin to understand the great upwelling of working-class support for Bernie Sanders and Donald J. Trump — especially for the latter in regions of postindustrial America left behind by these jarring economic dislocations.

And as a United Technologies shareholder, I have to admit to a gnawing sense of guilt in unwittingly helping to foster this job exodus. In pursuing returns, are shareholders putting pressure on executives to slash costs by exporting good-paying jobs to developing nations?

The core problem is that shareholder returns — and executive rewards — became the paramount goals of corporations beginning in the 1980s, as Hedrick Smith reported in his 2012 book, “Who Stole the American Dream?” Instead of rolling some of the profits back into building their industries and educating workers, executives began cutting costs and jobs to improve their bottom lines, often using the proceeds to raise dividends or buy back stock, which United Technologies began doing extensively last year.

And an easy way to boost profits is to transfer jobs to other countries.

To read more about Tunnel Visions, click here.

5. Patterns in Nature is PW’s Most Beautiful Book of 2016


It might only be April, but there’s already one foregone conclusion: Philip Ball’s Patterns in Nature is “The Most Beautiful Book of 2016” at Publishers Weekly. As Ball writes:

The topic is inherently visual, concerned as it is with the sheer splendor of nature’s artistry, from snowflakes to sand dunes to rivers and galaxies. But I was frustrated that my earlier efforts, while delving into the scientific issues in some depth, never secured the resources to do justice to the imagery. This is a science that, heedless of traditional boundaries between physics, chemistry, biology and geology, must be seen to be appreciated. We have probably already sensed the deep pattern of a tree’s branches, of a mackerel sky laced with clouds, of the organized whirlpools in turbulent water. Just by looking carefully at these things, we are halfway to an answer.

I am thrilled at last to be able to show here the true riches of nature’s creativity. It is not mere mysticism to perceive profound unity in the repetition of themes that these images display. Richard Feynman, a scientist not given to flights of fancy, expressed it perfectly: “Nature uses only the longest threads to weave her patterns, so each small piece of her fabric reveals the organization of the entire tapestry.”

You can read more at PW and check out samples from the book’s more than 250 color photographs, or visit a recent profile in the Wall Street Journal here.

To read more about Patterns in Nature, click here.

6. The Normalization of Deviance


In his piece for the most recent issue of the Atlantic on the origins of the corporate mea culpa and its promulgation of evils, Jerry Useem turned to the theory and research of Diane Vaughan, including that drawn from her book The Challenger Launch Decision:

The sociologist Diane Vaughan coined the phrase the normalization of deviance to describe a cultural drift in which circumstances classified as “not okay” are slowly reclassified as “okay.” In the case of the Challenger space-shuttle disaster—the subject of a landmark study by Vaughan—damage to the crucial O‑rings had been observed after previous shuttle launches. Each observed instance of damage, she found, was followed by a sequence “in which the technical deviation of the [O‑rings] from performance predictions was redefined as an acceptable risk.” Repeated over time, this behavior became routinized into what organizational psychologists call a “script.” Engineers and managers “developed a definition of the situation that allowed them to carry on as if nothing was wrong.” To clarify: They were not merely acting as if nothing was wrong. They believed it, bringing to mind Orwell’s concept of doublethink, the method by which a bureaucracy conceals evil not only from the public but from itself.

More explicitly, for Vaughan, the O-ring deviation decision unfolded through the actions and observations of key NASA personnel and aeronautical engineers, who grew acclimated to a culture in which high risk was the norm and which fostered an increasing descent into poor decision-making. As the book’s jacket (and Useem) note, “[Vaughan] reveals how and why NASA insiders, when repeatedly faced with evidence that something was wrong, normalized the deviance so that it became acceptable to them.”

You can read more about The Challenger Launch Decision here, and the Atlantic piece in full on their site.

7. From Aristotle to South Park: An online seminar with Randy Olson


In Houston, We Have a Narrative, consummate storyteller—and Hollywood screenwriter, former scientist, and communications expert—Randy Olson conveys his no-nonsense, results-oriented approach to writing about science, the stuff of some of our greatest plots. On December 1, 2015, at 2PM EST, Olson will be leading an hour-long, online seminar for the AAAS (the American Association for the Advancement of Science, the world’s largest general scientific society). In addition to conveying the fascinating journey of how he left a tenured professorship in marine biology to write for the movies, Olson will let you know why—and, but, therefore—how.

From the AAAS’s description:

He had a single goal — the search for something that might improve the communication of science. He found it in a narrative template he crafted and labeled as “The ABT.” The ABT is adapted from the co-creators of the Emmy and Peabody award-winning animated series, South Park. In a 2011 documentary about the show, they talked about their “Rule of Replacing” which they use for editing scripts. Their rule involves replacing the word “and” with “but” or “therefore.” From this Olson devised his “And, But, Therefore” template (the ABT). This has become the central tool for his new book, “Houston, We Have A Narrative,” his work with individual scientists, and his Story Circles Narrative Training program he has been developing over the past year with NIH and USDA. In this webinar, co-sponsored by the Society for Conservation Biology and the American Geophysical Union/AGU’s Sharing Science program, he will present what he has termed “The ABT Framework” which refers to “the ABT way of thinking.”
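As a playful illustration only—the connective words come from Olson’s ABT template as described above, but the function and the example sentences below are invented here, not drawn from Olson or the AAAS—here is a minimal sketch of the “And, But, Therefore” structure in Python:

```python
# A hypothetical, minimal sketch of the ABT ("And, But, Therefore") narrative
# template described above. Only the connectives reflect Olson's template; the
# function and the example beats are invented for illustration.
def abt(exposition: str, more_exposition: str, complication: str, resolution: str) -> str:
    """Join four narrative beats with the AND / BUT / THEREFORE connectives."""
    return f"{exposition}, AND {more_exposition}, BUT {complication}, THEREFORE {resolution}."

print(abt(
    "Scientists gather compelling data",
    "they publish it in technical journals",
    "the public rarely hears the story behind the findings",
    "researchers need a simple narrative structure for talking about their work",
))
```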

You can sign up for the webinar (12/1 at 2PM, EST) here.

To read more about Houston, We Have a Narrative, click here.

 

8. The Union of Concerned Scientists on Randy Olson


Randy Olson was once a marine biologist, with one foot in academia, a screenwriting dream, and the uncanny ability to communicate complicated science via narratives that used the foundations of story to draw readers in and keep them engaged. Now one of our most revered interlocutors of how science is understood and appreciated, Olson recently published Houston, We Have a Narrative: Why Science Needs Story, which takes readers through his “And, But, Therefore” principle of writing. In addition to delivering a TED talk on the ABT method, Olson was recently the subject of a review/profile for the Union of Concerned Scientists, in a piece that details his book’s inspiration and operating themes.

From the Equation blog at the Union of Concerned Scientists:

Scientists who want to succeed with Olson’s methods will have to not only read and process what he has to say, but also commit to thinking about how to communicate their work more effectively over time. . . . This isn’t an add-on to doing good science, either, Olson argues. Scientists are born storytellers, trying to make sense of data. Olson writes that even the humble scientific abstract benefits from adhering to an ABT structure and he presents several convincing case studies to underscore this point.

He challenges readers to re-examine what a story really is in the context of science. For instance, he chronicles how Watson and Crick told a good story when they challenged the old model of what DNA looks like. He also tracks the history of IMRAD, the now-accepted standard for how one “tells a story” in the scientific literature: introduction, methods, results, and discussion. And he lays out how positive and negative results correlate to archetypal plot structures.

It’s heady stuff, for sure, but it’s also what scientists and science communicators need to hear: Effective communication and storytelling are not optional add-ons for research; they are inherent to the research process itself.

Video from Olson’s earlier appearance at TED:

To read more about Houston, We Have a Narrative, click here.

9. Joanna Kempner on Oliver Sacks and migraines


Joanna Kempner’s Not Tonight: Migraine and the Politics of Gender and Health confronts our tendency to dismiss the migraine as an ailment de la femme, subject to the gendered constraints surrounding how we talk about—as well as legislate and alleviate—pain. In the book, Kempner traces the symptoms of headache-like disorders, which often present no set of objective symptoms but instead a mix of visual and somatic sensitivities, from the nineteenth-century origins of the migraine, through its reputation in the 1940s for soliciting the “migraine personality” (code for so-called uptight, neurotic women), forward to present-day sufferers. A couple of weeks ago, following the death of neurologist and writer Oliver Sacks, Kempner published a piece at the Migraine blog on Sacks’s lesser-known first book: called Migraine, it drew upon Sacks’s experience working at Montefiore Hospital in the Bronx, home to the nation’s first headache clinic, and reflected on the neuropsychological effects of migraines.

From Kempner’s post:

The book itself was a tour de force. The backbone of the text is a thorough and eloquent overview of the various forms of migraine (as they were understood in 1970), peppered throughout with case studies from Sacks’ clinical practice. But what made Migraine different from other texts on the subject were Sacks’ unique observations about the disorder, within which he saw “an entire encyclopedia of neurology.” Foreshadowing his future interests in hallucinations and the nature of consciousness, Sacks devoted a large portion of the text to migraine auras, describing in detail both the variety of visual and sensory disturbances that may be experienced and the affective changes that can accompany aura: déjà vu, existential dread, anxiety, or delirium. That he illustrated these discussions with what might have been the first collection of “migraine art” made the book particularly unusual and innovative. Paintings drawn by people who had experienced migraine aura enabled Sacks to visually describe what aura felt like.

Migraine, however, is a book that ought to be read and understood as a product of its time. In 1970, when it was published, psychosomatic medicine ruled headache medicine. It was a time when some headache specialists thought it was perfectly acceptable to attribute migraine solely to rage or personality flaws of the patient. Sacks, importantly, took the position that migraine was always physiological in nature and he steadfastly rejected the “migraine personality”—an idea popular at the time that held that people with migraine were obsessive, Type-A characters. However, Sacks had not given up the psychological completely. He argued that migraine served important psychological functions, for example providing respite for patients. He also warned that, although the migraine personality may be myth, people with migraine had many other problematic personality types that had to be dealt with at the clinic. So, although Sacks was a progressive physician in many ways, reading Migraine now can sometimes be a jarring experience.

One thing is for sure. Sacks’ trademark empathy and compassion for patients shines throughout his work on migraine.

To read more about Not Tonight, click here.

To read an excerpt from the book, click here.

10. Excerpt: Elephant Don


An excerpt from Elephant Don: The Politics of a Pachyderm Posse 

by Caitlin O’Connell

“Kissing the Ring”

Sitting in our research tower at the water hole, I sipped my tea and enjoyed the late morning view. A couple of lappet-faced vultures climbed a nearby thermal in the white sky. A small dust devil of sand, dry brush, and elephant dung whirled around the pan, scattering a flock of guinea fowl in its path. It appeared to be just another day for all the denizens of Mushara water hole—except the elephants. For them, a storm of epic proportions was brewing.

It was the beginning of the 2005 season at my field site in Etosha National Park, Namibia—just after the rainy period, when more elephants would be coming to Mushara in search of water—and I was focused on sorting out the dynamics of the resident male elephant society. I was determined to see if male elephants operated under different rules here than in other environments and how this male society compared to other male societies in general. Among the many questions I wanted to answer was how ranking was determined and maintained and for how long the dominant bull could hold his position at the top of the hierarchy.

While observing eight members of the local boys’ club arrive for a drink, I immediately noticed that something was amiss—these bulls weren’t quite up to their usual friendly antics. There was an undeniable edge to the mood of the group.

The two youngest bulls, Osh and Vincent Van Gogh, kept shifting their weight back and forth from shoulder to shoulder, seemingly looking for reassurance from their mid- and high-ranking elders. Occasionally, one or the other held its trunk tentatively outward—as if to gain comfort from a ritualized trunk-to-mouth greeting.

The elders completely ignored these gestures, offering none of the usual reassurances such as a trunk-to-mouth in return or an ear over a youngster’s head or rear. Instead, everyone kept an eye on Greg, the most dominant member of the group. And for whatever reason, Greg was in a foul temper. He moved as if ants were crawling under his skin.

Like many other animals, elephants form a strict hierarchy to reduce conflict over scarce resources, such as water, food, and mates. In this desert environment, it made sense that these bulls would form a pecking order to reduce the amount of conflict surrounding access to water, particularly the cleanest water.

At Mushara water hole, the best water comes up from the outflow of an artesian well, which is funneled into a cement trough at a particular point. As clean water is more palatable to the elephant and as access to the best drinking spot is driven by dominance, scoring of rank in most cases is made fairly simple—based on the number of times one bull wins a contest with another by usurping his position at the water hole, by forcing him to move to a less desirable position in terms of water quality, or by changing trajectory away from better-quality water through physical contact or visual cues.
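The scoring just described is, in effect, a running tally of who displaces whom. As a rough illustration only—the bull names and contest records below are hypothetical, and none of this code comes from O’Connell’s study—here is a minimal sketch of that bookkeeping in Python:

```python
# Hypothetical sketch of the rank scoring described above: each record names the
# winner and loser of one displacement at the trough, and bulls are ranked by how
# many distinct rivals they have displaced. Names and contests are invented.
from collections import defaultdict

contests = [
    ("Greg", "Kevin"), ("Greg", "Torn Trunk"), ("Greg", "Tim"),
    ("Kevin", "Torn Trunk"), ("Kevin", "Tim"), ("Torn Trunk", "Tim"),
]

displaced = defaultdict(set)   # bull -> set of rivals he has displaced
bulls = set()
for winner, loser in contests:
    displaced[winner].add(loser)
    bulls.update((winner, loser))

# More distinct rivals displaced = higher rank (ties broken alphabetically).
ranking = sorted(bulls, key=lambda b: (-len(displaced[b]), b))
for rank, bull in enumerate(ranking, start=1):
    print(f"{rank}. {bull}: displaced {sorted(displaced[bull])}")
```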

Cynthia Moss and her colleagues had figured out a great deal about dominance in matriarchal family groups. Their long-term studies in Amboseli National Park showed that the top position in the family was passed on to the next oldest and wisest female, rather than to the offspring of the most dominant individual. Females formed extended social networks, with the strongest bonds being found within the family group. Then the network branched out into bond groups, and beyond that into associated groups called clans. Branches of these networks were fluid in nature, with some group members coming together and others spreading out to join more distantly related groups in what had been termed a fission-fusion society.

Not as much research had been done on the social lives of males, outside the work by Joyce Poole and her colleagues in the context of musth and one-on-one contests. I wanted to understand how male relationships were structured after bulls left their maternal family groups as teens, when much of their adult lives was spent away from their female family. In my previous field seasons at Mushara, I’d noticed that male elephants formed much larger and more consistent groups than had been reported elsewhere and that, in dry years, lone bulls were not as common here as had been recorded in other research sites.

Bulls of all ages were remarkably affiliative—or friendly—within associated groups at Mushara. This was particularly true of adolescent bulls, which were always touching each other and often maintained body contact for long periods. And it was common to see a gathering of elephant bulls arrive together in one long dusty line of gray boulders that rose from the tree line and slowly morphed into elephants. Most often, they’d leave in a similar manner—just as the family groups of females did.

The dominant bull, Greg, most often at the head of the line, is distinguishable by the two square-shaped notches out of the lower portion of his left ear. But there is something deeper that differentiates him, something that exhibits his character and makes him visible from a long way off. This guy has the confidence of royalty—the way he holds his head, his casual swagger: he is made of kingly stuff. And it is clear that the others acknowledge his royal rank as his position is reinforced every time he struts up to the water hole to drink.

Without fail, when Greg approaches, the other bulls slowly back away, allowing him access to the best, purest water at the head of the trough—the score having been settled at some earlier period, as this deference is triggered without challenge or contest almost every time. The head of the trough is equivalent to the end of the table and is clearly reserved for the top-ranking elephant—the one I can’t help but refer to as the don since his subordinates line up to place their trunks in his mouth as if kissing a Mafioso don’s ring.

As I watched Greg settle in to drink, each bull approached in turn with trunk outstretched, quivering in trepidation, dipping the tip into Greg’s mouth. It was clearly an act of great intent, a symbolic gesture of respect for the highest-ranking male. After performing the ritual, the lesser bulls seemed to relax their shoulders as they shifted to a lower-ranking position within the elephantine equivalent of a social club. Each bull paid his respects and then retreated. It was an event that never failed to impress me—one of those reminders in life that maybe humans are not as special in our social complexity as we sometimes like to think—or at least that other animals may be equally complex. This male culture was steeped in ritual.

Greg takes on Kevin. Both bulls face each other squarely, with ears held out. Greg’s cutout pattern in the left ear makes him very recognizable.

 

But today, no amount of ritual would placate the don. Greg was clearly agitated. He was shifting his weight from one front foot to the other in jerky movements and spinning his head around to watch his back, as if someone had tapped him on the shoulder in a bar, trying to pick a fight.

The midranking bulls were in a state of upheaval in the presence of their pissed-off don. Each seemed to be demonstrating good relations with key higher-ranking individuals through body contact. Osh leaned against Torn Trunk on his one side, and Dave leaned in from the other, placing his trunk in Torn Trunk’s mouth. The most sought-after connection was with Greg himself, of course, who normally allowed lower-ranking individuals like Tim to drink at the dominant position with him.

Greg, however, was in no mood for the brotherly “back slapping” that ordinarily took place. Tim, as a result, didn’t display the confidence that he generally had in Greg’s presence. He stood cowering at the lowest-ranking position at the trough, sucking his trunk, as if uncertain of how to negotiate his place in the hierarchy without the protection of the don.

Finally, the explanation for all of the chaos strode in on four legs. It was Kevin, the third-ranking bull. His wide-splayed tusks, perfect ears, and bald tail made him easy to identify. And he exhibited the telltale sign of musth, as urine was dribbling from his penis sheath. With shoulders high and head up, he was ready to take Greg on.

A bull entering the hormonal state of musth was supposed to experience a kind of “Popeye effect” that trumped established dominance patterns—even the alpha male wouldn’t risk challenging a bull elephant with the testosterone equivalent of a can of spinach on board. In fact, there are reports of musth bulls having on the order of twenty times the normal amount of testosterone circulating in their blood. That’s a lot of spinach.

Musth manifests itself in a suite of exaggerated aggressive displays, including curling the trunk across the brow with ears waving—presumably to facilitate the wafting of a musthy secretion from glands in the temporal region—all the while dribbling urine. The message is the elephant equivalent of “don’t even think about messing with me ’cause I’m so crazy-mad that I’ll tear your frickin’ head off”—a kind of Dennis Hopper approach to negotiating space.

Musth—a Hindi word derived from the Persian and Urdu word “mast,” meaning intoxicated—was first noted in the Asian elephant. In Sufi philosophy, a mast (pronounced “must”) was someone so overcome with love for God that in their ecstasy they appeared to be disoriented. The testosterone-heightened state of musth is similar to the phenomenon of rutting in antelopes, in which all adult males compete for access to females under the influence of a similar surge of testosterone that lasts throughout a discrete season. During the rutting season, roaring red deer and bugling elk, for example, aggressively fight off other males in rut and do their best to corral and defend their harems in order to mate with as many does as possible.

The curious thing about elephants, however, is that only a few bulls go into musth at any one time throughout the year. This means that there is no discrete season when all bulls are simultaneously vying for mates. The prevailing theory is that this staggering of bulls entering musth allows lower-ranking males to gain a temporary competitive advantage over others of higher rank by becoming so acutely agitated that dominant bulls wouldn’t want to contend with such a challenge, even in the presence of an estrus female who is ready to mate. This serves to spread the wealth in terms of gene pool variation, in that the dominant bull won’t then be the only father in the region.

Given what was known about musth, I fully expected Greg to get the daylights beaten out of him. Everything I had read suggested that when a top-ranking bull went up against a rival that was in musth, the rival would win.

What makes the stakes especially high for elephant bulls is the fact that estrus is so infrequent among elephant cows. Since gestation lasts twenty-two months, and calves are only weaned after two years, estrus cycles are spaced at least four and as many as six years apart. Because of this unusually long interval, relatively few female elephants are ovulating in any one season. The competition for access to cows is stiffer than in most other mammalian societies, where almost all mature females would be available to mate in any one year. To complicate matters, sexually mature bulls don’t live within matriarchal family groups, and elephants range widely in search of water and forage, so finding an estrus female is that much more of a challenge for a bull.

Long-term studies in Amboseli indicated that the more dominant bulls still had an advantage, in that they tended to come into musth when more females were likely to be in estrus. Moreover, these bulls were able to maintain their musth period for a longer time than the younger, less dominant bulls. Although estrus was not supposed to be synchronous in females, more females tended to come into estrus at the end of the wet season, with babies appearing toward the middle of the wet season, twenty-two months later. So being in musth in this prime period was clearly an advantage.

Even if Greg enjoyed the luxury of being in musth during the peak period for estrus females, this was not his season. According to the prevailing theory, in this situation Greg would back down to Kevin.

As Kevin sauntered up to the water hole, the rest of the bulls backed away like a crowd avoiding a street fight. Except for Greg. Not only did Greg not back down, he marched clear around the pan with his head held to its fullest height, back arched, heading straight for Kevin. Even more surprising, when Kevin saw Greg approach him with this aggressive posture, he immediately started to back up.

Backing up is rarely a graceful procedure for any animal, and I had certainly never seen an elephant back up so sure-footedly. But there was Kevin, keeping his same even and wide gait, only in the reverse direction—like a four-legged Michael Jackson doing the moon walk. He walked backward with such purpose and poise that I couldn’t help but feel that I was watching a videotape playing in reverse—that Nordic-track style gait, fluidly moving in the opposite direction, first the legs on the one side, then on the other, always hind foot first.

Greg stepped up his game a notch as Kevin readied himself in his now fifty-yard retreat, squaring off to face his assailant head on. Greg puffed up like a bruiser and picked up his pace, kicking dust in all directions. Just before reaching Kevin, Greg lifted his head even higher and made a full frontal attack, lunging at the offending beast, thrusting his head forward, ready to come to blows.

In another instant, two mighty heads collided in a dusty clash. Tusks met in an explosive crack, with trunks tucked under bellies to stay clear of the collisions. Greg’s ears were pinched in the horizontal position—an extremely aggressive posture. And using the full weight of his body, he raised his head again and slammed at Kevin with his broken tusks. Dust flew as the musth bull now went in full backward retreat.

Amazingly, this third-ranking bull, doped up with the elephant equivalent of PCP, was getting his hide kicked. That wasn’t supposed to happen.

At first, it looked as if it would be over without much of a fight. Then, Kevin made his move and went from retreat to confrontation and approached Greg, holding his head high. With heads now aligned and only inches apart, the two bulls locked eyes and squared up again, muscles tense. It was like watching two cowboys face off in a western.

There were a lot of false starts, mock charges from inches away, and all manner of insults cast through stiff trunks and arched backs. For a while, these two seemed equally matched, and the fight turned into a stalemate.

But after holding his own for half an hour, Kevin’s strength, or confidence, visibly waned—a change that did not go unnoticed by Greg, who took full advantage of the situation. Aggressively dragging his trunk on the ground as he stomped forward, Greg continued to threaten Kevin with body language until finally the lesser bull was able to put a man-made structure between them, a cement bunker that we used for ground-level observations. Now, the two cowboys seemed more like sumo wrestlers, feet stamping in a sideways dance, thrusting their jaws out at each other in threat.

The two bulls faced each other over the cement bunker and postured back and forth, Greg tossing his trunk across the three-meter divide in frustration, until he was at last able to break the standoff, getting Kevin out in the open again. Without the obstacle between them, Kevin couldn’t turn sideways to retreat, as that would have left his body vulnerable to Greg’s formidable tusks. He eventually walked backward until he was driven out of the clearing, defeated.

In less than an hour, Greg, the dominant bull, had displaced a high-ranking bull in musth. Kevin’s hormonal state not only failed to intimidate Greg; in fact, just the opposite occurred: it appeared to fuel Greg into a fit of violence. Greg would not tolerate a usurpation of his power.

Did Greg have a superpower that somehow trumped musth? Or could he only achieve this feat as the most dominant individual within his bonded band of brothers? Perhaps paying respects to the don was a little more expensive than a kiss of the ring.

***

To read more about Elephant Don, click here.

11. Free e-book for April: Hybrid


Just in time for your ur-garden, our free e-book for April is Noel Kingsbury’s Hybrid: The History and Science of Plant Breeding.

***

Disheartened by the shrink-wrapped, Styrofoam-packed state of contemporary supermarket fruits and vegetables, many shoppers hark back to a more innocent time, to visions of succulent red tomatoes plucked straight from the vine, gleaming orange carrots pulled from loamy brown soil, swirling heads of green lettuce basking in the sun.

With Hybrid, Noel Kingsbury reveals that even those imaginary perfect foods are themselves far from anything that could properly be called natural; rather, they represent the end of a millennia-long history of selective breeding and hybridization. Starting his story at the birth of agriculture, Kingsbury traces the history of human attempts to make plants more reliable, productive, and nutritious—a story that owes as much to accident and error as to innovation and experiment. Drawing on historical and scientific accounts, as well as a rich trove of anecdotes, Kingsbury shows how scientists, amateur breeders, and countless anonymous farmers and gardeners slowly caused the evolutionary pressures of nature to be supplanted by those of human needs—and thus led us from sparse wild grasses to succulent corn cobs, and from mealy, white wild carrots to the juicy vegetables we enjoy today. At the same time, Kingsbury reminds us that contemporary controversies over the Green Revolution and genetically modified crops are not new; plant breeding has always had a political dimension.

A powerful reminder of the complicated and ever-evolving relationship between humans and the natural world, Hybrid will give readers a thoughtful new perspective on—and a renewed appreciation of—the cereal crops, vegetables, fruits, and flowers that are central to our way of life.

***
Download your copy of Hybrid, here.

12. Excerpt: Southern Provisions


An excerpt from Southern Provisions: The Creation and Revival of a Cuisine by David S. Shields

***

Rebooting a Cuisine

“I want to bring back Carolina Gold rice. I want there to be authentic Lowcountry cuisine again. Not the local branch of southern cooking incorporated.” That was Glenn Roberts in 2003 during the waning hours of a conference in Charleston exploring “The Cuisines of the Lowcountry and the Caribbean.”

When Jeffrey Pilcher, Nathalie Dupree, Marion Sullivan, Robert Lukey, and I brainstormed this meeting into shape over 2002, we paid scant attention to the word cuisine. I’m sure we all thought that it meant something like “a repertoire of refined dishes that inspired respect among the broad public interested in food.” We probably chose “cuisines” rather than “foodways” or “cookery” for the title because its associations with artistry would give it more splendor in the eyes of the two institutions—the College of Charleston and Johnson & Wales University—footing the administrative costs of the event. Our foremost concern was to bring three communities of people into conversation: culinary historians, chefs, and provisioners (i.e., farmers and fishermen) who produced the food cooked along the southern Atlantic coast and in the West Indies. Theorizing cuisine operated as a pretext.

Glenn Roberts numbered among the producers. The CEO of Anson Mills, he presided over the American company most deeply involved with growing, processing, and selling landrace grains to chefs. I knew him only by reputation. He grew and milled the most ancient and storied grains on the planet—antique strains of wheat, oats, spelt, rye, barley, farro, and corn—so that culinary professionals could make use of the deepest traditional flavor chords in cookery: porridges, breads, and alcoholic beverages. Given Roberts’s fascination with grains, expanding the scope of cultivars to include Carolina’s famous rice showed intellectual consistency. Yet I had always pegged him as a preservationist rather than a restorationist. He asked me, point-blank, whether I wished to participate in the effort to restore authentic Lowcountry cuisine.

Roberts pronounced cuisine with a peculiar inflection, suggesting that it was something that was and could be but that in 2003 did not exist in this part of the South. I knew in a crude way what he meant. Rice had been the glory of the southern coastal table, yet rice had not been commercially cultivated in the region since a hurricane breached the dykes and salted the soil of Carolina’s last commercial plantation in 1911. (Isolated planters on the Combahee River kept local stocks going until the Great Depression, and several families grew it for personal use until World War II, yet Carolina Gold rice disappeared on local grocers’ shelves in 1912.)

When Louisa Stoney and a network of Charleston’s grandes dames gathered their Carolina Rice Cook Book in 1901, the vast majority of ingredients were locally sourced. When John Martin Taylor compiled his Hoppin’ John’s Lowcountry Cooking in 1992, the local unavailability of traditional ingredients and a forgetfulness about the region’s foodways gave the volume a shock value, recalling the greatness of a tradition while alerting readers to its tenuous hold on the eating habits of the people.

Glenn Roberts had grown up tasting the remnants of the rice kitchen, his mother having mastered in her girlhood the art of Geechee black skillet cooking. In his younger days, Roberts worked on oyster boats, labored in fields, and cooked in Charleston restaurants, so when he turned to growing grain in the 1990s, he had a peculiar perspective on what he wished for: he knew he wanted to taste the terroir of the Lowcountry in the food. Because conventional agriculture had saturated the fields of coastal Carolina with pesticides, herbicides, and chemical fertilizers, he knew he had to restore the soil as well as bring Carolina Gold, and other crops, back into cultivation.

I told Roberts that I would help, blurting the promise before understanding the dimensions of what he proposed. Having witnessed the resurgence in Creole cooking in New Orleans and the efflorescence of Cajun cooking in the 1980s, and having read John Folse’s pioneering histories of Louisiana’s culinary traditions, I entertained romantic visions of lost foodways being restored and local communities being revitalized. My default opinions resembled those of an increasing body of persons: that fast food was aesthetically impoverished, that grocery preparations (snacks, cereals, and spreads) had sugared and salted themselves to a brutal lowest common denominator of taste, and that industrial agriculture was insuring indifferent produce by masking local qualities of soil with chemical supplementations. When I said “yes,” I didn’t realize that good intentions are a kind of stupidity in the absence of an attuned intuition of the problems at hand. When Roberts asked whether I would like to restore a cuisine, my thoughts gravitated toward the payoffs on the consumption end of things: no insta-grits made of GMO corn in my shrimp and grits; no farm-raised South American tiger shrimp. In short, something we all knew around here would be improved.

It never occurred to me that the losses in Lowcountry food had been so great that we all don’t know jack about the splendor that was, even with the aid of historical savants such as “Hoppin’ John” Taylor. Nor did I realize that traditional cuisines cannot be understood simply by reading old cookbooks; you can’t simply re-create recipes and—voilà! Roberts, being a grower and miller, had fronted the problem: cuisines had to be understood from the production side, from the farming, not just the cooking or eating. If the ingredients are mediocre, there will be no revelation on the tongue. There is only one pathway to understanding how the old planters created rice that excited the gastronomes of Paris—the path leading into the dustiest, least-used stacks in the archive, those holding century-and-a-half-old agricultural journals, the most neglected body of early American writings.

In retrospect, I understand why Roberts approached me and not some chef with a penchant for antiquarian study or some champion of southern cooking. While I was interested in culinary history, it was not my interest but my method that drew Roberts. He must’ve known at the time that I create histories of subjects that have not been explored; that I write “total histories” using only primary sources, finding, reading, and analyzing every extant source of information. He needed someone who could navigate the dusty archive of American farming, a scholar who could reconstruct how cuisine came to be from the ground up. He found me in 2003.

At first, questions tugged in too many directions. When renovating a cuisine, what is it, exactly, that is being restored? An aesthetic of plant breeding? A farming system? A set of kitchen practices? A gastronomic philosophy? We decided not to exclude questions at the outset, but to pursue anything that might serve the goals of bringing back soil, restoring cultivars, and renovating traditional modes of food processing. The understandings being sought had to speak to a practice of growing and kitchen creation. We should not, we all agreed, approach cuisine as an ideal, a theoretical construction, or a utopian possibility.

Our starting point was a working definition of that word I had used so inattentively in the title of the conference: cuisine. What is a cuisine? How does it differ from diet, cookery, or food? Some traditions of reflection on these questions were helpful. Jean-François Revel’s insistence in Culture and Cuisine that cuisines are regional, not national, because of the enduring distinctiveness of local ingredients, meshed with the agricultural preoccupations of our project. Sidney Mintz usefully observed that a population “eats that cuisine with sufficient frequency to consider themselves experts on it. They all believe, and care that they believe, that they know what it consists of, how it is made, and how it should taste. In short, a genuine cuisine has common social roots.” The important point here is consciousness. Cuisine becomes a signature of community and, as such, becomes a source of pride, a focus of debate, and a means of projecting an identity in other places to other people.

There is, of course, a commercial dimension to this. If a locale becomes famous for its butter (as northern New York did in the nineteenth century) or cod (as New England did in the eighteenth century), a premium is paid in the market for those items from those places. The self-consciousness about ingredients gives rise to an artistry in their handling, a sense of tact from long experience of taste, and a desire among both household and professional cooks to satisfy the popular demand for dishes by improving their taste and harmonizing their accompaniments at the table.

One hallmark of the maturity of a locale’s culinary artistry is its discretion when incorporating non-local ingredients with the products of a region’s field, forest, and waters. Towns and cities with their markets and groceries invariably served as places where the melding of the world’s commodities with a region’s produce took place. Cuisines have two faces: a cosmopolitan face, prepared by professional cooks; and a common face, prepared by household cooks. In the modern world, a cuisine is at least bimodal in constitution, with an urbane style and a country vernacular style. At times, these stylistic differences become so pronounced that they describe two distinct foodways—the difference between Creole and Cajun food and their disparate histories, for example. More frequently, an urban center creates its style by elaborating the bounty of the surrounding countryside—the case of Baltimore and the Tidewater comes to mind.

With a picture of cuisine in hand, Roberts and I debated how to proceed in our understanding. In 2004 the Carolina Gold Rice Foundation was formed with the express purpose of advancing the cultivation of land-race grains and insuring the repatriation of Carolina Gold. Dr. Merle Shepard of Clemson University (head of the Clemson Coastal Experimental Station at Charleston), Dr. Richard Schulze (who planted the first late twentieth-century crops of Carolina Gold on his wetlands near Savannah), Campbell Coxe (the most experienced commercial rice farmer in the Carolinas), Max E. Hill (historian and planter), and Mack Rhodes and Charles Duell (whose Middleton Place showcased the historical importance of rice on the Lowcountry landscape) formed the original nucleus of the enterprise.

It took two and a half years before we knew enough to reformulate our concept of cuisine and historically contextualize the Carolina Rice Kitchen well enough to map our starting point for the work of replenishment—a reboot of Lowcountry cuisine. The key insights were as follows: The enduring distinctiveness of local ingredients arose from very distinct sets of historical circumstances and a confluence of English, French Huguenot, West African, and Native American foodways. What is grown where, when, and for what occurred for very particular reasons. A soil crisis in the early nineteenth century particularly shaped the Lowcountry cuisine that would come, distinguishing it from food produced and prepared elsewhere.

The landraces of rice, wheat, oats, rye, and corn that were brought into agriculture in the coastal Southeast were, during the eighteenth century, planted as cash crops, those same fields being replanted season after season, refreshed only with manuring until the early nineteenth century. Then the boom in long staple Sea Island cotton, a very “exhausting” plant, pushed Lowcountry soil into crisis. (A similar crisis related to tobacco culture and soil erosion because of faulty plowing methods afflicted Maryland, Virginia, and North Carolina.) The soil crisis led to the depopulation of agricultural lands as enterprising sons went westward seeking newly cleared land, causing a decline in production, followed by rising farm debt and social distress. The South began to echo with lamentations and warnings proclaimed by a generation of agrarian prophets—John Taylor of Caroline County in Virginia, George W. Jeffreys of North Carolina, Nicholas Herbemont of South Carolina, and Thomas Spalding of Georgia. Their message: Unless the soil is saved; unless crop rotations that build nutrition in soil be instituted; unless agriculture be diversified—then the long-cultivated portions of the South will become a wasteland. In response to the crisis in the 1820s, planters formed associations; they published agricultural journals to exchange information; they read; they planted new crops and employed new techniques of plowing and tilling; they rotated, intercropped, and fallowed fields. The age of experiment began in American agriculture with a vengeance.

The Southern Agriculturist magazine (founded 1828) operated as the engine of changes in the Lowcountry. In its pages, a host of planter-contributors published rotations they had developed for rice, theories of geoponics (soil nourishment), alternatives to monoculture, and descriptions of the world of horticultural options. Just as Judge Jesse Buel in Albany, New York, systematized the northern dairy farm into a self-reliant entity with livestock, pastures, fields, orchard, garden, and dairy interacting for optimum benefit, southern experimentalists conceived of the model plantation. A generation of literate rice planters—Robert F. W. Allston, J. Bryan, Calvin Emmons, James Ferguson, William Hunter, Roswell King, Charles Munnerlyn, Thomas Pinckney, and Hugh Rose—contributed to the conversation, overseen by William Washington, chair of the Committee on Experiments of the South Carolina Agricultural Society. Regularizing the crop rotations, diversifying cultivars, and rationalizing plantation operations gave rise to the distinctive set of ingredients that coalesced into what came to be called the Carolina Rice Kitchen, the cuisine of the Lowcountry.

Now, in order to reconstruct the food production of the Lowcountry, one needs a picture of how the plantations and farms worked internally with respect to local markets, in connection with regional markets, and in terms of commodity trade. One has to know how the field crops, kitchen garden, flower and herb garden, livestock pen, dairy, and kitchen cooperated. Within the matrix of uses, any plant or animal that could be employed in multiple ways would be more widely raised in a locality and more often cycled into cultivation. The sweet potato, for instance, performed many tasks on the plantation: It served as winter feed for livestock, its leaves as fodder; it formed one of the staple foods for slaves; it sold well as a local-market commodity for the home table; and its allelopathic (growth-inhibiting) chemistry made it useful in weed suppression. Our first understandings of locality came by tracing the multiple transits of individual plants through farms, markets, kitchens, and seed brokerages.

After the 1840s, when experiments stabilized into conventions on Lowcountry plantations, certain items became fixtures in the fields. Besides the sweet potato, one found benne (low-oil West African sesame), corn, colewort/kale/collards, field peas, peanuts, and, late in the 1850s, sorghum. Each one of these plant types would undergo intensive breeding trials, creating new varieties that (a) performed more good for the soil and welfare of the rotation’s other crops; (b) attracted more purchasers at the market; (c) tasted better to the breeder or his livestock; (d) grew more productively than other varieties; and (e) proved more resistant to drought, disease, and infestation than other varieties.

From 1800 to the Civil War, the number of vegetables, the varieties of a given vegetable, the number of fruit trees, the number of ornamental flowers, and the numbers of cattle, pigs, sheep, goat, and fowl breeds all multiplied prodigiously in the United States, in general, and the Lowcountry, in particular. The seedsman, the orchardist, the livestock breeder, the horticulturist—experimentalists who maintained model farms, nurseries, and breeding herds—became fixtures of the agricultural scene and drove innovation. One such figure was J. V. Jones of Burke County, Georgia, a breeder of field peas in the 1840s and ’50s. In the colonial era, field peas (cowpeas) grew in the garden patches of African slaves, along with okra, benne, watermelon, and guinea squash. Like those other West African plants, their cultivation was taken up by white planters. At first, they grew field peas as fodder for livestock because the pea inspired great desire among hogs, cattle, and horses. (Hence the popular name cowpea.) Early in the nineteenth century, growers noticed that it improved soils strained by “exhausting plants.” With applications as a green manure, a table pea, and livestock feed, the field pea inspired experiments in breeding with the ends of making it less chalky tasting, more productive, and less prone to mildew when being dried to pea hay. Jones reported on his trials. He grew every sort of pea he could obtain, crossing varieties in the hopes of breeding a pea with superior traits.

  1. Blue Pea, hardy and prolific. A crop of this pea can be matured in less than 60 days from date of planting the seed. Valuable.
  2. Lady, matures with No. 1. Not so prolific and hardy. A delicious table pea.
  3. Rice, most valuable table variety known, and should be grown universally wherever the pea can make a habitation.
  4. Relief, another valuable table kind, with brown pods.
  5. Flint Crowder, very profitable.
  6. Flesh, very profitable.
  7. Sugar, very profitable.
  8. Grey, very profitable. More so than 5, 6, 7. [Tory Pea]
  9. Early Spotted, brown hulls or pods.
  10. Early Locust, brown hulls, valuable.
  11. Late Locust, purple hulls, not profitable.
  12. Black Eyes, valuable for stock.
  13. Early Black Spotted, matures with nos. 1, 2, and 3.
  14. Goat, so called, I presume, from its spots. Very valuable, and a hard kind to shell.
  15. Small Black, very valuable, lies on the field all winter with the power of reproduction.
  16. Large Black Crowder, the largest pea known, and produces great and luxuriant vines. A splendid variety.
  17. Brown Spotted, equal to nos. 6, 7, 8 and 14.
  18. Claret Spotted, equal to nos. 6, 7, 8 and 14.
  19. Large Spotted, equal to nos. 6, 7, 8 and 14.
  20. Jones Little Claret Crowder. It is my opinion a greater quantity in pounds and bushels can be grown per acre of this pea, than any other grain with the knowledge of man. Matures with nos. 1, 2, 3, 9 and 13, and one of the most valuable.
  21. Jones Black Hull, prolific and profitable.
  22. Jones Yellow Hay, valuable for hay only.
  23. Jones no. 1, new and very valuable; originated in the last 2 years.
  24. Chickasaw, its value is as yet unknown. Ignorance has abused it.
  25. Shinney or Java, this is the Prince of Peas.

The list dramatizes the complex of qualities that bear on the judgments of plant breeders—flavor, profitability, feed potential, processability, ability to self-seed, productivity, and utility as hay. And it suggests the genius of agriculture in the age of experiment—the creation of a myriad of tastes and uses.

At this juncture, we confront a problem of culinary history. If one writes the history of taste as it is usually written, using the cookbook authors and chefs as the spokespersons for developments, one will not register the multiple taste options that pea breeders created. Recipes with gnomic reticence call for field peas (or cowpeas). One would not know, for example, that the Shinney pea, the large white lady pea, or the small white rice pea would be most suitable for this or that dish. It is only in the agricultural literature that we learn that the Sea Island red pea was the traditional pea used in rice stews, or that the red Tory pea with molasses and a ham hock made a dish rivaling Boston baked beans.

Growers drove taste innovation in American grains, legumes, and vegetables during the age of experiment. And their views about texture, quality, and application were expressed in seed catalogs, agricultural journals, and horticultural handbooks. If one wishes to understand what was distinctive about regional cookery in the United States, the cookbook supplies but a partial apprehension at best. New England’s plenitude of squashes, to take another example, is best comprehended by reading James J. H. Gregory’s Squashes: How to Grow Them (1867), not Mrs. N. Orr’s De Witt’s Connecticut Cook Book, and Housekeeper’s Assistant (1871). In the pages of the 1869 annual report of the Massachusetts Board of Agriculture, we encounter the expert observation, “As a general rule, the Turban and Hubbard are too grainy in texture to enter the structure of that grand Yankee luxury, a squash pie. For this the Marrow [autumnal marrow squash] excels, and this, I hold, is now the proper sphere of this squash; it is now a pie squash.” No cookbook contains so trenchant an assessment, and when the marrow squash receives mention, it is suggested only as a milder-flavored alternative to the pumpkin pie.

Wendell Berry’s maxim that “eating is an agricultural act” finds support in nineteenth-century agricultural letters. The aesthetics of planting, breeding, and eating formed a whole sense of the ends of agriculture. No cookbook would tell you why a farmer chose a clay pea to intercrop with white flint corn, or a lady pea, or a black Crowder, but a reader of the agricultural press would know that the clay pea would be plowed under with the corn to fertilize a field (a practice on some rice fields every fourth year), that the lady pea would be harvested for human consumption, and that the black Crowder would be cut for cattle feed. Only reading a pea savant like J. V. Jones would one know that a black-eyed pea was regarded as “valuable for stock” but too common tasting to recommend it for the supper table.

When the question that guides one’s reading is which pea or peas should be planted today to build the nitrogen level of the soil and complement the grains and vegetables of Lowcountry cuisines, the multiplicity of varieties suggests an answer. That J. V. Jones grew at least four of his own creations, as well as twenty-one other reputable types, indicates that one should grow several sorts of field peas, with each sort targeted to a desired end. The instincts of southern seed savers such as Dr. David Bradshaw, Bill Best, and John Coykendall were correct—to preserve the richness of southern pea culture, one had to keep multiple strains of cowpea viable. Glenn Roberts and the Carolina Gold Rice Foundation have concentrated on two categories of peas—those favored in rice dishes and those known for soil replenishment. The culinary peas are the Sea Island red pea, known for traditional dishes such as reezy peezy, red pea soup, and red pea gravy; and the rice pea, cooked as an edible pod pea, for most hoppin’ John recipes and for the most refined version of field peas with butter. For soil building, iron and clay peas have been a mainstay of warm-zone agriculture since the second half of the nineteenth century.

It should be clear by this juncture that this inquiry differs from the projects most frequently encountered in food history. Here, the value of a cultivar or dish does not reside in its being a heritage marker, a survival from an originating culture previous to its uses in southern planting and cooking. The Native American origins of a Chickasaw plum, the African origins of okra, the Swedish origins of the rutabaga don’t much matter for our purposes. This is not to discount the worth of the sort of etiological food genealogies that Gary Nabhan performs with the foods of Native peoples, that Karen Hess performed with the cooking of Jewish conversos, or that Jessica Harris and others perform in their explorations of the food of the African diaspora, but the hallmark of the experimental age was change in what was grown—importation, alteration, ramification, improvement, and repurposing. The parched and boiled peanuts/pindars of West Africa were used for oil production and peanut butter. Sorghum, or imphee grass, employed in beer brewing and making flat breads in West Africa and Natal became in the hands of American experimentalists a sugar-producing plant. That said, the expropriations and experimental transformations did not entirely supplant traditional uses. The work of agronomist George Washington Carver at the Tuskegee Agricultural Experiment Station commands particular notice because it combines its novel recommendations for industrial and commercial uses of plants as lubricants, blacking, and toothpaste, with a thoroughgoing recovery of the repertoire of Deep South African American sweet potato, cowpea, and peanut cookery in an effort to present the maximum utility of the ingredients.

While part of this study does depend on the work that Joyce E. Chaplin and Max Edelson have published on the engagement of southern planters with science, it departs from the literature concerned with agricultural reform in the South. Because this exploration proceeds from the factum brutum of an achieved regional cuisine produced as the result of agricultural innovations, market evolutions, and kitchen creativity, it stands somewhat at odds with that literature, which argues the ineffectuality of agricultural reform. Works in this tradition—Charles G. Steffen’s “In Search of the Good Overseer” or William M. Mathew’s Edmund Ruffin and the Crisis of Slavery in the Old South—argue that what passed for innovation in farming was a charade, and that soil restoration and crop diversification were fitful at best. When a forkful of hominy made from the white flint corn perfected in the 1830s on the Sea Islands melts on one’s tongue, there is little doubting that something splendid has been achieved.

The sorts of experiments that produced white flint corn, the rice pea, and the long-grain form of Carolina Gold rice did not cease with the Civil War. Indeed, with the armistice, the scope and intensity of experimentation increased as the economies of the coast rearranged from staple production to truck farming. The reliance on agricultural improvement would culminate in the formation of the network of agricultural experimental stations in the wake of the Hatch Act of 1887. One finding of our research has been that the fullness of Lowcountry agriculture and the efflorescence of Lowcountry cuisine came about during the Reconstruction era, and its heyday continued into the second decade of the twentieth century.

The Lowcountry was in no way exceptional in its embrace of experiments and improvement or insular in its view of what should be grown. In the 1830s, when Carolina horticulturists read about the success that northern growers had with Russian strains of rhubarb, several persons attempted with modest success to grow it in kitchen gardens. Readers of Alexander von Humboldt’s accounts of the commodities of South America experimented with Peruvian quinoa in grain rotations. Because agricultural letters and print mediated the conversations of the experimentalists, and because regional journals reprinted extensively from other journals from other places, a curiosity about the best variety of vegetables, fruits, and berries grown anywhere regularly led many to secure seed from northern brokers (only the Landreth Seed Company of Pennsylvania maintained staff in the Lowcountry), or seedsmen in England, France, and Germany. Planters regularly sought new sweet potato varieties from Central and South America, new citrus fruit from Asia, and melons wherever they might be had.

Because of the cosmopolitan sourcing of things grown, the idea of a regional agriculture growing organically out of the indigenous productions of a geographically delimited zone becomes questionable. (The case of the harvest of game animals and fish is different.) There is, of course, a kind of provocative poetry to reminding persons, as Gary Nabhan has done, that portions of the Southeast once regarded the American chestnut as a staple, and to food-mapping an area as “Chestnut Nation,” yet it has little resonance for a population that has never tasted an American chestnut in their lifetime. Rather, region makes sense only as a geography mapped by consciousness—by a community’s attestation in naming, argumentation, and sometimes attempts at legal delimitation of a place.

We can see the inflection of territory with consciousness in the history of the name “Lowcountry.” It emerges as “low country” in the work of early nineteenth-century geographers and geologists who were attempting to characterize the topography of the states and territories of the young nation. In 1812 Jedidiah Morse uses “low country” in the American Universal Gazetteer to designate the coastal mainland of North Carolina, South Carolina, and Georgia. Originally, the Sea Islands were viewed as a separate topography. “The sea coast,” he writes, “is bordered with a fine chain of islands, between which and the shore there is a very convenient navigation. The main land is naturally divided into the Lower and Upper country. The low country extends 80 or 100 miles from the coast, and is covered with extensive forests of pitch pine, called pine barrens, interspersed with swamps and marshes of rich soil.” Geologist Elisha Mitchell took up the characterization in his 1828 article, “On the Character and Origin of the Low Country of North Carolina,” defining the region east of the Pee Dee River to the Atlantic coast by a stratigraphy of sand and clay layers as the low country. Within a generation, the designation had entered into the usage of the population as a way of characterizing a distinctive way of growing practiced on coastal lands. Wilmot Gibbs, a wheat farmer in Chester County in the South Carolina midlands, observed in a report to the US Patent Office: “The sweet potatoes do better, much better on sandy soil, and though not to be compared in quantity and quality with the lowcountry sweet potatoes, yet yield a fair crop.” Two words became one word. And when culture—agriculture—inflected the understanding of region, the boundaries of the map altered. The northern boundary of rice growing and the northern range of the cabbage palmetto were just north of Wilmington, North Carolina. The northern bound of USDA Plant Hardiness Zone 8 in the Cape Fear River drainage became the cultural terminus of the Lowcountry. Agriculturally, the farming on the Sea Islands differed little from that on the mainland, so they became assimilated into the cultural Lowcountry. And since the Sea Islands extended to Amelia Island, Florida, the Lowcountry extended into east Florida. What remained indistinct and subject to debate was the interior bound of the Lowcountry. Was the St. Johns River region in Florida assimilated into it, or not? Did it end where tidal flow became negligible upriver on the major coastal estuaries? Perceptual regions that do not evolve into legislated territories, such as the French wine regions, should be treated with a recognition of their mutable shape.

Cuisines are regional to the extent that the ingredients the region supplies to the kitchen are distinctive, not seen as a signature of another place. Consequently, Lowcountry cuisine must be understood comparatively, contrasting its features with those of other perceived styles, such as “southern cooking” or “tidewater cuisine” or “New Orleans Creole cooking” or “American school cooking” or “cosmopolitan hotel gastronomy.” The comparisons will take place, however, acknowledging that all of these styles share a deep grammar. A common store of ancient landrace grains (wheat, spelt, rye, barley, oats, corn, rice, millet, farro), the oil seeds and fruits (sesame, sunflower, rapeseed, linseed, olive), the livestock, the root vegetables, the fruit trees, the garden vegetables, the nuts, the berries, the game, and the fowls—all these supply a broad canvas against which the novel syncretisms and breeders’ creations emerge. It is easy to overstate the peculiarity of a region’s farming or food.

One of the hallmarks of the age of experiment was openness to new plants from other parts of the world. There was nothing of the culinary purism that drove the expulsion of “ignoble grapes” from France in the 1930s. Nor was there the kind of nationalist food security fixation that drives the current Plant Protection and Quarantine (PPQ) protocols of the USDA. In that era, before crop monocultures made vast stretches of American countryside an uninterrupted banquet for viruses, disease organisms, and insect pests, nightmares of continental pestilence did not roil agronomists. The desire to plant a healthier, tastier, more productive sweet potato had planters working their connections in the West Indies and South America for new varieties. Periodically, an imported variety—a cross between old cultivated varieties, a cross between a traditional and an imported variety, or a sport of an old or new variety—proved something so splendid that it became a classic, a brand, a market variety, a seed catalog–illustrated plant. Examples of these include the Carolina African peanut, the Bradford watermelon, the Georgia pumpkin yam, the Hanson lettuce, Sea Island white flint corn, the Virginia peanut, the Carolina Long Gold rice, the Charleston Wakefield cabbage, and the Dancy tangerine. That something from a foreign clime might be acculturated, becoming central to an American regional cuisine, was more usual than not.

With the rise of the commercial seedsmen, naming of vegetable varieties became chaotic. Northern breeders rebranded the popular white-fleshed Hayman sweet potato, first brought from the West Indies into North Carolina in 1854, as the “Southern Queen sweet potato” in the hope of securing the big southern market, or as the “West Indian White.” Whether a seedsman tweaked a strain or not, it appeared in the catalogs as new and improved. Only with the aid of the skeptical field-trial reporters working the experimental stations of the 1890s can one see that the number of horticultural and pomological novelties named as being available for purchase substantially exceeds the number of varieties that actually exist.

Numbers of plant varieties enjoyed sufficient following to resist the yearly tide of “new and improved” alternatives. They survived over decades, supported by devotees or retained by experimental stations and commercial breeders as breeding stock. Of Jones’s list of cowpeas, for instance, the blue, the lady, the rice, the flint Crowder, the claret, the small black, the black-eyed, and Shinney peas still exist in twenty-first-century fields, and two remain in commercial cultivation: the lady and the Crowder.

In order to bring back the surviving old varieties important in traditional Lowcountry cuisine yet no longer commercially farmed, Dr. Merle Shepard, Glenn Roberts, or I sought them in germplasm banks and through the networks of growers and seed savers. Some important items seem irrevocably lost: the Neunan’s strawberry and the Hoffman seedling strawberry, both massively cultivated during the truck-farming era in the decades following the Civil War. The Ravenscroft watermelon has perished. Because of the premium placed on taste in nineteenth-century plant and fruit breeding, we believed the repatriation of old strains to be important. Yet we by no means believed that skill at plant breeding suddenly ceased in 1900. Rather, the aesthetics of breeding changed so that cold tolerance, productivity, quick maturity, disease resistance, transportability, and slow decay often trumped taste in the list of desiderata. The recent revelation that the commercial tomato’s roundness and redness was genetically accomplished at the expense of certain of the alleles governing taste quality is only the most conspicuous instance of the subordination of flavor in recent breeding aesthetics.

We have reversed the priority—asserting the primacy of taste over other qualities in a plant. We cherish plants that in the eyes of industrial farmers may seem inefficient, underproductive, or vulnerable to disease and depredation because they offer more to the kitchen, to the tongue, and to the imagination. The simple fact that a plant is heirloom does not make it pertinent for our purposes. It had to have had traction agriculturally and culinarily. It had to retain its vaunted flavor. Glenn Roberts sought with particular avidity the old landrace grains because their flavors provided the fundamental notes comprising the harmonics of Western food, both bread and alcohol. The more ancient, the better. I sought benne, peanuts, sieva beans, asparagus, peppers, squashes, and root vegetables. Our conviction has been—and is—that the quality of the ingredients will determine the vitality of Lowcountry cuisine.

While the repertoire of dishes created in Lowcountry cuisine interested us greatly, and while we studied the half-dozen nineteenth-century cookbooks, the several dozen manuscript recipe collections, and the newspaper recipe literature with the greatest attention, we realized that our project was not the culinary equivalent of Civil War reenactment, a kind of temporary evacuation of the present for some vision of the past. Rather, we wanted to revive the ingredients that had made that food so memorable and make the tastes available again, so the best cooks of this moment could combine them to invoke or invent a cooking rich with this place. Roberts was too marked by his Californian youth, I by formative years in Japan, Shepard by his long engagement with Asian food culture, Campbell Coxe by his late twentieth-century business mentality, to yearn for some antebellum never-never land of big house banqueting. What did move us, however, was the taste of rice. We all could savor the faint hazelnut delicacy, the luxurious melting wholesomeness of Carolina Gold. And we all wondered at those tales of Charleston hotel chefs of the Reconstruction era who could identify which stretch of which river had nourished a plate of gold rice. They could, they claimed, taste the water and the soil in the rice.

The quality of ingredients depends upon the quality of the soil, and this book is not, to my regret, a recovery of the lost art of soil building. Though we have unearthed, with the aid of Dr. Stephen Spratt, a substantial body of information about crop rotations and their effects, and though certain of these traditional rotations have been followed in growing rice, benne, corn, beans, wheat, oats, et cetera, we can’t point to a particular method of treating soil that we could attest as having been sufficient and sustainable in its fertility in all cases. While individual planters hit upon soil-building solutions for their complex of holdings, particularly in the Sea Islands and in the Pee Dee River basin, these were often vast operations employing swamp muck, rather than dung, as a manure. Even planter-savants, such as John Couper and Thomas Spalding, felt they had not optimized the growing potential of their lands. Planters who farmed land that had suffered fertility decline and were bringing it back to viability often felt dissatisfaction because its productivity could not match the newly cleared lands in Alabama, Louisiana, Texas, and Mississippi. Lowcountry planters were undersold by producers to the west. Hence, coastal planters heeded the promises of the great advocates of manure—Edmund Ruffin’s call to crush fossilized limestone and spread calcareous manures on fields, or Alexander von Humboldt’s scientific case for Peruvian guano—as the answer to amplifying yield per acre. Those who could afford it became guano addicts. Slowly, southern planters became habituated to the idea that, in order to yield, a field needed some sort of chemical supplementation. It was then a short step to industrially produced chemical fertilizers.

What we now know to be irrefutably true, after a decade of Glenn Roberts’s field work, is that grain and vegetables grown in soil that has never been subjected to the chemical supplementations of conventional agriculture, or that has been raised in fields cleansed of the chemicals by repeated organic grow-outs, possess greater depth and distinct local inflections of flavor. Tongues taste terroir. This is a truth confirmed by the work of other cuisine restorationists in other areas—I think particularly of Dan Barber’s work at Stone Barns Center in northern New York and John Coykendall’s work in Tennessee.

Our conviction that enhancing the quality of flavors a region produces is the goal of our agricultural work gives our efforts a clarity of purpose that enables sure decision making at the local level. We realize, of course, the human and animal health benefits from consuming food free of toxins and chemical additives. We know that the preservation of the soil and the treatment of water resources in a non-exploitative way constitute a kind of virtue. But without the aesthetic focus on flavor, the ethical treatment of resources will hardly succeed. When pleasure coincides with virtue, the prospect of an enduring change in the production and treatment of food takes on solidity.

Since its organization a decade ago, the Carolina Gold Rice Foundation has published material on rice culture and the cultivation of landrace grains. By 2010 it became apparent that the information we had gleaned and the practical experience we had gained in plant repatriations had reached a threshold permitting a more public presentation of our historical sense of this regional cuisine, its original conditions of production, and observations on its preparation. After substantial conversation about the shape of this study with Roberts, Shepard, Bernard L. Herman, John T. Edge, Nathalie Dupree, Sean Brock, Linton Hopkins, Jim Kibler, and Marcie Cohen Ferris, I determined that it should not resort to the conventional chronological, academic organization of the subject, nor should it rely on the specialized languages of botany, agronomy, or nutrition. My desire in writing Southern Provisions was to treat the subject so that a reader could trace the connections between plants, plantations, growers, seed brokers, markets, vendors, cooks, and consumers. The focus of attention had to alter, following the transit of food from field to market, from garden to table. The entire landscape of the Lowcountry had to be included, from the Wilmington peanut patches to the truck farms of the Charleston Neck, from the cane fields of the Georgia Sea Islands to the citrus groves of Amelia Island, Florida. For comparison’s sake, there had to be moments when attention turned to food of the South generally, to the West Indies, and to the United States more generally.

In current books charting alternatives to conventional agriculture, there has been a strong and understandable tendency to announce crisis. This was also the common tactic of writers at the beginning of the age of experimentation in the 1810s and ’20s. Yet here, curiosity and pleasure, the quest to understand a rich world of taste, direct our inquiry more than fear and trepidation.

***

To read more about Southern Provisions, click here.

13. Excerpt: The Territories of Science and Religion

9780226184487

Introduction

An excerpt from The Territories of Science and Religion by Peter Harrison

***

The History of “Religion”

In the section of his monumental Summa theologiae that is devoted to a discussion of the virtues of justice and prudence, the thirteenth-century Dominican priest Thomas Aquinas (1225–74) investigates, in his characteristically methodical and insightful way, the nature of religion. Along with North African Church Father Augustine of Hippo (354–430), Aquinas is probably the most influential Christian writer outside of the biblical authors. From the outset it is clear that for Aquinas religion (religio) is a virtue—not, incidentally, one of the preeminent theological virtues, but nonetheless an important moral virtue related to justice. He explains that in its primary sense religio refers to interior acts of devotion and prayer, and that this interior dimension is more important than any outward expressions of this virtue. Aquinas acknowledges that a range of outward behaviors are associated with religio—vows, tithes, offerings, and so on—but he regards these as secondary. As I think is immediately obvious, this notion of religion is rather different from the one with which we are now familiar. There is no sense in which religio refers to systems of propositional beliefs, and no sense of different religions (plural). Between Thomas’s time and our own, religion has been transformed from a human virtue into a generic something, typically constituted by sets of beliefs and practices. It has also become the most common way of characterizing attitudes, beliefs, and practices concerned with the sacred or supernatural.

Aquinas’s understanding of religio was by no means peculiar to him. Before the seventeenth century, the word “religion” and its cognates were used relatively infrequently. Equivalents of the term are virtually nonexistent in the canonical documents of the Western religions—the Hebrew Bible, the New Testament, and the Qur’an. When the term was used in the premodern West, it did not refer to discrete sets of beliefs and practices, but rather to something more like “inner piety,” as we have seen in the case of Aquinas, or “worship.” As a virtue associated with justice, moreover, religio was understood on the Aristotelian model of the virtues as the ideal middle point between two extremes—in this case, irreligion and superstition.

The vocabulary of “true religion” that we encounter in the writings of some of the Church Fathers offers an instructive example. “The true religion” is suggestive of a system of beliefs that is distinguished from other such systems that are false. But careful examination of the content of these expressions reveals that early discussions about true and false religion were typically concerned not with belief, but rather worship and whether or not worship is properly directed. Tertullian (ca. 160–ca. 220) was the first Christian thinker to produce substantial writings in Latin and was also probably the first to use the expression “true religion.” But in describing Christianity as “true religion of the true god,” he is referring to genuine worship directed toward a real (rather than fictitious) God. Another erudite North African Christian writer, Lactantius (ca. 240–ca. 320), gives the first book of his Divine Institutes the title “De Falsa religione.” Again, however, his purpose is not to demonstrate the falsity of pagan beliefs, but to show that “the religious ceremonies of the [pagan] gods are false,” which is just to say that the objects of pagan worship are false gods. His positive project, an account of true religion, was “to teach in what manner or by what sacrifice God must be worshipped.” Such rightly directed worship was for Lactantius “the duty of man, and in that one object the sum of all things and the whole course of a happy life consists.”

Jerome’s choice of religio for his translation of the relatively uncommon Greek threskeia in James 1:27 similarly associates the word with cult and worship. In the English of the King James version the verse is rendered: “Pure and undefiled religion [threskeia] before God the Father is this, To visit the fatherless and widows in their affliction, and to keep himself unspotted from the world.” The import of this passage is that the “religion” of the Christians is a form of worship that consists in charitable acts rather than rituals. Here the contrast is between religion that is “vain” (vana) and that which is “pure and undefiled” (religio munda et inmaculata). In the Middle Ages this came to be regarded as equivalent to a distinction between true and false religion. The twelfth-century Distinctiones Abel of Peter the Chanter (d. 1197), one of the most prominent of the twelfth-century theologians at the University of Paris, makes direct reference to the passage from James, distinguishing religion that is pure and true (munda et vera) from that which is vain and false (vana et falsa). His pupil, the scholastic Radulfus Ardens, also spoke of “true religion” in this context, concluding that it consists in “the fear and love of God, and the keeping of his commandments.” Here again there is no sense of true and false doctrinal content.

Perhaps the most conspicuous use of the expression “true religion” among the Church Fathers came in the title of De vera religione (On True Religion), written by the great doctor of the Latin Church, Augustine of Hippo. In this early work Augustine follows Tertullian and Lactantius in describing true religion as rightly directed worship. As he was to relate in the Retractions: “I argued at great length and in many ways that true religion means the worship of the one true God.” It will come as no surprise that Augustine here suggests that “true religion is found only in the Catholic Church.” But intriguingly when writing the Retractions he was to state that while Christian religion is a form of true religion, it is not to be identified as the true religion. This, he reasoned, was because true religion had existed since the beginning of history and hence before the inception of Christianity. Augustine addressed the issue of true and false religion again in a short work, Six Questions in Answer to the Pagans, written between 406 and 412 and appended to a letter sent to Deogratius, a priest at Carthage. Here he rehearses the familiar stance that true and false religion relates to the object of worship: “What the true religion reprehends in the superstitious practices of the pagans is that sacrifice is offered to false gods and wicked demons.” But again he goes on to explain that diverse cultic forms might all be legitimate expressions of true religion, and that the outward forms of true religion might vary in different times and places: “it makes no difference that people worship with different ceremonies in accord with the different requirements of times and places, if what is worshipped is holy.” A variety of different cultural forms of worship might thus be motivated by a common underlying “religion”: “different rites are celebrated in different peoples bound together by one and the same religion.” If true religion could exist outside the established forms of Catholic worship, conversely, some of those who exhibited the outward forms of Catholic religion might lack “the invisible and spiritual virtue of religion.”

This general understanding of religion as an inner disposition persisted into the Renaissance. The humanist philosopher and Platonist Marsilio Ficino (1433–99) thus writes of “christian religion,” which is evidenced in lives oriented toward truth and goodness. “All religion,” he wrote, in tones reminiscent of Augustine, “has something good in it; as long as it is directed towards God, the creator of all things, it is true Christian religion.” What Ficino seems to have in mind here is the idea that Christian religion is a Christlike piety, with “Christian” referring to the person of Christ, rather than to a system of religion—“the Christian religion.” Augustine’s suggestion that true and false religion might be displayed by Christians was also reprised by the Protestant Reformer Ulrich Zwingli, who wrote in 1525 of “true and false religion as displayed by Christians.”

It is worth mentioning at this point that, unlike English, Latin has no article—no “a” or “the.” Accordingly, when rendering expressions such as “vera religio” or “christiana religio” into English, translators had to decide on the basis of context whether to add an article or not. As we have seen, such decisions can make a crucial difference, for the connotations of “true religion” and “christian religion” are rather different from those of “the true religion” and “the Christian religion.” The former can mean something like “genuine piety” and “Christlike piety” and are thus consistent with the idea of religion as an interior quality. Addition of the definite article, however, is suggestive of a system of belief. The translation history of Protestant Reformer John Calvin’s classic Institutio Christianae Religionis (1536) gives a good indication both of the importance of the definite article and of changing understandings of religion in the seventeenth century. Calvin’s work was intended as a manual for the inculcation of Christian piety, although this fact is disguised by the modern practice of rendering the title in English as The Institutes of the Christian Religion. The title page of the first English edition by Thomas Norton bears the more faithful “The Institution of Christian religion” (1561). The definite article is placed before “Christian” in the 1762 Glasgow edition: “The Institution of the Christian religion.” And the now familiar “Institutes” appears for the first time in John Allen’s 1813 edition: “The Institutes of the Christian religion.” The modern rendering is suggestive of an entity “the Christian religion” that is constituted by its propositional contents—“the institutes.” These connotations were completely absent from the original title. Calvin himself confirms this by declaring in the preface his intention “to furnish a kind of rudiments, by which those who feel some interest in religion might be trained to true godliness.”

With the increasing frequency of the expressions “religion” and “the religions” from the sixteenth century onward we witness the beginning of the objectification of what was once an interior disposition. Whereas for Aquinas it was the “interior” acts of religion that held primacy, the balance now shifted decisively in favor of the exterior. This was a significant new development, the making of religion into a systematic and generic entity. The appearance of this new conception of religion was a precondition for a relationship between science and religion. While the causes of this objectification are various, the Protestant Reformation and the rise of experimental natural philosophy were key factors, as we shall see in chapter 4.

The History of “Science”

It is instructive at this point to return to Thomas Aquinas, because when we consider what he has to say on the notion of science (scientia) we find an intriguing parallel to his remarks on religion. In an extended treatment of the virtues in the Summa theologiae, Aquinas observes that science (scientia) is a habit of mind or an “intellectual virtue.” The parallel with religio, then, lies in the fact that we are now used to thinking of both religion and science as systems of beliefs and practices, rather than conceiving of them primarily as personal qualities. And for us today the question of their relationship is largely determined by their respective doctrinal content and the methods through which that content is arrived at. For Aquinas, however, both religio and scientia were, in the first place, personal attributes.

We are also accustomed to think of virtues as belonging entirely within the sphere of morality. But again, for Aquinas, a virtue is understood more generally as a “habit” that perfects the powers that individuals possess. This conviction—that human beings have natural powers that move them toward particular ends—was related to a general approach associated with the Greek philosopher Aristotle (384–322 BC), who had taught that all natural things are moved by intrinsic tendencies toward certain goals (tele). For Aristotle, this teleological movement was directed to the perfection of the entity, or to the perfection of the species to which it belonged. As it turns out, one of the natural tendencies of human beings was a movement toward knowledge. As Aristotle famously wrote in the opening lines of the Metaphysics, “all men by nature desire to know.” In this scheme of things, our intellectual powers are naturally directed toward the end of knowledge, and they are assisted in their movement toward knowledge by acquired intellectual virtues.

One of the great revolutions of Western thought took place in the twelfth and thirteenth centuries, when much Greek learning, including the work of Aristotle, was rediscovered. Aquinas played a pivotal role in this recovery of ancient wisdom, making Aristotle one of his chief conversation partners. He was by no means a slavish adherent of Aristotelian doctrines, but nonetheless accepted the Greek philosopher’s premise that the intellectual virtues perfect our intellectual powers. Aquinas identified three such virtues—understanding (intellectus), science (scientia), and wisdom (sapientia). Briefly, understanding was to do with grasping first principles, science with the derivation of truths from those first principles, and wisdom with the grasp of the highest causes, including the first cause, God. To make progress in science, then, was not to add to a body of systematic knowledge about the world, but was to become more adept at drawing “scientific” conclusions from general premises. “Science” thus understood was a mental habit that was gradually acquired through the rehearsal of logical demonstrations. In Thomas’s words: “science can increase in itself by addition; thus when anyone learns several conclusions of geometry, the same specific habit of science increases in that man.”

These connotations of scientia were well known in the Renaissance and persisted until at least the end of the seventeenth century. The English physician John Securis wrote in 1566 that “science is a habit” and “a disposition to do any thing confirmed and had by long study, exercise, and use.” Scientia is subsequently defined in Thomas Holyoake’s Dictionary (1676) as, properly speaking, the act of the knower, and, secondarily, the thing known. This entry also stresses the classical and scholastic idea of science as “a habit of knowledge got by demonstration.” French philosopher René Descartes (1596–1650) retained some of these generic, cognitive connotations when he defined scientia as “the skill to solve every problem.”

Yet, according to Aquinas, scientia, like the other intellectual virtues, was not solely concerned with rational and speculative considerations. In a significant departure from Aristotle, who had set out the basic rationale for an ethics based on virtue, Aquinas sought to integrate the intellectual virtues into a framework that included the supernatural virtues (faith, hope, and charity), “the seven gifts of the spirit,” and the nine “fruits of the spirit.” While the various relations are complicated, particularly when beatitudes and vices are added to the equation, the upshot of it all is a considerable overlap of the intellectual and moral spheres. As philosopher Eleonore Stump has written, for Aquinas “all true excellence of intellect—wisdom, understanding and scientia—is possible only in connection with moral excellence as well.” By the same token, on Aquinas’s understanding, moral transgressions will have negative consequences for the capacity of the intellect to render correct judgments: “Carnal vices result in a certain culpable ignorance and mental dullness; and these in turn get in the way of understanding and scientia.” Scientia, then, was not only a personal quality, but also one that had a significant moral component.

The parallels between the virtues of religio and scientia, it must be conceded, are by no means exact. While in the Middle Ages there were no plural religions (or at least no plural religions understood as discrete sets of doctrines), there were undeniably sciences (scientiae), thought of as distinct and systematic bodies of knowledge. The intellectual virtue scientia thus bore a particular relation to formal knowledge. On a strict definition, and following a standard reading of Aristotle’s Posterior Analytics, a body of knowledge was regarded as scientific in the event that it had been arrived at through a process of logical demonstration. But in practice the label “science” was extended to many forms of knowledge. The canonical divisions of knowledge in the Middle Ages—what we now know as the seven “liberal arts” (grammar, logic, rhetoric, arithmetic, astronomy, music, geometry)—were then known as the liberal sciences. The other common way of dividing intellectual territory derived from Aristotle’s classification of theoretical or speculative philosophy. In his discussion of the division and methods of the sciences, Aquinas noted that the standard classification of the seven liberal sciences did not include the Aristotelian disciplines of natural philosophy, mathematics, and theology. Accordingly, he argued that the label “science” should be given to these activities, too. Robert Kilwardby (ca. 1215–79), successively regent at the University of Oxford and archbishop of Canterbury, extended the label even further in his work on the origin of the sciences, identifying forty distinct scientiae.

The English word “science” had similar connotations. As was the case with the Latin scientia, the English term commonly referred to the subjects making up the seven liberal arts. In catalogs of English books published between 1475 and 1700 we encounter the natural and moral sciences, the sciences of physick (medicine), of surgery, of logic and mathematics. Broader applications of the term include accounting, architecture, geography, sailing, surveying, defense, music, and pleading in court. Less familiarly, we also encounter works on the science of angels, the science of flattery, and in one notable instance, the science of drinking, drolly designated by the author the “eighth liberal science.” At nineteenth-century Oxford “science” still referred to elements of the philosophy curriculum. The idiosyncrasies of English usage at the University of Oxford notwithstanding, the now familiar meaning of the English expression dates from the nineteenth century, when “science” began to refer almost exclusively to the natural and physical sciences.

Returning to the comparison with medieval religio, what we can say is that in the Middle Ages both notions have a significant interior dimension, and that what happens in the early modern period is that the balance between the interior and exterior begins to tip in favor of the latter. Over the course of the sixteenth and seventeenth centuries we will witness the beginning of a process in which the idea of religion and science as virtues or habits of mind begins to be overshadowed by the modern, systematic entities “science” and “religion.” In the case of scientia, then, the interior qualities that characterized the intellectual virtue of scientia are transferred to methods and doctrines. The entry for “science” in the 1771 Encyclopaedia Britannica thus reads, in its entirety: “SCIENCE, in philosophy, denotes any doctrine, deduced from self-evident and certain principles, by a regular demonstration.” The logical rigor that had once been primarily a personal characteristic now resides primarily in the corresponding body of knowledge.

The other significant difference between the virtues of religio and scientia lies in the relation of the interior and exterior elements. In the case of religio, the acts of worship are secondary in the sense that they are motivated by an inner piety. In the case of scientia, it is the rehearsal of the processes of demonstration that strengthens the relevant mental habit. Crucially, because the primary goal is the augmentation of mental habits, gained through familiarity with systematic bodies of knowledge (“the sciences”), the emphasis was less on the production of scientific knowledge than on the rehearsal of the scientific knowledge that already existed. Again, as noted earlier, this was because the “growth” of science was understood as taking place within the mind of the individual. In the present, of course, whatever vestiges of the scientific habitus remain in the mind of the modern scientist are directed toward the production of new scientific knowledge. In so far as they exist at all—and for the most part they have been projected outward onto experimental protocols—they are a means and not the end. Overstating the matter somewhat, in the Middle Ages scientific knowledge was an instrument for the inculcation of scientific habits of mind; now scientific habits of mind are cultivated primarily as an instrument for the production of scientific knowledge.

The atrophy of the virtues of scientia and religio, and the increasing emphasis on their exterior manifestations in the sixteenth and seventeenth centuries, will be discussed in more detail in chapter 4. But looking ahead we can say that in the physical realm virtues and powers were removed from natural objects and replaced by a notion of external law. The order of things will now be understood in terms of laws of nature—a conception that makes its first appearance in the seventeenth century—and these laws will take the place of those inherent tendencies within things that strive for their perfection. In the moral sphere, a similar development takes place, and human virtues will be subordinated to an idea of divinely imposed laws—in this instance, moral laws. The virtues—moral and intellectual—will be understood in terms of their capacity to produce the relevant behaviors or bodies of knowledge. What drives both of these shifts is the rejection of an Aristotelian and scholastic teleology, and the subsequent demise of the classical understanding of virtue will underpin the early modern transformation of the ideas of scientia and religio.

Science and Religion?

It should by now be clear that the question of the relationship between science (scientia) and religion (religio) in the Middle Ages was very different from the modern question of the relationship between science and religion. Were the question put to Thomas Aquinas, he might have said something like this: Science is an intellectual habit; religion, like the other virtues, is a moral habit. There would then have been no question of conflict or agreement between science and religion because they were not the kinds of things that admitted those sorts of relations. When the question is posed in our own era, very different answers are forthcoming, for the issue of science and religion is now generally assumed to be about specific knowledge claims or, less often, about the respective processes by which knowledge is generated in these two enterprises. Between Thomas’s time and our own, religio has been transformed from a human virtue into a generic something typically constituted by sets of beliefs and practices. Scientia has followed a similar course, for although it had always referred both to a form of knowledge and a habit of mind, the interior dimension has now almost entirely disappeared. During the sixteenth and seventeenth centuries, both religion and science were literally turned inside out.

Admittedly, there would have been another way of posing this question in the Middle Ages. In focusing on religio and scientia I have considered the two concepts that are the closest linguistically to our modern “religion” and “science.” But there may be other ancient and medieval precedents of our modern notions of “religion” and “science” that have less obvious linguistic connections. It might be argued, for example, that two other systematic activities lie more squarely in the genealogical ancestry of our two objects of interest, and they are theology and natural philosophy. A better way to frame the central question, it could then be suggested, would be to inquire about theology (which looks very much like a body of religious knowledge expressed propositionally) and natural philosophy (which was the name given to the systematic study of nature up until the modern period), and their relationship.

There is no doubt that these two notions are directly relevant to our discussion, but I have avoided mention of them up until now, first, because I have not wished to pull apart too many concepts at once and, second, because we will be encountering these two ideas and the question of how they fit into the trajectory of our modern notions of science and religion in subsequent chapters. For now, however, it is worth briefly noting that the term “theology” was not much used by Christian thinkers before the thirteenth century. The word theologia appears for the first time in Plato (ca. 428–348 BC), and it is Aristotle who uses it in a formal sense to refer to the most elevated of the speculative sciences. Partly because of this, for the Church Fathers “theology” was often understood as referring to pagan discourse about the gods. Christian writers were more concerned with the interpretation of scripture than with “theology,” and the expression “sacred doctrine” (sacra doctrina) reflects their understanding of the content of scripture. When the term does come into use in the later Middle Ages, there were two different senses of “theology”—one a speculative science as described by Aristotle, the other the teaching of the Christian scriptures.

Famously, the scholastic philosophers inquired as to whether theology (in the sense of sacra doctrina) was a science. This is not the place for an extended discussion of that commonplace, but the question does suggest one possible relation between science and theology—that theology is a species of the genus “science.” Needless to say, this is almost completely disanalogous to any modern relationship between science and religion as we now understand them. Even so, this question affords us the opportunity to revisit the relationship between virtues and the bodies of knowledge that they were associated with. In so far as theology was regarded as a science, it was understood in light of the virtue of scientia outlined above. In other words, theology was also understood to be, in part, a mental habit. When Aquinas asks whether sacred doctrine is one science, his affirmative answer refers to the fact that there is a single faculty or habit involved. His contemporary, the Franciscan theologian Bonaventure (1221–74), was to say that theological science was a habit that had as its chief end “that we become good.” The “subtle doctor,” John Duns Scotus (ca. 1265–1308), later wrote that the “science” of theology perfects the intellect and promotes the love of God: “The intellect perfected by the habit of theology apprehends God as one who should be loved.” While these three thinkers differed from each other significantly in how they conceptualized the goals of theology, what they shared was a common conviction that theology was, to use a current expression somewhat out of context, habit forming.

As for “natural philosophy” (physica, physiologia), historians of science have argued for some years now that this is the closest ancient and medieval analogue to modern science, although they have become increasingly sensitive to the differences between the two activities. Typically, these differences have been thought to lie in the subject matter of natural philosophy, which traditionally included such topics as God and the soul, but excluded mathematics and natural history. On both counts natural philosophy looks different from modern science. What has been less well understood, however, are the implications of the fact that natural philosophy was an integral part of philosophy. These implications are related to the fact that philosophy, as practiced in the past, was less about affirming certain doctrines or propositions than it was about pursuing a particular kind of life. Thus natural philosophy was thought to serve general philosophical goals that were themselves oriented toward securing the good life. These features of natural philosophy will be discussed in more detail in the chapter that follows. For now, however, my suggestion is that moving our attention to the alternative categories of theology and natural philosophy will not yield a substantially different view of the kinds of historical transitions that I am seeking to elucidate.

To read more about The Territories of Science and Religion, click here.

14. Facebook’s A Year of Books drafts The Structure of Scientific Revolutions

9780226458120

In his sixth pick for the social network’s online book club (“A Year of Books”), Facebook founder Mark Zuckerberg recently drafted Thomas Kuhn’s The Structure of Scientific Revolutions, a 52-year-old book that remains one of the most often cited academic resources of all time, and one of UCP’s crowning gems of twentieth-century scholarly publishing. Following in the footsteps of Pixar founder Ed Catmull’s Creativity, Inc., Zuckerberg’s previous pick, Structure will be the subject of a Facebook thread with open commenting for the next two weeks, in line with the methodology of “A Year of Books.” If you’re thinking about reading along, the 50th Anniversary edition includes a compelling Introduction by Ian Hacking that situates the book’s legacy, both in terms of its contribution to a scientific vernacular (“paradigm shifting”) and its value as a scholarly publication of mass appeal (“paradigm shifting”).

Or, in Zuckerberg’s own words:

It’s a history of science book that explores the question of whether science and technology make consistent forward progress or whether progress comes in bursts related to other social forces. I tend to think that science is a consistent force for good in the world. I think we’d all be better off if we invested more in science and acted on the results of research. I’m excited to explore this theme further.

And from the Guardian:

“Before Kuhn, the normal view was that science simply needed men of genius (they were always men) to clear away the clouds of superstition, and the truth of nature would be revealed,” [David Papineau, professor of philosophy at King’s College London] said. “Kuhn showed it is much more interesting than that. Scientific research requires a rich network of prior assumptions (Kuhn reshaped the term ‘paradigm’ to stand for these), and changing such assumptions can be traumatic, and is always resisted by established interests (thus the need for scientific ‘revolutions’).”

Kuhn showed, said Papineau, that “scientists are normal humans, with prejudices and personal agendas in their research, and that the path to scientific advances runs through a complex social terrain”.

“We look at science quite differently post-Kuhn,” he added.

To read more about Structure, click here.

To read an excerpt from Ian Hacking’s Introduction to the 50th Anniversary edition, click here.

15. Excerpt: Invisible by Philip Ball

9780226238890
Recipes for Invisibility, an excerpt
by Philip Ball
***

 “Occult Forces”

Around 1680 the English writer John Aubrey recorded a spell of invisibility that seems plucked from a (particularly grim) fairy tale. On a Wednesday morning before sunrise, one must bury the severed head of a man who has committed suicide, along with seven black beans. Water the beans for seven days with good brandy, after which a spirit will appear to tend the beans and the buried head. The next day the beans will sprout, and you must persuade a small girl to pick and shell them. One of these beans, placed in the mouth, will make you invisible.

This was tried, Aubrey says, by two Jewish merchants in London, who couldn’t acquire the head of a suicide victim and so used instead that of a poor cat killed ritualistically. They planted it with the beans in the garden of a gentleman named Wyld Clark, with his permission. Aubrey’s deadpan relish at the bathetic outcome suggests he was sceptical all along – for he explains that Clark’s rooster dug up the beans and ate them without consequence.

Despite the risk of such prosaic setbacks, the magical texts of the Middle Ages and the early Enlightenment exude confidence in their prescriptions, however bizarre they might be. Of course the magic will work, if you are bold enough to take the chance. This was not merely a sales pitch. The efficacy of magic was universally believed in those days. The common folk feared it and yearned for it, the clergy condemned it, and the intellectuals and philosophers, and a good many charlatans and tricksters, hinted that they knew how to do it.

It is among these fanciful recipes that the quest begins for the origins of invisibility as both a theoretical possibility and a practical technology in the real world. Making things invisible was a kind of magic–but what exactly did that mean?

Historians are confronted with the puzzle of why the tradition of magic lasted so long and laid roots so deep, when it is manifestly impotent. Some of that tenacity is understandable enough. The persistence of magical medicines, for example, isn’t so much of a mystery given that in earlier ages there were no more effective alternatives and that medical cause and effect has always been difficult to establish – people do sometimes get better, and who is to say why? Alchemy, meanwhile, could be sustained by trickery, although that does not solely or even primarily account for its longevity as a practical art: alchemists made much else besides gold and even their gold-making recipes could sometimes change the appearance of metals in ways that might have suggested they were on the right track. As for astrology, its persistence even today testifies in part to how readily it can be placed beyond the reach of any attempts at falsification.

But how do you fake invisibility? Either you can see something or someone, or you can’t.

Well, one might think so. But that isn’t the case at all. Magicians have always possessed the power of invisibility. What has changed is the story they tell about how it is done. What has changed far less, however, is our reasons for wishing it to be done and our willingness to believe that it can be. In this respect, invisibility supplies one of the most eloquent testimonies to our changing view of magic – not, as some rationalists might insist, a change from credulous acceptance to hard-headed dismissal, but something far more interesting.

Let’s begin with some recipes. Here is a small selection from what was doubtless once a much more diverse set of options, many of which are now lost. It should give you some intimation of what was required.

John Aubrey provides another prescription, somewhat tamer than the previous one and allegedly from a Rosicrucian source (we’ll see why later):

Take on Midsummer night, at xii [midnight], Astrologically, when all the Planets are above the earth, a Serpent, and kill him, and skinne him: and dry it in the shade, and bring it to a powder. Hold it in your hand and you will be invisible.

If it is black cats you want, look to the notorious Grand Grimoire. Like many magical books, this is a fabrication of the eighteenth century (or perhaps even later), validated by an ostentatious pseudo-history. The author is said to be one ‘Alibeck the Egyptian’, who allegedly wrote the following recipe in 1522:

Take a black cat, and a new pot, a mirror, a lighter, coal and tinder. Gather water from a fountain at the strike of midnight. Then you light your fire, and put the cat in the pot. Hold the cover with your left hand without moving or looking behind you, no matter what noises you may hear. After having made it boil 24 hours, put the boiled cat on a new dish. Take the meat and throw it over your left shoulder, saying these words: “accipe quod tibi do, et nihil ampliùs.” [Accept my offering, and don’t delay.] Then put the bones one by one under the teeth on the left side, while looking at yourself in the mirror; and if they do not work, throw them away, repeating the same words each time until you find the right bone; and as soon as you cannot see yourself any more in the mirror, withdraw, moving backwards, while saying: “Pater, in manus tuas commendo spiritum meum.” [Father, into your hands I commend my spirit.] This bone you must keep.

Sometimes it was necessary to summon the help of demons, which was always a matter fraught with danger. A medieval manual of demonic magic tells the magician to go to a field and inscribe a circle on the ground, fumigate it and sprinkle it, and himself, with holy water while reciting Psalm 51:7 (‘Cleanse me with hyssop, and I shall be clean . . .’). He then conjures several demons and commands them in God’s name to do his bidding by bringing him a cap of invisibility. One of them will fetch this item and exchange it for a white robe. If the magician does not return to the same place in three days, retrieve his robe and burn it, he will drop dead within a week. In other words, this sort of invisibility was both heretical and hazardous. That is perhaps why instructions for invisibility in an otherwise somewhat quotidian fifteenth-century book of household management from Wolfsthurn Castle in the Tyrol have been mutilated by a censorious reader.

Demons are, after all, what you might expect to find in a magical grimoire. The Grimorium Verum (True Grimoire) is another eighteenth-century fake attributed to Alibeck the Egyptian; it was alternatively called the Secret of Secrets, an all-purpose title alluding to an encyclopaedic Arabic treatise popular in the Middle Ages. ‘Secrets’ of course hints alluringly at forbidden lore, although in fact the word was often also used simply to refer to any specialized knowledge or skill, not necessarily something intended to be kept hidden. This grimoire says that invisibility can be achieved simply by reciting a Latin prayer – largely just a list of the names of demons whose help is being invoked, and a good indication as to why magic spells came to be regarded as a string of nonsense words:

Athal, Bathel, Nothe, Jhoram, Asey, Cleyungit, Gabellin, Semeney, Mencheno, Bal, Labenenten, Nero, Meclap, Helateroy, Palcin, Timgimiel, Plegas, Peneme, Fruora, Hean, Ha, Ararna, Avira, Ayla, Seye, Peremies, Seney, Levesso, Huay, Baruchalù, Acuth, Tural, Buchard, Caratim, per misericordiam abibit ergo mortale perficiat qua hoc opus ut invisibiliter ire possim . . .

. . . and so on. The prescription continues in a rather freewheeling fashion using characters written in bat’s blood, before calling on yet more demonic ‘masters of invisibility’ to ‘perform this work as you all know how, that this experiment may make me invisible in such wise that no one may see me’.

A magic book was scarcely complete without a spell of invisibility. One of the most notorious grimoires of the Middle Ages, called the Picatrix and based on a tenth-century Arabic work, gives the following recipe.* You take a rabbit on the ‘24th night of the Arabian month’, behead it facing the moon, call upon the ‘angelic spirit’ Salmaquil, and then mix the blood of the rabbit with its bile. (Bury the body well – if it is exposed to sunlight, the spirit of the Moon will kill you.) To make yourself invisible, anoint your face with this blood and bile at nighttime, and ‘you will make yourself totally hidden from the sight of others, and in this way you will be able to achieve whatever you desire’.

‘Whatever you desire’ was probably something bad, because that was usually the way with invisibility. A popular trick in the eighteenth century, known as the Hand of Glory, involved obtaining (don’t ask how) the hand of an executed criminal and preserving it chemically, then setting light to a finger or inserting a burning candle between the fingers. With this talisman you could enter a building unseen and take what you liked, either because you are invisible or because everyone inside is put to sleep.

These recipes seem to demand a tiresome attention to materials and details. But really, as attested in The Book of Abramelin (said to be a system of magic that the Egyptian mage Abramelin taught to a German Jew in the fifteenth century), it was quite simple to make yourself invisible. You need only write down a ‘magic square’ – a small grid in which numbers (or in Abramelin’s case, twelve symbols representing demons) form particular patterns – and place it under your cap. Other grimoires made the trick sound equally straightforward, albeit messy: one should carry the heart of a bat, a black hen, or a frog under the right arm.

Perhaps most evocative of all were accounts of how to make a ring of invisibility, popularly called a Ring of Gyges. The twentieth-century French historian Emile Grillot de Givry explained in his anthology of occult lore how this might be accomplished:

The ring must be made of fixed mercury; it must be set with a little stone to be found in a lapwing’s nest, and round the stone must be engraved the words, “Jésus passant ✠ par le milieu d’eux ✠ s’en allait.” You must put the ring on your finger, and if you look at yourself in a mirror and cannot see the ring it is a sure sign that it has been successfully manufactured.

Fixed mercury is an ill-defined alchemical material in which the liquid metal is rendered solid by mixing it with other substances. It might refer to the chemical reaction of mercury with sulphur to make the blackish-red sulphide, for example, or the formation of an amalgam of mercury with gold. The biblical reference is to the alleged invisibility of Christ mentioned in Luke 4:30 (‘Jesus passed through the midst of them’) and John 8:59 (see page 155). And the lapwing’s stone is a kind of mineral – of which, more below. Invisibility is switched on or off at will by rotating the ring so that this stone sits facing outward or inward (towards the palm), just as Gyges rotated the collet.

Several other recipes in magical texts repeat the advice to check in a mirror that the magic has worked. That way, one could avoid embarrassment of the kind suffered by a Spaniard who, in 1582, decided to use invisibility magic in his attempt to assassinate the Prince of Orange. Since his spells could not make clothes invisible, he had to strip naked, in which state he arrived at the palace and strolled casually through the gates, unaware that he was perfectly visible to the guards. They followed the outlandish intruder until the purpose of his mission became plain, whereupon they seized him and flogged him.

Some prescriptions combined the alchemical preparation of rings with a necromantic invocation of spirits. One, appearing in an eighteenth-century French manuscript, explains how, if the name of the demon Tonucho is written on parchment and placed beneath a yellow stone set into a gold band while reciting an appropriate incantation, the demon is trapped in the ring and can be impelled to do one’s bidding.

Other recipes seem to refer to different qualities of invisibility. One might be unable to see an object not because it has vanished as though perfectly transparent, but because it lies hidden by darkness or mist, so that the ‘cloaking’ is apparent but what it cloaks is obscured. Or one might be dazzled by a play of light (see page 25), or experience some other confusion of the senses. There is no single view of what invisibility consists of, or where it resides. These ambiguities recur throughout the history of the invisible.

Partly for this reason, it might seem hard to discern any pattern in these prescriptions – any common themes or ingredients that might provide a clue to their real meaning. Some of them sound like the cartoon sorcery of wizards stirring bubbling cauldrons. Others are satanic, or else high-minded and allegorical, or merely deluded or fraudulent. They mix pious dedications to God with blasphemous entreaties to uncouthly named demons. That diversity is precisely what makes the tradition of magic so difficult to grasp: one is constantly wondering if it is a serious intellectual enterprise, a smokescreen for charlatans, or the credulous superstition of folk belief. The truth is that magic in the Western world was all of these things and for that very reason has been able to permeate culture at so many different levels and to leave traces in the most unlikely of places: in theoretical physics and pulp novels, the cults of modern mystics and the glamorous veils of cinema. The ever-present theme of invisibility allows us to follow these currents from their source.

*Appearing hard on the heels of an unrelated discussion of the Chaldean city of Adocentyn, it betrays the cut-and-paste nature of many such compendia.

“Making Magic”

Many of the recipes for invisibility from the early Renaissance onward therefore betray an ambiguous credo. They are often odd, sometimes ridiculous, and yet there are indications that they are not mere mumbo-jumbo dreamed up by lunatics or charlatans, but hint at a possible rationale within the system of natural magic.

It’s no surprise, for example, that eyes feature prominently among the ingredients. From a modern perspective the association might seem facile: you grind up an eyeball and therefore people can’t see you. But to an adept of natural magic there would have been a sound causative principle at work, operating through the occult network of correspondences: an eye for an eye, you might say. A medieval collection of Greek magical works from the fourth century AD known as the Cyranides contains some particularly grotesque recipes of this sort for ointments of invisibility. One involves grinding together the fat or eye of an owl, a ball of beetle dung and perfumed olive oil, and then anointing the entire body while reciting a selection of unlikely names. Another uses instead ‘the eye of an ape or of a man who had a violent death’, along with roses and sesame oil. An eighteenth-century text spuriously associated with Albertus Magnus (he was a favourite source of magical lore even in his own times) instructs the magician to ‘pierce the right eye of a bat, and carry it with you and you will be invisible’. One of the cruellest prescriptions instructs the magician to cut out the eyes of a live owl and bury them in a secret place.

A fifteenth-century Greek manuscript offers a more explicitly optical theme than Aubrey’s head-grown beans, stipulating that fava beans are imbued with invisibility magic when placed in the eye sockets of a human skull. Even though one must again call upon a pantheon of fantastically named demons, the principle attested here has a more naturalistic flavour: ‘As the eyes of the dead do not see the living, so these beans may also have the power of invisibility.’

Within the magic tradition of correspondences, certain plants and minerals were associated with invisibility. For example, the dust on brown patches of mature fern leaves was said to be a charm of invisibility: unlike other plants, they appeared to possess neither flowers nor seeds, but could nevertheless be found surrounded by their progeny.

The classical stone of invisibility was the heliotrope (sun-turner), also called bloodstone: a form of green or yellow quartz (chalcedony) flecked with streaks of a red mineral that is either iron oxide or red jasper. The name alludes to the stone’s tendency to reflect and disperse light, itself a sign of special optical powers. In his Natural History, Pliny says that magicians assert that the heliotrope can make a person invisible, although he scoffs at the suggestion:

In the use of this stone, also, we have a most glaring illustration of the impudent effrontery of the adepts in magic, for they say that, if it is combined with the plant heliotropium, and certain incantations are then repeated over it, it will render the person invisible who carries it about him.

The plant mentioned here, bearing the same name as the mineral, is a genus of the borage family, the flowers of which were thought to turn to face the sun. How a mineral is ‘combined’ with a plant isn’t clear, but the real point is that the two substances are again bound by a system of occult correspondence.

Agrippa repeated Pliny’s claim in the sixteenth century, minus the scepticism:

There is also another vertue of it [the bloodstone] more wonderfull, and that is upon the eyes of men, whose sight it doth so dim, and dazel, that it doth not suffer him that carries it to see it, & this it doth not do without the help of the Hearb of the same name, which also is called Heliotropium.

It is more explicit here that the magic works by dazzlement: the person wearing a heliotrope is ‘invisible’ because the light it reflects befuddles the senses. That is why kings wear bright jewels, explained Anselm Boetius, physician to the Holy Roman Emperor Rudolf II in 1609: they wish to mask their features in brilliance. This use of gems that sparkle, reflect and disperse light to confuse and blind the onlooker is attributed by Ben Jonson to the Rosicrucians, who were often popularly associated with magical powers of invisibility (see pages 32–3). In his poem The Underwood, Jonson writes of

The Chimera of the Rosie-Crosse,
Their signs, their seales, their hermetique rings;
Their jemme of riches, and bright stone that brings
Invisibilitie, and strength, and tongues.

The bishop Francis Godwin indicates in his fantastical fiction The Man in the Moone (1634), an early vision of space travel, that invisibility jewels were commonly deemed to exist, while implying that their corrupting temptations made them subject to divine prohibition. Godwin’s space-voyaging hero Domingo Gonsales asks the inhabitants of the Moon

whether they had not any kind of Jewell or other means to make a man invisible, which mee thought had beene a thing of great and extraordinary use . . . They answered that if it were a thing faisible, yet they assured themselves that God would not suffer it to be revealed to us creatures subject to so many imperfections, being a thing so apt to be abused to ill purposes.

Other dazzling gemstones were awarded the same ‘virtue’, chief among them the opal. This is a form of silica that refracts and reflects light to produce rainbow iridescence, indeed called opalescence.

Whether opal derives from the Greek opollos, ‘seeing’ – the root of ‘optical’ – is disputed, but opal’s streaked appearance certainly resembles the iris of the eye, and it has long been associated with the evil eye. In the thirteenth-century Book of Secrets, yet again falsely attributed to Albertus Magnus, the mineral is given the Greek name for eye (ophthalmos) and is said to cause invisibility by bedazzlement:

Take the stone Ophthalmus, and wrap it in the leaf of the Laurel, or Bay tree; and it is called Lapis Obtalmicus, whose colour is not named, for it is of many colours. And it is of such virtue, that it blindeth the sights of them that stand about. Constantius [probably Constantine the Great] carrying this in his hand, was made invisible by it.

It isn’t hard to recognize this as a variant of Pliny’s recipe, complete with cognate herb. In fact it isn’t entirely clear that this Ophthalmus really is opal, since elsewhere in the Book of Secrets that mineral is called Quiritia and isn’t associated with invisibility. This reflects the way that the book was, like so many medieval handbooks and encyclopedias, patched together from a variety of sources.

Remember the ‘stone from the lapwing’s nest’ mentioned by Grillot de Givry? His source was probably an eighteenth-century text called the Petit Albert – a fabrication, with the grand full title of Marvelous Secrets of Natural and Qabalistic Magic, attributed to a ‘Little Albert’ and obviously trading once more on the authority of the ‘Great Albert’ (Magnus). The occult revivalist Arthur Waite gave the full account of this recipe from the Petit Albert in his Book of Ceremonial Magic (1913), which asserts that the bird plays a further role in the affair:

Having placed the ring on a palette-shaped plate of fixed mercury, compose the perfume of mercury, and thrice expose the ring to the odour thereof; wrap it in a small piece of taffeta corresponding to the colour of the planet, carry it to the peewit’s [lapwing’s] nest from which the stone was obtained, let it remain there for nine days, and when removed, fumigate it precisely as before. Then preserve it most carefully in a small box, made also of fixed mercury, and use it when required.

Now we can get some notion of what natural magic had become by the time the Petit Albert was cobbled together. It sounds straightforward enough, but who is going to do all this? Where will you find the lapwing’s nest with a stone in it in the first place? What is this mysterious ‘perfume of mercury’? Will you take the ring back and put it in the nest for nine days and will it still be there later if you do? The spell has become so intricate, so obscure and vexing, that no one will try it. The same character is evident in a nineteenth-century Greek manuscript called the Bernardakean Magical Codex, in which Aubrey’s instructions for growing beans with a severed head are elaborated beyond all hope of success: you need to bury a black cat’s head under an ant hill, water it with human blood brought every day for forty days from a barber (those were the days when barbers still doubled as blood-letters), and check to see if one of the beans has the power of invisibility by looking into a new mirror in which no one has previously looked. If the spell doesn’t work (and the need to check each bean shows that this is always a possibility), it isn’t because the magic is ineffectual but because you must have done something wrong somewhere along the way. In which case, will you find another black cat and begin over? Unlikely; instead, an aspiring magician would buy these books of ‘secrets’, study their prescriptions and incantations and thereby become an adept in a magical circle: someone who possesses powerful secrets, but does not, perhaps, place much store in actually putting them to use. Magical books thus acquired the same talismanic function as a great deal of the academic literature today: to be read, learnt, cited, but never used.

To read more about Invisible, click here.

 

16. Excerpt: Seeing Green

9780226169903

An excerpt from Seeing Green: The Use and Abuse of American Environmental Images

by Finis Dunaway

***

“The Crying Indian”

It may be the most famous tear in American history. Iron Eyes Cody, an actor in native garb, paddles a birch bark canoe on water that seems at first tranquil and pristine but becomes increasingly polluted along his journey. He pulls his boat from the water and walks toward a bustling freeway. As the lone Indian ponders the polluted landscape and stares at vehicles streaming by, a passenger hurls a paper bag out a car window. The bag bursts on the ground, scattering fast-food wrappers all over his beaded moccasins. In a stern voice, the narrator comments: “Some people have a deep abiding respect for the natural beauty that was once this country. And some people don’t.” The camera zooms in closely on Iron Eyes Cody’s face to reveal a single tear falling, ever so slowly, down his cheek (fig. 5.1).

This tear made its television debut in 1971 at the close of a public service advertisement for the antilitter organization Keep America Beautiful. Appearing in languid motion on television, the tear would also circulate in other visual forms, stilled on billboards and print media advertisements to become a frame stopped in time, forever fixing the image of Iron Eyes Cody as the Crying Indian. Garnering many advertising accolades, including two Clio Awards, and still ranked as one of the best commercials of all time, the Crying Indian spot enjoyed tremendous airtime during the 1970s, allowing it to gain, in advertising lingo, billions of “household impressions” and achieve one of the highest viewer recognition rates in television history. After being remade multiple times to support Keep America Beautiful, and after becoming indelibly etched into American public culture, the commercial has more recently been spoofed by various television shows, including The Simpsons (always a reliable index of popular culture resonance), King of the Hill, and Penn & Teller: Bullshit. These parodies—together with the widely publicized reports that Iron Eyes Cody was actually born Espera De Corti, an Italian-American who literally played Indian in both his life and onscreen—may make it difficult to view the commercial with the same degree of moral seriousness it sought to convey to spectators at the time. Yet to appreciate the commercial’s significance, to situate Cody’s tear within its historical moment, we need to consider why so many viewers believed that the spot represented an image of pure feeling captured by the camera. As the television scholar Robert Thompson explains: “The tear was such an iconic moment. . . . Once you saw it, it was unforgettable. It was like nothing else on television. As such, it stood out in all the clutter we saw in the early 70s.”

FIGURE 5.1. The Crying Indian. Advertising Council / Keep America Beautiful advertisement, 1971. Courtesy of Ad Council Archives, University of Illinois, record series 13/2/203.

As a moment of intense emotional expression, Iron Eyes Cody’s tear compressed and concatenated an array of historical myths, cultural narratives, and political debates about native peoples and progress, technology and modernity, the environment and the question of responsibility. It reached back into the past to critique the present; it celebrated the ecological virtue of the Indian and condemned visual signs of pollution, especially the heedless practices of the litterbug. It turned his crying into a moment of visual eloquence, one that drew upon countercultural currents but also deflected the radical ideas of environmental, indigenous, and other protest groups.

At one level, this visual eloquence came from the tear itself, which tapped into a legacy of romanticism rekindled by the counterculture. As the writer Tom Lutz explains in his history of crying, the Romantics enshrined the body as “the seal of truth,” the authentic bearer of sincere emotion. “To say that tears have a meaning greater than any words is to suggest that truth somehow resides in the body,” he argues. “For [Romantic authors], crying is superior to words as a form of communication because our bodies, uncorrupted by culture or society, are naturally truthful, and tears are the most essential form of speech for this idealized body.”

That it was a single tear, rather than an example of uncontrolled weeping, also contributed to the image’s visual power: a moment readily aestheticized and easily reproduced, a drop poised forever on his cheek, seemingly suspended in perpetuity. Cody himself grasped how emotions and aesthetics became intertwined in the commercial. “The final result was better than anybody expected,” he noted in his autobiography. “In fact, some people who had been working on the project were moved to tears just reviewing the edited version. It was apparent we had something of a 60-second work of art on our hands.” The aestheticizing of his tear yielded emotional eloquence; the tear seemed to express sincerity, an authentic record of feeling and experience. Art and reality merged to offer an emotional critique of the environmental crisis.

That the tear trickled down the leathered face of a Native American (or at least someone reputed to be indigenous) made its emotionality that much more poignant, its critique that much more palpable. By designing the commercial around the imagined experience of a native person, someone who appears to have journeyed out of the past to survey the current landscape, Keep America Beautiful (KAB) incorporated the counterculture’s embrace of Indianness as a marker of oppositional identity.

Yet KAB, composed of leading beverage and packaging corporations and staunchly opposed to many environmental initiatives, sought to interiorize the environmentalist critique of progress, to make individual viewers feel guilty and responsible for the degraded environment. Deflecting the question of responsibility away from corporations and placing it entirely in the realm of individual action, the commercial castigated spectators for their environmental sins but concealed the role of industry in polluting the landscape. A ghost from the past, someone who returns to haunt the contemporary American imagination, the Crying Indian evoked national guilt for the environmental crisis but also worked to erase the presence of actual Indians from the landscape. Even as Red Power became a potent organizing force, KAB conjured a spectral Indian to represent the native experience, a ghost whose melancholy presence mobilized guilt but masked ongoing colonialism, whose troubling visitation encouraged viewers to feel responsible but to forget history. Signifying resistance and secreting urgency, his single tear glossed over power to generate a false sense of personal blame. For all its implied sincerity, many environmentalists would come to see the tear as phony and politically problematic, the liquid conclusion to a sham campaign orchestrated by corporate America.

Before KAB appropriated Indianness by making Iron Eyes Cody into a popular environmental symbol, the group had promoted a similar message of individual responsibility through its previous antilitter campaigns. Founded in 1951 by the American Can Company and the Owens-Illinois Glass Company, a corporate roster that later included the likes of Coca-Cola and the Dixie Cup Company, KAB gained the support of the Advertising Council, the nation’s preeminent public service advertising organization. Best known for creating Smokey Bear and the slogan “Only You Can Prevent Forest Fires” for the US Forest Service, the Ad Council applied the same focus on individual responsibility to its KAB advertising.

The Ad Council’s campaigns for KAB framed litter as a visual crime against landscape beauty and an affront to citizenship values. David F. Beard, a KAB leader and the director of advertising for Reynolds Metals Company, described the litter problem in feverish tones and sought to infuse the issue with a sense of crisis. “During this summer and fall, all media will participate in an accelerated campaign to help to curb the massive defacement of the nation by thoughtless and careless people,” he wrote in 1961. “The bad habits of littering can be changed only by making all citizens aware of their responsibilities to keep our public places as clean as they do their own homes.” The KAB fact sheet distributed to media outlets heightened this rhetoric of urgency by describing litter as an infringement upon the rights of American citizens who “derive much pleasure and recreation from their beautiful outdoors. . . . Yet their enjoyment of the natural and man-made attractions of our grand landscape is everywhere marred by the litter which careless people leave in their wake.” “The mountain of refuse keeps growing,” draining public coffers for continual cleanup and even posing “a menace to life and health,” the Ad Council concluded.

And why had this litter crisis emerged? The Ad Council acknowledged that “more and more products” were now “wrapped and packaged in containers of paper, metal and other materials”—the very same disposable containers that were manufactured, marketed, and used by the very same companies that had founded and directed KAB. Yet rather than critique the proliferation of disposables, rather than question the corporate decisions that led to the widespread use of these materials, KAB and the Ad Council singled out “individual thoughtlessness” as “the outstanding factor in the litter nuisance.”

Each year Beard’s rhetoric became increasingly alarmist as he began to describe the antilitter effort as the moral equivalent of war. “THE LITTERBUGS ARE ON THE LOOSE,” he warned newspapers around the nation, “and we’re counting on you to take up arms against them. . . . Your newspaper is a big gun in the battle against thoughtless littering.” Each year the campaign adopted new visuals to illustrate the tag line: “Bit by bit . . . every litter bit hurts.” “This year we are taking a realistic approach to the litter problem, using before-and-after photographs to illustrate our campaign theme,” Beard reported in 1963. “We think you’ll agree that these ads pack a real wallop.” These images showed a white family or a group of white teenagers enjoying themselves in one photograph but leaving behind unsightly debris in the next. The pictures focused exclusively on places of leisure—beaches, parks, and lakes—to depict these recreational environments as spaces treasured by white middle-class Americans, the archetypal members of the national community. The fight against litter thus appeared as a patriotic effort to protect the beauty of public spaces and to reaffirm the rights and responsibilities of citizenship, especially among the social group considered to exemplify the American way of life.

In 1964, though, Beard announced a shift in strategy. Rather than appealing to citizenship values in general, KAB would target parents in particular by deploying images of children to appeal to their emotions. “This year we are . . . reminding the adult that whenever he strews litter he is remiss in setting a good example for the kids—an appeal which should hit . . . with more emotional force than appealing primarily to his citizenship,” he wrote. The campaign against litter thus packaged itself as a form of emotional citizenship. Situating private feelings within public spaces, KAB urged fathers and mothers to see littering as a sign of poor parenting: “The good citizenship habits you want your children to have go overboard when they see you toss litter away.”

These new advertisements featured Susan Spotless, a young white girl who always wore a white dress—completely spotless, of course—together with white shoes, white socks, and a white headband. In the ads, Susan pointed her accusatory finger at pieces of trash heedlessly dropped by her parents (fig. 5.2). The goal of this campaign, Beard explained, was “to dramatize the message that ‘Keeping America Beautiful’ is a family affair”—a concept that would later be applied not just to litter, but to the entire environmental crisis. Susan Spotless introduced a moral gaze into the discourse on litter, a gaze that used the wagging finger of a child to condemn individual adults for being bad parents, irresponsible citizens, and unpatriotic Americans. She played the part of a child who not only had a vested interest in the future but also appealed to private feelings to instruct her parents how to be better citizens. Launched in 1964, the same year that the Lyndon Johnson campaign broadcast the “Daisy Girl” ad, the Susan Spotless campaign also represented a young white girl as an emblem of futurity to promote citizenship ideals.

Throughout the 1960s and beyond, the Ad Council and KAB continued to present children as emotional symbols of the antilitter agenda. An ad from the late 1960s depicted a chalkboard with children’s antilitter sentiments scrawled across it: “Litter is not pretty. Litter is not healthy. Litter is not clean. Litter is not American.” What all these campaigns assumed was a sense of shared American values and a faith that the United States was fundamentally a good society. The ads did not attempt to mobilize resistant images or question dominant narratives of nationalism. KAB did not in any way attempt to appeal to the social movements and gathering spirit of protest that marked the 1960s.

With this background history in mind, the Crying Indian campaign appears far stranger, a surprising turn for the antilitter movement. KAB suddenly moved from its rather bland admonishments about litter to encompass a broader view of pollution and the environmental crisis. Within a few years it had shifted from Susan Spotless to the Crying Indian. Rather than signaling its commitment to environmentalism, though, this new representational strategy indicated KAB’s fear of the environmental movement.

FIGURE 5.2. “Daddy, you forgot . . . every litter bit hurts!” Advertising Council / Keep America Beautiful advertisement, 1964. Courtesy of Ad Council Archives, University of Illinois, record series 13/2/207.

The soft drink and packaging industries—composed of the same companies that led KAB—viewed the rise of environmentalism with considerable trepidation. Three weeks before the first Earth Day, the National Soft Drink Association (NSDA) distributed a detailed memo to its members, warning that “any bottling company” could be targeted by demonstrators hoping to create an “attention-getting scene.” The memo explained that in March, as part of a “‘dress rehearsal’” for Earth Day, University of Michigan students had protested at a soft drink plant by dumping a huge pile of nonreturnable bottles and cans on company grounds. Similar stunts, the memo cautioned, might be replicated across the nation on Earth Day.

And, indeed, many environmental demonstrations staged during the week surrounding Earth Day focused on the issue of throwaway containers. All these protests held industry—not consumers—responsible for the proliferation of disposable items that wasted natural resources and created a solid waste crisis. In Atlanta, for example, the week culminated with an “Ecology Trek”—featuring a pickup truck full of bottles and cans—to the Coca-Cola company headquarters. FBI surveillance agents, posted at fifty locations around the United States to monitor the potential presence of radicals at Earth Day events, noted that in most cases the bottling plants were ready for the demonstrators. Indeed, the plant managers heeded the memo’s advice: they not only had speeches prepared and “trash receptacles set up” for the bottles and cans hauled by participants, but also offered free soft drinks to the demonstrators. At these protests, environmental activists raised serious questions about consumer culture and the ecological effects of disposable packaging. In response, industry leaders in Atlanta and elsewhere announced, in effect: “Let them drink Coke.”

The NSDA memo combined snideness with grudging respect to emphasize the significance of environmentalism and to warn about its potential impact on their industry: If legions of consumers imbibed the environmentalist message, would their sales and profits diminish? “Those who are protesting, although many may be only semi-informed, have a legitimate concern for the environment they will inherit,” the memo commented. “From a business point of view, the protestors . . . represent the growing numbers of today’s and tomorrow’s soft drink consumers. An industry whose product sales are based on enjoyment of life must be concerned about ecological problems.” Placed on the defensive by Earth Day, the industry recognized that it needed to formulate a more proactive public relations effort.

KAB and the Ad Council would devise the symbolic solution that soft drink and packaging industries craved: the image of the Crying Indian. The conceptual brilliance of the ad stemmed from its ability to incorporate elements of the countercultural and environmentalist critique of progress into its overall vision in order to offer the public a resistant narrative that simultaneously deflected attention from industry practices. When Iron Eyes Cody paddled his birch bark canoe out of the recesses of the imagined past, when his tear registered shock at the polluted present, he tapped into a broader current of protest and, as the ad’s designers knew quite well, entered a cultural milieu already populated by other Ecological Indians.

In 1967 Life magazine ran a cover story titled “Rediscovery of the Red-man,” which emphasized how certain notions of Indianness were becoming central to countercultural identity. Native Americans, the article claimed, were currently “being discovered again—by the hippies. . . . Viewing the dispossessed Indian as America’s original dropout, and convinced that he has deeper spiritual values than the rest of society, hippies have taken to wearing his costume and horning in on his customs.” Even as the article revealed how the counterculture trivialized native culture by extracting symbols of imagined Indianness, it also indicated how the image of the Indian could be deployed as part of an oppositional identity to question dominant values.

While Life stressed the material and pharmaceutical accoutrements the counterculture ascribed to Indianness—from beads and headbands to marijuana and LSD—other media sources noted how many counter-cultural rebels found ecological meaning in native practices. In 1969, as part of a special issue devoted to the environmental crisis, Look magazine profiled the poet Gary Snyder, whose work enjoyed a large following among the counterculture. Photographed in the nude as he held his smiling young child above his head and sat along a riverbank, Snyder looked like the archetypal natural man, someone who had found freedom in nature, far away from the constraints and corruptions of modern culture. In a brief statement to the magazine he evoked frontier mythology to contrast the failures of the cowboy with the virtues of the Indian. “We’ve got to leave the cowboys behind,” Snyder said. “We’ve got to become natives of this land, join the Indians and recapture America.”

Although the image of the Ecological Indian grew out of longstanding traditions in American culture, it circulated with particular intensity during the late 1960s and early 1970s. A 1969 poster distributed by activists in Berkeley, California, who wanted to protect “People’s Park” as a communal garden, features a picture of Geronimo, the legendary Apache resistance fighter, armed with a rifle. The accompanying text contrasts the Indians’ reverence for the land with the greed of white men who turned the space into a parking lot. Likewise, a few weeks before Earth Day, the New York Times Magazine reported on Ecology Action, a Berkeley-based group. The author was particularly struck by one image that appeared in the group’s office. “After getting past the sign at the door, the visitor is confronted with a large poster of a noble, if somewhat apprehensive, Indian. The first Americans have become the culture heroes of the ecology movement.” Native Americans had become symbolically important to the movement, because, one of Ecology Action’s leaders explained, “‘the Indians lived in harmony with this country and they had a reverence for the things they depended on.’”

Hollywood soon followed suit. The 1970 revisionist Western Little Big Man, one of the most popular films of the era, portrayed Great Plains Indians living in harmony with their environment, respecting the majestic herds of bison that filled the landscape. While Indians killed the animals only for subsistence, whites indiscriminately slaughtered the creatures for profit, leaving their carcasses behind to amass, in one memorable scene, enormous columns of skins for the market. One film critic noted that “the ominous theme is the invincible brutality of the white man, the end of ‘natural’ life in America.”

In creating the image of the Crying Indian, KAB practiced a sly form of propaganda. Since the corporations behind the campaign never publicized their involvement, audiences assumed that KAB was a disinterested party. KAB documents, though, reveal the level of duplicity in the campaign. Disingenuous in joining the ecology bandwagon, KAB excelled in the art of deception. It promoted an ideology without seeming ideological; it sought to counter the claims of a political movement without itself seeming political. The Crying Indian, with its creative appropriation of countercultural resistance, provided the guilt-inducing tear KAB needed to propagandize without seeming propagandistic.

Soon after the first Earth Day, Marsteller agreed to serve as the volunteer ad agency for a campaign whose explicit purpose was to broaden the KAB message beyond litter to encompass pollution and the environmental crisis. Acutely aware of the stakes of the ideological struggle, Marsteller’s vice president explained to the Ad Council how he hoped the campaign would battle the ideas of environmentalists—ideas, he feared, that were becoming too widely accepted by the American public. “The problem . . . was the attitude and the thinking of individual Americans,” he claimed. “They considered everyone else but themselves as polluters. Also, they never correlated pollution with litter. . . . The ‘mind-set’ of the public had to be overcome. The objective of the advertising, therefore, would be to show that polluters are people—no matter where they are, in industry or on a picnic.” While this comment may have exaggerated the extent to which the American public held industry and industry alone responsible for environmental problems (witness the popularity of the Pogo quotation), it revealed the anxiety felt by corporate leaders who saw the environmentalist insurgency as a possible threat to their control over the means of production.

As outlined by the Marsteller vice president, the new KAB advertising campaign would seek to accomplish the following ideological objectives: It would conflate litter with pollution, making the problems seem indistinguishable from one another; it would interiorize the sense of blame and responsibility, making viewers feel guilty for their own individual actions; it would generalize and universalize with abandon, making all people appear equally complicit in causing pollution and the environmental crisis. While the campaign would still sometimes rely on images of young white children, images that conveyed futurity to condemn the current crisis, the Crying Indian offered instead an image of the past returning to haunt the present.

Before becoming the Crying Indian, Iron Eyes Cody had performed in numerous Hollywood films, all in roles that embodied the stereotypical, albeit contradictory, characteristics attributed to cinematic Indians. Depending on the part, he could be solemn and stoic or crazed and bloodthirsty; most of all, though, in all these films he appeared locked in the past, a visual relic of the time before Indians, according to frontier myth, had vanished from the continent.

The Crying Indian ad took the dominant mythology as prologue; it assumed that audiences would know the plotlines of progress and disappearance and would imagine its prehistoric protagonist suddenly entering the contemporary moment of 1971. In the spot, the time-traveling Indian paddles his canoe out of the pristine past. His long black braids and feather, his buckskin jacket and beaded moccasins—all signal his pastness, his inability to engage with modernity. He is an anachronism who does not belong in the picture.

The spectral Indian becomes an emblem of protest, a phantomlike figure whose untainted ways allow him to embody native ecological wisdom and to critique the destructive forces of progress. He confronts viewers with his mournful stare, challenging them to atone for their environmental sins. Although he has glimpsed various signs of pollution, it is the final careless act—the one passenger who flings trash at his feet—that leads him to cry. At the moment the tear appears, the narrator, in a baritone voice, intones: “People start pollution. People can stop it.” The Crying Indian does not speak. The voice-over sternly confirms his tearful judgment and articulates what the silent Indian cannot say: Industry and public policy are not to blame, because individual people cause pollution. The resistant narrative becomes incorporated into KAB’s propaganda effort. His tear tries to alter the public’s “mind-set,” to deflect attention away from KAB’s corporate sponsors by making individual Americans feel culpable for the environmental crisis.

Iron Eyes Cody became a spectral Indian at the same moment that actual Indians occupied Alcatraz Island—located, ironically enough, in San Francisco Bay, the same body of water in which the Crying Indian was paddling his canoe. As the ad was being filmed, native activists on nearby Alcatraz were presenting themselves not as past-tense Indians but as coeval citizens laying claim to the abandoned island. For almost two years—from late 1969 through mid-1971, a period that overlapped with both the filming and release of the Crying Indian commercial—they demanded that the US government cede control of the island. The Alcatraz activists, composed mostly of urban Indian college students, called themselves the “Indians of All Tribes” to express a vision of pan-Indian unity—an idea also expressed by the American Indian Movement (AIM) and the struggle for Red Power. On Alcatraz they hoped to create several centers, including an ecological center that would promote “an Indian view of nature—that man should live with the land and not simply on it.”

While the Crying Indian was a ghost in the media machine, the Alcatraz activists sought to challenge the legacies of colonialism and contest contemporary injustices—to address, in other words, the realities of native lives erased by the anachronistic Indians who typically populated Hollywood film. “The Alcatraz news stories are somewhat shocking to non-Indians,” the Indian author and activist Vine Deloria Jr. explained a few months after the occupation began. “It is difficult for most Americans to comprehend that there still exists a living community of nearly one million Indians in this country. For many people, Indians have become a species of movie actor periodically dispatched to the Happy Hunting Grounds by John Wayne on the ‘Late, Late Show.’” The Indians on Alcatraz, Deloria believed, could advance native issues and also potentially teach the United States how to establish a more sustainable relationship with the land. “Non-Indian society has created a monstrosity of a culture where . . . the sun can never break through the smog,” he wrote. “It just seems to a lot of Indians that this continent was a lot better off when we were running it.” While the Crying Indian and Deloria both upheld the notion of native ecological wisdom, they did so in diametrically opposed ways. Iron Eyes Cody’s tear, ineffectual and irrelevant to contemporary Indian lives, evoked only the idea of Indianness, a static symbol for polluting moderns to emulate. In contrast, the burgeoning Red Power movement demonstrated that native peoples would not be consigned to the past, and would not act merely as screens on which whites could project their guilt and desire.

A few weeks after the Crying Indian debuted on TV, the Indians of All Tribes were removed from Alcatraz. Iron Eyes Cody, meanwhile, repeatedly staked out a political position quite different from that of AIM, whose activists protested and picketed one of his films for its stereotypical and demeaning depictions of native characters. Still playing Indian in real life, Cody chastised the group for its radicalism. “The American Indian Movement (AIM) has some good people in it, and I know them,” he later wrote in his autobiography. “But, while the disruptions it has instigated helped put the Indians on the world map, its values and direction must change. AIM must work at encouraging Indians to work within the system if we’re to really improve our lives. If that sounds ‘Uncle Tom,’ so be it. I’m a realist, damn it! The buffalo are never coming back.” Iron Eyes Cody, the prehistoric ghost, the past-tense ecological Indian, disingenuously condemned AIM for failing to engage with modernity and longing for a pristine past when buffalo roamed the continent.

Even as AIM sought to organize and empower Indian peoples to improve present conditions, the Crying Indian appears completely powerless, unable to challenge white domination. In the commercial, all he can do is lament the land his people lost.

To read more about Seeing Green, click here.

Add a Comment
17. 2015 PROSE Awards


Now in their 39th year, the PROSE Awards honor “the very best in professional and scholarly publishing by bringing attention to distinguished books, journals, and electronic content in over 40 categories,” as determined by a jury of peer publishers, librarians, and medical professionals.

As is usually the case with this kind of acknowledgement, we are honored and delighted to share several University of Chicago Press books that were singled out in their respective categories as winners or runners-up for the 2015 PROSE Awards.

***


Kurt Schwitters: Space, Image, Exile
By Megan R. Luke
Honorable Mention, Art History

***


House of Debt: How They (and You) Caused the Great Recession, and How We Can Prevent It from Happening Again
By Atif Mian and Amir Sufi
Honorable Mention, Economics

***


American School Reform: What Works, What Fails, and Why
By Joseph P. McDonald
Winner, Education Practice

***


The Public School Advantage: Why Public Schools Outperform Private Schools
By Christopher A. Lubienski and Sarah Theule Lubienski
Winner, Education Theory

***


Earth’s Deep History: How It Was Discovered and Why It Matters
By Martin J. S. Rudwick
Honorable Mention, History of STM

***


The Selected Poetry of Pier Paolo Pasolini: A Bilingual Edition
By Pier Paolo Pasolini
Edited and translated by Stephen Sartarelli
Honorable Mention, Literature

***


How Should We Live?: A Practical Approach to Everyday Morality
By John Kekes
Honorable Mention, Philosophy

***

Congrats to all of the winners, honorable mentions, and nominees!

To read more about the PROSE Awards, click here.

Add a Comment
18. Excerpt: Elena Conis's Vaccine Nation

9780226923765

An excerpt from Vaccine Nation: America’s Changing Relationship with Immunization

by Elena Conis

(recent pieces featuring the book at the Washington Post and Bloomberg News)

***

“Mumps in Wartime”

Between 1963 and 1969, the nation‘s flourishing pharmaceutical industry launched several vaccines against measles, a vaccine against mumps, and a vaccine against rubella in rapid succession. The measles vaccine became the focus of the federally sponsored eradication campaign described in the previous chapter; the rubella vaccine prevented birth defects and became entwined with the intensifying abortion politics of the time. Both vaccines overshadowed the debut of the vaccine against mumps, a disease of relatively little concern to most Americans in the late 1960s. Mumps was never an object of public dread, as polio had been, and its vaccine was never anxiously awaited, like the Salk polio vaccine had been. Nor was mumps ever singled out for a high–profile immunization campaign or for eradication, as measles had been. All of which made it quite remarkable that, within a few years of its debut, the mumps vaccine would be administered to millions of American children with little fanfare or resistance.

The mumps vaccine first brought to market in 1968 was developed by Maurice Hilleman, then head of Virus and Cell Biology at the burgeoning pharmaceutical company Merck. Hilleman was just beginning to earn a reputation as a giant in the field of vaccine development; upon his death in 2005, the New York Times would credit him with saving “more lives than any other scientist in the 20th century.” Today the histories of mumps vaccine that appear in medical textbooks and the like often begin in 1963, when Hilleman‘s daughter, six–year–old Jeryl Lynn, came down with a sore throat and swollen glands. A widower who found himself tending to his daughter‘s care, Hilleman was suddenly inspired to begin work on a vaccine against mumps—which he began by swabbing Jeryl Lynn‘s throat. Jeryl Lynn‘s viral strain was isolated, cultured, and then gradually weakened, or attenuated, in Merck‘s labs. After field trials throughout Pennsylvania proved the resulting shot effective, the “Jeryl–Lynn strain” vaccine against mumps, also known as Mumpsvax, was approved for use.

But Hilleman was not the first to try or even succeed at developing a vaccine against mumps. Research on a mumps vaccine began in earnest during the 1940s, when the United States‘ entry into World War II gave military scientists reason to take a close look at the disease. As U.S. engagement in the war began, U.S. Public Health Service researchers began reviewing data and literature on the major communicable infections affecting troops during the First World War. They noted that mumps, though not a significant cause of death, was one of the top reasons troops were sent to the infirmary and absent from duty in that war—often for well over two weeks at a time. Mumps had long been recognized as a common but not “severe” disease of childhood that typically caused fever and swelling of the salivary glands. But when it struck teens and adults, its usually rare complications—including inflammation of the reproductive organs and pancreas—became more frequent and more troublesome. Because of its highly contagious nature, mumps spread rapidly through crowded barracks and training camps. Because of its tendency to inflame the testes, it was second only to venereal disease in disabling recruits. In the interest of national defense, the disease clearly warranted further study. PHS researchers estimated that during World War I, mumps had cost the United States close to 4 million “man days” from duty, contributing to more total days lost from duty than foreign forces saw.

The problem of mumps among soldiers quickly became apparent during the Second World War, too, as the infection once again began to spread through army camps. This time around, however, scientists had new information at hand: scientists in the 1930s had determined that mumps was caused by a virus and that it could, at least theoretically, be prevented through immunization. PHS surgeon Karl Habel noted that while civilians didn‘t have to worry about mumps, the fact that infection was a serious problem for the armed forces now justified the search for a vaccine. “To the military surgeon, mumps is no passing indisposition of benign course,” two Harvard epidemiologists concurred. Tipped off to the problem of mumps by a U.S. Army general and funded by the Office of Scientific Research and Development (OSRD), the source of federal support for military research at the time, a group of Harvard researchers began experiments to promote mumps virus immunity in macaque monkeys in the lab.

Within a few years, the Harvard researchers, led by biologist John Enders, had developed a diagnostic test using antigens from the monkey‘s salivary glands, as well as a rudimentary vaccine. In a subsequent set of experiments, conducted both by the Harvard group and by Habel at the National Institute of Health, vaccines containing weakened mumps virus were produced and tested in institutionalized children and plantation laborers in Florida, who had been brought from the West Indies to work on sugar plantations during the war. With men packed ten to a bunkhouse in the camps, mumps was rampant, pulling workers off the fields and sending them to the infirmary for weeks at a time. When PHS scientists injected the men with experimental vaccine, one man in 1,344 went into anaphylactic shock, but he recovered with a shot of adrenaline and “not a single day of work was lost,” reported Habel. To the researchers, the vaccine seemed safe and fairly effective—even though some of the vaccinated came down with the mumps. What remained, noted Enders, was for someone to continue experimenting until scientists had a strain infective enough to provoke a complete immune response while weak enough not to cause any signs or symptoms of the disease.

Those experiments would wait for well over a decade. Research on the mumps vaccine, urgent in wartime, became a casualty of shifting national priorities and the vagaries of government funding. As the war faded from memory, polio, a civilian concern, became the nation‘s number one medical priority. By the end of the 1940s, the Harvard group‘s research was being supported by the National Foundation for Infantile Paralysis, which was devoted to polio research, and no longer by OSRD. Enders stopped publishing on the mumps virus in 1949 and instead turned his full–time attention to the cultivation of polio virus. Habel, at the NIH, also began studying polio. With polio occupying multiple daily headlines throughout the 1950s, mumps lost its place on the nation‘s political and scientific agendas.

Although mumps received scant resources in the 1950s, Lederle Laboratories commercialized the partially protective mumps vaccine, which was about 50 percent effective and offered about a year of protection. When the American Medical Association‘s Council on Drugs reviewed the vaccine in 1957, they didn‘t see much use for it. The AMA advised against administering the shot to children, noting that in children mumps and its “sequelae,” or complications, were “not severe.” The AMA acknowledged the vaccine‘s potential utility in certain populations of adults and children—namely, military personnel, medical students, orphans, and institutionalized patients—but the fact that such populations would need to be revaccinated every year made the vaccine‘s deployment impractical. The little professional discussion generated by the vaccine revealed a similar ambivalence. Some observers even came to the disease‘s defense. Edward Shaw, a physician at the University of California School of Medicine, argued that given the vaccine‘s temporary protection, “deliberate exposure to the disease in childhood … may be desirable”: it was the only way to ensure lifelong immunity, he noted, and it came with few risks. The most significant risk, in his view, was that infected children would pass the disease to susceptible adults. But even this concern failed to move experts to urge vaccination. War had made mumps a public health priority for the U.S. government in the 1940s, but the resulting technology (imperfect as it was) generated little interest or enthusiasm in a time of peace, when other health concerns loomed larger.

After the war but before the new live virus vaccine was introduced, mumps went back to being what it long had been: an innocuous and sometimes amusing childhood disease. The amusing nature of mumps in the 1950s is evident even in seemingly serious documents from the time. When the New York State health department published a brochure on mumps in 1955, they adopted a light tone and a comical caricature of chipmunk–cheeked “Billy” to describe a brush with the disease. In the Chicago papers, health columnist and Chicago Medical Society president Theodore Van Dellen noted that when struck with mumps, “the victim is likely to be dubbed ‘moon–face.‘” Such representations of mumps typically minimized the disease‘s severity. Van Dellen noted that while mumps did have some unpleasant complications—including the one that had garnered so much attention during the war—“the sex gland complication is not always as serious as we have been led to believe.” The health department brochure pointed out that “children seldom develop complications,” and should therefore not be vaccinated: “Almost always a child is better off having mumps: the case is milder in childhood and gives him life–long immunity.”

Such conceptualizations helped shape popular representations of the illness. In press reports from the time, an almost exaggeratedly lighthearted attitude toward mumps prevailed. In Atlanta, papers reported with amusement on the oldest adult to come down with mumps, an Englishwoman who had reached the impressive age of ninety-nine. Chicago papers featured the sad but cute story of the boy whose poodle went missing when mumps prevented him from being able to whistle to call his dog home. In Los Angeles, the daily paper told the funny tale of a young couple forced to exchange marital vows by phone when the groom came down with mumps just before the big day. Los Angeles Times readers speculated on whether the word “mumps” was singular or plural, while Chicago Daily Defender readers got to laugh at a photo of a fat-cheeked matron and her fat-cheeked cocker spaniel, heads wrapped in matching dressings to soothe their mumps-swollen glands. Did dogs and cats actually get the mumps? In the interest of entertaining readers, newspapers speculated on that as well.

The top reason mumps made headlines throughout the fifties and into the sixties, however, was its propensity to bench professional athletes. Track stars, baseball players, boxers, football stars, and coaches all made the news when struck by mumps. So did Washington Redskins player Clyde Goodnight, whose story revealed a paradox of mumps at midcentury: the disease was widely regarded with casual dismissal and a smirk, even as large enterprises fretted over its potential to cut into profits. When Goodnight came down with a case of mumps in 1950, his coaches giddily planned to announce his infection to the press and then send him into the field to play anyway, where the Pittsburgh Steelers, they gambled, would be sure to leave him open for passes. But the plan was nixed before game time by the Redskins‘ public relations department, who feared the jubilant Goodnight might run up in the stands after a good play and give fans the mumps. Noted one of the team‘s publicists: “That‘s not good business.”

When Baltimore Orioles outfielder Frank Robinson came down with the mumps during an away game against the Los Angeles Angels in 1968, however, the tone of the team‘s response was markedly different. Merck‘s new Mumpsvax vaccine had recently been licensed for sale, and the Orioles‘ managers moved quickly to vaccinate the whole team, along with their entire press corps and club officials. The Orioles‘ use of the new vaccine largely adhered to the guidelines that Surgeon General William Stewart had announced upon the vaccine‘s approval: it was for preteens, teenagers, and adults who hadn‘t yet had a case of the mumps. (For the time being, at least, it wasn‘t recommended for children.) The Angels‘ management, by contrast, decided not to vaccinate their players—despite their good chances of having come into contact with mumps in the field.

Baseball‘s lack of consensus on how or whether to use the mumps vaccine was symptomatic of the nation‘s response as a whole. Cultural ambivalence toward mumps had translated into ambivalence toward the disease‘s new prophylactic, too. That ambivalence was well–captured in the hit movie Bullitt, which came out the same year as the new mumps vaccine. In the film‘s opening scene, San Francisco cop Frank Bullitt readies himself for the workday ahead as his partner, Don Delgetti, reads the day‘s headlines aloud. “Mumps vaccine on the market … the government authorized yesterday what officials term the first clearly effective vaccine to prevent mumps … ,” Delgetti begins—until Bullitt sharply cuts him off. “Why don‘t you just relax and have your orange juice and shut up, Delgetti.” Bullitt, a sixties icon of machismo and virility, has more important things to worry about than the mumps. So, apparently, did the rest of the country. The Los Angeles Times announced the vaccine‘s approval on page 12, and the New York Times buried the story on page 72, as the war in Vietnam and the race to the moon took center stage.

Also ambivalent about the vaccine—or, more accurately, the vaccine‘s use—were the health professionals grappling with what it meant to have such a tool at their disposal. Just prior to Mumpsvax‘s approval, the federal Advisory Committee on Immunization Practices at the CDC recommended that the vaccine be administered to any child approaching or in puberty; men who had not yet had the mumps; and children living in institutions, where “epidemic mumps can be particularly disruptive.” Almost immediately, groups of medical and scientific professionals began to take issue with various aspects of these national guidelines. For some, the vaccine‘s unknown duration was troubling: ongoing trials had by then demonstrated just two years of protection. To others, the very nature of the disease against which the shot protected raised philosophical questions about vaccination that had yet to be addressed. The Consumers Union flinched at the recommendation that institutionalized children be vaccinated, arguing that “mere convenience is insufficient justification for preventing the children from getting mumps and thus perhaps escorting them into adulthood without immunity.” The editors of the New England Journal of Medicine advised against mass application of mumps vaccine, arguing that the “general benignity of mumps” did not justify “the expenditure of large amounts of time, efforts, and funds.” The journal‘s editors also decried the exaggeration of mumps‘ complications, noting that the risk of damage to the male sex glands and nervous system had been overstated. These facts, coupled with the ever–present risk of hazards attendant with any vaccination program, justified, in their estimation, “conservative” use of the vaccine.

This debate over how to use the mumps vaccine was often coupled with the more generalized reflection that Mumpsvax helped spark over the appropriate use of vaccines in what health experts began referring to as a new era of vaccination. In contrast to polio or smallpox, the eradication of mumps was far from urgent, noted the editors of the prestigious medical journal the Lancet. In this “next stage” of vaccination, marked by “prevention of milder virus diseases,” they wrote, “a cautious attitude now prevails.” If vaccines were to be wielded against diseases that represented only a “minor inconvenience,” such as mumps, then such vaccines needed to be effective, completely free of side effects, long–lasting, and must not in any way increase more severe adult forms of childhood infections, they argued. Immunization officials at the CDC acknowledged that with the approval of the mumps vaccine, they had been “forced to chart a course through unknown waters.” They agreed that the control of severe illnesses had “shifted the priorities for vaccine development to the remaining milder diseases,” but how to prevent these milder infections remained an open question. They delineated but a single criterion justifying a vaccine‘s use against such a disease: that it pose less of a hazard than its target infection.

To other observers, this was not enough. A vaccine should not only be harmless—it should also produce immunity as well as or better than natural infection, maintained Oklahoma physician Harris Riley. The fact that the mumps vaccine in particular became available before the longevity of its protection was known complicated matters for many weighing in on the professional debate. Perhaps, said Massachusetts health officer Morton Madoff, physicians should be left to decide for themselves how to use such vaccines as “a matter of conscience.” His comment revealed a hesitancy to delineate policy that many displayed when faced with the uncharted territory the mumps vaccine had laid bare. It also hinted at an attempt to shift future blame in case mumps vaccination went awry down the line—a possibility that occurred to many observers given the still–unknown duration of the vaccine‘s protection.

Mumps was not a top public health priority in 1967—in fact, it was not even a reportable disease—but the licensure of Mumpsvax would change the disease‘s standing over the course of the next decade. When the vaccine was licensed, editors at the Lancet noted that there had been little interest in a mumps vaccine until such a vaccine became available. Similarly, a CDC scientist remarked that the vaccine had “stimulated renewed interest in mumps” and had forced scientists to confront how little they knew about the disease‘s etiology and epidemiology. If the proper application of a vaccine against a mild infection remained unclear, what was clear—to scientists at the CDC at least—was that such ambiguities could be rectified through further study of both the vaccine and the disease. Given a new tool, that is, scientists were determined to figure out how best to use it. In the process of doing so, they would also begin to create new representations of mumps, effectively changing how they and Americans in general would perceive the disease in the future.

A Changing Disease

Shortly after the mumps vaccine‘s approval, CDC epidemiologist Adolf Karchmer gave a speech on the infection and its vaccine at an annual immunization conference. In light of the difficulties that health officials and medical associations were facing in trying to determine how best to use the vaccine, Karchmer devoted his talk to a review of existing knowledge on mumps. Aside from the fact that the disease caused few annual deaths, peaked in spring, and affected mostly children, particularly males, there was much scientists didn‘t know about mumps. They weren‘t certain about the disease‘s true prevalence; asymptomatic cases made commonly cited numbers a likely underestimate. There was disagreement over whether the disease occurred in six– to seven–year cycles. Scientists weren‘t sure whether infection was truly a cause of male impotence and sterility. And they didn‘t know the precise nature of the virus‘s effects on the nervous system. Karchmer expressed a concern shared by many: if the vaccine was administered to children and teens, and if it proved to wear off with time, would vaccination create a population of non–immune adults even more susceptible to the disease and its serious complications than the current population? Karchmer and others thus worried—at this early stage, at least—that trying to control mumps not only wouldn‘t be worth the resources it would require, but that it might also create a bigger public health problem down the road.

To address this concern, CDC scientists took a two–pronged approach to better understanding mumps and the potential for its vaccine. They reinstated mumps surveillance, which had been implemented following World War I but suspended after World War II. They also issued a request to state health departments across the country, asking for help identifying local outbreaks of mumps that they could use to study both the disease and the vaccine. Within a few months, the agency had dispatched teams of epidemiologists to study mumps outbreaks in Campbell and Fleming Counties in Kentucky, the Colin Anderson Center for the “mentally retarded” in West Virginia, and the Fort Custer State Home for the mentally retarded in Michigan.

The Fort Custer State Home in Augusta, Michigan, hadn‘t had a single mumps outbreak in its ten years of existence when the CDC began to investigate a rash of 105 cases that occurred in late 1967. In pages upon pages of detailed notes, the scientists documented the symptoms (largely low–grade fever and runny noses) as well as the habits and behaviors of the home‘s children. They noted not only who slept where, who ate with whom, and which playgrounds the children used, but also who was a “toilet sitter,” who was a “drippy, drooley, messy eater,” who was “spastic,” who “puts fingers in mouth,” and who had “impressive oral–centered behavior.” The index case—the boy who presumably brought the disease into the home—was described as a “gregarious and restless child who spends most of his waking hours darting from one play group to another, is notably untidy and often places his fingers or his thumbs in his mouth.” The importance of these behaviors was unproven, remarked the researchers, but they seemed worth noting. Combined with other observations—such as which child left the home, for example, to go on a picnic with his sister—it‘s clear that the Fort Custer children were viewed as a petri dish of infection threatening the community at large.

Although the researchers‘ notes explicitly stated that the Fort Custer findings were not necessarily applicable to the general population, they were presented to the 1968 meeting of the American Public Health Association as if they were. The investigation revealed that mumps took about fifteen to eighteen days to incubate, and then lasted between three and six days, causing fever for one or two days. Complications were rare (three boys ages eleven and up suffered swollen testes), and attack rates were highest among the youngest children. The team also concluded that crowding alone was insufficient for mumps to spread; interaction had to be “intimate,” involving activities that stimulated the flow and spread of saliva, such as the thumb–sucking and messy eating so common among not only institutionalized children but children of all kinds.

Mumps preferentially strikes children, so it followed that children offered the most convenient population for studying the disease’s epidemiology. But in asking a question about children, scientists ipso facto obtained an answer—or series of answers—about children. Although mumps had previously been considered a significant health problem only among adults, the evidence in favor of immunizing children now began to accumulate. Such evidence came not only from studies like the one at Fort Custer, but also from local reports from across the country. When Bellingham and Whatcom Counties in Washington State made the mumps vaccine available in county and school clinics, for example, few adults and older children sought the shot; instead, five- to nine-year-olds were the most frequently vaccinated. This wasn’t necessarily a bad thing, said Washington health officer Phillip Jones, who pointed out that there were two ways to attack a health problem: you could either immunize a susceptible population or protect them from exposure. Immunizing children did both, as it protected children directly and in turn stopped exposure of adults, who usually caught the disease from kids. Immunizing children sidestepped the problem he had noticed in his own county. “It is impractical to think that immunization of adults and teen-agers against mumps will have any significant impact on the total incidence of adult and teen-age mumps. It is very difficult to motivate these people,” said Jones. “On the other hand, parents of younger children eagerly seek immunization of these younger children and there are numerous well-established programs for the immunization of children, to which mumps immunization can be added.”

Setting aside concerns regarding the dangers of giving children immunity of unknown duration, Jones effectively articulated the general consensus on immunization of his time. The polio immunization drives described in chapters 1 and 2 had helped forge the impression that vaccines were “for children” as opposed to adults. The establishment of routine pediatric care, also discussed in chapter 1, offered a convenient setting for broad administration of vaccines, as well as an audience primed to accept the practice. As a Washington, D.C., health officer remarked, his district found that they could effectively use the smallpox vaccine, which most “mothers” eagerly sought for their children, as “bait” to lure them in for vaccines against other infections. The vaccination of children got an added boost from the news that Russia, the United States‘ key Cold War opponent and foil in the space race, had by the end of 1967 already vaccinated more than a million of its youngsters against mumps.

The initial hesitation to vaccinate children against mumps was further dismantled by concurrent discourse concerning a separate vaccine, against rubella (then commonly known as German measles). In the mid-1960s, rubella had joined polio and smallpox in the ranks of diseases actively instilling fear in parents, and particularly mothers. Rubella, a viral infection that typically caused rash and a fever, was harmless in children. But when pregnant women caught the infection, it posed a risk of harm to the fetus. A nationwide rubella epidemic in 1963 and 1964 resulted in a reported 30,000 fetal deaths and the birth of more than 20,000 children with severe handicaps. In fact, no sooner had the nation’s Advisory Committee on Immunization Practices been formed, in 1964, than its members began to discuss the potential for a pending rubella vaccine to prevent similar outbreaks in the future. But as research on the vaccine progressed, it became apparent that while the shot produced no side effects in children, in women it caused a “rubella-like syndrome” in addition to swollen and painful joints. Combined with the fact that the vaccine’s potential to cause birth defects was unknown, and that the vaccination of women planning to become pregnant was perceived as logistically difficult, federal health officials concluded that “the widespread immunization of children would seem to be a safer and more efficient way to control rubella syndrome.” Immunization of children against rubella was further justified based on the observation that children were “the major source of virus dissemination in the community.” Pregnant women, that is, would be protected from the disease as long as they didn’t come into contact with it.

The decision to recommend the mass immunization of children against rubella marked the first time that vaccination was deployed in a manner that offered no direct benefit to the individuals vaccinated, as historian Leslie Reagan has noted. Reagan and, separately, sociologist Jacob Heller have argued that a unique cultural impetus was at play in the adoption of this policy: as an accepted but difficult–to–verify means of obtaining a therapeutic abortion at a time when all other forms of abortion were illegal, rubella infection was linked to the contentious abortion politics of the time. A pregnant woman, that is, could legitimately obtain an otherwise illegal abortion by claiming that she had been exposed to rubella, even if she had no symptoms of the disease. Eliminating rubella from communities through vaccination of children would close this loophole—or so some abortion opponents likely hoped. Eliminating rubella was also one means of addressing the growing epidemic of mental retardation, since the virus was known to cause birth defects and congenital deformities that led children to be either physically disabled or cognitively impaired. Rubella immunization promotion thus built directly upon the broader public‘s anxieties about abortion, the “crippling” diseases (such as polio), and mental retardation.

In its early years, the promotion of mumps immunization built on some of these same fears. Federal immunization brochures from the 1940s and 1950s occasionally mentioned that mumps could swell the brain or the meninges (the membranes surrounding the brain), but they never mentioned a risk of brain damage. In the late 1960s, however, such insinuations began to appear in reports on the new vaccine. Hilleman’s early papers on the mumps vaccine trials opened with the repeated statement that “Mumps is a common childhood disease that may be severely and even permanently crippling when it involves the brain.” When Chicago announced Mumps Prevention Day, the city’s medical director described mumps as a disease that can “contribute to mental retardation.” Though newspaper reporters focused more consistently on the risk that mumps posed to male fertility, many echoed the “news” that mumps could cause permanent damage to the brain. Such reports obscured substantial differentials of risk noted in the scientific literature. For unlike the link between mumps and testicular swelling, the relationship between mumps and brain damage or mental retardation was neither proven nor quantified, even though “benign” swelling of the meninges was documented to appear in 15 percent of childhood cases. In a nation just beginning to address the treatment of mentally retarded children as a social (instead of private) problem, however, any opportunity to prevent further potential cases of brain damage, no matter how small, was welcomed by both parents and cost-benefit-calculating municipalities.

The notion that vaccines protected the health (and, therefore, the productivity and utility) of future adult citizens had long been in place by the time the rubella vaccine was licensed in 1969. In addition to fulfilling this role, the rubella vaccine and the mumps vaccine—which, again, was most commonly depicted as a guard against sterility and “damage to the sex glands” in men—were also deployed to ensure the existence of future citizens, by protecting the reproductive capacities of the American population. The vaccination of children against both rubella and mumps was thus linked to cultural anxiety over falling fertility in the post–Baby Boom United States. In this context, mumps infection became nearly as much a cause for concern in the American home as it had been in army barracks and worker camps two decades before. This view of the disease was captured in a 1973 episode of the popular television sitcom The Brady Bunch, in which panic ensued when young Bobby Brady learned he might have caught the mumps from his girlfriend and put his entire family at risk of infection. “Bobby, for your first kiss, did you have to pick a girl with the mumps?” asked his father, who had made it to adulthood without a case of the disease. This cultural anxiety was also evident in immunization policy discussions. CDC scientists stressed the importance of immunizing against mumps given men‘s fears of mumps–induced impotence and sterility—even as they acknowledged that such complications were “rather poorly documented and thought to occur rarely, if at all.”

As the new mumps vaccine was defining its role, the revolution in reproductive technologies, rights, and discourse that extended from the 1960s into the 1970s was reshaping American—particularly middle–class American—attitudes toward children in a manner that had direct bearing on the culture‘s willingness to accept a growing number of vaccines for children. The year 1967 saw more vaccines under development than ever before. Merck‘s own investment in vaccine research and promotion exemplified the trend; even as doctors and health officials were debating how to use Mumpsvax, Hilleman‘s lab was testing a combined vaccine against measles, rubella, and mumps that would ultimately help make the company a giant in the vaccine market. This boom in vaccine commodification coincided with the gradual shrinking of American families that new contraceptive technologies and the changing social role of women (among other factors) had helped engender.

The link between these two trends found expression in shifting attitudes toward the value of children, which were well–captured by Chicago Tribune columnist Joan Beck in 1967. Beck predicted that 1967 would be a “vintage year” for babies, for the 1967 baby stood “the best chance in history of being truly wanted” and the “best chance in history to grow up healthier and brighter and to get a better education than his forebears.” He‘d be healthier—and smarter—thanks in large part to vaccines, which would enable him to “skip” mumps, rubella, and measles, with their attendant potential to “take the edge off a child‘s intelligence.” American children might be fewer in number as well as costly, Beck wrote, but they‘d be both deeply desired and ultimately well worth the tremendous investment. This attitude is indicative of the soaring emotional value that children accrued in the last half of the twentieth century. In the 1960s, vaccination advocates appealed directly to the parent of the highly valued child, by emphasizing the importance of vaccinating against diseases that seemed rare or mild, or whose complications seemed even rarer still. Noted one CDC scientist, who extolled the importance of vaccination against such diseases as diphtheria and whooping cough even as they became increasingly rare: “The disease incidence may be one in a thousand, but if that one is your child, the incidence is a hundred percent.”

Discourse concerning the “wantedness” of individual children in the post–Baby Boom era reflected a predominantly white middle-class conceptualization of children. As middle-class birth rates continued to fall, reaching a nadir in 1978, vaccines kept company with other commodities—a suburban home, quality schooling, a good college—that shaped the truly wanted child’s middle-class upbringing. From the late 1960s through the 1970s, vaccination in general was increasingly represented as both a modern comfort and a convenience of contemporary living. This portrayal dovetailed with the frequent depiction of the mild infections, and mumps in particular, as “nuisances” Americans no longer needed to “tolerate.” No longer did Americans of any age have to suffer the “variety of spots and lumps and whoops” that once plagued American childhood, noted one reporter. Even CDC publications commented on “the luxury and ease of health provided by artificial antigens” of the modern age.

And even though mumps, for one, was not a serious disease, remarked one magazine writer, the vaccination was there “for those who want to be spared even the slight discomfort of a case.” Mumps vaccination in fact epitomized the realization of ease of modern living through vaccination. Because it kept kids home from school and parents home from work, “it is inconvenient, to say the least, to have mumps,” noted a Massachusetts health official. “Why should we tolerate it any longer?” Merck aimed to capitalize on this view with ads it ran in the seventies: “To help avoid the discomfort, the inconvenience—and the possibility of complications: Mumpsvax,” read the ad copy. Vaccines against infections such as mumps might not be perceived as absolutely necessary, but the physical and material comfort they provided should not be undervalued.

To read more about Vaccine Nation, click here.

Add a Comment
19. Free e-book for February: Floating Gold

9780226430362

Our free e-book for February is Christopher Kemp’s idiosyncratic exegesis on the backstory of whale poop,

Floating Gold: A Natural (and Unnatural) History of Ambergris.

***

“Preternaturally hardened whale dung” is not the first image that comes to mind when we think of perfume, otherwise a symbol of glamour and allure. But the key ingredient that makes the sophisticated scent linger on the skin is precisely this bizarre digestive by-product—ambergris. Despite being one of the world’s most expensive substances (its value is nearly that of gold and has at times in history been three times that), ambergris is also one of the world’s least known. But with this unusual and highly alluring book, Christopher Kemp promises to change that by uncovering the unique history of ambergris.

A rare secretion produced only by sperm whales, which have a fondness for squid but an inability to digest their beaks, ambergris is expelled at sea and floats on ocean currents for years, slowly transforming, before it sometimes washes ashore looking like a nondescript waxy pebble. It can appear almost anywhere but is found so rarely, it might as well appear nowhere. Kemp’s journey begins with an encounter on a New Zealand beach with a giant lump of faux ambergris—determined after much excitement to be nothing more exotic than lard—that inspires a comprehensive quest to seek out ambergris and its story. He takes us from the wild, rocky New Zealand coastline to Stewart Island, a remote, windswept island in the southern seas, to Boston and Cape Cod, and back again. Along the way, he tracks down the secretive collectors and traders who populate the clandestine modern-day ambergris trade.

Floating Gold is an entertaining and lively history that not only covers these precious gray lumps and those who covet them, but also presents a highly informative account of the natural history of whales, squid, ocean ecology, and even a history of the perfume industry. Kemp’s obsessive curiosity is infectious, and eager readers will feel as though they have stumbled upon a precious bounty of this intriguing substance.

Download your free copy of Floating Gold, here.

Add a Comment
20. Free e-book for November: Mr. Jefferson and the Giant Moose

9780226169149

 

Lee Alan Dugatkin’s Mr. Jefferson and the Giant Moose, our free e-book for November, reconsiders the crucial supporting role played by a moose carcass in Jeffersonian democracy.

***

Thomas Jefferson—author of the Declaration of Independence, US president, and ardent naturalist—spent years countering the French conception of American degeneracy. His Notes on Virginia systematically and scientifically dismantled the case made by the French naturalist Buffon through a series of tables and equally compelling writing on the nature of his home state. But the book did little to counter the arrogance of the French and hardly satisfied Jefferson’s quest to demonstrate that his young nation was every bit the equal of a well-established Europe. Enter the giant moose.

The American moose, which Jefferson claimed was so enormous a European reindeer could walk under it, became the cornerstone of his defense. Convinced that the sight of such a magnificent beast would cause Buffon to revise his claims, Jefferson had the remains of a seven-foot ungulate shipped first class from New Hampshire to Paris. Unfortunately, Buffon died before he could make any revisions to his Histoire Naturelle, but the legend of the moose makes for a fascinating tale about Jefferson’s passion to prove that American nature deserved prestige.

In Mr. Jefferson and the Giant Moose, Lee Alan Dugatkin vividly recreates the origin and evolution of the debates about natural history in America and, in so doing, returns the prize moose to its rightful place in American history.

To download your free copy, click here.

 

 

Add a Comment
21. Excerpt: Packaged Pleasures

9780226121277
by Gary S. Cross and Robert N. Proctor

 ***

“The Carrot and the Candy Bar”

Our topic is a revolution—as significant as anything that has tossed the world over the past two hundred years. Toward the end of the nineteenth century, a host of often ignored technologies transformed human sensual experience, changing how we eat, drink, see, hear, and feel in ways we still benefit (and suffer) from today. Modern people learned how to capture and intensify sensuality, to preserve it, and to make it portable, durable, and accessible across great reaches of social class and physical space. Our vulnerability to such a transformation traces back hundreds of thousands of years, but the revolution itself did not take place until the end of the nineteenth century, following a series of technological changes altering our ability to compress, distribute, and commercialize a vast range of pleasures.

Strangely, historians have neglected this transformation. Indeed, behind this astonishing lapse lies a common myth—that there was an age of production that somehow gave rise to an age of consumption, with historians of the former exploring industrial technology, while historians of the latter stress the social and symbolic meaning of goods. This artificial division obscures how technologies of production have transformed what and how we actually consume. Technology does far more than just increase productivity or transform work, as historians of the Industrial Revolution so often emphasize. Industrial technology has also shaped how and how much we eat, what we wear and why, and how and what (and how much!) we hear and see. And myriad other aspects of how we experience daily life—or even how we long for escape from it.

Bound to such transformations is a profound disruption in modern life, a breakdown of the age-old tension between our bodily desires and the scarcity of opportunities for fulfillment. New technologies—from the rolling of cigarettes to the recording of sound—have intensified the gratification of desires but also rendered them far more easily satisfied, often to the point of grotesque excess. An obvious example is the mechanized packaging of highly sugared foods, which began over a century ago and has led to a health and moral crisis today. Lots of media attention has focused on the irresponsibility of the food industry and the rise of recreational and workplace sedentism—but there are other ways to look at this.

It should be obvious that technology has transformed how people eat, especially with regard to the ease and speed with which it is now possible to ingest calories. Roots of such transformations go very deep: the Neolithic revolution ten-plus thousand years ago brought with it new methods of regularizing the growing of food and the world’s first possibility of elite obesity. The packaged pleasure revolution in the nineteenth century, however, made such excess possible for much larger numbers of “consumers”—a word only rarely used prior to that time. Industrial food processors learned how to pack fat, sugar, and salt into concentrated and attractive portions, and to manufacture these cheaply and in packages that could be widely distributed. Foods that were once luxuries thus became seductively commonplace. This is the first thing we need to understand.

We also need to appreciate that responsibility for the excesses of today’s consumers cannot be laid entirely at the doors of modern technology and the corporations that benefit from it. We cannot blame the food industry alone. No one is forced to eat at McDonald’s; people choose Big Macs with fries because they satisfy with convenience and affordability, just as people decide to turn on their iPods rather than listen to nature or go to a concert. But why would we make such a choice—and is it entirely a “free choice”? This brings us to a second crucial point: humans have evolved to seek high-energy foods because in prehistoric conditions of scarcity, eating such foods greatly improved their ancestors’ chances of survival. This has limited, but not entirely eliminated, our capacity to resist these foods when they no longer are scarce. And if we today crave sugar and fat and salt, that is partly because these longings must have once promoted survival, deep in the pre-Paleolithic and Paleolithic. Our taste buds respond gleefully to sugars because we are descended from herbivores and especially frugivores for whom sweet-tasting plants and fruits were neuro-marked as edible and nutritious. Poisonous plants were more often bitter-tasting. Pleasure at least in this sensory sense was often a clue to what might help one survive.

But here again is the rub. Thanks to modern industrialism, high-calorie foods once rare are now cheap and plentiful. Industrial technology has overwhelmed and undercut whatever balance may have existed between the biological needs of humans and natural scarcity. We tend to crave those foods that before modern times were rare; cravings for fat and sugar were no threat to health; indeed, they improved our chances of survival. Now, however, sugar, especially in its refined forms, is plentiful, and as a result makes us fat and otherwise unhealthy. And what is true for sugar is also true for animal fat. In our prehistoric past fat was scarce and valuable, accounting for only 2 to 4 percent of the flesh of deer, rabbits, and birds, and early humans correctly gorged whenever it was available. Today, though, factory-farmed beef can consist of 36 percent fat, and most of us expend practically no energy obtaining it. And still we gorge.

And so the candy bar, a perfect example of the engineered pleasure, wins out over the carrot and even the apple. More sugar and seemingly more varied flavors are packed into the confection than the unprocessed fruit or vegetable. In this sense our craving for a Snickers bar is partly an expression of the chimp in us, insofar as we desire energy-packed foods with maximal sugars and fat. The concentration, the packaging, and the ease of access (including affordability) all make it possible—indeed enticingly easy—to ingest far more than we know is good for us. Our biological desires have become imperfect guides for good behavior: drives born in a world of scarcity do not necessarily lead to health and happiness in a world of plenty.

But food is not the only domain where such tensions operate. Indeed, a broader historical optic reveals tensions in our response to the packaged provisioning of other sensations, and this broader perspective invites us to go beyond our current focus on food, as important as that may be.

As biological creatures we are naturally attracted to certain sights and sounds, even smells and motion, insofar as we have evolved in environments where such sensitivities helped our ancestors prevail over myriad threats to human existence. The body’s perceptual organs are, in a sense, some of our oldest tools, and much of the pleasure we take in bright colors, combinations of particular shapes, and certain kinds of movement must be rooted in prehistoric needs to identify food, threats, or mates from a distance. Today we embrace the recreational counterparts, filling our domestic spaces with visual ornaments, fixed or in motion, reminding ourselves of landscapes, colors, or shapes that provoke recall or simulate absent or even impossible worlds.

What has changed, in other words, is our access to once-rare sensations, including sounds but especially imagery. The decorated caves of southern France, once rare and ritualized space, are now tourist attractions, accessible to all through electronic media. Changes in visual technology have made possible a virtual orgy of visual culture; a 2012 count estimated over 348,000,000,000 images on the Internet, with a growth rate of about 10,000 per second. The mix and matrix of information transfer has changed accordingly: orality (and aurality) has been demoted to a certain extent, first with the rise of typography (printing) and then the published picture, and now the ubiquitous electronic image on screens of different sorts. “Seeing is believing” is an expression dating only from about 1800, signaling the surging primacy of the visual. Civilization itself celebrates the light, the visual sense, as the darkness of the night and the narrow street gradually give way to illuminated interiors, light after dark, and ever broader visual surveillance.

Humans also have preferences for certain smells, of course, even if we are (far) less discriminating than most other mammals. Technologies of odor have never been developed as intensively as those of other senses, though we should not forget that for tens of thousands of years hunters have employed dogs—one of the oldest human “tools”—to do their smelling. Smell has also sometimes marked differences between tribes and classes, rationalizing the isolation of slaves or some other subject group. The wealthy are known to have defined themselves by their scents (the ancient Greeks used mint and thyme oils for this purpose), and fragrances have been used to ward off contagions. Some philosophers believed that the scent of incense could reach and please the gods; and of course the devil smelled foul—as did sin.

Still, the olfactory sense lost much of its acuity in upright primates, and it is the rare philosopher who would base an epistemology on odor. Philosophers have always privileged sight over all other senses—which makes sense given how much of our brain is devoted to processing visual images (canine epistemology and agnotology would surely be quite different). Optico-centricity was further accentuated with the rise of novel ways of extending vision in the seventeenth century (microscopes, telescopes) and still more with the rise of photography and moving pictures. Industrial societies have continued to devalue scent, with some even trying to make the world smell-free. Pasteur’s discovery of germs meant that foul air (think miasma) lost its role in carrying disease, but efforts to remove the germs that caused such odors (especially the sewage systems installed in cities in the nineteenth century) ended up mollifying much of the stink of large urban centers. Bodily perfuming has probably been around for as long as humans have been human, but much of recent history has involved a process of deodorizing, further reducing the value of the sensitive nose.

Modern people may well gorge on sight, but we certainly remain sound-sensitive and long for music, “the perfume of hearing” in the apt metaphor of Diane Ackerman. Music has always aroused a certain spiritual consciousness and may even have facilitated social bonding among early humans. Stringed and drum instruments date back only to about 5,500 years ago (in Mesopotamia), but unambiguous flutes date back to at least 40,000 years ago; the oldest known so far is made from vulture and swan bones found in southern Germany. Singing, though, must be far older than whatever physical evidence we have for prehistoric music.

There is arguably a certain industrial utility to music, insofar as “moving and singing together made collective tasks far more efficient” (so claims historian William McNeill). As a mnemonic aid, a song “hooks onto your subconscious and won’t let go.” Music carries emotion and preserves and transports feelings when passed from one person or generation to another—think of the “Star-Spangled Banner” or “La Marseillaise.” And music also marks social differences in stratified societies. In Europe by the eighteenth century, for example, people of rank had abandoned participation in the sounds and music of traditional communal festivals and spectacles. To distinguish themselves from the masses, the rich and powerful came to favor the orderly, stylized sounds of chamber music—and even demanded that audiences keep silent during performances. One of the signal trends of this particular modernity is the withdrawal of elites from public festivals, creating space instead for their own exclusive music and dance and shutting out the unruly, unmanaged sounds of the street and of work. Music helps forge social bonds, but it can also work to separate and to isolate, facilitating escape from community (think earbuds).

We humans also of course crave motion and bodily contact, flexing our muscles in the manner of our ancestors exhilarated by the chase. And even if we no longer chase mammoth herds with spears, we recreate elements of this excitement in our many sports, testing strength against strength or speed against speed, forcing projectiles of one sort or another into some kind of target. Dance is an equally ancient expression of this thrill of movement, with records of ritual motion appearing already on the cave and rock walls of early humans. The emotion-charged dance may be diminished in elite civilized life, but it clearly reappears in the physicality of amusement park throngs at the end of the nineteenth century, and more recently in the rhythmic motions of crowds at sporting events and in rock concert moshing, where strangers slam and grind into each other.

Sensual pleasure is thus central to the “thick tapestry of rewards” of human evolutionary adaptation, rewards wired into the complex circuitry of the brain’s pleasure centers. Pursuit of pleasure (and avoidance of pain) was certainly not an evil in our distant past; indeed, it must have had obvious advantages in promoting evolutionary fitness. Along with other adaptive emotions (fear, surprise, and disgust, for example), pleasure and its pursuit must also have helped create capacities to bond socially—and perhaps even to use and to understand language. The joy that motivates babies to delight in rhythmic and consonant sounds, bright colors, friendly faces, and bouncing motion helps build brain connections essential for motor and cognitive maturity.

Of course the biological propensity to gorge cannot be new; that much we know from the relative constancy of the human genetic constitution over many millennia. We also know that efforts to augment or intensify sensual pleasure long predate industrial civilization. This should come as no surprise, given that, as already noted, our longings for rare delights of taste, sight, smell, sound, and motion are rooted in our prehistoric past. Humans—like wolves—have been bred to binge. But in the past, at least, nature’s parsimony meant that gorging was generally rare and its impact on our bodies, psyches, and sociability limited.

This leads us again to a critical point: pleasure is born in its paucity and scarcity sustains it. And scarcity has been a fact of life for most of human history; in fact, it is very often a precondition for pleasure. Too much of any good can lead to boredom—that is as true for music or arcade games as for ice cream or opera. Most pleasures seem to require a context of relative scarcity. Amongst our prehistoric ancestors this was naturally enforced through the rarity of honey and the all-too-infrequent opportunity for the chase. Humans eventually developed the ability, however, to create and store surpluses of pleasure-giving goods, first by cooking and preserving foods and drinks and eventually by transforming even fleeting sensory experiences into reproducible and transmissible packets of pleasure. Think about candy bars, soda pop, and cigarettes, but also photography, phonography, and motion pictures—all of which emerged during the packaged pleasure revolution.

Of course, in certain respects the defeat of scarcity has a much older history, having to do with techniques of containerization. Prior to the Neolithic, circa ten thousand years ago, humans had little in the way of either technical means or social organization to store any kind of sensual surplus (though meats may have been stashed the way some nonhuman predators do). Farming and its associated technics changed this. After hundreds of thousands of years of scavenging and predation, people in this new era began to grow their own food—and then to save and preserve it in containers, especially in pots made from clay but also in bags made from skins or fibers from plants. Agriculture seems to have led to the world’s first conspicuous inequalities in wealth, but also the first routine encounters with obesity and other sins of the flesh (drunkenness, for example). Of course the rich—the rulers and priests of ancient city-states and empires or the lords and abbots of religious centers in the Middle Ages—were able to satisfy sensual longings more often, and in some cases continually.

While Christianity was in part a reaction to this sensual indulgence, being originally a religion of the excluded slave and the appalled rich, medieval aristocrats returned to the ancient love of sweet and sour dishes, favoring roasted game (a throwback to the preagricultural era) and the absurd notion that torturing animals before killing them made for the tastiest meats. Medieval European nobility mixed sex, smell, and taste in their large midday meals and frequent evening banquets. Christian church fathers banned perfumes and roses as Roman decadence, but treatments of this sort—along with passions for pungent flavors and scents—were revived with the Crusades and intimate contact with the Orient.

Until recently, pursuit of pleasure on such an opulent scale was confined to those tiny minorities with regular access to the resources to contain and intensify nature. Since antiquity, in fact, the powerful have often been snobbish killjoys, trying to restrict what the poor were allowed to eat, wear, and enjoy. Sometimes this made economic (if invidious) sense—as when England’s Edward III rationed the diet of servants during shortages that followed the Black Death. In the sixteenth century, French law prohibited the eating of fish and meat at the same meal in hopes of preserving scarce supplies. And given the low output of agriculture, there was a certain logic underlying the rationing of access to “luxuries.” But the powerful sometimes seem to have relished denying pleasure to others. How else do we explain sumptuary laws that prohibited the commoner from wearing colorful and costly clothing reserved for aristocrats?

Access to pleasure has long been an expression of privilege and power, but much can be made with little, and rarely has pleasurable display been totally suppressed in any culture. Think of the ceremonies surrounding seasonal festivals, especially the gathering of harvest surplus, when humans drenched themselves in the senses that seemed almost to ache for expression. Think of the Bacchanalia of the Greeks, the Saturnalia of the Romans, the Mardi Gras of medieval Europeans, or the orgies of feasting, dancing, music, and colorful costumes of any society whose everyday world of scarcity is forgotten in bingeing after harvest. Agriculture produced cycles of carnival and Lent, “a self-adjusting gastric equilibrium,” in the words of one historian.

Of course there are many examples of ancient philosophers and sages seeking to limit the hedonism of the privileged (and the festival culture of the poor). Certainly there are ancients who embraced the virtues of moderation, as in Aristotle’s “golden mean” or Confucian ideals of restrained desire. Hebrew prophets, Puritans, Jesuits, and countless Asian ascetics likewise attempted to rein in the fêtes of the senses. Medieval authorities in Europe forbade the eating of meat on Wednesdays, Fridays, and numerous fast days that added up to more than 150 days a year. The classical ideal of moderation was revived, and the moral superiority of grain-based foods was defended. Gluttony was condemned along with lust. Pleasure was to be regulated even in the afterlife, insofar as the Christian heaven was not for pleasure but for self-improvement. These and other ascetic moralities arguably helped people cope with uncertain supplies, putting a brake also on the rapacious greed of the rich and powerful. Curbing of excess extended to all manner of “pleasures of the flesh,” including those that, like sex, were not necessarily even scarce.

Dance came under suspicion in this regard, especially in its ecstatic form. European explorers frowned on the gesticulations of “possessed natives” whom they encountered in Africa and the Americas in the sixteenth and seventeenth centuries. At the same time, European elites smothered social dancing in the towns and villages of their own societies. The reasons were many. Clergy demanded that their holy days and rituals be protected from defilement by the boisterous and even sacrilegious customs of the frolicking crowd; the rich also chose to withdraw from—and then suppressed—the emotional intensity of common people’s celebrations, retiring instead to the confines of their private gatherings and sedate dances. The military also needed a new type of soldier and new ways of preparing men for war: the demand was no longer to fire up the emotions of soldiers to prepare them for hand-to-hand combat; the new need was to drill and discipline troops to march unflinching into musket and cannon fire, with individual fighters acting as precision components in a machine. The regular rhythms of the military march served this purpose better than the ecstatic dance.

Even when people found ways of intensifying sensation (as in the distillation of alcoholic spirits), state and church authorities were often able to enforce limits, sometimes by harsh means. In London in the 1720s, authorities sought to repress the widespread and addictive use of gin (a juniper-flavored liquor). At the beginnings of the Industrial Revolution, just as unleashing desire was becoming respectable, philosophers such as Adam Smith and David Hume still mused about the need for personal restraint and moral sympathies.

By this time, and increasingly over the course of the nineteenth century, especially between about 1880 and 1910, these traditional calls for moderation and self-control were starting to face a new kind of challenge, thanks to new techniques of containerization and intensification that would culminate in the packaged pleasure revolution. New kinds of machines brought new sensations to ordinary people, producing goods that for the first time could be made quite cheap and easily storable and portable. Canned food defeated the seasons, extending the availability of fruits and vegetables to the entirety of the year. Candy bars purchased at any newsstand or convenience store replaced the rare encounter with the honeycomb or wild strawberry. And while our more immediate predecessors may have enjoyed a pipe of tobacco or a draft of warm beer, the deadly convenience of the cigarette and the refreshing coolness of the chilled beverage came within the grasp of the masses only toward the end of the nineteenth century. And this revolution in the range and intensity of sensation radically upset the traditional relationship between desire and scarcity.

A similar process occurred with other sensory delights. While early-nineteenth-century Americans and Europeans thrilled at the sight of painted dioramas and magic lantern shows, nothing compared to the spectacle of fast-paced police chases in the one-reel movies viewable after 1900. Opera was a privileged treat of the few in lavish public places, but imagine the revolution wrought by the 1904 hard wax cylinder phonograph, when Caruso could be called upon to sing in the family parlor whenever (and however often) one wanted. Daredevils in Vanuatu dove from high places with vines tied to their ankles long before bungee jumping became a fad; even so, there was nothing like the mass-market calibrated delivery of physical thrills before the roller coaster, popularized in the 1890s. We find something similar even with binge partying: while peoples had long celebrated surpluses in festivals, they typically did so only on those rare days designated by the authorities. By the end of the nineteenth century, however, festive pleasures of a more programmed sort had become widely available on demand in the modern commercial amusement park.

Especially important is how the packaged pleasure intensified (certain aspects of) human sensory experience. An extreme example is when opium, formerly chewed, smoked, or drunk as tea, was transformed through distillation into morphine and eventually heroin—and then injected directly into the bloodstream with the newly invented syringe in the 1850s. The creation of a wide variety of “tubes” like the syringe for delivering chemically purified, intense sensation was characteristic of much of this new technology—which we shall describe in terms of “tubularization.” The cigarette is another fateful example: tobacco smoking was made cheap, convenient, and “mild” (i.e., deadly) with the advent of James Bonsack’s automated cigarette rolling machine (in the 1880s) and new methods of curing tobacco. Bonsack’s machine lowered the cost of manufacturing by an order of magnitude, and new methods of chemical processing (such as flue curing) allowed a milder, less alkaline smoke to be drawn deep into the lungs. A new mass-market consumer “good” was born, accompanied by mass addiction and mass death from maladies of the heart and lungs.

The “tubing” of tobacco into cigarettes was closely related to techniques used in packing and packaging many other commercial products. Think of mechanized canning—culminating in the double-seamed cylinder of the “sanitary” can-making machinery of 1904—and mechanized bottle and cap making from the late 1890s. New forms of sugar consumption appeared with the invention of soda fountain drinks. Coca-Cola was first served in drug stores in 1886 and in bottles by the end of the century, and in the 1890s the mixing of sugar with bitter chocolate led to candy bars, such as Hershey’s in 1900. Packaged pleasures of this sort—offered in conveniently portable portions with carefully calibrated constituents—allowed manufacturers to claim to have surpassed the sensuous joys of paradise. Chemists also began to be hired to see what new kinds of foods and drugs could be synthesized to surpass the taste, smell, and look of anything nature had created. A new discipline of “marketing” came of age about this time—the word was coined in 1884—with the task of creating demand for this riot of new products, decked out increasingly in colorful and striking labels with eye- and ear-catching slogans.

New technologies also sped up our consumption of visual, auditory, and motion sensoria. In 1839 the Daguerreotype revolutionized the familiar curiosity of the camera obscura—a dark room featuring a pinhole that would project an image of the outside world onto an interior wall—by chemically capturing that image on a metal plate in a miniaturized “camera” (meaning literally “room”). While these early photographs required long periods of exposure to fix an image, that time dramatically declined over the course of the century, allowing by 1888 the amateur snapshot camera and only three years later the motion picture camera. The effect, as we shall see, was a sea change in how we view and recollect the world. Sound was also captured (and preserved and sold) about this same time. The phonograph, invented in 1877 by Thomas Edison, became a new way of experiencing sound when improved and domesticated. And Emile Berliner’s “record” of 1887 made possible the mass production of sound on stamped-out discs, capturing a concert or a speech in a two- or three-minute record available to anyone, anywhere, with the appropriate gear.

Access and speed took another sensual twist when a Midwesterner by the name of La Marcus Thompson introduced the first mechanized roller coaster, in 1884. Bodily sensations that might have signaled danger or even death on a real train were packed into a two- or three-minute adventure trip on a rail “gravity ride.” Adding another dimension to the thrill was Thompson’s scenic railroad (in 1886) with its artificial tunnels and painted images of exotic natural or fantasy scenes. This was a new form of concentrated pleasure, distilling sights and sounds that formerly would have required days of “regular travel.” Rides, in combination with an array of novel multisensory spectacles, were concentrated into dedicated “amusement parks,” offering a kind of packaged recreational experience, accessible (very often) via the new trolley cars of the 1890s. Some of the earliest and most famous were those built at Coney Island on the southernmost tip of Brooklyn, New York.

Innovations of this sort led us into new worlds of sensory access, speed, and intensity. Distance and season were no longer restraints, as canned and bottled goods moved by rail, ship, and eventually truck across vast stretches of space and climate—with mixed outcomes for human health and well-being.

Some of these new technologies nourished and improved our bodies with cheaper, more hygienic, and varied food and drink; others offered more convenient and effective medicines and toiletries. Still others provided unprecedented opportunities to enjoy the beauty of nature (or at least its image), along with music and new kinds of “visual arts.” Amusement rides gave us (relatively) harm-free ways of experiencing the ecstatic and the exhilaration of danger—plus a kind of simulated or virtual travel; photography froze the evanescent sight, preserving images on a scale never previously possible, and with near-perfect fidelity. Yet packaged pleasures also led to new health and moral threats.

In the most extreme form, concentrating intoxicants led to addictions—physical dependencies that often required ever-increasing dosages to maintain a constant effect, and substantial physical discomfort accompanying withdrawal. Here of course the syringe injection of distilled opiates is the paradigmatic example, and addiction to tobacco and alcoholic drinks must also be included. But the impact of concentrated high-energy foods is not entirely different. Fat- or sugar-rich foods produce not just energy but very often endorphins, morphine-like painkillers that offer comfort and calm. That is one reason they are called “comfort” foods. These rich foods cause neurotransmitters in the brain to go out of balance, resulting in cravings. By contrast, the natural physical pleasures of exercise are much less addicting because we get tired; and some “excess”—here pain is gain—can actually make us healthier.

Not all packaged pleasure dependencies were so obviously chemical. Engineered pleasures often create astonishment and delight when first introduced, for example, but can also raise expectations and dull sensibilities for “unpackaged” stimuli, be they nature’s wonders or unaided convivial and social delights. The pleasures of recorded sound, the captured image, and even the amusement park ride and electronic game often satisfy with a kind of ratcheting effect, rendering the visual, auditory, and motion pleasures in uncommodified nature and society boring. In this sense, the packaging of pleasure can turn the once rare into an everyday, even numbing, occurrence. The world beyond the package becomes less thrilling, less desirable. In the wake of the telephoto lens and artful editing of film—with all the “boring bits” taken out—nature itself can appear dull or impoverished. Why go to the waterfall or forest if you can experience these in compressed form at your local zoo or theme park? Or on IMAX or your widescreen, high-def TV? Packaged pleasures of this sort may not induce physical dependencies, but they can create inflated expectations or even degrade other, less distilled or concentrated, kinds of experiences.

Another point we shall be making is that packaged pleasures have often de-socialized pleasure taking. Many create neurological responses similar to those of religious ecstasies, physical exercise, and social or even sexual intercourse, and can end up substituting for, or displacing, such enjoyments. Weak wine and mild natural hallucinogens have long enhanced spiritual and social experience, but the modern packaged pleasure often has the effect of privatizing satisfaction, isolating it from the crowd. Think of the privatization of public space through portable mp3 players, or the isolating effect of television.

The key point to appreciate is that we today live in a vastly different world from that of peoples living prior to the packaged pleasure revolution, when a broad range of sensual pleasures came to be bottled, canned, condensed, distilled, and otherwise intensified. The impact of this revolution has not been uniform, and we acknowledge and stress these differences, but it does seem to have transformed our sensory universe in ways we are only beginning to understand.

The packaged pleasures we shall be considering in this book include cigarettes, candy and soda pop, phonograph records, photographs, movies, amusement park spectacles, and a few other odds and ends.

But of course not all commodities that are tubed, packed, portable, or preserved can be considered packaged pleasures. For our purposes, we can identify several key and interrelated elements:

  1. The packaged pleasure is an engineered commodity that contains, concentrates, preserves, and very often intensifies some form of sensual satisfaction.
  2. It is generally speaking inexpensive, easy to access (readily at hand), and very often portable and storable, often in a domestic setting.
  3. It is typically wrapped and labeled and thus often marketed by branding. Although often portable, it can also, as in the case of the amusement park, be enclosed and branded within a contained and fixed space.
  4. The packaged pleasure is often produced by companies with broad regional if not national or even global reach, creating a recognizable bond between the individual consumer and the corporate producer.

Of course we are well aware that many other consumer products exhibit one or more of these attributes—clothes, cars, books, packaged cereals, cocaine, pornography, and department stores just to name a few. Our focus will be on those packaged pleasures that signal key features of the early part of this transformation, and notably those that involve the elements of containment, compression, intensification, mobilization, and commodification. And we recognize that we will not offer an encyclopedic survey of pleasures that have been intensified and packaged—we won’t be treating the history of pornography or perfume, for example, and will consider narcotics and alcoholic beverages only briefly.

We should also be clear that the packaged pleasure revolution is ongoing and in many ways has strengthened over time, as pleasure engineers find ever more sophisticated ways of intensifying desire. And we’ll consider this history at least briefly. Since funneled fun has a tendency to bore us over time, pleasure engineers have repeatedly raised the bar on sensory intensity. Nuts and nougat were added to the simple chocolate bar, and cigarette makers added flavorants and chemicals to enhance or optimize nicotine delivery. The visual panel in motion pictures has been made more alluring with increasingly rapid cuts, and recorded sound has seen a dramatic expansion in both fidelity and acoustical range. Roller coasters went ever higher and faster while also becoming ever safer. Pornography is delivered with ever-greater convenience and is now basically free to anyone with an Internet connection. Even opera fans can now hear (and see) their favorite arias with a simple click on YouTube—at no cost and without leaving home (or sitting through those “boring bits”). Entertainment without the “fiber,” one could say.

Another outcome of the packaged pleasure revolution, then, is the progressive refinement—really reengineering—of sensory experience in the century or so since its beginnings. Optimization of satisfactions has become a big part of this, as one might expect from the fact that packaged pleasures are very often commodities produced by corporations with research and marketing departments. Menthol was added to cigarettes in the 1930s, with the idea of turning tobacco back into a kind of medicine. Ammonia, levulinic acid, and candied flavors of various sorts were later added to augment the nicotine “kick,” but also to appeal to younger tastes. Flavor chemists meanwhile learned to manipulate the jolt of “soft drinks” by refining the dosing of caffeine and sugar, while candy makers developed nuanced “flavor profiles”—surpassing traditional hard candy, for example, with the sensory complex of a Snickers.

We also find optimization and calibration in other parts of this revolution. The intense thrill of the loop-de-loop ride, first introduced at Coney Island in the 1890s, gave way to the more varied sensuality of “themed” rides. Roller coasters have been designed to go to the edge of exhilaration, stopping just short of the point of nausea or injury. The same principle works with gambling, where even losers keep playing because of the carefully calibrated conditioning that comes with the periodic (and precisely calculated) win built into the game. Pleasure engineers have learned how to create video games that are easy enough to engage newcomers, but complex enough to sustain the interest of experienced players. Gaming engineers even seek to encourage (or require) physical movement and social interactions—think Wii games—to counter critics cautioning against the bodily and social negatives of overly virtualized lives.

Our focus is on the origins of the technologies involved in such transformations, though we are also aware that such novelties have always encountered critics, those who worry that an oversated consuming public would lose control and abandon work and family responsibilities. But the reality in terms of social impact has often been quite different. Few of these optimized pleasures have ever undermined the willingness of consumers to work and obey, and they have done little to erode nerves and sensibilities (as some feared). Indeed they have often contributed to a new work ethic driven by new needs and imperatives to earn and toil ever more in order to afford the delights of movies, candy, soda, cigarettes, and the rest of the show. Over time, and often a surprisingly short time, these commodified delights have become a kind of second sensory nature—customary and accepted ways of eating, inhaling, seeing and hearing, and feeling.

Scholars have long debated the impact of “modern consumer culture,” albeit too often in negative terms without considering the historical origins of the phenomena in question. In the 1890s, the French sociologist Émile Durkheim feared that the “masses” would be enervated, even immobilized, by technical modernity’s overwhelming assault on the senses. And Aldous Huxley in his Brave New World (1932) warned of a coming culture of commoditized hedonism oblivious to tyranny. Jeremiahs of this sort have singled out different culprits, with blame most often placed on the “weaknesses” of the masses or the manipulation of merchandisers, with the hope expressed that the virtuous few in their celebration of nature and simplicity would constitute a bulwark against immediate gratification and degrading consumerism. These critics have been opposed by apologists for “democratic access” to the choice and comforts of modern consumer society—who champion the idea that only killjoy elitists could find fault in the delights of pleasure engineering. This perspective dominates a broad swath of social science—especially from neoclassical economists (think of George Stigler and Gary Becker’s famous dictum on the nondisputability of taste).

We argue instead that we need to abandon the overgeneralization common to both jeremiahs and free-market populists. Of course it is true that the very notion of a “packaged pleasure revolution” suggests certain links between the cigarette, bottled soda, phonograph records, cameras, movies, and even amusement parks. But the impact of these various inventions over the decades has been very different, and cannot be subsumed under some procrustean notion of “modern consumer culture.” Rather, as we shall see, their distinct histories suggest very different effects on our bodies and our cultures that would seem to require very different personal and policy responses. Our view is that the sale of cigarettes (as presently designed) should be heavily regulated and ultimately banned, for example, while soda should probably only be shamed and (heavily) taxed. And we make no policy recommendations for film or sound “packages.” But we certainly need to better understand how these technologies have shaped and refined (distorted?) our sensibilities.

We should also keep in mind that there are global consequences to the packaged pleasure revolution—and that most of these lie in the future. This is unfinished business. Overconsumption is part of the problem, as is the undermining of world health (notably from processed sugar and cigarettes). The revolution is ongoing, as the engineered world of compressed sensibility spreads to ever-different parts of the globe, and ever-different parts of human anatomy and sociability. It may be hard to opt out of or to escape from this brave new world, but the conditions under which it arose are certainly worth understanding and confronting.

This book takes on a lot. Our hope is to move us beyond the classic debate between the jeremiahs against consumerism and the defenders of a democratic access to commercial delights. We root mass consumption in a sensory revolution facilitated by techniques that upset the ancient balance between desire and scarcity. We take a fresh look at how technology has transformed our nature.

To read more about Packaged Pleasures, click here.

22. Rachel Sussman and The Oldest Living Things in the World


This past week, Rachel Sussman’s colossal photography project—and its associated book—The Oldest Living Things in the World, which documents her attempts to photograph continuously living organisms that are 2,000 years old and older, was profiled by the New Yorker:

To find the oldest living thing in New York City, set out from Staten Island’s West Shore Plaza mall (Chuck E. Cheese’s, Burlington Coat Factory, D.M.V.). Take a right, pass Industry Road, go left. The urban bleakness will fade into a litter-strewn route that bisects a nature preserve called Saw Mill Creek Marsh. Check the tides, and wear rubber boots; trudging through the muddy wetlands is necessary.

The other day, directions in hand, Rachel Sussman, a photographer from Greenpoint, Brooklyn, went looking for the city’s most antiquated resident: a colony of Spartina alterniflora or Spartina patens cordgrass which, she suspects, has been cloning and re-cloning itself for millennia.

Not simply the story of a cordgrass selfie, Sussman’s pursuit is contextualized by the lives—and deaths—of our fragile ecological forebears, and her desire to document their existence while they are still of the earth. In support of the project, Sussman has a series of upcoming events surrounding The Oldest Living Things in the World. You can read more at her website, or see a listing of public events below:

EXHIBITIONS:

Imagining Deep Time (a cultural program of the National Academy of Sciences in Washington, DC), on view from August 28, 2014 to January 15, 2015

Another Green World, an eco-themed group exhibition at NYU’s Gallatin Galleries, featuring Nina Katchadourian, Mitchell Joaquim, William Lamson, Mary Mattingly, Melanie Baker, and Joseph Heidecker, on view from September 12, 2014 to October 15, 2014

The Oldest Living Things in the World, a solo exhibition at Pioneer Works in Brooklyn, NY, from September 15, 2014 to November 2, 2014, including a closing program

TALKS:

Sept 18th: a discussion in conjunction with the National Academy of Sciences exhibition Imagining Deep Time for DASER (DC Art Science Evening Rendezvous), Washington, DC (free and open to the public)

Nov 20th: an artist’s talk at the Museum of Contemporary Photography, Chicago

To read more about The Oldest Living Things in the World, click here.

 

 

23. Tom Koch on Ebola and the new epidemic


“Ebola and the new epidemic” by Tom Koch

Mindless but intelligent, viruses and bacteria want what we all want: to survive, evolve, and then, to procreate. That’s been their program since before there were humans. From the first influenza outbreak around 2500 BC to the current Ebola epidemic, we have created the conditions for microbial evolution, hosted their survival, and tried to live with the results.

These are early days for the Ebola epidemic, which was for some years constrained to a few isolated African sites, but has now advanced from its natal place to several countries, with outbreaks elsewhere. Since the first days of influenza, this has always been the viral way. Born in a specific locale, the virus hitches itself to a traveler who brings it to a new and fertile field of humans. The “epidemic curve,” as it is called, starts slowly but then, as the virus spreads and travels, spreads and travels, the numbers mount.

Hippocrates provided a fine description of an influenza pandemic in the fifth century BC, one that reached Greece from Asia. The Black Death that hastened the end of the Middle Ages traveled with Crusaders and local traders, infecting the then-known world. Cholera (with a mortality rate of over thirty percent) started in India in 1818 and by 1832 had infected Europe and North America.

Since the end of the seventeenth century, we’ve mapped these spreads in towns and villages located in provinces and nations. The first maps were of plague, but in the eighteenth century that scourge was replaced in North American minds by yellow fever, which in turn, was replaced by the global pandemic of cholera (and then at the end of the century came polio).

In attempting to combat these outbreaks, we face a question of scale. Early cases are charted on the streets of a city, the homes of a town. Can they be quarantined and those infected separated? And then, as the epidemic grows, the mapping pulls back to the nations in which those towns are located as travelers, infected but not yet symptomatic, move from place to place. Those local streets become bus and rail lines that become, as a pandemic begins, airline routes that gird the world.

There are lots of models for us to follow here. In the 1690s, Filippo Arrieta mapped a four-stage containment program that attempted to limit the passage of plague through the province of Bari, Italy, where he marshaled the army to create containment circles.

Indeed, quarantines have been employed, often with little success, since the days of plague. The sooner they are put in place, the better they seem to be. They are not, however, foolproof.

Complacently, we have assumed that our expertise at genetic profiling would permit rapid identification and the speedy production of vaccines or at least curative drugs. We thought we were beyond viral attack. Alas, our microbial friends are faster than that. By the time we’ve genetically typed the virus and found a biochemical to counter it, it will have, most likely, been and gone. Epidemiologists talk about the “epidemic curve” as a natural phenomenon that begins slowly, rises fiercely, and then ends.
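
That characteristic shape is easy to see even in the simplest textbook model. The sketch below is not from Koch’s piece; it is a minimal discrete-time SIR (susceptible-infected-recovered) simulation in Python with made-up parameters, included only to show why new cases start slowly, rise fiercely, and then collapse once susceptible hosts grow scarce.

    # Minimal SIR sketch of the "epidemic curve": slow start, fierce rise, natural end.
    # Parameters are illustrative only, not fitted to Ebola or any real outbreak.

    def sir_curve(population=1_000_000, beta=0.3, gamma=0.1, days=200):
        s, i, r = population - 1.0, 1.0, 0.0   # one initial case
        new_cases_per_day = []
        for _ in range(days):
            new_infections = beta * s * i / population   # contacts between S and I
            recoveries = gamma * i                       # cases resolving each day
            s -= new_infections
            i += new_infections - recoveries
            r += recoveries
            new_cases_per_day.append(new_infections)
        return new_cases_per_day

    if __name__ == "__main__":
        curve = sir_curve()
        peak_day = max(range(len(curve)), key=curve.__getitem__)
        print(f"day 10: about {curve[10]:.0f} new cases (slow start)")
        print(f"day {peak_day}: about {curve[peak_day]:.0f} new cases (the peak)")
        print(f"day {len(curve) - 1}: about {curve[-1]:.0f} new cases (the curve dies out)")

The same elementary bookkeeping, elaborated with real contact, travel, and mortality data, underlies the far more detailed models that epidemiologists actually fit to outbreaks.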

We have nobody to blame but ourselves.

Four factors promote the viral and bacterial evolution that results in pandemic diseases and their spread. First, deforestation and other man-made ecological changes upset natural habitats, forcing microbes to seek new homes. Second, urbanization brings people together in dense fields of habitation that become the microbe’s new hosts—when those people live in poverty, the field is even better. Third, trade provides travelers to carry microbes, one way or another, to new places. And, fourth and finally, war always promotes the spread of disease among folk who are poor and stressed.


We have created this perfect context in recent decades, and the result has been a fast pace of viral and bacterial evolution to meet the stresses we impose and the opportunities we present as hosts. For their part, diseases must strike a balance between virulence—killing the person quickly—and longevity. The diseases that kill quickly usually moderate over time. They need their hosts, or something else, to help them move to new fields of endeavor. New diseases like Ebola are aggressive adolescents seeking the fastest, and thus deadliest, exchanges.

Will it become an “unstoppable” pandemic? Probably not, but we do not know for certain; we don’t know how Ebola will mutate in the face of our plans for resistance.

What we do know is that, as anxiety increases, the niceties of medical protocol and ethics developed over the past fifty years will fade away. There will now be heated discussions surrounding “ethics” and “justice,” as well as practical questions of quarantine and care. Do we try experimental drugs without the normal safety protocol? (The answer will be … yes, sooner if not later.) If something works and there is not enough for all, how do we decide to whom it is to be given first?

For those like me who have tracked diseases through history and mapped their outbreaks in our world, Ebola, or something like it, is what we have feared would come. And when Ebola is contained it will not be the end. We’re in a period of rapid viral and bacterial evolution brought on by globalization and its trade practices. Our microbial friends will, almost surely, continue to take advantage.

***

Tom Koch is a medical geographer and ethicist, and the author of a number of papers in the history of medicine and disease. His most recent book in this field was Disease Maps: Epidemics on the Ground, published by University of Chicago Press.

 

24. An Orchard Invisible: Our free e-book for April

Just in time for garden prep, our free e-book for April is Jonathan Silvertown’s An Orchard Invisible: A Natural History of Seeds.

“I have great faith in a seed,” Thoreau wrote. “Convince me that you have a seed there, and I am prepared to expect wonders.”

The story of seeds, in a nutshell, is a tale of evolution. From the tiny sesame that we sprinkle on our bagels to the forty-five-pound double coconut borne by the coco de mer tree, seeds are a perpetual reminder of the complexity and diversity of life on earth. With An Orchard Invisible, Jonathan Silvertown presents the oft-ignored seed with the natural history it deserves, one nearly as varied and surprising as the earth’s flora itself.

Beginning with the evolution of the first seed plant from fernlike ancestors more than 360 million years ago, Silvertown carries his tale through epochs and around the globe. In a clear and engaging style, he delves into the science of seeds: How and why do some lie dormant for years on end? How did seeds evolve? The wide variety of uses that humans have developed for seeds of all sorts also receives a fascinating look, studded with examples, including foods, oils, perfumes, and pharmaceuticals. An able guide with an eye for the unusual, Silvertown is happy to take readers on unexpected—but always interesting—tangents, from Lyme disease to human color vision to the Salem witch trials. But he never lets us forget that the driving force behind the story of seeds—its theme, even—is evolution, with its irrepressible habit of stumbling upon new solutions to the challenges of life.

To download your copy, click here.

For more about our free e-book of the month program, click here.

25. University Presses in Space


Welcome to the boundless third dimension: university presses—figuratively speaking—in space!

From the website:

“University Presses in Space” showcases a special sampling of the many works that university presses have published about space and space exploration. These books have all the hallmarks of university press publishing—groundbreaking content, editorial excellence, high production values, and striking design. The titles included here were selected by each Press as their strongest works across a variety of space-related topics, from the selling of the Apollo lunar program to the history of the Shuttle program to the future of manned space exploration and many subjects in between.

As part of the “University Presses in Space” program, we were geeked to select Time Travel and Warp Drives: A Scientific Guide to Shortcuts through Space and Time by Allen Everett and Thomas Roman, which takes readers on a clear, concise tour of our current understanding of the nature of time and space—and whether or not we might be able to bend them to our will. Using no math beyond high school algebra, the authors lay out an approachable explanation of Einstein’s special relativity, then move through the fundamental differences between traveling forward and backward in time and the surprising theoretical connection between going back in time and traveling faster than the speed of light.

Even better? The book lent itself to twelve video demonstrations of concepts like nontraversable wormholes and, ahem, the cylindrical universe.

To read more about “University Presses in Space,” visit the website here.

For more on Time Travel and Warp Drives, click here.

 

