The Chicago Blog

Publicity news from the University of Chicago Press including news tips, press releases, reviews, and intelligent commentary.

26. Excerpt: Portrait of a Man Known as Il Condottiere

An excerpt from Portrait of a Man Known as Il Condottiere by Georges Perec

***

Madera was heavy. I grabbed him by the armpits and went backwards down the stairs to the laboratory. His feet bounced from tread to tread in a staccato rhythm that matched my own unsteady descent, thumping and banging around the narrow stairwell. Our shadows danced on the walls. Blood was still flowing, all sticky, seeping from the soaking wet towel, rapidly forming drips on the silk lapels, then disappearing into the folds of the jacket, like trails of slightly glinting snot side-tracked by the slightest roughness in the fabric, sometimes accumulating into drops that fell to the floor and exploded into star-shaped stains. I let him slump at the bottom of the stairs, right next to the laboratory door, and then went back up to fetch the razor and to mop up the bloodstains before Otto returned. But Otto came in by the other door at almost the same time as I did. He looked at me uncomprehendingly. I beat a retreat, ran down the stairs, and shut myself in the laboratory. I padlocked the door and jammed the wardrobe up against it. He came down a few minutes later, tried to force the door open, to no avail, then went back upstairs, dragging Madera behind him. I reinforced the door with the easel. He called out to me. He fired at the door twice with his revolver.

You see, maybe you told yourself it would be easy. Nobody in the house, no-one round and about. If Otto hadn’t come back so soon, where would you be? You don’t know, you’re here. In the same laboratory as ever, and nothing’s changed, or almost nothing. Madera is dead. So what? You are still in the same underground studio, it’s just a bit less tidy and a bit less clean. The same light of day seeps through the basement window. The Condottiere, crucified on his easel . . .

He had looked all around. It was the same office—the same glass table-top, the same telephone, the same calendar on its chrome-plated steel base. It still had the stark orderliness and uncluttered iciness of an intentionally cold style, with strictly matching colours—dark green carpet, mauve leather armchairs, light brown wall covering—giving a sense of discreet impersonality with its large metal filing cabinets . . . But all of a sudden the flabby mass of Madera’s body seemed grotesque, like a wrong note, something incoherent, anachronistic . . . He’d slipped off his chair and was lying on his back with his eyes half-closed and his slightly parted lips stuck in an expression of idiotic stupor enhanced by the dull gleam of a gold tooth. Blood streamed from his cut throat in thick spurts and trickled onto the floor, gradually soaking into the carpet, making an ill-defined, blackish stain that grew ever larger around his head, around his face whose whiteness had long seemed rather fishy, a warm, living, animal stain slowly taking possession of the room, as if the walls were already soaked through with it, as if the orderliness and strictness had already been overturned, abolished, pillaged, as if nothing more existed beyond the radiating stain and the obscene and ridiculous heap on the floor, the corpse, fulfilled, multiplied, made infinite . . .

Why? Why had he said that sentence: “I don’t think that’ll be a problem”? He tries to recall the precise tone of Madera’s voice, the timbre that had taken him by surprise the first time he’d heard it, that slight lisp, its faintly hesitant intonation, the almost imperceptible limp in his words, as if he were stumbling— almost tripping—as if he were permanently afraid of making a mistake. I don’t think. What nationality? Spanish? South American? Accent? Put on? Tricky. No. Simpler than that: he rolled his rs in the back of his throat. Or perhaps he was just a bit hoarse? He can see him coming towards him with outstretched hand: “Gaspard—that’s what I should call you, isn’t it?—I’m truly delighted to make your acquaintance.” So what? It didn’t mean much to him. What was he doing here? What did the man want of him? Rufus hadn’t warned him . . .

People always make mistakes. They think things will work out, will go on as per normal. But you never can tell. It’s so easy to delude yourself. What do you want, then? An oil painting? You want a top-of-the-range Renaissance piece? Can do. Why not a Portrait of a Young Man, for instance . . .

A flabby, slightly over-handsome face. His tie. “Rufus has told me a lot about you.” So what? Big deal! You should have paid attention, you should have been wary . . . A man you didn’t know from Adam or Eve . . . But you rushed headlong to accept the opportunity. It was too easy. And now. Well, now . . .

This is where it had got him. He did the sums in his head: all that had been spent setting up the laboratory, including the cost of materials and reproductions—photographs, enlargements, X-ray images, images seen through Wood’s lamp and with side illumination—and the spotlights, the tour of European art galleries, upkeep . . . a fantastic outlay for a farcical conclusion . . . But what was comical about his idiotic incarceration? He was at his desk as if nothing had happened . . . That was yesterday . . . But upstairs there was Madera’s corpse in a puddle of blood . . . and Otto’s heavy footsteps as he paced up and down keeping guard. All that to get to this! Where would he be now if . . . He thinks of the sunny Balearic Islands—it would have taken just a wave of his hand a year and a half before—Geneviève would be at his side . . . the beach, the setting sun . . . a picture postcard scene . . . Is this where it all comes to a full stop?

Now he recalled every move he’d made. He’d just lit a cigarette, he was standing with one hand on the table, with his weight on one hip. He was looking at the Portrait of a Man. Then he’d stubbed out his cigarette quickly and his left hand had swept over the table, stopped, gripped a piece of cloth, and crumpled it tight—an old handkerchief used as a brush-rag. Everything was hazy. He was putting ever more of his weight onto the table without letting the Condottiere out of his sight. Days and days of useless effort? It was as if his weariness had given way to the anger rising in him, step by certain step. He was crushing the fabric in his hand and his nails had scored the wooden table-top. He had pulled himself up, gone to his work bench, rummaged among his tools . . .

A black sheath made of hardened leather. An ebony handle. A shining blade. He had raised it to the light and checked the cutting edge. What had he been thinking of? He’d felt as if there was nothing in the world apart from that anger and that weariness . . . He’d flopped into the armchair, put his head in his hands, with the razor scarcely a few inches from his eyes, set off clearly and sharply by the dangerously smooth surface of the Condottiere’s doublet. A single movement and then curtains . . . One thrust would be enough . . . His arm raised, the glint of the blade . . . a single movement . . . he would approach slowly and the carpet would muffle the sound of his steps, he would steal up on Madera from behind . . .

A quarter of an hour had gone by, maybe. Why did he have an impression of distant gestures? Had he forgotten? Where was he? He’d been upstairs. He’d come back down. Madera was dead. Otto was keeping guard. What now? Otto was going to phone Rufus, Rufus would come. And then? What if Otto couldn’t get hold of Rufus? Where was Rufus? That’s what it all hung on. On this stupid what-if. If Rufus came, he would die, and if Otto didn’t get hold of Rufus, he would live. How much longer? Otto had a weapon. The skylight was too high and too small. Would Otto fall asleep? Does a man on guard need to sleep? . . .

He was going to die. The thought of it comforted him like a promise. He was alive, he was going to be dead. Then what?

Leonardo is dead, Antonello is dead, and I’m not feeling too well myself. A stupid death. A victim of circumstance. Struck down by bad luck, a wrong move, a mistake. Convicted in absentia. By unanimous decision with one abstention—which one?—he was sentenced to die like a rat in a cellar, under a dozen unfeeling eyes—the side lights and X-ray lamps purchased at outrageous prices from the laboratory at the Louvre—sentenced to death for murder by virtue of that good old moral legend of the eye, the tooth and the turn of the wheel—Achilles’ wheel—death is the beginning of the life of the mind—sentenced to die because of a combination of circumstances, an incoherent conjunction of trivial events . . . Across the globe there were wires and submarine cables . . . Hello, Paris, this is Dreux, hold the line, we’re connecting to Dampierre. Hello, Dampierre, Paris calling. You can talk now. Who could have imagined those peaceable operators with their earpieces becoming implacable executioners . . . Hello, Monsieur Koenig, Otto speaking, Madera has just died . . .

In the dark of night the Porsche will leap forward with its headlights spitting fire like dragons. There will be no accident. In the middle of the night they will come and get him . . .

And then? What the hell does it matter to you? They’ll come and get you. Next? Slump into an armchair and stare long and hard, until death overtakes you, into the eyes of the tall joker with the shiv, the ineffable Condottiere. Responsible or not responsible? Guilty or not guilty? I’m not guilty, you’ll scream when they drag you up to the guillotine. We’ll soon see about that, says the executioner. And down the blade comes with a clunk. Curtains. Self-evident justice. Isn’t that obvious? Isn’t it normal? Why should there be any other way out?

To read more about Portrait of a Man Known as Il Condottiere, click here.

27. Excerpt: That’s the Way It Is

An excerpt from That’s the Way It Is: A History of Television News in America by Charles L. Ponce de Leon

***

“Beginnings”

Few technologies have stirred the utopian imagination like television. Virtually from the moment that research produced the first breakthroughs that made it more than a science fiction fantasy, its promoters began gushing about how it would change the world. Perhaps the most effusive was David Sarnoff. Like the hero of a dime novel, Sarnoff had come to America as a nearly penniless immigrant child, and had risen from lowly office boy to the presidency of RCA, a leading manufacturer of radio receivers and the parent company of the nation’s biggest radio network, NBC. More than anyone else, it was Sarnoff who had recognized the potential of “wireless” as a form of broadcasting—a way of transmitting from a single source to a geographically dispersed audience. Sarnoff had built NBC into a juggernaut, the network with the largest number of affiliates and the most popular programs. He had also become the industry’s loudest cheerleader, touting its contributions to “progress” and the “American Way of Life.” Having blessed the world with the miracle of radio, he promised Americans an even more astounding marvel, a device that would bring them sound and pictures over the air, using the same invisible frequencies.

In countless speeches heralding television’s imminent arrival, Sarnoff rhapsodized about how it would transform American life and encourage global communication and “international solidarity.” “Television will be a mighty window, through which people in all walks of life, rich and poor alike, will be able to see for themselves, not only the small world around us but the larger world of which we are a part,” he proclaimed in 1945, as the Second World War was nearing an end and Sarnoff and RCA eagerly anticipated an increase in public demand for the new technology.

Sarnoff predicted that television would become the American people’s “principal source of entertainment, education and news,” bringing them a wealth of program options. It would increase the public’s appreciation for “high culture” and, when supplemented by universal schooling, enable Americans to attain “the highest general cultural level of any people in the history of the world.” Among the new medium’s “outstanding contributions,” he argued, would be “its ability to bring news and sporting events to the listener while they are occurring,” and build on the news programs that NBC and the other networks had already developed for radio. He saw no conflicts or potential problems. Action-adventure programs, mysteries, soap operas, situation comedies, and variety shows would coexist harmoniously with high-toned drama, ballet, opera, classical music performances, and news and public affairs programs. And they would all be supported by advertising, making it unnecessary for the United States to move to a system of “government control,” as in Europe and the UK. Television in the US would remain “free.”

Yet Sarnoff’s booster rhetoric overlooked some thorny issues. Radio in the US wasn’t really free. It was thoroughly commercialized, and this had a powerful influence on the range of programs available to listeners. To pay for program development, the networks and individual stations “sold” airtime to advertisers. Advertisers, in turn, produced programs—or selected ones created by independent producers—that they hoped would attract listeners. The whole point of “sponsorship” was to reach the public and make them aware of your products, most often through recurrent advertisements. Though owners of radios didn’t have to pay an annual fee for the privilege of listening, as did citizens in other countries, they were forced to endure the commercials that accompanied the majority of programs.

This had significant consequences. As the development of radio made clear, some kinds of programs were more popular than others, and advertisers were naturally more interested in sponsoring ones that were likely to attract large numbers of listeners. These were nearly always entertainment programs, especially shows that drew on formulas that had proven successful in other fields—music and variety shows, comedy, and serial fiction. More off-beat and esoteric programs were sometimes able to find sponsors who backed them for the sake of prestige; from 1937 to 1954, for example, General Motors sponsored live performances by NBC’s acclaimed “Symphony of the Air.” But most cultural, news, and public affairs programs were unsponsored, making them unprofitable for the networks and individual stations. Thus in the bountiful mix envisioned by Sarnoff, certain kinds of broadcasts were more valuable than others. If high culture and news and public affairs programs were to thrive, their presence on network schedules would have to be justified by something other than their contribution to the bottom line.

The most compelling reason was provided by the Federal Communications Commission (FCC). Established after Congress passed the Federal Communications Act in 1934, the FCC was responsible for overseeing the broadcasting industry and the nation’s airwaves, which, at least in theory, belonged to the public. Rather than selling frequencies, which would have violated this principle, the FCC granted individual parties station licenses. These allowed licensees sole possession of a frequency to broadcast to listeners in their community or region. This system allocated a scarce resource—the nation’s limited number of frequencies—and made possession of a license a lucrative asset for businessmen eager to exploit broadcasting’s commercial potential. Licenses granted by the FCC were temporary, and all licensees were required to go through a periodic renewal process. As part of this process, they had to demonstrate to the FCC that at least some of the programs they aired were in the “public interest.” Inspired by a deep suspicion of commercialization, which had spread widely among the public during the early 1900s, the FCC’s public-interest requirement was conceived as a countervailing force that would prevent broadcasting from falling entirely under the sway of market forces. Its champions hoped that it might protect programming that did not pay and ensure that the nation’s airwaves weren’t dominated by the cheap, sensational fare that, reformers feared, would proliferate if broadcasting was unregulated.

In practice, however, the FCC’s oversight of broadcasting proved to be relatively lax. More concerned about NBC’s enormous market power—it controlled two networks of affiliates, NBC Red and NBC Blue—FCC commissioners in the 1930s were unusually sympathetic to the businessmen who owned individual stations and possessed broadcast licenses, and they made it quite easy for them to renew those licenses. They were allowed to air a bare minimum of public-affairs programming and fill their schedules with the entertainment programs that appealed to listeners and sponsors alike. By interpreting the public-interest requirement so broadly, the FCC encouraged the commercialization of broadcasting and unwittingly tilted the playing field against any programs—including news and public affairs—that could not compete with the entertainment shows that were coming to dominate the medium.

Nevertheless, news and public-affairs programs were able to find a niche on commercial radio. But until the outbreak of the Second World War, it wasn’t a very large or comfortable one, and it was more a result of economic competition than the dictates of the FCC. Occasional news bulletins and regular election returns were broadcast by individual stations and the fledgling networks in the 1920s. They became more frequent in the 1930s, when the networks, chafing at the restrictions placed on them by the newspaper industry, established their own news divisions to supplement the reports they acquired through the newspaper-dominated wire services.

By the mid-1930s, the most impressive radio news division belonged not to Sarnoff’s NBC but to its main rival, CBS. Owned by William S. Paley, the wealthy son of a cigar magnate, CBS was struggling to keep up with NBC, and Paley came to see news as an area where his young network might be able to gain an advantage. A brilliant, visionary businessman, Paley was fascinated by broadcasting and would soon steer CBS ahead of NBC, in part by luring away its biggest stars. His bold initiative to beef up its news division was equally important, giving CBS an identity that clearly distinguished it from its rivals. Under Paley, CBS would become the “Tiffany network,” the home of “quality” as well as crowd-pleasers, a brand that made it irresistible to advertisers.

Paley hired two print journalists, Ed Klauber and Paul White, to run CBS’s news unit. Under their watch, the network increased the frequency of its news reports and launched news-and-commentary programs hosted by Lowell Thomas, H. V. Kaltenborn, and Robert Trout. In 1938, with Europe drifting toward war, CBS expanded these programs and began broadcasting its highly praised World News Roundup; its signature feature was live reports from correspondents stationed in London, Paris, Berlin, and other European capitals. These programs were well received and popular with listeners, prompting NBC and the other networks to follow Paley’s lead.

The outbreak of war sparked a massive increase in news programming on all the networks. It comprised an astonishing 20 percent of the networks’ schedules by 1944. Heightened public interest in news, particularly news about the war, was especially beneficial to CBS, where Klauber and White had built a talented stable of reporters. Led by Edward R. Murrow, they specialized in vivid on-the-spot reporting and developed an appealing style of broadcast journalism, affirming CBS’s leadership in news. By the end of the war, surveys conducted by the Office of Radio Research revealed that radio had become the main source of news for large numbers of Americans, and Murrow and other radio journalists were widely respected by the public. And though network news people knew that their audience and airtime would decrease now that the war was over, they were optimistic about the future and not very keen to jump into the new field of television.

This is ironic, since it was television that was uppermost in the minds of network leaders like Sarnoff and Paley. The television industry had been poised for takeoff as early as 1939, when NBC, CBS, and DuMont, a growing network owned by an ambitious television manufacturer, established experimental stations in New York City and began limited broadcasting to the few thousand households that had purchased the first sets for consumer use. After Pearl Harbor, CBS’s experimental station even developed a pathbreaking news program that used maps and charts to explain the war’s progress to viewers. This experiment came to an abrupt end in 1942, when the enormous shift of public and private resources to military production forced the networks to curtail and eventually shut down their television units, delaying television’s launch for several years.

Meanwhile, other events were shaking up the industry. In 1943, in response to an FCC decree, RCA was forced to sell one of its radio networks—NBC Blue—to the industrialist Edward J. Noble. The sale included all the programs and personalities that were contractually bound to the network, and in 1945 it was rechristened the American Broadcasting Company (ABC). The birth of ABC created another competitor not just in radio, where the Blue network had a loyal following, but in the burgeoning television industry as well. ABC joined NBC, CBS, and DuMont in their effort to persuade local broadcasters—often owners of radio stations who were moving into the new field of television—to become affiliates.

In 1944, the New York City stations owned by NBC, CBS, and DuMont resumed broadcasting, and NBC and CBS in particular launched aggressive campaigns to sign up affiliates in other cities. ABC and DuMont, hamstrung by financial and legal problems, quickly fell behind as most station owners chose NBC or CBS, largely because of their proven track record in radio. But even for the “big two,” building television networks was costly and difficult. Unlike radio programming, which could be fed through ordinary phone lines to affiliates, which then broadcast it over the air in their communities, linking television stations into a network required a more advanced technology, a coaxial cable especially designed for the medium that AT&T, the private, government-regulated telephone monopoly, would have to lay throughout the country. At the end of the war, at the government’s and television industry’s behest, AT&T began work on this project. By the end of the 1940s, most of the East Coast had been linked, and the connection extended to Chicago and much of the Midwest. But it was slow going, and at the dawn of the 1950s, no more than 30 percent of the nation’s population was within reach of network programming. Until a city was linked to the coaxial cable, there was no reason for station owners to sign up with a network; instead, they relied on local talent to produce programs. As a result, the television networks grew more slowly than executives might have wished, and the audience for network programs was restricted by geography until the mid-1950s. An important breakthrough occurred in 1951, when the coaxial cable was extended to the West Coast and made transcontinental broadcasting possible. But until microwave relay stations were built to reach large swaths of rural America, many viewers lacked access to the networks.

Access wasn’t the only problem. The first television sets that rolled off the assembly lines were expensive. RCA’s basic model, the one that Sarnoff envisioned as its “Model T,” cost $385, while top-of-the-line models were more than $2,000. With the average annual salary in the mid-1940s just over $3,000, this was a lot of money, even if consumers were able to buy sets through department-store installment plans. And though the price of TVs would steadily decline, throughout the 1940s the audience for television was restricted by income. Most early adopters were from well-to-do families—or tavern owners who hoped that their investment in television would attract patrons.

Still, the industry expanded dramatically. In 1946, there were approximately 20,000 television sets in the US; by 1948, there were 350,000; and by 1952, there were 15.3 million. Less than 1 percent of American homes had TVs in 1948; a whopping 32 percent did by 1952. The number of stations also multiplied, despite an FCC freeze in the issuing of station licenses from 1948 to 1952. In 1946, there were six stations in only four cities; by 1952, there were 108 stations in sixty-five cities, most of them recipients of licenses issued right before the freeze. When the freeze was lifted and new licenses began to be issued again, there was a mad rush to establish new stations and get on the air. By 1955, almost 500 television stations were operating in the US.

The FCC freeze greatly benefited NBC and CBS. Eighty percent of the markets with TV at the start of the freeze in 1948 had only one or two licensees, and it made sense for them to contract with one or both of the big networks for national programming to supplement locally produced material. Shut out of these markets, ABC and DuMont were forced to secure affiliates in the small number of markets—usually large cities—where stations were more plentiful. By the time the FCC started issuing licenses again, NBC and CBS had established reputations for popular, high-quality programs, and when new markets were opened, it became easier for them to sign up stations with the most desirable frequencies, usually the lowest “channels” on the dial. Meanwhile, ABC languished for much of the 1950s, with the fewest and poorest affiliates, and the struggling DuMont network ceased operations altogether in 1955.

News programs were among the first kinds of broadcasts that aired in the waning years of the war, and virtually everyone in the industry expected them to be part of the program mix as the networks increased programming to fill the broadcast day. News was “an invaluable builder of prestige,” noted Sig Mickelson, who joined CBS as an executive in 1949 and served as head of its news division throughout the 1950s. “It helped create an image that was useful in attracting audiences and stimulating commercial sales, not to mention maintaining favorable government relations. . . . News met the test of ‘public service.’” As usual, CBS led the way, inaugurating a fifteen-minute evening news program in 1944. It was broadcast on Thursdays and Fridays at 8:00 PM, the two nights of the week the network was on the air. NBC launched its own short Sunday evening newscast in 1945 as the lead-in to its ninety minutes of programming. Both programs resembled the newsreels that were regularly shown in movie theaters, a mélange of filmed stories with voice-over narration by off-screen announcers.

Considering the limited technology available, this was not surprising. Newsreels offered television news producers the most readily applicable model for a visual presentation of news, and the first people the networks hired to produce news programs were often newsreel veterans. But newsreels relied on 35mm film and were expensive and time-consuming to produce, and they had never been employed for breaking news. Aside from during the war, when they were filled with military stories that employed footage provided by the government, they specialized in fluff, events that were staged and would make the biggest impression on the screen: celebrity weddings, movie premieres, beauty contests, ship launches. In the mid-1940s, recognizing this shortcoming, producers at WCBW, CBS’s wholly owned subsidiary in New York, developed a number of innovative techniques for “visualizing” stories for which they had no film and established the precedent of sending a reporter to cover local stories.

These conventions were well established when the networks, in response to booming sales of television sets, expanded their evening schedules to seven days a week and launched regular weeknight newscasts. NBC’s premiered first, in February 1948. Sponsored by R. J. Reynolds, the makers of Camel cigarettes, it was produced for the network by the Fox Movietone newsreel company and had no on-screen newsreaders. CBS soon followed suit, with the CBS Evening News, in April 1948. Relying on film provided by another newsreel outfit, Telenews, it featured a rotating cast of announcers, including Douglas Edwards, who had only reluctantly agreed to work in television after failing to break into the top tier of the network’s radio correspondents. In the late summer, after CBS president Frank Stanton convinced Edwards of television’s potential, Edwards was installed as the program’s regular on-screen newsreader, its recognizable “face.” DuMont created an evening newscast as well. But its News from Washington, which reached only the handful of stations that were owned by or affiliated with the network, was canceled in less than a year, and DuMont’s subsequent attempt, Camera Headlines, suffered the same fate and was off the air by 1950. ABC’s experience with news was similarly frustrating. Its first newscast, News and Views, began airing in August 1948 and was soon canceled. It didn’t try to broadcast another one until 1952, when it launched an ambitious prime-time news program called ABC All Star News, which combined filmed news reports with man-on-the-street interviews, a technique popularized by local stations. By this time, however, the prime-time schedules of all the networks were full of popular entertainment programs, and All Star News, which failed to attract viewers, was pulled from the air after less than three months.

In February 1949, NBC, eager to make up ground lost to CBS, transformed its weeknight evening newscast into the Camel News Caravan, with John Cameron Swayze, a veteran of NBC’s radio division, as sole on-camera newsreader. Film for the program was acquired from a variety of sources, including foreign and domestic newsreel agencies and freelance stringers. But Swayze’s narration and on-screen presence distinguished the broadcast from its earlier incarnation. He sat at a desk that prominently displayed the Camel logo and presented an overview of the day’s major headlines, sometimes accompanied by film and still photos, but sometimes in the form of a “tell-story”—Swayze on camera reading from a script. In between, he would plug Camels and even occasionally light up, much to his sponsor’s delight. One of the show’s highlights was a whirlwind review of stories for which producers had no visuals, which Swayze would introduce by announcing, “Now let’s go hopscotching the news for headlines!” Swayze was popular with viewers and hosted the broadcast for seven years. He became well known to the public, especially for his nightly sign-off, “That’s the story, folks. Glad we could get together.”

The Camel News Caravan was superficial, and Swayze’s tone undeniably glib, as critics at the time noted. But the assumption that guided its production did not set particularly high standards. As Reuven Frank, who joined the show as its main writer in 1950 and soon became its producer, recalled, “We assumed that almost everyone who watched us had read a newspaper . . . that our contribution . . . would be pictures. The people at home, knowing what the news was, could see it happen.” Yet over the next few years, especially after William McAndrew became head of NBC’s news division and Frank was installed as the program’s producer, the News Caravan steadily improved. Making good use of the largesse provided by R. J. Reynolds, which more than covered the news department’s rapidly expanding budget, the show increased its use of filmed reports, acquired from foreign sources like the BBC and other European news agencies, the US government and military, and the network’s growing corps of in-house cameramen and technicians. It also came to rely more and more on the network’s staff of reporters, including a young North Carolinian named David Brinkley, and reporters at NBC’s “O-and-Os,” the five television stations that the network owned and operated. In the days before network bureaus, journalists at network O-and-Os were responsible for combing their cities for stories of potential national interest. NBC also employed stringers on whom it relied for material from cities or regions where it had no O-and-Os. Airing at 7:45 PM, right before the network’s lineup of prime-time entertainment programs, the News Caravan became the first widely viewed news program of the television age. Its success gave McAndrew and his staff greater leverage in their efforts to command network resources and put added pressure on their main rival.

The CBS Evening News, broadcast at 7:30, was also very much a work-in-progress. Influenced by the experiments in “visualizing” news that CBS producers had conducted at the network’s flagship New York City O-and-O in the mid-1940s, it was produced by a mix of radio people like Edwards and newcomers from other fields. Most of the radio people, however, were second-stringers. The network’s leading radio personnel, including Murrow and his comrades, had little interest in moving to television. Though this disturbed Paley and his second-in-command, CBS president Frank Stanton, it allowed CBS’s fledgling television news unit to escape from the long shadow of the network’s radio news operation, and it increased the influence of staff committed to the tradition of “visualizing.” With few radio people willing to work on the program, the network was forced to hire new staff from outside the network. These newcomers from the wire services, photojournalism, and news and photographic syndicates brought a lively spirit of innovation to CBS’s nascent television news division. They were impressed by the notion of “visualizing,” and they resolved that TV news ought to be different from radio news, “an amalgam of existing news media, with a substantial infusion of showmanship from the stage and motion pictures.”

The most important new hire was Don Hewitt, an ambitious, energetic twenty-five-year-old who joined the small staff of the CBS Evening News in 1948 and soon became its producer. Despite his age, Hewitt was already an experienced print journalist, and his resume included a stint at ACME News Pictures, a syndicate that provided newspapers with photographs. He was well aware of the power of pictures, and when he joined CBS, he brought a new sensibility and willingness to experiment. Under Hewitt, the Edwards program made rapid strides. Eager to find ways of compensating for television’s technical limitations, Hewitt made extensive use of still photos and created a graphic arts department to produce charts, maps, and captions to illustrate tell-stories. To make Edwards’s delivery more natural and smooth, he introduced a new machine called a TelePrompTer, which replaced the heavy cue cards on which his script had been written. Expanding on the experiments of CBS’s early “visualizers,” Hewitt devised a number of clever devices to provide visuals for stories—for example, using toy soldiers to illustrate battles during the Korean War. He was the principal figure behind the shift to 16mm film, which was easier and less expensive to produce, and the network’s decision to establish its own in-house camera crews. His most significant innovation, however, was the double-projector system that he developed to mix narration and film. This technique, which was copied throughout the industry, made possible a new kind of filmed report that would become the archetypal television news package: a reporter on camera, often at the scene of a story, beginning with a “stand-upper” that introduces the story; then film of other scenes, while the reporter’s words, recorded separately, serve as voice-over narration; finally, at the end, a “wrap-up,” where the reporter appears on camera again. By the early 1950s, the CBS newscast, now titled Douglas Edwards with the News, was adding viewers and winning plaudits from critics. And it had gained the respect of many of the network’s radio journalists, who now agreed to contribute to the program and other television news shows.

During the 1950s, Don Hewitt (left) was perhaps the most influential producer of television news. He was not only responsible for CBS’s successful evening newscast but also worked on See It Now and other network programs. Douglas Edwards (right) anchored the broadcast from the late 1940s to 1962, when he was replaced by Walter Cronkite. Photo courtesy of CBS/Photofest.

The big networks were not the only innovators. In the late 1940s, with network growth limited and many stations still independent, local stations developed many different kinds of programs, including news shows. WPIX, a New York City station owned by the Daily News, the city’s most popular tabloid, established a daily news program in June 1948. The Telepix Newsreel aired twice a day, at 7:30 PM and 11:00 PM, and specialized in coverage of big local events like fires and plane crashes. Its staff went to great lengths to acquire film of these stories, which it hyped with what would become a standard teaser, “film at eleven.” Like its print cousin, it also featured lots of human-interest stories and man-on-the-street interviews. A Chicago station, WGN, developed a similar program, the Chicagoland Newsreel, which was also successful. The real pioneer was KTLA in Los Angeles. Run by Klaus Landsberg, a brilliant engineer, KTLA established the most technologically sophisticated news program of the era. Employing relatively small, portable cameras and mobile live transmitters, its reporters excelled in covering breaking news stories, and it would remain a trailblazer in the delivery of breaking news throughout the 1950s and 1960s. It was Landsberg, for example, who first conceived of putting a TV camera in a helicopter.

But such programs were the exception. Most local stations offered little more than brief summaries of wire-service headlines, and the expense of film technology led most to emphasize live entertainment programs instead of news. Believing that viewers got their news from local papers and radio stations, television stations saw no need to duplicate their efforts. Not until the 1960s, when new, inexpensive video and microwave technology made local newsgathering economically feasible, did local stations, including network affiliates, expand their news programming.

The television news industry’s first big opportunity to display its potential occurred in 1948, when the networks descended on Philadelphia for the political conventions. The major parties had selected Philadelphia with an eye on the emerging medium of television. Sales were booming, and Philadelphia was on the coaxial cable, which was reaching more and more cities as the weeks and months passed. By the time the Republicans convened in July, it extended from Boston to Richmond, Virginia, with the potential for reaching millions of viewers. Radio journalists had been covering the conventions for two decades, but with lucrative entertainment programs on network schedules, it hadn’t paid to produce “gavel-to-gavel” coverage—just bulletins, wrap-ups, and the acceptance speeches of the nominees. In 1948, however, television was a wide-open field, and with much of the broadcast day open—or devoted to unsponsored programming that cost nothing to preempt—the conventions were a great showcase. In cities where they were broadcast, friends and neighbors gathered in the homes of early adopters, in bars and taverns, even in front of department store display windows, where store managers had carefully arranged TVs to draw the attention of passers-by. Crowds on the sidewalk sometimes overflowed into the street, blocking traffic. “No more effective way could have been found to stimulate receiver sales than these impromptu TV set demonstrations,” suggested Sig Mickelson.

Because of the enormous technical difficulties and a lack of experience, the networks collaborated extensively. All four networks used the same pictures, provided by a common pool of cameras set up to focus on the podium and surrounding area. NBC’s coverage was produced by Life magazine and featured journalists from Henry Luce’s media empire as well as Swayze and network radio stars H. V. Kaltenborn and Richard Harkness. CBS’s starred Murrow, Quincy Howe, and Douglas Edwards, newly installed on the Evening News and soon to be its sole newsreader. ABC relied on the gossip columnist and radio personality Walter Winchell. Lacking its own news staff, DuMont hired the Washington-based political columnist Drew Pearson to provide commentary. Many of these announcers did double duty, providing radio bulletins, too. With cameras still heavy and bulky, there were no roving floor reporters conducting interviews with delegates and candidates; instead, interviews occurred in makeshift studios set up in adjacent rooms off the main convention floor. Accordingly, there was little coverage of anything other than events occurring on the podium, and it was print journalists who provided Americans with the behind-the-scenes drama, particularly at the Democrats’ convention, where Southern delegates, angered by the party’s growing commitment to civil rights, walked out in protest and chose Strom Thurmond to run as the nominee of the hastily organized “Dixiecrats.” The conventions were a hit with viewers. Though there were only about 300,000 sets in the entire US, industry research suggested that as many as 10 million Americans saw at least some convention coverage thanks to group viewing and department store advertising and special events.

Four years later, when the Republicans and Democrats again gathered for their conventions, this time in Chicago, the networks were better prepared. Besides experience, they brought more nimble and sophisticated equipment. And, thanks to the spread of the coaxial cable, they were in a position to reach a nationwide audience. Excited by the geometric increase in receiver sales, and inspired by access to new markets that seemed to make it possible to double or even triple the number of television households, major manufacturers signed up as sponsors, and advertisements in newspapers urged consumers to buy sets to “see the conventions.” Coverage was much wider and more complete than in 1948. Several main pool cameras with improved zoom capabilities focused on the podium, while each network deployed between twenty and twenty-five cameras on the periphery and at downtown hotels and in mobile units. “Never before,” noted Mickelson, the CBS executive responsible for the event, “had so many television cameras been massed at one event.”

Meanwhile, announcers from each of the networks explained what was occurring and provided analysis and commentary. NBC’s main announcer was Bill Henry, a Los Angeles print journalist. He was assisted by Kaltenborn and Harkness. Henry sat in a tiny studio and watched the proceedings through monitors, and did not appear on camera. CBS’s coverage differed and established a new precedent. Its main announcer, Walter Cronkite, provided essentially the same narration, explanation, and commentary as Henry. But his face appeared in a tiny window in the corner of the screen; when there was a lull on the convention floor, the window expanded to fill the entire screen. Cronkite, an experienced wire service correspondent, had just joined CBS after a successful stint at WTOP, its Washington affiliate. Mickelson had been impressed with his ability to explain and ad lib, and he insisted that CBS use Cronkite rather than the far more experienced and well-known Robert Trout. Mickelson conceded that, from his years of radio work, Trout excelled at “creating word pictures.” But, with television, this was a superfluous gift. The cameras delivered the pictures. “What we needed was interpretation of the pictures on the screen. That was Cronkite’s forte.”

When print journalists asked Mickelson on the eve of the conventions what exact role Cronkite would play, he responded by suggesting that his new hire would be the “anchorman,” a term that soon came to refer to newsreaders like Swayze and Edwards as well. Yet in coining this term, Mickelson was referring to the complex process that Don Hewitt had conceived to provide more detailed and up-to-the-minute coverage of the convention. Recognizing that the action was on the floor, and that if TV journalists were to match the efforts of print reporters they needed to be able to report from there as quickly as possible, Hewitt mounted a second camera that could pan the floor and zoom in on floor reporters armed with walkie-talkies and flashlights, which they used to inform Hewitt when they had an interview or report ready to deliver. It worked like clockwork: “They combed through the delegations, talked to both leaders and members, queried them on motivations and prospective actions, and kept relaying information to the editorial desk.” It was then filtered and collated and passed on to Cronkite, who served as the “anchor” of the relay, delivering the latest news and ad-libbing with the poise and self-assurance that he would display at subsequent conventions and during live coverage of space flights and major breaking news. Cronkite’s seemingly effortless ability to provide viewers with useful and interesting information about the proceedings won praise from television critics and boosted CBS’s reputation with viewers.

NBC was not so successful. In keeping with the network’s—and RCA’s—infatuation with technology, it sought to cover events on the convention floor with a new gadget, a small, hand-held, live-television camera that could transmit pictures and needn’t be connected by wire. As Frank recalled, “It could roam the floor . . . showing delegates reacting to speakers and even join a wireless microphone for interviews.” But it regularly malfunctioned and contributed little to NBC’s coverage. More effective and popular were a series of programs that Bill McAndrew developed to provide background. Convention Call was broadcast twice a day during the conventions, before sessions and when they adjourned for breaks. Its hosts encouraged viewers to call in and ask NBC reporters to explain what was occurring, especially rules of procedure. The show sparked a flood of calls that overwhelmed telephone company switchboards and forced NBC to switch to telegrams instead.

Ratings for network coverage of the conventions exceeded expectations. Approximately 60 million viewers saw at least some of the conventions on television, with an estimated audience of 55 million tuning in at their peak. And the conventions inspired viewers to begin watching the evening newscasts and contributed to an increase in their popularity. Television critics praised the networks for their contributions to civic enlightenment. Jack Gould of the New York Times suggested that television had “won its spurs” and was “a welcome addition to the Fourth Estate.”

Conventions, planned in advance at locations well-suited for television’s limited technology, were ideal events for the networks to cover. These were the days before front-loaded primaries made them little more than coronations of nominees determined months beforehand, and the parties were undergoing important changes that were often revealed in angry debates and frantic back-room deliberations. And while print journalists remained the most complete source for such information, television allowed viewers to see it in real time, and its stable of experienced reporters and analysts proved remarkably adept at conveying the drama and explaining the stakes.

To read more about That’s the Way It Is, click here.

28. Can We Race Together? An Autopsy

“Can We Race Together? An Autopsy”*

by Ellen Berrey

***

Corporate diversity dialogues are ripe for backlash, the research shows, even without coffee counter gimmicks.

Corporate executives and university presidents are, yet again, calling for public discussion on race and racial inequality. Revelations about the tech industry’s diversity problem have company officials convening panels on workplace barriers, and, at the University of Oklahoma, spokespeople and students are organizing town-hall sessions in response to a fraternity’s racist chant.

The most provocative of the efforts was Starbucks’ failed Race Together program. In March, the company announced that it would ask baristas to initiate dialogues with customers about America’s most vexing dilemma. Although public outcry shut down those conversations before they even got to “Hello,” Starbucks said it would nonetheless carry on Race Together with forums and special USA Today discussion guides. As someone who has done sociological research on diversity initiatives for the past 15 years, I was intrigued.

For a moment, let’s take this seriously

What would conversations about race have looked like if they played out as Starbucks imagined, given the social science of race? Can companies, in Starbucks’ CEO Howard Schultz’s words, “create a more empathetic and inclusive society—one conversation at a time”? A data-driven autopsy of Starbucks’ ambitions is in order.

Surprisingly, Starbucks turned its sights on the provocative issue of racial inequality—not just feel-good cultural differences (or, thank goodness, the sort of “respectability politics” that, under well-intentioned cover, focus on the moral flaws of black people). Most Americans, especially those of us who are white, are ill-informed on the topic of inequality. We generally do not recognize our personal prejudice. We routinely, and incorrectly, insist that we are colorblind and that racism is a thing of the past, as sociologist Eduardo Bonilla-Silva has documented. When we do try to talk about race, we usually resort to what sociologists Joyce Bell and Doug Hartmann call the “happy talk” of diversity, without a language for discussing who comes out ahead and who gets pushed behind.

Starbucks pulls back the veil on our unconscious

How to take this on? Starbucks opted to tackle the thorny issue of unacknowledged prejudice—the cognitive biases that predispose a person against racial minorities and in favor of white people. The company intended to offer “insight into the divisive role unconscious bias plays in our society and the role empathy can play to bridge those divides.” The conversation guide it distributed the first week described a bias experiment in which lawyers were asked to assess an error-ridden memo. When told that the (fictional) author was white, the lawyers commented “has potential.” When told he was black, they remarked “can’t believe he went to NYU.”

Perhaps this was a promising starting point. Americans prefer psychological explanations; we like to think that terrorism, poverty, obesity, and other social ills are rooted in the individual’s psyche.

A comforting thought: I’m not racist

We also do not want to see ourselves as complicit in the segregation of our communities, workplaces, or friendships. We definitely don’t want the stigma of being “racist.” Even white supremacists resist that label. So if it’s true that we can’t see our own bias, as Starbucks told us, we can take comfort in our innocence.

Starbucks’ description of the bias experiment actually took the conversation where it never seems to venture: to the advantages that white people enjoy. White people get help, forgiveness, and the inside track far more often than do people of color. But Starbucks stopped before pointing the finger at who gives white people these advantages.

The rest of Race Together veered off in a confused direction, mostly bent on educated enlightenment. The conversation guide was a mishmash of racial utopianism (the millennials have it figured out!), demography as destiny (immigration changes everything!), triumph over a troublesome past (progress!), testimonies by people of color (the one white guy is clueless!), statistics, inspired introspection, and social network tallies (“I have ____ friends of a different race”!).

Not your daddy’s diversity training

Companies have been trying to positively address race for decades. Typically, they do so through diversity management within their own workforce. Their stated purpose is to increase the numbers of people of color in the top ranks or improve the corporate culture. Most diversity management strategies, however, are far from effective (unless they make someone responsible for results), as shown by sociologists Alexandra Kalev, Frank Dobbin, and Erin Kelly. Corporate aggrandizement and the façade of legal compliance seem as much the goals as actual change.

Race Together most closely resembled diversity training, which tries to undo managerial stereotyping through educational exchange, but this time the exchange was between capitalists and consumers. And it bucked the typical managerial spin. Usually, the kicker is the business case for diversity: this will boost productivity and profits. Instead, Starbucks made the diversity case for business. Consumption, supposedly, would create inclusion and equity. That would be its own reward. There was no clear connection to its specific business goals, beyond (disgruntled) buzz about the brand.

What were you thinking, Howard Schultz?

Briefly, let’s revisit what made Starbucks’ over-the-counter conversations so offensive. Starbucks was asking low-wage, young, disproportionately minority workers to prompt meaningful exchanges about race with uncaffeinated, mostly white and affluent customers. Even under the best of circumstances, diversity dialogues tend to put the burden of explaining racism on people of color. Here, baristas were supposed to walk the third rail during the morning rush hour without specialized training, much less extra compensation. One sociological term for this is Arlie Hochschild’s “emotional labor.” The employee was required to tactfully manage customers’ feelings. The most likely reaction from coffee drinkers? Microaggressions of avoidance, denial, and eye-rolling.

The alternative, for Starbucks’ so-called “partners,” was disgruntled defiance. At my local Starbucks, when I asked about these conversations, the manager emphatically said, “We’re not participating.” The barista next to her was blunt: “We think it’s bullshit.”

Swiftly, the company came out with public statements that had the air of faux intention and cover-up, as if to say, “We’re not retreating; we’re merely advancing in the other direction.” Starbucks had promised a year of Race Together, but the collapse of the café stunt made an all-out retreat more likely: one more forum, one more ad, then silence.

This doesn’t work…

Race Together trod treacherous ground. The research shows that diversity training backfires when it attempts to ferret out prejudice. It puts white people on the defensive and creates a backlash against people of color. For committed consumers, Starbucks was messing with the unequivocally best part about capitalism: that you can give someone money and they give you a thing. For activists, this all smelled wrong (i.e., not how you want your latte). Like co-opted social justice.

… Does anyone in HQ ever ask what works?

Starbucks was wise to shift closer to the traditional role of a coffee house—the so-called Third Place between work and home that Schultz has long exalted. Hopefully, the company looks to proven models for productive conversations on race. Organizations such as the Center for Racial Justice Innovation push forward discussions that recognize racism as systemic, not as isolated individual attitudes and bad behaviors. This helps to avoid what people hate most about diversity trainings: forced discourse about superficial differences (“are you a daytime or nighttime person?”) and the wretched hunt for guilty bad guys.

According to social psychologists, unconscious bias can be minimized when people have positive incentives for interpersonal, cross-racial relationships. Wearing a sports jersey for the same team is impressively effective for getting white people to cooperate with African Americans, as shown in a study led by psychologist Jason Nier. The idea is to not provoke white people’s fear and avoidance of doing wrong. It is to motivate people to try to do what’s right by establishing a shared identity.

Starbucks also needs to wrestle with its goal of “together.” That’s not always the outcome of conversations about race. Political scientist Katherine Cramer Walsh found that participants in civic dialogues on race commonly walk away with a heightened awareness of their differences, not with the unity that meeting organizers hope to foster.

Is it better to abandon ship?

Despite its missteps, Starbucks, in fact, alighted on hopeful insights. Individuals can ignite change, and empathy and listening are starting points. The company deserves some applause for taking the risk and for its deliberate focus on inequality. Undoubtedly, working-class, minority millennials could teach the rest of the country something about race (and executives something about company policy).

The truth hurts

But let’s be clear about what Race Together was not. It was not about addressing institutional discrimination. In that scenario, Starbucks would have issued a press release about eliminating patterns of unfair hiring and firing. It would have overhauled a corporate division of labor that channels racial minorities into lower-tier, nonunionized jobs. It might very well have closed stores in gentrifying neighborhoods.

Those solutions start with incisive diagnosis, not personal reflection. (The U.S. Department of Justice did just that when it scrutinized racial profiling in traffic stops and court fines in Ferguson, Missouri.) Those solutions require change in corporate policy.

To make Race Together honest, Starbucks needed to recognize an ugly truth: America’s race problem is not an inability to talk. It is a failure to rectify the unfair disadvantages foisted on people of color and the unearned privileges that white people enjoy. Corporations, in their internal operations, are complicit in these very dynamics. So, too, are long-standing government policies, such as tax deductions of home mortgage interest (white folks are far more likely to own their homes). And white Americans may not want to hear it, but racial inequality is, in large measure, rooted in our collective choices: where we’ll pay property taxes, who we’ll tell about a job lead, what we’ll deem criminal, and even when we’ll smile or scowl. Howard Schultz, are you listening?

*This piece was originally published at the Society Pages, http://www.thesocietypages.org

***

Ellen Berrey teaches in the Department of Sociology at the University at Buffalo, SUNY, and is an affiliated scholar of the American Bar Foundation. Her book The Enigma of Diversity: The Language of Race and the Limits of Racial Justice will be published in April 2015.

29. Excerpt: Paying with Their Bodies

9780226210094

An excerpt from Paying with Their Bodies: American War and the Problem of the Disabled Veteran by John M. Kinder

***

Thomas H. Graham

On August 30, 1862, Thomas H. Graham, an eighteen-year-old Union private from rural Michigan, was gut-shot at the Second Battle of Bull Run near Manassas Junction, Virginia. One of 10,000 Union casualties in the three-day battle, Graham had little chance of survival. Penetrating gunshot wounds to the abdomen were among the deadliest injuries of the Civil War, killing 87 percent of patients—either from the initial trauma or the inevitable infection. Quickly evacuated, he was sent by ambulance to Washington, DC, where he was admitted to Judiciary Square Hospital the next day. Physicians took great interest in Graham’s case, and over the following nine months, the young man endured numerous operations to suture his wounds. Deemed fully disabled, he was eventually discharged from service on June 6, 1863.

But Graham’s injuries never healed completely. His colon remained perforated, and he had open sinuses just above his left leg where a conoidal musket ball had entered and exited his body. As Dr. R. C. Hutton, Graham’s pension examiner, reported shortly after the Civil War’s end, “From each of these sinuses there is constantly escaping an unhealthy sanious discharge, together with the faecal contents of the bowels. Occasionally kernels of corn, apple seeds, and other indigestible articles have passed through the stomach and been ejected through these several sinuses.” Broad-shouldered and physically strong, Graham attempted to make a living as a day laborer and later as a teacher, covering his open wounds with a bandage. By the early 1870s, however, he bore “a sallow, sickly countenance” and could no longer hold a job, dress his injuries, or even stand on his own two feet. Most pitiful of all, the putrid odor from his “artificial anus” made him a social pariah. Regarding Graham’s case as “utterly hopeless,” Hutton concluded, “he would have died long ago from utter detestation of his condition, were it not for his indomitable pluck and patriotism.” Within a few months, Graham was dead, but hundreds of thousands lived on, altering the United States’ response to disabled veterans for decades to come.

Arthur Guy Empey

For American readers during World War I, no contemporary account offered a more compelling portrait of life on the Western Front than Arthur Guy Empey’s autobiography, “Over the Top,” by an American Soldier Who Went (1917). Disgusted by his own country’s refusal to enter the Great War, Empey had joined the British army in 1916, eventually serving with the Royal Fusiliers in northwestern France. Invalided out of service a year later, the former New Jersey National Guardsman became an instant celebrity, electrifying US audiences with his tales from the front lines. Despite nearly dying on several occasions, Empey looked back on his time in the trenches with profound nostalgia. “War is not a pink tea,” he reflected, “but in a worthwhile cause like ours, mud, rats, cooties, shells, wounds, or death itself, are far outweighed by the deep sense of satisfaction felt by the man who does his bit.”

Beneath the surface of Empey’s rollicking narrative, however, was a far more disturbing story. For all of the author’s giddy enthusiasm, Empey made little effort to hide the Great War’s insatiable consumption of soldiers’ bodies, including his own. During a nighttime raid on a German trench, Empey was shot in the face at close range, the bullet smashing his cheekbones just below his left eye. As he staggered back toward his own lines, he discovered the body of an English soldier hanging on a coil of barbed wire: “I put my hand on his head, the top of which had been blown off by a bomb. My fingers sank into the hole. I pulled my hand back full of blood and brains, then I went crazy with fear and horror and rushed along the wire until I came to our lane.” Before reaching shelter, Empey was wounded twice more in the left shoulder, the second time causing him to black out. He awoke to find himself choking on his own blood, a “big flap from the wound in my cheek . . . hanging over my mouth.” Empey spent the next thirty-six hours in no man’s land waiting for help.

As he recuperated in England, Empey’s mood swung between exhilaration and deep depression: “The wound in my face had almost healed and I was a horrible-looking sight—the left cheek twisted into a knot, the eye pulled down, and my mouth pointing in a north by northwest direction. I was very down-hearted and could imagine myself during the rest of my life being shunned by all on account of the repulsive scar.” Although reconstructive surgery did much to restore his prewar appearance, Empey never recovered entirely. Like hundreds of thousands of Americans who followed him, he was forever marked by his experiences on the Western Front.

Elsie Ferguson in “Hero Land”

In the immediate afterglow of World War I, Americans welcomed home the latest generation of wounded warriors as national heroes—men whose bodies bore the scars of Allied victory. Among the scores of prominent supporters was Elsie Ferguson, a Broadway actress and film star renowned for her maternal beauty and patrician demeanor.

During the war years, “The Aristocrat of the Silent Screen” had been an outspoken champion of the Allied effort, raising hundreds of thousands of dollars for liberty bonds through her stage performances and public rallies. After the Armistice, she regularly visited injured troops at Debarkation Hospital No. 5, one of nine makeshift convalescent facilities established in New York City in the winter of 1918. Nicknamed “Hero Land,” No. 5 was housed in the lavish, nine-story Grand Central Palace, and was temporary home to more than 3,000 sick and wounded soldiers recently returned from European battlefields.

Unlike Ferguson’s usual cohort, who reserved their heroics for the big screen, the patients at No. 5 did not resemble matinee idols—far from it. Veterans of Château-Thierry, Belleau Wood, and the Meuse-Argonne, many of the men were prematurely aged by disease and loss of limb. Others endured constant pain, their bodies wracked with the lingering effects of shrapnel and poisonous gas.

Like most observers in the early days of the Armistice, Ferguson was optimistic about such men’s prospects for recovery. Chronicling her visits in Motion Picture Magazine, she reported that the residents of Hero Land received the finest care imaginable. Besides regular excursions to hot spots throughout the city, convalescing vets enjoyed in-house film screenings and stage shows, and the hospital storeroom (staffed by attractive Red Cross volunteers) was literally overflowing with cigarettes, chocolate bars, and “all the good things waiting to give comfort and pleasure to the men who withheld nothing in their giving to their country.” The patients themselves were upbeat to a man and, in Ferguson’s view, seemed to harbor no ill will about their injuries. Reflecting upon a young marine from Minnesota, now missing an arm and too weak to leave his bed, she echoed the sentiments of many postwar Americans: “The world loves these fighting men and a uniform is a sure passport to anything they want.”

Still, the actress cautioned her readers against expecting too much, too soon. The road to readjustment was a long one, and Ferguson warned that the United States would never be a “healed nation” until its disabled doughboys were back on their feet.

Sunday at the Hippodrome

On the afternoon of Sunday, March 24, 1919, more than 5,000 spectators crowded into New York City’s Hippodrome Theater to attend the culmination of the International Conference on Rehabilitation of the Disabled. The purpose of the conference was to foster an exchange of ideas about the rehabilitation of wounded and disabled soldiers in the wake of World War I. Earlier sessions, held the week before at Carnegie Hall and the Waldorf-Astoria, had been attended primarily by specialists in the field, among them representatives from the US Army Office of the Surgeon General, the French Ministry of War, the British Ministries of Pensions and Labor, and the Canadian Department of Soldiers’ Civil Re-Establishment. But the final day was meant for a different audience. Part vaudeville, part civic revival, it was organized to raise mass support for the rehabilitation movement and to honor the men whose bodies bore the scars of the Allied victory.

The afternoon’s program opened with the debut performance of the People’s Liberty Chorus, a hastily organized vocal group whose female members were dressed as Red Cross nurses and arranged in a white rectangle across the back of the theater’s massive stage. As they belted out patriotic anthems, an American flag and other patriotic symbols flashed in colored lights above their heads. Between songs, the event’s host, former New York governor Charles Evans Hughes, introduced inspirational speakers, among them publisher Douglas C. McMurtrie, the foremost advocate of soldiers’ rehabilitation in the United States. In his own address, Hughes paid homage to the men and women working to reconstruct the bodies and lives of America’s war-wounded. He also extended a warm greeting to the more than 1,000 disabled soldiers and sailors in the audience, many transported to the theater in Red Cross ambulances from nearby hospitals and convalescent centers.

The high point of the afternoon’s proceedings came near the program’s end, when a small group of disabled men took the stage. Lewis Young, a bilateral arm amputee, thrilled the onlookers by lighting a cigarette and catching a ball with tools strapped to his shoulders. Charles Bennington, a professional dancer with one leg amputated above the knee, danced the “buck and wing” on his wooden peg, kicking his prosthetic high above his head. The last to address the crowd was Charles Dowling, already something of a celebrity for triumphing over his physical impairments. At the age of fourteen, Dowling had been caught in a Minnesota blizzard. The frostbite in his extremities was so severe that he eventually lost both legs and one arm to the surgeon’s saw. Now a bank president, Republican state congressman, and married father of three, he offered a message of hope to his newly disabled comrades:

I have found that you do not need hands and feet, but you do need courage and character. You must play the game like a thoroughbred. . . . You have been handicapped by the Hun, who could not win the fight. For most of you it will prove to be God’s greatest blessing, for few begin to think until they find themselves up against a stone wall.

Dowling stood before them as living proof that with hard work and careful preparation even the most severely disabled man could achieve lasting success. Furthermore, he chided the nondisabled in the audience not to “coddle” or “spoon-feed” America’s wounded warriors: “Don’t treat these boys like babies. Treat them like what they have proved themselves to be—men.”

The Sweet Bill

On December 15, 1919, representatives of the American Legion, the United States’ largest organization of Great War veterans, gathered in Washington, DC, for the first skirmish in a decades-long campaign to expand federal benefits for disabled veterans. They had been invited by the head of the War Risk Insurance Bureau, R. G. Cholmeley-Jones, to take part in a three-day conference on reforming veterans’ legislation. Foremost on the Legionnaires’ agenda was the immediate passage of the Sweet Bill, a measure that would raise the base compensation rate for war-disabled veterans from $30 to $80 a month. Submitted by Representative Burton E. Sweet (R-IA) three months earlier, the bill had passed by a wide margin in the House but had yet to reach the Senate floor. Some members of the Senate Appropriations Committee were put off by the high cost of the legislation (upward of $80 million a year); others felt that the country had more pressing concerns—such as the fate of the League of Nations—than disabled veterans’ relief. Meanwhile, as one veteran-friendly journalist lamented, war-injured doughboys languished in a kind of legislative limbo: “Men with two or more limbs gone, both eyes shot out, virulent tuberculosis and gas cases—these are the kind of men who have suffered from congress [sic] inaction.”

After an opening day of mixed results, the Legionnaires reconvened on the Hill the following afternoon to press individual lawmakers about the urgency of the problem. That evening, leading members of Congress hosted the lobbyists at a dinner party in the Capitol basement. Before the meal began, Legionnaire H. H. Raegge, a single-leg amputee from Texas, caught a streetcar to nearby Walter Reed Hospital and returned with a group of convalescing vets. The men waited as the statesmen praised the Legionnaires’ stalwart patriotism; then Thomas W. Miller, the chairman of the Legion’s Legislative Committee, rose from his seat and introduced the evening’s surprise guests. “These men are only twenty minutes away from your Capitol, Mr. Chairman [Indiana Republican senator James Eli Watson], and twenty minutes away from your offices, Mr. Cholmeley-Jones,” Miller announced to the audience. “Every man has suffered—actually suffered—not only from his wounds, but in his spirit, which is a condition this great Nation’s Government ought to change.” Over the next three hours, the men from Walter Reed testified about the low morale of convalescing veterans, the “abuses” suffered at the hands of the hospital officials, and their relentless struggle to make ends meet. By the time it was over, according to one eyewitness, the lawmakers were reduced to tears. Within forty-eight hours, the Sweet Bill—substantially amended according to the Legion’s recommendations—sailed through the Senate, and on Christmas Eve, Woodrow Wilson signed it into law.

For the newly formed American Legion, the Sweet Bill’s passage represented more than a legislative victory. It marked the debut of the famed “Legion Lobby,” whose skillful deployment of sentimentality and hard-knuckle politics has made it one of the most influential (and feared) pressure groups in US history. No less important, the story of the Sweet Bill became one of the group’s founding myths, retold—often with new details and rhetorical flourish—at veterans’ reunions throughout the following decades. Its message was self-aggrandizing, but it also had an element of truth: in the face of legislative gridlock, the American Legion was the best friend a disabled veteran could have.

Forget-Me-Not Day

On the morning of Saturday, December 17, 1921, an army of high school girls, society women, and recently disabled veterans assembled for one of the largest fund-raising campaigns since the end of World War I. The group’s mission was to sell millions of handcrafted, crepe-paper forget-me-nots to be worn in remembrance of disabled veterans. Where the artificial blooms were unavailable, volunteers peddled sketches of the pale blue flowers or cardboard tags with the phrase “I Did Not Forget” printed on the front. The sales drive was the brainchild of the Disabled American Veterans of the World War (DAV), and proceeds went toward funding assorted relief programs for permanently injured doughboys. Event supporters hoped high turnout would put to rest any doubt about the nation’s appreciation of disabled veterans and their families. As Governor Albert C. Ritchie told his Maryland residents on the eve of the flower drive: “Let us organize our gratitude so that in a year’s time there will not be a single disabled soldier who can point an accusing finger at us.”

Over the next decade, National Forget-Me-Not Day became a minor holiday in the United States. In 1922, patients from Washington, DC, hospitals presented a bouquet of forget-me-nots to first lady Florence Kling Harding, at the time recovering from a major illness. Her husband wore one of the little flowers pinned to his lapel, as did the entire White House staff. That same year, Broadway impresario George M. Cohan orchestrated massive Forget-Me-Not Day concerts in New York City. As bands played patriotic tunes, stage actresses worked the crowds, smiling, flirting, and raking in coins by the bucketful. According to press reports at the time, the flower sales were meant to perform a double duty for disabled vets. Pinned to a suit jacket or dress, a forget-me-not bloom provided a “visible tribute” to the bodily sacrifices of the nation’s fighting men. As the manufacture of remembrance flowers evolved into a cottage industry for indigent vets, the sales drive acquired an additional motive: to turn a “community liability” into a “community asset.”

Although press accounts of Forget-Me-Not Day reassured readers that “Americans Never Forget,” many disabled veterans remained skeptical.

From the holiday’s inception, the DAV tended to frame Forget-Me-Not Day in antagonistic terms, using the occasion to vent its frustration with the federal government, critics of veterans’ policies, and a forgetful public. Posters from the first sales drive featured an anonymous amputee on crutches, coupled with the accusation “Did you call it charity when they gave their legs, arms and eyes?” As triumphal memories of the Great War waned, moreover, Forget-Me-Not Day slogans turned increasingly hostile. “They can’t believe the nation is grateful if they are allowed to go hungry,” sneered one DAV slogan, two years before the start of the Great Depression. Another characterized the relationship between civilians and disabled vets as one of perpetual indebtedness: “You can never give them as much as they gave you.”

James M. Kirwin

On November 26, 1939, three months after the start of World War II in Europe, James M. Kirwin, pastor of the St. James Catholic Church in Port Arthur, Texas, devoted his weekly newspaper column to one of the most haunting figures of the World War era: the “basket case.” Originating as British army slang in World War I, the term referred to quadruple amputees, men so horrifically mangled in combat they had to be carried around in wicker baskets. Campfire stories about basket cases and other “living corpses” had circulated widely during the Great War’s immediate aftermath. And Kirwin, a staunch isolationist fearful of US involvement in World War II, was eager to revive them as object lessons in the perils of military adventurism. “The basket case is helpless, but not useless,” the preacher explained. “He can tell us what war is. He can tell us that if the United States sends troops to Europe, your son, your brother, father, husband, or sweetheart, may also be a basket case.” In Kirwin’s mind, mutilated soldiers were not heroes to be venerated; they were monstrosities, hideous reminders of why the United States should avoid overseas war-making at all costs. Facing an upsurge in pro-war sentiment, the reverend implored his readers to take the lessons of the basket case to heart: “We must not add to war’s carnage and barbarity by drenching foreign fields with American blood. . . . Looking at the basket case, we know that for civilization’s sake, we dare not, MUST NOT.”

Harold Russell

The most famous disabled veteran of the “Good War” era never saw action overseas. On June 6, 1944, Harold Russell was serving as an Army instructor at Camp Mackall, North Carolina, when a defective explosive blew off both of his hands. Sent to Walter Reed Medical Center, Russell despaired at the thought of spending the rest of his days a cripple. As he recounted in his 1981 autobiography, The Best Years of My Life, “For a disabled veteran in 1944, ‘rehabilitation’ was not a realistic prospect. For all I knew, I was better off dead, and I had plenty of time to figure out if I was right.” Not long after his arrival, his mood brightened after watching Meet Charlie McGonegal, an Army documentary about a rehabilitated veteran of World War I. Inspired by McGonegal’s example—“I watched the film in awe,” he recalled—Russell went on to star in his own Army rehabilitation film, and was eventually tapped by director William Wyler to act alongside Fredric March and Dana Andrews in The Best Years of Our Lives (1946), a Hollywood melodrama about three veterans attempting to pick up their lives after World War II.

Russell played Homer Parrish, a former high school quarterback turned sailor who lost his hands during an attack at sea. Much of Russell’s section of the film follows Homer’s anxieties about burdening his family—and especially his fiancée, Wilma—with his disability. In the film’s most poignant scene, Homer engages in a form of striptease, removing his articulated metal hooks and baring his naked stumps to Wilma—and, it turns out, to the largest ticket-buying audience since the release of Gone with the Wind. Even as Homer decries his own helplessness—“I’m as dependent as a baby,” he protests—Wilma tucks him into bed and pledges her everlasting love and fidelity. In the film’s finale, the young couple is married; however, there is little to suggest that Homer’s struggles are over.

Though Russell worried what disabled veterans would make of the film, The Best Years of Our Lives was a critical and box-office smash. For his portrayal of Homer Parrish, Russell would win not one but two Academy Awards (one for Best Supporting Actor and the other for “bringing aid and comfort to disabled veterans”). He would spend the next few decades working with American Veterans (AMVETS) and other veterans’ groups to change public perceptions of disabled veterans. “Tragically, if somebody said ‘physically disabled’ in 1951,” he later observed, “too many Americans thought only of street beggars. We DAV’s were determined to change that.” In 1961, he was appointed as vice chairman of the President’s Committee on Employment of the Handicapped, succeeding to the chairman’s role three years later.

A decade before his death in 2002, Russell returned to the public spotlight when he was forced to sell one of his Oscars to pay for his wife’s medical bills.

Tammy Duckworth

At first glance, Ladda Tammy Duckworth bears little resemblance to the popular stereotype of a wounded hero. The daughter of a Thai immigrant and an ex-Marine, the “self-described girlie girl” joined the ROTC in the early 1990s. Earning a master’s degree in international affairs at George Washington University, she enlisted in the Illinois National Guard with the sole purpose of becoming a helicopter pilot, one of the few combat careers open to women at the time. Life in uniform was far from easy. Dubbed “mommy platoon leader,” she endured routine verbal abuse from her male cohort. As she recalled to reporter Adam Weinstein, the men in her unit “knew that I was hypersensitive about wanting to be one of the guys, that I wanted to be—pardon my language—a swinging dick, just like everyone else, so they just poked. And I let them, that’s the dumb thing.” She persisted all the same, eventually coming to command more than forty troops.

On November 12, 2004, the thirty-six-year-old Duckworth was copiloting a Black Hawk helicopter in the skies above Iraq when a rocket-propelled grenade exploded just below her feet. The blast tore off her lower legs and she lost consciousness as her copilot struggled to guide the chopper to the ground. Duckworth awoke in a Baghdad field hospital, her right arm shattered and her body dangerously low on blood. Once stabilized, she followed the aerial trajectory of thousands of severely injured Iraq War soldiers—first to Germany and then on to Walter Reed Medical Center (in her words, the “amputee petting zoo”), where she became an instant celebrity and a prized photo-companion for visiting politicians.

Duckworth has since devoted her career to public service and veterans’ advocacy. Narrowly defeated in her bid for Congress in 2006, she headed the Illinois Veterans Bureau from 2006 to 2009 and later went on to serve in the Obama administration as assistant secretary of the Department of Veterans Affairs. At the VA, she “boosted services for homeless vets and created an Office of Online Communications, staffing it with respected military bloggers to help with troops’ day-to-day questions.” Yet politics was never far from her heart, and in November 2012, Duckworth defeated Tea Party incumbent Joe Walsh to become the first female disabled veteran to serve in the House of Representatives.

Balanced atop her high-tech prostheses (one colored red, white, and blue; the other, a camouflage green), Duckworth might easily be caricatured as a supercrip, a term disability scholars use to describe inspirational figures who by sheer force of will manage to “triumph” over their disabilities and achieve extraordinary success. Indeed, it’s easy to be awed by her remarkable determination, both before and after her injury. However, as just one of tens of thousands of disabled veterans of Afghanistan and Iraq, Duckworth is far less remarkable than many of us would like to believe. Nearly a century after Great War evangelists predicted the end of war-produced disability, she is a public reminder that the goal of safe, let alone disability-free, combat remains as elusive as ever.

To read more about Paying with Their Bodies, click here.

30. Excerpt: A Significant Life

9780226235677

 

“A Meaningful Life”

An excerpt from A Significant Life: Human Meaning in a Silent Universe by Todd May

***

Let us start with a question. What does it mean to ask about the meaningfulness of life? It seems a simple question, but there are many ways to inflect it. We might ask, “What is the meaning of life?” Or we could ask it in the plural: “What are the meanings of life?” If we put the question either of these ways, we seem to be asking for a something or somethings, a what that gives a human life its meaningfulness. The universe is thought or hoped to contain something—a meaning—that is the point of our being alive. If the universe contains a meaning, then the task for us becomes one of discovery. It is built into the universe, part of its structure. In the image that some philosophers like to use, it is part of the “furniture” of the universe.

When we say that the meaning of life is independent of us—that is, independent of what any of us happens to believe about it—we do not need to believe that there would be a meaning to our lives even if none of us were around to live it. We only need to believe that whatever meaning there is to our lives, it is not in any way up to us what it is. What makes our lives meaningful, whether it arises at the same time as we do or not, does not arise as part of us.

The idea that something exists independent of us, and that it is our task to discover it, is how Camus thought of the meaning of life. If our lives are to be meaningful, it can only be because the universe contains a meaning that we can discern. And it is the failure not only to have discerned it but to have any prospect of discerning it that causes him to despair. The silence of the universe, the silence that affronts human nature’s need for meaning, is that of the universe regarding meaning itself.

The universe, after all, is not silent about everything. It has yielded many of its workings to our inquiry. In many ways, the universe seems loquacious, and perhaps increasingly so. There are scientists who believe that physics may be on the cusp of articulating a unified theory of the universe. This unified theory would give us a complete account of its structure. But nowhere in this theory is there glimpsed a meaning that would satisfy our need for one. This is because either such a meaning does not exist or, if it does, it eludes our ability to recognize it.

The idea that the universe is meaningful precisely because it contains a meaning independent of us is not foreign to the history of philosophy. It is also not foreign to our own more everyday way of thinking. It has a long history, a history as long as the history of philosophy itself, and indeed probably longer. One form this way of thinking has taken is that of the ancient philosopher Aristotle.

For my own part, I long detested Aristotle’s thought, what little I knew of it. For me, Aristotle was just a set of sometimes disjointed writings that I somehow had to get through in order to pass my qualifying exams in graduate school. It wasn’t until a number of years into my teaching career that a student persuaded me to read him again. In particular, he insisted, the Nicomachean Ethics would speak to me. I doubted this, but I respected the student, so one semester I decided to incorporate a large part of the Ethics into a course I was teaching on moral theory. Teaching a philosopher is often a way to develop sympathy for him or her. It forces one to take up the thinker’s perspective. Before embarking on the course, I recalled the words of the great historian of science Thomas Kuhn, who once said that he came to realize that he did not understand a thinker until he could see the world through that thinker’s eyes. In fact, he said that he realized this after reading Aristotle’s Physics. I figured that if anything would do the trick with Aristotle, teaching his Ethics would be it.

It did the trick.

Not only do I find myself teaching the Ethics on a regular basis. Once, in a moment of hubris, I signed up to teach a senior-level seminar on Aristotle’s general philosophy. In doing so, I told my students that I would try to defend every aspect of his thought, even the most obsolete aspects of his physics and biology. This forced me and the students to take his thought seriously as a synoptic vision of human life and the universe in which it unfolds.

Aristotle’s ethics, his view of a human life as a trajectory arcing from birth to death and his attempt to comprehend what the trajectory of a good human life would be, has left its mark on my own view of meaningfulness. His attempt to bring together the various elements of a life—reason, desire, the need for food and shelter—into a coherent whole displays a wisdom rarely found even among the most enlightened minds in the history of philosophy. It stands out particularly against the background of more recent developments in philosophy, which often concern themselves less with wisdom and more with specialized problems and the interpretations of other thinkers.

Aristotle talks not in terms of meaning, but of the good. So the ethical question for Aristotle is, What is the good aimed at by human beings? Or, to put it in more Aristotelean terms, What is the human telos? It is, in the Greek term, eudaemonia. Eudaemonia literally means “good” (eu) “spirit” (daemon). The term is commonly translated as “happiness.” However, happiness as we use the word does not seem to capture much of what Aristotle portrays as a good human life. For Aristotle, eudaemonia is a way of living, a way of carrying out the trajectory of one’s life. A more recent and perhaps better translation is “flourishing.” Flourishing may seem a bit more technical than happiness, or perhaps a bit more dated, but that is one advantage it possesses. Rather than carrying our own assumptions into the reading of the term, it serves as a cipher. Its meaning can be determined by what Aristotle says about a good life rather than by what we already think about happiness.

Flourishing is the human telos. It is what being human is structured to aim at. Not all humans achieve a flourishing life. In fact, Aristotle thinks that a very flourishing life is rare. It is not difficult to see why. In order to flourish, one must have a reasonably strong mental and physical constitution, be nourished by the right conditions when one is young, be willing to cultivate one’s virtue as one matures, and not face overwhelming tragedy during one’s life. Many of us can attain to some degree of flourishing over the course of our lives, but a truly flourishing life: that is seldom achieved.

What is it to flourish, to trace a path in accordance with the good for human beings? The Nicomachean Ethics is a fairly long book. The English translation runs to several hundred pages. There are discussions of justice, friendship, desire, contemplation, politics, and the soul, all of which figure in detailing the aspects of flourishing. But Aristotle’s general definition of the good for human beings is concise: “The human good turns out to be the soul’s activity that expresses virtue.” The good life, the flourishing life, is an ongoing activity. And that activity expresses the character of the person living it, her virtue.

For Aristotle, the good life is not merely a state. One doesn’t arrive at a good life. The telos of a human life is not an end result, where one becomes something and then spends the rest of one’s life in that condition that one becomes. It is not like nirvana, an exiting of the trials of human existence into a state where they no longer disturb one’s inner calm. It is, instead, active and engaged with the world. It is an ongoing expression of who one is. This does not mean that there is no inner peace. A person whose life is virtuous, Aristotle tells us, experiences more pleasure than a person whose life is not, and is unlikely to be undone by the tribulations of human existence. And a virtuous person, because he has more perspective, will certainly possess an inner calm that is not entirely foreign to the idea of nirvana. However, a good life is not simply the possession of that calm. It is one’s very way of being in the world.

To be virtuous, to have any virtue to express, requires us to mold ourselves in a certain way. It requires us to fashion ourselves into virtuous people. Human beings are structured with the capacity to be virtuous. But most of us never get there. We are lazy, or we do not have the right models to follow, or else we do have the right models to follow but don’t recognize them, or some combination of these failures and perhaps others. A human being, unless she is severely damaged by her genetic constitution or early profound trauma, can become virtuous, whether or not she really does. But to do so takes work, the kind of work that molds one’s character into someone whose behavior consistently expresses virtue. Most of us are only partly up to the task.

What is this virtue that a good life expresses? For Aristotle, virtues are in the plural. Moreover, they come in two types, corresponding to two aspects of the human soul. The human soul has three parts. There is the vegetative or nutritive part: the part that breathes, sleeps, keeps the organism running biologically. Then there is desire. It is directed toward the world, wanting to have or to do or to be certain things. But desire is not blind. It is responsive to reason, which is the third part of the soul. And because desire is responsive to reason, there are virtues associated with desire, just as there are virtues associated with reason. There are virtues of character and virtues of thought.

The vegetative or nutritive part of the soul cannot have its own virtues, because it is immune to reason. Unlike desire, it cannot be controlled or directed or channeled. To be capable of virtue is to be capable of developing it. It is not to be already endowed with it. This requires that we can both recognize the virtue to be developed and develop ourselves in accordance with it. And to do that, we must be able to apply reason. I can apply reason to my desire to vent anger on my child when he has failed to recognize the need to share his toys with his little sister. In fact, I can do more than this. I can notice the anger when it begins to appear in inappropriate situations, reflect on its inappropriateness, lay the anger aside, and eventually mold myself into the kind of person who doesn’t get angry when there is no need for it. With anger I can do this, but not with breathing.

The virtues of character include, among others, sincerity, temperance, courage, good temper, and modesty. For Aristotle, all of these virtues are matters of finding the right mean between extremes. Good temper, for example, is the mean between spiritlessness and irascibility. It is the mean I try to develop when I learn to refrain from getting angry in situations that do not call for it, as with my child. If I never got angry at all, though, that would not display good character any more than a readiness to vent would. There are situations that call for anger: when my child is older and does something knowingly cruel to another, or when my country acts callously toward its most vulnerable citizens. Virtues of character are matters of balance. We reflect on our desires, asking which among them to develop and when. Sometimes we need to learn restraint; sometimes, alternatively, we need to elicit expression. We are all (almost all) born with the ability to do this. What we need are models to show us the way and a willingness to work on ourselves.

Virtues of thought, in contrast to virtues of character, are matters of reason alone: understanding, wisdom, and intelligence, for instance. The goal of virtues of thought is to come to understand ourselves and our world. Like the virtues of character, they are active. And like the development of virtues of character, the development of virtues of thought is not a means to an end. The goal of these virtues is not simply to gain knowledge. It is to remain engaged intellectually with the world. As Aristotle tells us, “Wisdom produces happiness [or flourishing], not in the way that medical science produces health but in the way that health produces health.”

This point is easy to miss in our world. In contrast to when I attended college, many of my students are encouraged to think of their time in a university as nothing more than job training. It is not that previous generations were not encouraged to think in these terms. But there were other terms as well, terms concerning what might, perhaps a bit quaintly, be called “the life of the mind.” In 1998, the New York Times reported that “in the [annual UCLA] survey taken at the start of the fall semester, 74.9 percent of freshmen chose being well off as an essential goal and 40.8 percent chose developing a philosophy. In 1968, the numbers were reversed, with 40.8 percent selecting financial security and 82.5 percent citing the importance of developing a philosophy.” The threat to the humanities at many universities, from foreign languages to history to philosophy, signals a leery or even dismissive attitude toward a view of the university as helping students to “develop a philosophy.” Aristotle insists that a good life is not one where our mental capacities are taken to be means to whatever ends are sought by our desires. It is instead a life in which the exercise of our mental capacities is an end in itself. In fact, although we need not follow him this far, for Aristotle contemplation is the highest good that a life can achieve. It is the good he associates with the gods.

What might a good life look like, a life that Aristotle envisions as the good life for human beings? How might we picture it?

We need not think of someone with almost superhuman capacities. A good person is not someone larger than life. Even less should we think of someone entirely altruistic, who dedicates her life to the good of others. That is a more Christian conception of a good life. It is foreign to Aristotle’s world, where a good life involves a dedication to self-cultivation. Last, we should not light upon famous people as examples of those who lead good lives. It may be that there are good lives among the famous. But a good life is not one that seeks fame, so whether a good person is famous or popular would be irrelevant to her. For this reason, we might expect fewer good lives among the famous, since public recognition often alights upon those who chase after it.

Instead, a good life is likely to be found among one’s peers, but not among many of them. It would be among those who take up their lives seriously as a task, a task of a particular sort. They see themselves as material to be molded, even disciplined. Their discipline is dedicated to making them act and react in the proper ways to the conditions in which they find themselves. This discipline is not blind, however. It is not a military kind of discipline, where the goal is mindless conformity. It is a more reflective discipline, one where an understanding of the world and a desire to act well in it are combined to yield a person that we might call, in the contemporary sense of the word, wise.

It would be a mistake to picture the good life as overly reflective, though, and for two reasons. First, a good life, in keeping with the Greek ideal, requires sound mind and sound body. Cultivation of character is not inconsistent with cultivation of physical health. In fact, if recent studies are to be believed, good physical health contributes to a healthy mind. It is, of course, not the sole contributor. We cannot assume that because someone is athletic, he is a paragon of good character. On the contrary, the list of examples that would tell against this assumption would make for long and depressing reading. There is a mean to athleticism as there is a mean to the virtues. But the person lost to reflection, the person who mistakes himself for an ethereal substance or sees his body merely as an encumbrance to thought, is not leading a good human life. Even if, for Aristotle, contemplation is the highest good, it can only be sustained over a long period by the gods. The good human life is an embodied one.

Second, if a person cultivates herself rightly, then over time there should be less of a need for continued discipline. A good person will learn to act well automatically. It will become part of her nature. Someone who is flourishing, confronted with a choice between helping others where it would benefit herself and helping them in spite of the lack of benefit, would not even take the possibility of benefit as a relevant factor in the situation. It would remain in the background, never rising to the level of a consideration worth taking into account. The fact that she would benefit just wouldn’t matter.

It is not surprising, then, that for Aristotle the person who does not think of acting poorly, for whom it is not even a possibility, is leading a better life than someone who is tempted to evil but struggles, even successfully, with herself to overcome it. The latter person has not cultivated herself in the right way. She may be strong in battle but she is weak in character. This is one of the reasons Aristotle says that a good life is more pleasurable to the one living it than a bad one. Someone who has become virtuous is at peace with herself. She knows who she is and what she must do and does not wrestle with herself to do it. Rather, she takes pleasure in not having to wrestle with herself.

It might be thought that the good life would be solitary or overly self-involved. But for Aristotle this is not so. In fact, what he calls true friendship is possible only among the virtuous. The weak will always want something from a friend: encouragement, support, entertainment, flattery, a sense of one’s own significance. It is only when these needs are left behind that one can care for another for the sake of that other. True friendship and the companionship that comes with it are not the offspring of need; they are the progeny of strength. They arise when the question between friends is not what each can receive from the other but what each can offer.

The flourishing life depicted by Aristotle is certainly an attractive one. It is attractive both inside and out, to oneself and to others. To be in such control of one’s life and to have such a sense of direction must be a rewarding experience to the person living it. As Aristotle says, it is the most pleasurable life. And from the other side, there is much good that such a life brings to others. This good is given freely, as an excess or overflow of one’s own resources, rather than as an investment in future gain. It is a life that is lived well and does good.

But is it a meaningful life?

In order to answer that question, we must know something about what makes a life meaningful. If we were to ask Aristotle whether this life is meaningful, he would undoubtedly answer yes. The reason for this returns us to the framework of his thought. Everything has its good, its telos. To live according to one’s telos is to be who or what one should be. It is to find one’s place in the universe. For Aristotle, in contrast to Camus, the universe is not silent. It is capable of telling us our role and place. Or better, since the universe does not actually tell it to us, whisper it in our ear as it were, it allows us to find it. What we need to do is reflect upon the universe and upon human beings, and to notice the important facts about our human nature and abilities. Once we know these, we can figure out what the universe intends for us. That is what the Nicomachean Ethics does. When Camus seeks in vain for meaningfulness, he is seeking precisely what Aristotle thinks is always there, inscribed in the nature of things, part of the furniture of the universe.

The problem for us is that we are not Aristotle, or one of his contemporaries. We do not share the framework of his time. The universe is not ordered in such a way that everything has its telos. The cosmos is not for us as rational a place as he thought. Perhaps it can confer meaning on what we do. But even if it can, it will not be by means of allocating to everything its role in a judiciously organized whole.

***

To read more about A Significant Life, click here.

31. Free e-book for April: Hybrid

9780226437132

Just in time for your ur-garden, our free e-book for April is Noel Kingsbury’s Hybrid: The History and Science of Plant Breeding.

***

Disheartened by the shrink-wrapped, Styrofoam-packed state of contemporary supermarket fruits and vegetables, many shoppers hark back to a more innocent time, to visions of succulent red tomatoes plucked straight from the vine, gleaming orange carrots pulled from loamy brown soil, swirling heads of green lettuce basking in the sun.

With Hybrid, Noel Kingsbury reveals that even those imaginary perfect foods are themselves far from anything that could properly be called natural; rather, they represent the end of a millennia-long history of selective breeding and hybridization. Starting his story at the birth of agriculture, Kingsbury traces the history of human attempts to make plants more reliable, productive, and nutritious—a story that owes as much to accident and error as to innovation and experiment. Drawing on historical and scientific accounts, as well as a rich trove of anecdotes, Kingsbury shows how scientists, amateur breeders, and countless anonymous farmers and gardeners slowly caused the evolutionary pressures of nature to be supplanted by those of human needs—and thus led us from sparse wild grasses to succulent corn cobs, and from mealy, white wild carrots to the juicy vegetables we enjoy today. At the same time, Kingsbury reminds us that contemporary controversies over the Green Revolution and genetically modified crops are not new; plant breeding has always had a political dimension.

A powerful reminder of the complicated and ever-evolving relationship between humans and the natural world, Hybrid will give readers a thoughtful new perspective on—and a renewed appreciation of—the cereal crops, vegetables, fruits, and flowers that are central to our way of life.

***
Download your copy of Hybrid here.

32. Excerpt: Southern Provisions

9780226141114

An excerpt from Southern Provisions: The Creation and Revival of a Cuisine by David S. Shields

***

Rebooting a Cuisine

“I want to bring back Carolina Gold rice. I want there to be authentic Lowcountry cuisine again. Not the local branch of southern cooking incorporated.” That was Glenn Roberts in 2003 during the waning hours of a conference in Charleston exploring “The Cuisines of the Lowcountry and the Caribbean.”

When Jeffrey Pilcher, Nathalie Dupree, Marion Sullivan, Robert Lukey, and I brainstormed this meeting into shape over 2002, we paid scant attention to the word cuisine. I’m sure we all thought that it meant something like “a repertoire of refined dishes that inspired respect among the broad public interested in food.” We probably chose “cuisines” rather than “foodways” or “cookery” for the title because its associations with artistry would give it more splendor in the eyes of the two institutions—the College of Charleston and Johnson & Wales University—footing the administrative costs of the event. Our foremost concern was to bring three communities of people into conversation: culinary historians, chefs, and provisioners (i.e., farmers and fishermen) who produced the food cooked along the southern Atlantic coast and in the West Indies. Theorizing cuisine operated as a pretext.

Glenn Roberts numbered among the producers. The CEO of Anson Mills, he presided over the American company most deeply involved with growing, processing, and selling landrace grains to chefs. I knew him only by reputation. He grew and milled the most ancient and storied grains on the planet—antique strains of wheat, oats, spelt, rye, barley, farro, and corn—so that culinary professionals could make use of the deepest traditional flavor chords in cookery: porridges, breads, and alcoholic beverages. Given Roberts’s fascination with grains, expanding the scope of cultivars to include Carolina’s famous rice showed intellectual consistency. Yet I had always pegged him as a preservationist rather than a restorationist. He asked me, point-blank, whether I wished to participate in the effort to restore authentic Lowcountry cuisine.

Roberts pronounced cuisine with a peculiar inflection, suggesting that it was something that was and could be but that in 2003 did not exist in this part of the South. I knew in a crude way what he meant. Rice had been the glory of the southern coastal table, yet rice had not been commercially cultivated in the region since a hurricane breached the dykes and salted the soil of Carolina’s last commercial plantation in 1911. (Isolated planters on the Combahee River kept local stocks going until the Great Depression, and several families grew it for personal use until World War II, yet Carolina Gold rice disappeared from local grocers’ shelves in 1912.)

When Louisa Stoney and a network of Charleston’s grandes dames gathered their Carolina Rice Cook Book in 1901, the vast majority of ingredients were locally sourced. When John Martin Taylor compiled his Hoppin’ John’s Lowcountry Cooking in 1992, the local unavailability of traditional ingredients and a forgetfulness about the region’s foodways gave the volume a shock value, recalling the greatness of a tradition while alerting readers to its tenuous hold on the eating habits of the people.

Glenn Roberts had grown up tasting the remnants of the rice kitchen, his mother having mastered in her girlhood the art of Geechee black skillet cooking. In his younger days, Roberts worked on oyster boats, labored in fields, and cooked in Charleston restaurants, so when he turned to growing grain in the 1990s, he had a peculiar perspective on what he wished for: he knew he wanted to taste the terroir of the Lowcountry in the food. Because conventional agriculture had saturated the fields of coastal Carolina with pesticides, herbicides, and chemical fertilizers, he knew he had to restore the soil as well as restore Carolina Gold, and other crops, into cultivation.

I told Roberts that I would help, blurting the promise before understanding the dimensions of what he proposed. Having witnessed the resurgence in Creole cooking in New Orleans and the efflorescence of Cajun cooking in the 1980s, and having read John Folse’s pioneering histories of Louisiana’s culinary traditions, I entertained romantic visions of lost foodways being restored and local communities being revitalized. My default opinions resembled those of an increasing body of persons, that fast food was aesthetically impoverished, that grocery preparations (snacks, cereals, and spreads) had sugared and salted themselves to a brutal lowest common denominator of taste, and that industrial agriculture was ensuring indifferent produce by masking local qualities of soil with chemical supplementations. When I said “yes,” I didn’t realize that good intentions are a kind of stupidity in the absence of an attuned intuition of the problems at hand. When Roberts asked whether I would like to restore a cuisine, my thoughts gravitated toward the payoffs on the consumption end of things: no insta-grits made of GMO corn in my shrimp and grits; no farm-raised South American tiger shrimp. In short, something we all knew around here would be improved.

It never occurred to me that the losses in Lowcountry food had been so great that we all don’t know jack about the splendor that was, even with the aid of historical savants such as “Hoppin’ John” Taylor. Nor did I realize that traditional cuisines cannot be understood simply by reading old cookbooks; you can’t simply re-create recipes and—voilà! Roberts, being a grower and miller, had fronted the problem: cuisines had to be understood from the production side, from the farming, not just the cooking or eating. If the ingredients are mediocre, there will be no revelation on the tongue. There is only one pathway to understanding how the old planters created rice that excited the gastronomes of Paris—the path leading into the dustiest, least-used stacks in the archive, those holding century-and-a-half-old agricultural journals, the most neglected body of early American writings.

In retrospect, I understand why Roberts approached me and not some chef with a penchant for antiquarian study or some champion of southern cooking. While I was interested in culinary history, it was not my interest but my method that drew Roberts. He must’ve known at the time that I create histories of subjects that have not been explored; that I write “total histories” using only primary sources, finding, reading, and analyzing every extant source of information. He needed someone who could navigate the dusty archive of American farming, a scholar who could reconstruct how cuisine came to be from the ground up. He found me in 2003.

At first, questions tugged in too many directions. When renovating a cuisine, what is it, exactly, that is being restored? An aesthetic of plant breeding? A farming system? A set of kitchen practices? A gastronomic philosophy? We decided not to exclude questions at the outset, but to pursue anything that might serve the goals of bringing back soil, restoring cultivars, and renovating traditional modes of food processing. The understandings being sought had to speak to a practice of growing and kitchen creation. We should not, we all agreed, approach cuisine as an ideal, a theoretical construction, or a utopian possibility.

Our starting point was a working definition of that word I had used so inattentively in the title of the conference: cuisine. What is a cuisine? How does it differ from diet, cookery, or food? Some traditions of reflection on these questions were helpful. Jean-François Revel’s insistence in Culture and Cuisine that cuisines are regional, not national, because of the enduring distinctiveness of local ingredients, meshed with the agricultural preoccupations of our project. Sidney Mintz usefully observed that a population “eats that cuisine with sufficient frequency to consider themselves experts on it. They all believe, and care that they believe, that they know what it consists of, how it is made, and how it should taste. In short, a genuine cuisine has common social roots.” The important point here is consciousness. Cuisine becomes a signature of community and, as such, becomes a source of pride, a focus of debate, and a means of projecting an identity in other places to other people.

There is, of course, a commercial dimension to this. If a locale becomes famous for its butter (as northern New York did in the nineteenth century) or cod (as New England did in the eighteenth century), a premium is paid in the market for those items from those places. The self-consciousness about ingredients gives rise to an artistry in their handling, a sense of tact from long experience of taste, and a desire among both household and professional cooks to satisfy the popular demand for dishes by improving their taste and harmonizing their accompaniments at the table.

One hallmark of the maturity of a locale’s culinary artistry is its discretion when incorporating non-local ingredients with the products of a region’s field, forest, and waters. Towns and cities with their markets and groceries invariably served as places where the melding of the world’s commodities with a region’s produce took place. Cuisines have two faces: a cosmopolitan face, prepared by professional cooks; and a common face, prepared by household cooks. In the modern world, a cuisine is at least bimodal in constitution, with an urbane style and a country vernacular style. At times, these stylistic differences become so pronounced that they describe two distinct foodways—the difference between Creole and Cajun food and their disparate histories, for example. More frequently, an urban center creates its style by elaborating the bounty of the surrounding countryside—the case of Baltimore and the Tidewater comes to mind.

With a picture of cuisine in hand, Roberts and I debated how to proceed in our understanding. In 2004 the Carolina Gold Rice Foundation was formed with the express purpose of advancing the cultivation of landrace grains and insuring the repatriation of Carolina Gold. Dr. Merle Shepard of Clemson University (head of the Clemson Coastal Experimental Station at Charleston), Dr. Richard Schulze (who planted the first late twentieth-century crops of Carolina Gold on his wetlands near Savannah), Campbell Coxe (the most experienced commercial rice farmer in the Carolinas), Max E. Hill (historian and planter), and Mack Rhodes and Charles Duell (whose Middleton Place showcased the historical importance of rice on the Lowcountry landscape) formed the original nucleus of the enterprise.

It took two and a half years before we knew enough to reformulate our concept of cuisine and historically contextualize the Carolina Rice Kitchen well enough to map our starting point for the work of replenishment—a reboot of Lowcountry cuisine. The key insights were as follows: The enduring distinctiveness of local ingredients arose from very distinct sets of historical circumstances and a confluence of English, French Huguenot, West African, and Native American foodways. What is grown where, when, and for what purpose occurred for very particular reasons. A soil crisis in the early nineteenth century particularly shaped the Lowcountry cuisine that would come, distinguishing it from food produced and prepared elsewhere.

The landraces of rice, wheat, oats, rye, and corn that were brought into agriculture in the coastal Southeast were, during the eighteenth century, planted as cash crops, those same fields being replanted season after season, refreshed only with manuring until the early nineteenth century. Then the boom in long staple Sea Island cotton, a very “exhausting” plant, pushed Lowcountry soil into crisis. (A similar crisis, related to tobacco culture and to soil erosion caused by faulty plowing methods, afflicted Maryland, Virginia, and North Carolina.) The soil crisis led to the depopulation of agricultural lands as enterprising sons went westward seeking newly cleared land, causing a decline in production, followed by rising farm debt and social distress. The South began to echo with lamentations and warnings proclaimed by a generation of agrarian prophets—John Taylor of Caroline County in Virginia, George W. Jeffreys of North Carolina, Nicholas Herbemont of South Carolina, and Thomas Spalding of Georgia. Their message: Unless the soil is saved; unless crop rotations that build nutrition in soil be instituted; unless agriculture be diversified—then the long-cultivated portions of the South will become a wasteland. In response to the crisis in the 1820s, planters formed associations; they published agricultural journals to exchange information; they read; they planted new crops and employed new techniques of plowing and tilling; they rotated, intercropped, and fallowed fields. The age of experiment began in American agriculture with a vengeance.

The Southern Agriculturist magazine (founded 1828) operated as the engine of changes in the Lowcountry. In its pages, a host of planter-contributors published rotations they had developed for rice, theories of geoponics (soil nourishment), alternatives to monoculture, and descriptions of the world of horticultural options. Just as Judge Jesse Buel in Albany, New York, systematized the northern dairy farm into a self-reliant entity with livestock, pastures, fields, orchard, garden, and dairy interacting for optimum benefit, southern experimentalists conceived of the model plantation. A generation of literate rice planters—Robert F. W. Allston, J. Bryan, Calvin Emmons, James Ferguson, William Hunter, Roswell King, Charles Munnerlyn, Thomas Pinckney, and Hugh Rose—contributed to the conversation, overseen by William Washington, chair of the Committee on Experiments of the South Carolina Agricultural Society. Regularizing the crop rotations, diversifying cultivars, and rationalizing plantation operations gave rise to the distinctive set of ingredients that coalesced into what came to be called the Carolina Rice Kitchen, the cuisine of the Lowcountry.

Now, in order to reconstruct the food production of the Lowcountry, one needs a picture of how the plantations and farms worked internally with respect to local markets, in connection with regional markets, and in terms of commodity trade. One has to know how the field crops, kitchen garden, flower and herb garden, livestock pen, dairy, and kitchen cooperated. Within the matrix of uses, any plant or animal that could be employed in multiple ways would be more widely raised in a locality and more often cycled into cultivation. The sweet potato, for instance, performed many tasks on the plantation: It served as winter feed for livestock, its leaves as fodder; it formed one of the staple foods for slaves; it sold well as a local-market commodity for the home table; and its allelopathic (growth-inhibiting) chemistry made it useful in weed suppression. Our first understandings of locality came by tracing the multiple transits of individual plants through farms, markets, kitchens, and seed brokerages.

After the 1840s, when experiments stabilized into conventions on Lowcountry plantations, certain items became fixtures in the fields. Besides the sweet potato, one found benne (low-oil West African sesame), corn, colewort/kale/collards, field peas, peanuts, and, late in the 1850s, sorghum. Each one of these plant types would undergo intensive breeding trials, creating new varieties that (a) did more good for the soil and welfare of the rotation’s other crops; (b) attracted more purchasers at the market; (c) tasted better to the breeder or his livestock; (d) grew more productively than other varieties; and (e) proved more resistant to drought, disease, and infestation than other varieties.

From 1800 to the Civil War, the number of vegetables, the varieties of a given vegetable, the number of fruit trees, the number of ornamental flowers, and the numbers of cattle, pig, sheep, goat, and fowl breeds all multiplied prodigiously in the United States, in general, and the Lowcountry, in particular. The seedsman, the orchardist, the livestock breeder, the horticulturist—experimentalists who maintained model farms, nurseries, and breeding herds—became fixtures of the agricultural scene and drove innovation. One such figure was J. V. Jones of Burke County, Georgia, a breeder of field peas in the 1840s and ’50s. In the colonial era, field peas (cowpeas) grew in the garden patches of African slaves, along with okra, benne, watermelon, and guinea squash. Like those other West African plants, their cultivation was taken up by white planters. At first, they grew field peas as fodder for livestock because the plant inspired great desire among hogs, cattle, and horses. (Hence the popular name cowpea.) Early in the nineteenth century, growers noticed that it improved soils strained by “exhausting plants.” With applications as a green manure, a table pea, and livestock feed, the field pea inspired experiments in breeding with the ends of making it less chalky tasting, more productive, and less prone to mildew when being dried to pea hay. Jones reported on his trials. He grew every sort of pea he could obtain, crossing varieties in the hopes of breeding a pea with superior traits.

  1. Blue Pea, hardy and prolific. A crop of this pea can be matured in less than 60 days from date of planting the seed. Valuable.
  2. Lady, matures with No. 1. Not so prolific and hardy. A delicious table pea.
  3. Rice, most valuable table variety known, and should be grown universally wherever the pea can make a habitation.
  4. Relief, another valuable table kind, with brown pods.
  5. Flint Crowder, very profitable.
  6. Flesh, very profitable.
  7. Sugar, very profitable.
  8. Grey, very profitable. More so than 5, 6, 7. [Tory Pea]
  9. Early Spotted, brown hulls or pods.
  10. Early Locust, brown hulls, valuable.
  11. Late Locust, purple hulls, not profitable.
  12. Black Eyes, valuable for stock.
  13. Early Black Spotted, matures with nos. 1, 2, and 3.
  14. Goat, so called, I presume, from its spots. Very valuable, and a hard kind to shell.
  15. Small Black, very valuable, lies on the field all winter with the power of reproduction.
  16. Large Black Crowder, the largest pea known, and produces great and luxuriant vines. A splendid variety.
  17. Brown Spotted, equal to nos. 6, 7, 8 and 14.
  18. Claret Spotted, equal to nos. 6, 7, 8 and 14.
  19. Large Spotted, equal to nos. 6, 7, 8 and 14.
  20. Jones Little Claret Crowder. It is my opinion a greater quantity in pounds and bushels can be grown per acre of this pea, than any other grain with the knowledge of man. Matures with nos. 1, 2, 3, 9 and 13, and one of the most valuable.
  21. Jones Black Hull, prolific and profitable.
  22. Jones Yellow Hay, valuable for hay only.
  23. Jones no. 1, new and very valuable; originated in the last 2 years.
  24. Chickasaw, its value is as yet unknown. Ignorance has abused it.
  25. Shinney or Java, this is the Prince of Peas.

The list dramatizes the complex of qualities that bear on the judgments of plant breeders—flavor, profitability, feed potential, processability, ability to self-seed, productivity, and utility as hay. And it suggests the genius of agriculture in the age of experiment—the creation of a myriad of tastes and uses.

At this juncture, we confront a problem of culinary history. If one writes the history of taste as it is usually written, using the cookbook authors and chefs as the spokespersons for developments, one will not register the multiple taste options that pea breeders created. Recipes with gnomic reticence call for field peas (or cowpeas). One would not know, for example, that the Shinney pea, the large white lady pea, or the small white rice pea would be most suitable for this or that dish. It is only in the agricultural literature that we learn that the Sea Island red pea was the traditional pea used in rice stews, or that the red Tory pea with molasses and a ham hock made a dish rivaling Boston baked beans.

Growers drove taste innovation in American grains, legumes, and vegetables during the age of experiment. And their views about texture, quality, and application were expressed in seed catalogs, agricultural journals, and horticultural handbooks. If one wishes to understand what was distinctive about regional cookery in the United States, the cookbook supplies but a partial apprehension at best. New England’s plenitude of squashes, to take another example, is best comprehended by reading James J. H. Gregory’s Squashes: How to Grow Them (1867), not Mrs. N. Orr’s De Witt’s Connecticut Cook Book, and Housekeeper’s Assistant (1871). In the pages of the 1869 annual report of the Massachusetts Board of Agriculture, we encounter the expert observation, “As a general rule, the Turban and Hubbard are too grainy in texture to enter the structure of that grand Yankee luxury, a squash pie. For this the Marrow [autumnal marrow squash] excels, and this, I hold, is now the proper sphere of this squash; it is now a pie squash.” No cookbook contains so trenchant an assessment, and when the marrow squash does receive mention, it is suggested only as a milder-flavored alternative for the pumpkin pie.

Wendell Berry’s maxim that “eating is an agricultural act” finds support in nineteenth-century agricultural letters. The aesthetics of planting, breeding, and eating formed a whole sense of the ends of agriculture. No cookbook would tell you why a farmer chose a clay pea to intercrop with white flint corn, or a lady pea, or a black Crowder, but a reader of the agricultural press would know that the clay pea would be plowed under with the corn to fertilize a field (a practice on some rice fields every fourth year), that the lady pea would be harvested for human consumption, and that the black Crowder would be cut for cattle feed. Only by reading a pea savant like J. V. Jones would one know that a black-eyed pea was regarded as “valuable for stock” but too common tasting to recommend it for the supper table.

When the question that guides one’s reading is which pea or peas should be planted today to build the nitrogen level of the soil and complement the grains and vegetables of Lowcountry cuisines, the multiplicity of varieties suggests an answer. That J. V. Jones grew at least four of his own creations, as well as twenty-one other reputable types, indicates that one should grow several sorts of field peas, with each sort targeted to a desired end. The instincts of southern seed savers such as Dr. David Bradshaw, Bill Best, and John Coykendall were correct—to preserve the richness of southern pea culture, one had to keep multiple strains of cowpea viable. Glenn Roberts and the Carolina Gold Rice Foundation have concentrated on two categories of peas—those favored in rice dishes and those known for soil replenishment. The culinary peas are the Sea Island red pea, known for traditional dishes such as reezy peezy, red pea soup, and red pea gravy; and the rice pea, cooked as an edible pod pea, for most hoppin’ John recipes and for the most refined version of field peas with butter. For soil building, iron and clay peas have been a mainstay of warm-zone agriculture since the second half of the nineteenth century.

It should be clear by this juncture that this inquiry differs from the projects most frequently encountered in food history. Here, the value of a cultivar or dish does not reside in its being a heritage marker, a survival from an originating culture previous to its uses in southern planting and cooking. The Native American origins of a Chickasaw plum, the African origins of okra, the Swedish origins of the rutabaga don’t much matter for our purposes. This is not to discount the worth of the sort of etiological food genealogies that Gary Nabhan performs with the foods of Native peoples, that Karen Hess performed with the cooking of Jewish conversos, or that Jessica Harris and others perform in their explorations of the food of the African diaspora, but the hallmark of the experimental age was change in what was grown—importation, alteration, ramification, improvement, and repurposing. The parched and boiled peanuts/pindars of West Africa were used for oil production and peanut butter. Sorghum, or imphee grass, employed in beer brewing and making flat breads in West Africa and Natal became in the hands of American experimentalists a sugar-producing plant. That said, the expropriations and experimental transformations did not entirely supplant traditional uses. The work of agronomist George Washington Carver at the Tuskegee Agricultural Experiment Station commands particular notice because it combines its novel recommendations for industrial and commercial uses of plants as lubricants, blacking, and toothpaste, with a thoroughgoing recovery of the repertoire of Deep South African American sweet potato, cowpea, and peanut cookery in an effort to present the maximum utility of the ingredients.

While part of this study does depend on the work that Joyce E. Chaplin and Max Edelson have published on the engagement of southern planters with science, it departs from the literature concerned with agricultural reform in the South. Because this exploration proceeds from the factum brutum of an achieved regional cuisine produced as the result of agricultural innovations, market evolutions, and kitchen creativity, it stands somewhat at odds with that literature, which argues the ineffectuality of agricultural reform. Works in this tradition—Charles G. Steffen’s “In Search of the Good Overseer” or William M. Mathew’s Edmund Ruffin and the Crisis of Slavery in the Old South—argue that what passed for innovation in farming was a charade, and that soil restoration and crop diversification were fitful at best. When a forkful of hominy made from the white flint corn perfected in the 1830s on the Sea Islands melts on one’s tongue, there is little doubting that something splendid has been achieved.

The sorts of experiments that produced white flint corn, the rice pea, and the long-grain form of Carolina Gold rice did not cease with the Civil War. Indeed, with the armistice, the scope and intensity of experimentation increased as the economies of the coast rearranged from staple production to truck farming. The reliance on agricultural improvement would culminate in the formation of the network of agricultural experimental stations in the wake of the Hatch Act of 1887. One finding of our research has been that the fullness of Lowcountry agriculture and the efflorescence of Lowcountry cuisine came about during the Reconstruction era, and its heyday continued into the second decade of the twentieth century.

The Lowcountry was in no way exceptional in its embrace of experiments and improvement or insular in its view of what should be grown. In the 1830s, when Carolina horticulturists read about the success that northern growers had with Russian strains of rhubarb, several persons attempted with modest success to grow it in kitchen gardens. Readers of Alexander von Humboldt’s accounts of the commodities of South America experimented with Peruvian quinoa in grain rotations. Because agricultural letters and print mediated the conversations of the experimentalists, and because regional journals reprinted extensively from other journals from other places, a curiosity about the best variety of vegetables, fruits, and berries grown anywhere regularly led many to secure seed from northern brokers (only the Landreth Seed Company of Pennsylvania maintained staff in the Lowcountry), or seedsmen in England, France, and Germany. Planters regularly sought new sweet potato varieties from Central and South America, new citrus fruit from Asia, and melons wherever they might be had.

Because of the cosmopolitan sourcing of things grown, the idea of a regional agriculture growing organically out of the indigenous productions of a geographically delimited zone becomes questionable. (The case of the harvest of game animals and fish is different.) There is, of course, a kind of provocative poetry in reminding persons, as Gary Nabhan has done, that portions of the Southeast once regarded the American chestnut as a staple, and in food-mapping an area as “Chestnut Nation”; yet such mapping has little resonance for a population that has never tasted an American chestnut in their lifetime. Rather, region makes sense only as a geography mapped by consciousness—by a community’s attestation in naming, argumentation, and sometimes attempts at legal delimitation of a place.

We can see the inflection of territory with consciousness in the history of the name “Lowcountry.” It emerges as “low country” in the work of early nineteenth-century geographers and geologists who were attempting to characterize the topography of the states and territories of the young nation. In 1812 Jedidiah Morse uses “low country” in the American Universal Gazetteer to designate the coastal mainland of North Carolina, South Carolina, and Georgia. Originally, the Sea Islands were viewed as a separate topography. “The sea coast,” he writes, “is bordered with a fine chain of islands, between which and the shore there is a very convenient navigation. The main land is naturally divided into the Lower and Upper country. The low country extends 80 or 100 miles from the coast, and is covered with extensive forests of pitch pine, called pine barrens, interspersed with swamps and marshes of rich soil.” Geologist Elisha Mitchell took up the characterization in his 1828 article, “On the Character and Origin of the Low Country of North Carolina,” defining the region east of the Pee Dee River to the Atlantic coast by a stratigraphy of sand and clay layers as the low country. Within a generation, the designation had entered into the usage of the population as a way of characterizing a distinctive way of growing practiced on coastal lands. Wilmot Gibbs, a wheat farmer in Chester County in the South Carolina midlands, observed in a report to the US Patent Office: “The sweet potatoes do better, much better on sandy soil, and though not to be compared in quantity and quality with the lowcountry sweet potatoes, yet yield a fair crop.” Two words became one word. And when culture—agriculture—inflected the understanding of region, the boundaries of the map altered. The northern boundary of rice growing and the northern range of the cabbage palmetto were just north of Wilmington, North Carolina. The northern bound of USDA Plant Hardiness Zone 8 in the Cape Fear River drainage became the cultural terminus of the Lowcountry. Agriculturally, the farming on the Sea Islands differed little from that on the mainland, so they became assimilated into the cultural Lowcountry. And since the Sea Islands extended to Amelia Island, Florida, the Lowcountry extended into east Florida. What remained indistinct and subject to debate was the interior bound of the Lowcountry. Was the St. Johns River region in Florida assimilated into it, or not? Did it end where tidal flow became negligible upriver on the major coastal estuaries? Perceptual regions that do not evolve into legislated territories, as the French wine regions did, should be treated with a recognition of their mutable shape.

Cuisines are regional to the extent that the ingredients the region supplies to the kitchen are distinctive, not seen as a signature of another place. Consequently, Lowcountry cuisine must be understood comparatively, contrasting its features with those of other perceived styles, such as “southern cooking” or “tidewater cuisine” or “New Orleans Creole cooking” or “American school cooking” or “cosmopolitan hotel gastronomy.” The comparisons will take place, however, acknowledging that all of these styles share a deep grammar. A common store of ancient landrace grains (wheat, spelt, rye, barley, oats, corn, rice, millet, farro), the oil seeds and fruits (sesame, sunflower, rapeseed, linseed, olive), the livestock, the root vegetables, the fruit trees, the garden vegetables, the nuts, the berries, the game, and the fowls—all these supply a broad canvas against which the novel syncretisms and breeders’ creations emerge. It is easy to overstate the peculiarity of a region’s farming or food.

One of the hallmarks of the age of experiment was openness to new plants from other parts of the world. There was nothing of the culinary purism that drove the expulsion of “ignoble grapes” from France in the 1930s. Nor was there the kind of nationalist food security fixation that drives the current Plant Protection and Quarantine (PPQ) protocols of the USDA. In that era, before crop monocultures made vast stretches of American countryside an uninterrupted banquet for viruses, disease organisms, and insect pests, nightmares of continental pestilence did not roil agronomists. The desire to plant a healthier, tastier, more productive sweet potato had planters working their connections in the West Indies and South America for new varieties. Periodically, an imported variety—a cross between old cultivated varieties, a cross between a traditional and an imported variety, or a sport of an old or new variety—proved something so splendid that it became a classic, a brand, a market variety, a seed catalog–illustrated plant. Examples of these include the Carolina African peanut, the Bradford watermelon, the Georgia pumpkin yam, the Hanson lettuce, Sea Island white flint corn, the Virginia peanut, the Carolina Long Gold rice, the Charleston Wakefield cabbage, and the Dancy tangerine. That something from a foreign clime might be acculturated, becoming central to an American regional cuisine, was more usual than not.

With the rise of the commercial seedsmen, naming of vegetable varieties became chaotic. Northern breeders rebranded the popular white-fleshed Hayman sweet potato, first brought from the West Indies into North Carolina in 1854, as the “Southern Queen sweet potato” in the hope of securing the big southern market, or as the “West Indian White.” Whether a seedsman tweaked a strain or not, it appeared in the catalogs as new and improved. Only with the aid of the skeptical field-trial reporters working the experimental stations of the 1890s can one see that the number of horticultural and pomological novelties named as being available for purchase substantially exceeds the number of varieties that actually exist.

Numbers of plant varieties enjoyed sufficient following to resist the yearly tide of “new and improved” alternatives. They survived over decades, supported by devotees or retained by experimental stations and commercial breeders as breeding stock. Of Jones’s list of cowpeas, for instance, the blue, the lady, the rice, the flint Crowder, the claret, the small black, the black-eyed, and Shinney peas still exist in twenty-first-century fields, and two remain in commercial cultivation: the lady and the Crowder.

In order to bring back the surviving old varieties important in traditional Lowcountry cuisine yet no longer commercially farmed, Dr. Merle Shepard, Glenn Roberts, or I sought them in germplasm banks and through the networks of growers and seed savers. Some important items seem irrevocably lost: the Neunan’s strawberry and the Hoffman seedling strawberry, both massively cultivated during the truck-farming era in the decades following the Civil War. The Ravenscroft watermelon has perished. Because of the premium placed on taste in nineteenth-century plant and fruit breeding, we believed the repatriation of old strains to be important. Yet we by no means believed that skill at plant breeding suddenly ceased in 1900. Rather, the aesthetics of breeding changed so that cold tolerance, productivity, quick maturity, disease resistance, transportability, and slow decay often trumped taste in the list of desiderata. The recent revelation that the commercial tomato’s roundness and redness was genetically accomplished at the expense of certain of the alleles governing taste quality is only the most conspicuous instance of the subordination of flavor in recent breeding aesthetics.

We have reversed the priority—asserting the primacy of taste over other qualities in a plant. We cherish plants that in the eyes of industrial farmers may seem inefficient, underproductive, or vulnerable to disease and depredation because they offer more to the kitchen, to the tongue, and to the imagination. The simple fact that a plant is heirloom does not make it pertinent for our purposes. It had to have had traction agriculturally and culinarily. It had to retain its vaunted flavor. Glenn Roberts sought with particular avidity the old landrace grains because their flavors provided the fundamental notes comprising the harmonics of Western food, both bread and alcohol. The more ancient, the better. I sought benne, peanuts, sieva beans, asparagus, peppers, squashes, and root vegetables. Our conviction has been—and is—that the quality of the ingredients will determine the vitality of Lowcountry cuisine.

While the repertoire of dishes created in Lowcountry cuisine interested us greatly, and while we studied the half-dozen nineteenth-century cookbooks, the several dozen manuscript recipe collections, and the newspaper recipe literature with the greatest attention, we realized that our project was not the culinary equivalent of Civil War reenactment, a kind of temporary evacuation of the present for some vision of the past. Rather, we wanted to revive the ingredients that had made that food so memorable and make the tastes available again, so the best cooks of this moment could combine them to invoke or invent a cooking rich with this place. Roberts was too marked by his Californian youth, I by formative years in Japan, Shepard by his long engagement with Asian food culture, and Campbell Coxe by his late twentieth-century business mentality, to yearn for some antebellum never-never land of big house banqueting. What did move us, however, was the taste of rice. We all could savor the faint hazelnut delicacy, the luxurious melting wholesomeness of Carolina Gold. And we all wondered at those tales of Charleston hotel chefs of the Reconstruction era who could identify which stretch of which river had nourished a plate of gold rice. They could, they claimed, taste the water and the soil in the rice.

The quality of ingredients depends upon the quality of the soil, and this book is not, to my regret, a recovery of the lost art of soil building. Though we have unearthed, with the aid of Dr. Stephen Spratt, a substantial body of information about crop rotations and their effects, and though certain of these traditional rotations have been followed in growing rice, benne, corn, beans, wheat, oats, et cetera, we can’t point to a particular method of treating soil that we could attest as having been sufficient and sustainable in its fertility in all cases. While individual planters hit upon soil-building solutions for their complex of holdings, particularly in the Sea Islands and in the Pee Dee River basin, these were often vast operations employing swamp muck, rather than dung, as a manure. Even planter-savants, such as John Couper and Thomas Spalding, felt they had not optimized the growing potential of their lands. Planters who farmed land that had suffered fertility decline and were bringing it back to viability often felt dissatisfaction because its productivity could not match the newly cleared lands in Alabama, Louisiana, Texas, and Mississippi. Lowcountry planters were undersold by producers to the west. Hence, coastal planters heeded the promises of the great advocates of manure—Edmund Ruffin’s call to crush fossilized limestone and spread calcareous manures on fields, or Alexander von Humboldt’s scientific case for Peruvian guano—as the answer to amplifying yield per acre. Those who could afford it became guano addicts. Slowly, southern planters became habituated to the idea that, in order to yield, a field needed some sort of chemical supplementation. It was then a short step to industrially produced chemical fertilizers.

What we now know to be irrefutably true, after a decade of Glenn Roberts’s field work, is that grain and vegetables grown in soil that has never been subjected to the chemical supplementations of conventional agriculture, or raised in fields cleansed of those chemicals by repeated organic grow-outs, possess greater depth and distinct local inflections of flavor. Tongues taste terroir. This is a truth confirmed by the work of other cuisine restorationists in other areas—I think particularly of Dan Barber’s work at Stone Barns Center in northern New York and John Coykendall’s work in Tennessee.

Our conviction that enhancing the quality of the flavors a region produces is the goal of our agricultural work gives our efforts a clarity of purpose that enables sure decision making at the local level. We realize, of course, the human and animal health benefits of consuming food free of toxins and chemical additives. We know that the preservation of the soil and the treatment of water resources in a non-exploitative way constitute a kind of virtue. But without the aesthetic focus on flavor, the ethical treatment of resources will hardly succeed. When pleasure coincides with virtue, the prospect of an enduring change in the production and treatment of food takes on solidity.

Since its organization a decade ago, the Carolina Gold Rice Foundation has published material on rice culture and the cultivation of landrace grains. By 2010 it became apparent that the information we had gleaned and the practical experience we had gained in plant repatriations had reached a threshold permitting a more public presentation of our historical sense of this regional cuisine, its original conditions of production, and observations on its preparation. After substantial conversation about the shape of this study with Roberts, Shepard, Bernard L. Herman, John T. Edge, Nathalie Dupree, Sean Brock, Linton Hopkins, Jim Kibler, and Marcie Cohen Ferris, I determined that it should not resort to the conventional chronological, academic organization of the subject, nor should it rely on the specialized languages of botany, agronomy, or nutrition. My desire in writing Southern Provisions was to treat the subject so that a reader could trace the connections between plants, plantations, growers, seed brokers, markets, vendors, cooks, and consumers. The focus of attention had to alter, following the transit of food from field to market, from garden to table. The entire landscape of the Lowcountry had to be included, from the Wilmington peanut patches to the truck farms of the Charleston Neck, from the cane fields of the Georgia Sea Islands to the citrus groves of Amelia Island, Florida. For comparison’s sake, there had to be moments when attention turned to food of the South generally, to the West Indies, and to the United States more generally.

In current books charting alternatives to conventional agriculture, there has been a strong and understandable tendency to announce crisis. This was also the common tactic of writers at the beginning of the age of experimentation in the 1810s and ’20s. Yet here, curiosity and pleasure, the quest to understand a rich world of taste, direct our inquiry more than fear and trepidation.

***

To read more about Southern Provisions, click here.

33. Excerpt: The Territories of Science and Religion

9780226184487

Introduction

An excerpt from The Territories of Science and Religion by Peter Harrison

***

The History of “Religion”

In the section of his monumental Summa theologiae that is devoted to a discussion of the virtues of justice and prudence, the thirteenth-century Dominican priest Thomas Aquinas (1225–74) investigates, in his characteristically methodical and insightful way, the nature of religion. Along with North African Church Father Augustine of Hippo (354–430), Aquinas is probably the most influential Christian writer outside of the biblical authors. From the outset it is clear that for Aquinas religion (religio) is a virtue—not, incidentally, one of the preeminent theological virtues, but nonetheless an important moral virtue related to justice. He explains that in its primary sense religio refers to interior acts of devotion and prayer, and that this interior dimension is more important than any outward expressions of this virtue. Aquinas acknowledges that a range of outward behaviors are associated with religio—vows, tithes, offerings, and so on—but he regards these as secondary. As I think is immediately obvious, this notion of religion is rather different from the one with which we are now familiar. There is no sense in which religio refers to systems of propositional beliefs, and no sense of different religions (plural). Between Thomas’s time and our own, religion has been transformed from a human virtue into a generic something, typically constituted by sets of beliefs and practices. It has also become the most common way of characterizing attitudes, beliefs, and practices concerned with the sacred or supernatural.

Aquinas’s understanding of religio was by no means peculiar to him. Before the seventeenth century, the word “religion” and its cognates were used relatively infrequently. Equivalents of the term are virtually nonexistent in the canonical documents of the Western religions—the Hebrew Bible, the New Testament, and the Qur’an. When the term was used in the premodern West, it did not refer to discrete sets of beliefs and practices, but rather to something more like “inner piety,” as we have seen in the case of Aquinas, or “worship.” As a virtue associated with justice, moreover, religio was understood on the Aristotelian model of the virtues as the ideal middle point between two extremes—in this case, irreligion and superstition.

The vocabulary of “true religion” that we encounter in the writings of some of the Church Fathers offers an instructive example. “The true religion” is suggestive of a system of beliefs that is distinguished from other such systems that are false. But careful examination of the content of these expressions reveals that early discussions about true and false religion were typically concerned not with belief, but rather worship and whether or not worship is properly directed. Tertullian (ca. 160–ca. 220) was the first Christian thinker to produce substantial writings in Latin and was also probably the first to use the expression “true religion.” But in describing Christianity as “true religion of the true god,” he is referring to genuine worship directed toward a real (rather than fictitious) God. Another erudite North African Christian writer, Lactantius (ca. 240–ca. 320), gives the first book of his Divine Institutes the title “De Falsa religione.” Again, however, his purpose is not to demonstrate the falsity of pagan beliefs, but to show that “the religious ceremonies of the [pagan] gods are false,” which is just to say that the objects of pagan worship are false gods. His positive project, an account of true religion, was “to teach in what manner or by what sacrifice God must be worshipped.” Such rightly directed worship was for Lactantius “the duty of man, and in that one object the sum of all things and the whole course of a happy life consists.”

Jerome’s choice of religio for his translation of the relatively uncommon Greek threskeia in James 1:27 similarly associates the word with cult and worship. In the English of the King James version the verse is rendered: “Pure and undefiled religion [threskeia] before God the Father is this, To visit the fatherless and widows in their affliction, and to keep himself unspotted from the world.” The import of this passage is that the “religion” of the Christians is a form of worship that consists in charitable acts rather than rituals. Here the contrast is between religion that is “vain” (vana) and that which is “pure and undefiled” (religio munda et inmaculata). In the Middle Ages this came to be regarded as equivalent to a distinction between true and false religion. The twelfth-century Distinctiones Abel of Peter the Chanter (d. 1197), one of the most prominent of the twelfth-century theologians at the University of Paris, makes direct reference to the passage from James, distinguishing religion that is pure and true (munda et vera) from that which is vain and false (vana et falsa). His pupil, the scholastic Radulfus Ardens, also spoke of “true religion” in this context, concluding that it consists in “the fear and love of God, and the keeping of his commandments.” Here again there is no sense of true and false doctrinal content.

Perhaps the most conspicuous use of the expression “true religion” among the Church Fathers came in the title of De vera religione (On True Religion), written by the great doctor of the Latin Church, Augustine of Hippo. In this early work Augustine follows Tertullian and Lactantius in describing true religion as rightly directed worship. As he was to relate in the Retractions: “I argued at great length and in many ways that true religion means the worship of the one true God.” It will come as no surprise that Augustine here suggests that “true religion is found only in the Catholic Church.” But intriguingly when writing the Retractions he was to state that while Christian religion is a form of true religion, it is not to be identified as the true religion. This, he reasoned, was because true religion had existed since the beginning of history and hence before the inception of Christianity. Augustine addressed the issue of true and false religion again in a short work, Six Questions in Answer to the Pagans, written between 406 and 412 and appended to a letter sent to Deogratius, a priest at Carthage. Here he rehearses the familiar stance that true and false religion relates to the object of worship: “What the true religion reprehends in the superstitious practices of the pagans is that sacrifice is offered to false gods and wicked demons.” But again he goes on to explain that diverse cultic forms might all be legitimate expressions of true religion, and that the outward forms of true religion might vary in different times and places: “it makes no difference that people worship with different ceremonies in accord with the different requirements of times and places, if what is worshipped is holy.” A variety of different cultural forms of worship might thus be motivated by a common underlying “religion”: “different rites are celebrated in different peoples bound together by one and the same religion.” If true religion could exist outside the established forms of Catholic worship, conversely, some of those who exhibited the outward forms of Catholic religion might lack “the invisible and spiritual virtue of religion.”

This general understanding of religion as an inner disposition persisted into the Renaissance. The humanist philosopher and Platonist Marsilio Ficino (1433–99) thus writes of “christian religion,” which is evidenced in lives oriented toward truth and goodness. “All religion,” he wrote, in tones reminiscent of Augustine, “has something good in it; as long as it is directed towards God, the creator of all things, it is true Christian religion.” What Ficino seems to have in mind here is the idea that Christian religion is a Christlike piety, with “Christian” referring to the person of Christ, rather than to a system of religion—“the Christian religion.” Augustine’s suggestion that true and false religion might be displayed by Christians was also reprised by the Protestant Reformer Ulrich Zwingli, who wrote in 1525 of “true and false religion as displayed by Christians.”

It is worth mentioning at this point that, unlike English, Latin has no article—no “a” or “the.” Accordingly, when rendering expressions such as “vera religio” or “christiana religio” into English, translators had to decide on the basis of context whether to add an article or not. As we have seen, such decisions can make a crucial difference, for the connotations of “true religion” and “christian religion” are rather different from those of “the true religion” and “the Christian religion.” The former can mean something like “genuine piety” and “Christlike piety” and are thus consistent with the idea of religion as an interior quality. Addition of the definite article, however, is suggestive of a system of belief. The translation history of Protestant Reformer John Calvin’s classic Institutio Christianae Religionis (1536) gives a good indication both of the importance of the definite article and of changing understandings of religion in the seventeenth century. Calvin’s work was intended as a manual for the inculcation of Christian piety, although this fact is disguised by the modern practice of rendering the title in English as The Institutes of the Christian Religion. The title page of the first English edition by Thomas Norton bears the more faithful “The Institution of Christian religion” (1561). The definite article is placed before “Christian” in the 1762 Glasgow edition: “The Institution of the Christian religion.” And the now familiar “Institutes” appears for the first time in John Allen’s 1813 edition: “The Institutes of the Christian religion.” The modern rendering is suggestive of an entity “the Christian religion” that is constituted by its propositional contents—“the institutes.” These connotations were completely absent from the original title. Calvin himself confirms this by declaring in the preface his intention “to furnish a kind of rudiments, by which those who feel some interest in religion might be trained to true godliness.”

With the increasing frequency of the expressions “religion” and “the religions” from the sixteenth century onward we witness the beginning of the objectification of what was once an interior disposition. Whereas for Aquinas it was the “interior” acts of religion that held primacy, the balance now shifted decisively in favor of the exterior. This was a significant new development, the making of religion into a systematic and generic entity. The appearance of this new conception of religion was a precondition for a relationship between science and religion. While the causes of this objectification are various, the Protestant Reformation and the rise of experimental natural philosophy were key factors, as we shall see in chapter 4.

The History of “Science”

It is instructive at this point to return to Thomas Aquinas, because when we consider what he has to say on the notion of science (scientia) we find an intriguing parallel to his remarks on religion. In an extended treatment of the virtues in the Summa theologiae, Aquinas observes that science (scientia) is a habit of mind or an “intellectual virtue.” The parallel with religio, then, lies in the fact that we are now used to thinking of both religion and science as systems of beliefs and practices, rather than conceiving of them primarily as personal qualities. And for us today the question of their relationship is largely determined by their respective doctrinal content and the methods through which that content is arrived at. For Aquinas, however, both religio and scientia were, in the first place, personal attributes.

We are also accustomed to think of virtues as belonging entirely within the sphere of morality. But again, for Aquinas, a virtue is understood more generally as a “habit” that perfects the powers that individuals possess. This conviction—that human beings have natural powers that move them toward particular ends—was related to a general approach associated with the Greek philosopher Aristotle (384–322 BC), who had taught that all natural things are moved by intrinsic tendencies toward certain goals (tele). For Aristotle, this teleological movement was directed to the perfection of the entity, or to the perfection of the species to which it belonged. As it turns out, one of the natural tendencies of human beings was a movement toward knowledge. As Aristotle famously wrote in the opening lines of the Metaphysics, “all men by nature desire to know.” In this scheme of things, our intellectual powers are naturally directed toward the end of knowledge, and they are assisted in their movement toward knowledge by acquired intellectual virtues.

One of the great revolutions of Western thought took place in the twelfth and thirteenth centuries, when much Greek learning, including the work of Aristotle, was rediscovered. Aquinas played a pivotal role in this recovery of ancient wisdom, making Aristotle one of his chief conversation partners. He was by no means a slavish adherent of Aristotelian doctrines, but nonetheless accepted the Greek philosopher’s premise that the intellectual virtues perfect our intellectual powers. Aquinas identified three such virtues—understanding (intellectus), science (scientia), and wisdom (sapientia). Briefly, understanding was to do with grasping first principles, science with the derivation of truths from those first principles, and wisdom with the grasp of the highest causes, including the first cause, God. To make progress in science, then, was not to add to a body of systematic knowledge about the world, but was to become more adept at drawing “scientific” conclusions from general premises. “Science” thus understood was a mental habit that was gradually acquired through the rehearsal of logical demonstrations. In Thomas’s words: “science can increase in itself by addition; thus when anyone learns several conclusions of geometry, the same specific habit of science increases in that man.”

These connotations of scientia were well known in the Renaissance and persisted until at least the end of the seventeenth century. The English physician John Securis wrote in 1566 that “science is a habit” and “a disposition to do any thing confirmed and had by long study, exercise, and use.” Scientia is subsequently defined in Thomas Holyoake’s Dictionary (1676) as, properly speaking, the act of the knower, and, secondarily, the thing known. This entry also stresses the classical and scholastic idea of science as “a habit of knowledge got by demonstration.” French philosopher René Descartes (1596–1650) retained some of these generic, cognitive connotations when he defined scientia as “the skill to solve every problem.”

Yet, according to Aquinas, scientia, like the other intellectual virtues, was not solely concerned with rational and speculative considerations. In a significant departure from Aristotle, who had set out the basic rationale for an ethics based on virtue, Aquinas sought to integrate the intellectual virtues into a framework that included the supernatural virtues (faith, hope, and charity), “the seven gifts of the spirit,” and the nine “fruits of the spirit.” While the various relations are complicated, particularly when beatitudes and vices are added to the equation, the upshot of it all is a considerable overlap of the intellectual and moral spheres. As philosopher Eleonore Stump has written, for Aquinas “all true excellence of intellect—wisdom, understanding and scientia—is possible only in connection with moral excellence as well.” By the same token, on Aquinas’s understanding, moral transgressions will have negative consequences for the capacity of the intellect to render correct judgments: “Carnal vices result in a certain culpable ignorance and mental dullness; and these in turn get in the way of understanding and scientia.” Scientia, then, was not only a personal quality, but also one that had a significant moral component.

The parallels between the virtues of religio and scientia, it must be conceded, are by no means exact. While in the Middle Ages there were no plural religions (or at least no plural religions understood as discrete sets of doctrines), there were undeniably sciences (scientiae), thought of as distinct and systematic bodies of knowledge. The intellectual virtue scientia thus bore a particular relation to formal knowledge. On a strict definition, and following a standard reading of Aristotle’s Posterior Analytics, a body of knowledge was regarded as scientific in the event that it had been arrived at through a process of logical demonstration. But in practice the label “science” was extended to many forms of knowledge. The canonical divisions of knowledge in the Middle Ages—what we now know as the seven “liberal arts” (grammar, logic, rhetoric, arithmetic, astronomy, music, geometry)—were then known as the liberal sciences. The other common way of dividing intellectual territory derived from Aristotle’s classification of theoretical or speculative philosophy. In his discussion of the division and methods of the sciences, Aquinas noted that the standard classification of the seven liberal sciences did not include the Aristotelian disciplines of natural philosophy, mathematics, and theology. Accordingly, he argued that the label “science” should be given to these activities, too. Robert Kilwardby (ca. 1215–79), successively regent at the University of Oxford and archbishop of Canterbury, extended the label even further in his work on the origin of the sciences, identifying forty distinct scientiae.

The English word “science” had similar connotations. As was the case with the Latin scientia, the English term commonly referred to the subjects making up the seven liberal arts. In catalogs of English books published between 1475 and 1700 we encounter the natural and moral sciences, the sciences of physick (medicine), of surgery, of logic and mathematics. Broader applications of the term include accounting, architecture, geography, sailing, surveying, defense, music, and pleading in court. Less familiarly, we also encounter works on the science of angels, the science of flattery, and in one notable instance, the science of drinking, drolly designated by the author the “eighth liberal science.” At nineteenth-century Oxford “science” still referred to elements of the philosophy curriculum. The idiosyncrasies of English usage at the University of Oxford notwithstanding, the now familiar meaning of the English expression dates from the nineteenth century, when “science” began to refer almost exclusively to the natural and physical sciences.

Returning to the comparison with medieval religio, what we can say is that in the Middle Ages both notions have a significant interior dimension, and that what happens in the early modern period is that the balance between the interior and exterior begins to tip in favor of the latter. Over the course of the sixteenth and seventeenth centuries we will witness the beginning of a process in which the idea of religion and science as virtues or habits of mind begins to be overshadowed by the modern, systematic entities “science” and “religion.” In the case of scientia, then, the interior qualities that characterized the intellectual virtue of scientia are transferred to methods and doctrines. The entry for “science” in the 1771 Encyclopaedia Britannica thus reads, in its entirety: “SCIENCE, in philosophy, denotes any doctrine, deduced from self-evident and certain principles, by a regular demonstration.” The logical rigor that had once been primarily a personal characteristic now resides primarily in the corresponding body of knowledge.

The other significant difference between the virtues of religio and scientia lies in the relation of the interior and exterior elements. In the case of religio, the acts of worship are secondary in the sense that they are motivated by an inner piety. In the case of scientia, it is the rehearsal of the processes of demonstration that strengthens the relevant mental habit. Crucially, because the primary goal is the augmentation of mental habits, gained through familiarity with systematic bodies of knowledge (“the sciences”), the emphasis was less on the production of scientific knowledge than on the rehearsal of the scientific knowledge that already existed. Again, as noted earlier, this was because the “growth” of science was understood as taking place within the mind of the individual. In the present, of course, whatever vestiges of the scientific habitus remain in the mind of the modern scientist are directed toward the production of new scientific knowledge. In so far as they exist at all—and for the most part they have been projected outward onto experimental protocols—they are a means and not the end. Overstating the matter somewhat, in the Middle Ages scientific knowledge was an instrument for the inculcation of scientific habits of mind; now scientific habits of mind are cultivated primarily as an instrument for the production of scientific knowledge.

The atrophy of the virtues of scientia and religio, and the increasing emphasis on their exterior manifestations in the sixteenth and seventeenth centuries, will be discussed in more detail in chapter 4. But looking ahead we can say that in the physical realm virtues and powers were removed from natural objects and replaced by a notion of external law. The order of things will now be understood in terms of laws of nature—a conception that makes its first appearance in the seventeenth century—and these laws will take the place of those inherent tendencies within things that strive for their perfection. In the moral sphere, a similar development takes place, and human virtues will be subordinated to an idea of divinely imposed laws—in this instance, moral laws. The virtues—moral and intellectual—will be understood in terms of their capacity to produce the relevant behaviors or bodies of knowledge. What drives both of these shifts is the rejection of an Aristotelian and scholastic teleology, and the subsequent demise of the classical understanding of virtue will underpin the early modern transformation of the ideas of scientia and religio.

Science and Religion?

It should by now be clear that the question of the relationship between science (scientia) and religion (religio) in the Middle Ages was very different from the modern question of the relationship between science and religion. Were the question put to Thomas Aquinas, he might have said something like this: Science is an intellectual habit; religion, like the other virtues, is a moral habit. There would then have been no question of conflict or agreement between science and religion because they were not the kinds of things that admitted those sorts of relations. When the question is posed in our own era, very different answers are forthcoming, for the issue of science and religion is now generally assumed to be about specific knowledge claims or, less often, about the respective processes by which knowledge is generated in these two enterprises. Between Thomas’s time and our own, religio has been transformed from a human virtue into a generic something typically constituted by sets of beliefs and practices. Scientia has followed a similar course, for although it had always referred both to a form of knowledge and a habit of mind, the interior dimension has now almost entirely disappeared. During the sixteenth and seventeenth centuries, both religion and science were literally turned inside out.

Admittedly, there would have been another way of posing this question in the Middle Ages. In focusing on religio and scientia I have considered the two concepts that are the closest linguistically to our modern “religion” and “science.” But there may be other ancient and medieval precedents of our modern notions “religion” and “science” that have less obvious linguistic connections. It might be argued, for example, that two other systematic activities lie more squarely in the genealogical ancestry of our two objects of interest, and they are theology and natural philosophy. A better way to frame the central question, it could then be suggested, would be to inquire about theology (which looks very much like a body of religious knowledge expressed propositionally) and natural philosophy (which was the name given to the systematic study of nature up until the modern period), and their relationship.

There is no doubt that these two notions are directly relevant to our discussion, but I have avoided mention of them up until now, first, because I have not wished to pull apart too many concepts at once and, second, because we will be encountering these two ideas and the question of how they fit into the trajectory of our modern notions of science and religion in subsequent chapters. For now, however, it is worth briefly noting that the term “theology” was not much used by Christian thinkers before the thirteenth century. The word theologia appears for the first time in Plato (ca. 428–348 BC), and it is Aristotle who uses it in a formal sense to refer to the most elevated of the speculative sciences. Partly because of this, for the Church Fathers “theology” was often understood as referring to pagan discourse about the gods. Christian writers were more concerned with the interpretation of scripture than with “theology,” and the expression “sacred doctrine” (sacra doctrina) reflects their understanding of the content of scripture. When the term does come into use in the later Middle Ages, there were two different senses of “theology”—one a speculative science as described by Aristotle, the other the teaching of the Christian scriptures.

Famously, the scholastic philosophers inquired as to whether theology (in the sense of sacra doctrina) was a science. This is not the place for an extended discussion of that commonplace, but the question does suggest one possible relation between science and theology—that theology is a species of the genus “science.” Needless to say, this is almost completely disanalogous to any modern relationship between science and religion as we now understand them. Even so, this question affords us the opportunity to revisit the relationship between virtues and the bodies of knowledge that they were associated with. In so far as theology was regarded as a science, it was understood in light of the virtue of scientia outlined above. In other words, theology was also understood to be, in part, a mental habit. When Aquinas asks whether sacred doctrine is one science, his affirmative answer refers to the fact that there is a single faculty or habit involved. His contemporary, the Franciscan theologian Bonaventure (1221–74), was to say that theological science was a habit that had as its chief end “that we become good.” The “subtle doctor,” John Duns Scotus (ca. 1265–1308), later wrote that the “science” of theology perfects the intellect and promotes the love of God: “The intellect perfected by the habit of theology apprehends God as one who should be loved.” While these three thinkers differed from each other significantly in how they conceptualized the goals of theology, what they shared was a common conviction that theology was, to use a current expression somewhat out of context, habit forming.

As for “natural philosophy” (physica, physiologia), historians of science have argued for some years now that this is the closest ancient and medieval analogue to modern science, although they have become increasingly sensitive to the differences between the two activities. Typically, these differences have been thought to lie in the subject matter of natural philosophy, which traditionally included such topics as God and the soul, but excluded mathematics and natural history. On both counts natural philosophy looks different from modern science. What has been less well understood, however, are the implications of the fact that natural philosophy was an integral part of philosophy. These implications are related to the fact that philosophy, as practiced in the past, was less about affirming certain doctrines or propositions than it was about pursuing a particular kind of life. Thus natural philosophy was thought to serve general philosophical goals that were themselves oriented toward securing the good life. These features of natural philosophy will be discussed in more detail in the chapter that follows. For now, however, my suggestion is that moving our attention to the alternative categories of theology and natural philosophy will not yield a substantially different view of the kinds of historical transitions that I am seeking to elucidate.

To read more about The Territories of Science and Religion, click here.

Add a Comment
34. Facebook’s A Year of Books drafts The Structure of Scientific Revolutions

9780226458120

In his sixth pick for the social network’s online book club (“A Year of Books”), Facebook founder Mark Zuckerberg recently drafted Thomas Kuhn’s The Structure of Scientific Revolutions, a 52-year-old book that is still one of the most often cited academic resources of all time, and one of UCP’s crowning gems of twentieth-century scholarly publishing. Following in the footsteps of Pixar founder Ed Catmull’s Creativity, Inc., Zuckerberg’s previous pick, Structure will be the subject of a Facebook thread with open commenting for the next two weeks, in line with the methodology of “A Year of Books.” If you’re thinking about reading along, the 50th Anniversary edition includes a compelling Introduction by Ian Hacking that situates the book’s legacy, both in terms of its contribution to a scientific vernacular (“paradigm shifting”) and its value as a scholarly publication of mass appeal (“paradigm shifting”).

Or, in Zuckerberg’s own words:

It’s a history of science book that explores the question of whether science and technology make consistent forward progress or whether progress comes in bursts related to other social forces. I tend to think that science is a consistent force for good in the world. I think we’d all be better off if we invested more in science and acted on the results of research. I’m excited to explore this theme further.

And from the Guardian:

“Before Kuhn, the normal view was that science simply needed men of genius (they were always men) to clear away the clouds of superstition, and the truth of nature would be revealed,” [David Papineau, professor of philosophy at King’s College London] said. “Kuhn showed it is much more interesting than that. Scientific research requires a rich network of prior assumptions (Kuhn reshaped the term ‘paradigm’ to stand for these), and changing such assumptions can be traumatic, and is always resisted by established interests (thus the need for scientific ‘revolutions’).”

Kuhn showed, said Papineau, that “scientists are normal humans, with prejudices and personal agendas in their research, and that the path to scientific advances runs through a complex social terrain”.

“We look at science quite differently post-Kuhn,” he added.

To read more about Structure, click here.

To read an excerpt from Ian Hacking’s Introduction to the 50th Anniversary edition, click here.

Add a Comment
35. Excerpt: Who Freed the Slaves?

9780226178202

An excerpt from Who Freed the Slaves?: The Fight over the Thirteenth Amendment 

by Leonard L. Richards

***

Prologue

WEDNESDAY, JUNE 15, 1864

James Ashley never forgot the moment. After hours of debate, Schuyler Colfax, the Speaker of the House of Representatives, had finally gaveled the 159 House members to take their seats and get ready to vote.

Most of the members were waving a fan of some sort, but none of the fans did much good. Heat and humidity had turned the nation’s capital into a sauna. Equally bad was the stench that emanated from Washington’s back alleys, nearby swamps, and the twenty-one hospitals in and about the city, which now housed over twenty thousand wounded and dying soldiers. Worse yet was the news from the front lines. According to some reports, the Union army had lost seven thousand men in less than thirty minutes at Cold Harbor. The commanding general, Ulysses S. Grant, had been deemed a “fumbling butcher.”

Nearly everyone around Ashley was impatient, cranky, and miserable. But Ashley was especially downcast. It was his job to get Senate Joint Resolution Number 16, a constitutional amendment to outlaw slavery in the United States, through the House of Representatives, and he didn’t have the votes.

The need for the amendment was obvious. Of the nation’s four million slaves at the outset of the war, no more than five hundred thousand were now free, and, to his disgust, many white Americans intended to have them reenslaved once the war was over. The Supreme Court, moreover, was still in the hands of Chief Justice Roger B. Taney and other staunch proponents of property rights in slaves and states’ rights. If they ever got the chance, they seemed certain not only to strike down much of Lincoln’s Emancipation Proclamation but also to hold that under the Constitution only the states where slavery existed had the legal power to outlaw it.

Six months earlier, in December 1863, when Ashley and his fellow Republicans had proposed the amendment, he had been more upbeat. He knew that getting the House to abolish slavery, which in his mind was the root cause of the war, was not going to be easy. It required a two-thirds vote. But he had thought that Republicans in both the Senate and the House might somehow muster the necessary two-thirds majority. No longer did they have to worry about the united opposition of fifteen slave states. Eleven of the fifteen were out of the Union, including South Carolina and Mississippi, the two with the highest percentage of slaves, and Virginia, the one with the largest House delegation. In addition, the war was in its thirty-third month. Hundreds of thousands of Northern men had been killed on the battlefield. The one-day bloodbath at Antietam was now etched into the memory of every one of his Toledo constituents as well as every member of Congress. So, too, was the three-day battle at Gettysburg.

If Republicans held firm, all they needed to push the amendment through the House was a handful of votes from their opponents, either from the border slave state representatives who had remained in the Union or from free state Democrats. It was his job to get those votes. He was the bill’s floor manager.

Back in December, Ashley had been the first House member to propose such an amendment. Although few of his colleagues realized it, he had been toying with the idea for nearly a decade. He had made a similar proposal in September 1856, when it didn’t have a chance of passing.

He was a political novice at the time, just twenty-nine years old, and known mainly for being big and burly, six feet tall and starting to spread around the middle, with a wild mane of curly hair and a loud, resonating voice. He had just gotten established in Toledo politics. He had moved there three years earlier from the town of Portsmouth, in southern Ohio, largely because he had just gotten married and was in deep trouble for helping slaves flee across the Ohio River. He was not yet a Congressman. Nor was he running for office. He was just campaigning for the Republican Party’s first presidential candidate, John C. Frémont, and Richard Mott, a House member who was up for reelection. In doing so, he gave a stump speech at a grove near Montpelier, Ohio.

 

James M. Ashley, congressman from Ohio. Brady-Handy Photograph Collection, Library of Congress (LC-BH824-5303).

The speech lasted two hours. In most respects, it was a typical Republican stump speech. It was mainly a collection of stories, many from his youth, living and working along the Ohio River. Running through it were several themes that tied the stories together and foreshadowed the rest of his career. In touting the two candidates, he blamed the nation’s troubles on a conspiracy of slaveholders and Northern men with Southern principles, or as he called them “slave barons” and “doughfaces.” These men, he claimed, had deliberately misconstrued the Bible, misinterpreted the Constitution, and gained complete control of the federal government. “For nearly half a century,” he told his listeners, some two hundred thousand slave barons had “ruled the nation, morally and politically, including a majority of the Northern States, with a rod of iron.” And before “the advancing march of these slave barons,” the “great body of Northern public men” had “bowed down . . . with their hands on their mouths and mouths in the dust, with an abasement as servile as that of a vanquished, spiritless people, before their conquerors.”

Across the North, many Republican spokesmen were saying much the same thing. What made Ashley’s speech unusual was that he made no attempt to hide his radicalism. He made it clear to the crowd at Montpelier that he would do almost anything to destroy slavery and the men who profited from it. He had learned to hate slavery and the slave barons during his boyhood, traveling with his father, a Campbellite preacher, through Kentucky and western Virginia, and later working as a cabin boy on the Ohio River. Never would he forget how traumatized he had been as a nine-year-old seeing for the first time slaves in chains being driven down a road to the Deep South, whipping posts on which black men had been beaten, and boys his own age being sold away from their mothers. Nor would he ever forget the white man who wouldn’t let his cattle drink from a stream in which his father was baptizing slaves. How, he had wondered, could his father still justify slavery? Certainly, it didn’t square with the teachings of Christ or what his mother was teaching him back home.

Ashley also made it clear to the crowd at Montpelier that he had violated the Fugitive Slave Law more times than he could count. He had actually begun helping slaves flee bondage in 1839, when he was just fifteen years old, and he had continued doing so after the Fugitive Slave Act of 1850 made the penalties much stiffer. To avoid prosecution, he and his wife had fled southern Ohio in 1851. Would he now mend his ways? “Never!” he told his audience. The law was a gross violation of the teachings of Christ, and for that reason he had never obeyed it and with “God’s help . . . never shall.”

What, then, should his listeners do? The first step was to join him in supporting John C. Frémont for president and Richard Mott for another term in Congress. Another was to join him in never obeying the “infamous fugitive-slave law”—the most “unholy” of the laws that these slave barons and their Northern sycophants had passed. And perhaps still another, he suggested, was to join him in pushing for a constitutional amendment outlawing “the crime of American slavery” if that should become “necessary.”

The last suggestion, in 1856, was clearly fanciful. Nearly half the states were slave states. Thus getting two-thirds of the House, much less two-thirds of the Senate, to support an amendment outlawing slavery was next to impossible. Ashley knew that. Perhaps some in his audience, especially those who cheered the loudest, thought otherwise. But not Ashley. Although still a political neophyte, he knew the rules of the game. He was also good with numbers, always had been, and always would be. Nonetheless, he told his audience to put it on their “to do” list.

Five years later, in December 1861, Ashley added to the list. By then he was no longer a political neophyte. He had been twice elected to Congress. Eleven states had seceded from the Union, and the Civil War was in its eighth month. As chairman of the House Committee on Territories, he proposed that the eleven states no longer be treated as states. Instead they should be treated as “territories” under the control of Congress, and Congress should impose on them certain conditions before they were allowed to regain statehood. More specifically, Congress should abolish slavery in these territories, confiscate all rebel lands, distribute the confiscated lands in plots of 160 acres or fewer to loyal citizens of any color, disfranchise the rebel leaders, and establish new governments with universal adult male suffrage. Did that mean, asked one skeptic, that black men were to receive land? And the right to vote? Yes, it did. And if such measures were enacted, said Ashley, he felt certain that the slave barons would be forever stripped of their power.

Ashley’s goal was clear. The 1850 census, from which Ashley and most Republicans drew their numbers, had indicated that just a few Southern families had the lion’s share of the South’s wealth. Especially potent were the truly big slaveholders—families with over one hundred slaves. There were 105 such family heads in Virginia, 181 in Georgia, 279 in Mississippi, 312 in Alabama, 363 in South Carolina, and 460 in Louisiana. With respect to landholdings, there were 371 family heads in Louisiana with more than one thousand acres, 481 in Mississippi, 482 in South Carolina, 641 in Virginia, 696 in Alabama, and 902 in Georgia.

In Ashley’s view, virtually all these wealth holders were rebels, and the Congress should go after all their assets. Strip them of their slaves. Strip them of their land. Strip them of their right to hold office. Half-hearted measures, he contended, would lead only to half-hearted results. Taking away a slave baron’s slaves undoubtedly would hobble him, but it wouldn’t destroy him. With his vast landholdings, he would soon be back in power. And with the right to hold office, he would have not only economic power but also political power. And with the end of the three-fifths clause, the clause in the Constitution that counted slaves as only three-fifths of a free person when it came to tabulating seats in Congress and electoral votes, the South would have more power than ever before.

When Ashley made this proposal in December 1861, everyone on his committee told him it was much too radical ever to get through Congress. He knew that. But he also knew that there were men in Congress who agreed with him, including four of the seven men on his committee, several dozen in the House, maybe a half-dozen in the Senate, and even some notables such as Representative Thaddeus Stevens of Pennsylvania and Senator Ben Wade of Ohio.

The trouble was the opposition. It was formidable. Not only did it include the “Peace” Democrats, men who seemingly wanted peace at any price, men whom Ashley regarded as traitors, but also “War” Democrats, men such as General George McClellan, General Don Carlos Buell, and General Henry Halleck, men who were leading the nation’s troops. Also certain to oppose him were the border state Unionists, especially the Kentuckians, and most important of all, Abraham Lincoln. Against such opposition, all Ashley and the other radicals could do was push, prod, and hope to get maybe a piece or two of the total package enacted.

Two years later, in December 1863, Ashley thought it was indeed “necessary” to strike a deathblow against slavery. He also thought it was possible to get a few pieces of his 1861 package into law. So, just after the House opened for its winter session, he introduced two measures. One was a reconstruction bill that followed, at least at first glance, what Lincoln had called for in his annual message. Like Lincoln, Ashley proposed that a seceded state be let back into the Union when only 10 percent of its 1860 voters took an oath of loyalty.

Had he suddenly become a moderate? A conservative? Not quite. To Lincoln’s famous 10 percent plan, Ashley added two provisions. One would take away the right to vote and to hold office from all those who had fought against the Union or held an office in a rebel state. That was a significant chunk of the population. The other would give the right to vote to all adult black males. That was an even bigger chunk of the population, especially in South Carolina and Mississippi.

The other measure that Ashley proposed that December was the constitutional amendment that outlawed slavery. A few days later, Representative James F. Wilson of Iowa made a similar proposal. The wording differed, but the intent was the same. The Constitution had to be amended, contended Wilson, not only to eradicate slavery but also to stop slaveholders and their supporters from launching a program of reenslavement once the war was over. Then, several weeks later, Senator John Henderson of Missouri and Senator Charles Sumner of Massachusetts introduced similar amendments. Sumner’s was the more radical. The Massachusetts senator not only wanted to end slavery. He also wanted to end racial inequality.

The Senate Judiciary Committee then took charge. They ignored Sumner’s cry for racial justice and worked out the bill’s final language. The wording was clear and simple: “Neither slavery nor involuntary servitude, except as a punishment for crime, whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”

On April 8, 1864, the committee’s wording came before the Senate for a final vote. Although a few empty seats could be found in the men’s gallery, the women’s gallery was packed, mainly by church women who had organized a massive petition drive calling on Congress to abolish slavery. Congress for the most part had ignored their hard work. But to the women’s delight, thirty-eight senators now voted for the amendment, six against, giving the proposed amendment eight votes more than what was needed to meet the two-thirds requirement.

All thirty Republicans in attendance voted aye. The no votes came from two free state Democrats, Thomas A. Hendricks of Indiana and James McDougall of California, and four slave state senators: Garrett Davis and Lazarus W. Powell of Kentucky and George R. Riddle and Willard Saulsbury of Delaware. Especially irate was Saulsbury. A strong proponent of reenslavement, he made sure that the women knew that he regarded them with contempt. In a booming voice, he told them on leaving the Senate floor that all was lost and that there was no longer any chance of ever restoring the eleven Confederate states to the Union.

Now, nine weeks later, the measure was before the House. And its floor manager, James Ashley, expected the worst. He kept a close count. And, as the members voted, he realized that he was well short of the required two-thirds. Of the eighty Republicans who were in attendance, seventy-nine eventually cast aye votes and one abstained. Of the seventeen slave state representatives in attendance, eleven voted aye and six nay. But of the sixty-two free state Democrats, only four voted for the amendment while fifty-eight voted nay. As a result, the final vote was going to be ninety-four to sixty-four. That was eleven shy of the necessary two-thirds majority.

The outcome was even worse than Ashley had anticipated. “Educated in the political school of Jefferson,” he later recalled, “I was absolutely amazed at the solid Democratic vote against the amendment on the 15th of June. To me it looked as if the golden hour had come, when the Democratic party could, without apology, and without regret, emancipate itself from the fatal dogmas of Calhoun, and reaffirm the doctrines of Jefferson. It had always seemed to me that the great men in the Democratic party had shown a broader spirit in favor of human liberty than their political opponents, and until the domination of Mr. Calhoun and his States-rights disciples, this was undoubtedly true.”

Despite the solid Democratic vote against the resolution, there was still one way that Ashley could save the amendment from certain congressional death. And that was to take advantage of a House rule that allowed a member to bring a defeated measure up for reconsideration if he intended to change his vote. To make use of this rule, however, Ashley had to change his vote before the clerk announced the final tally. He had voted aye along with his fellow Republicans. He now had to get into the “no” column. That he did. The final vote thus became ninety-three to sixty-five.

Two weeks later, Representative William Steele Holman, Democrat of Indiana, asked Ashley when he planned to call for reconsideration. Ashley told him not now but maybe after the next election. The trick, he said, was to find enough men in Holman’s party who were “naturally inclined to favor the amendment, and strong enough to meet and repel the fierce partisan attacks which were certain to be made upon them.”

Holman, Ashley knew, would not be one of them. Although the Indiana Democrat had once been a staunch supporter of the war effort, he opposed the destruction of slavery. Not only had he just voted against the amendment—he had vehemently denounced it. Holman, as Ashley viewed him, was thus one of the “devil’s disciples.” He was beyond redemption. And with this in mind, Ashley set about to find at least eleven additional House members who would stand their ground against men like Holman.

To read more about Who Freed the Slaves?, click here.

Add a Comment
36. Excerpt: Invisible by Philip Ball

9780226238890
Recipes for Invisibility, an excerpt
by Philip Ball
***

 “Occult Forces”

Around 1680 the English writer John Aubrey recorded a spell of invisibility that seems plucked from a (particularly grim) fairy tale. On a Wednesday morning before sunrise, one must bury the severed head of a man who has committed suicide, along with seven black beans. Water the beans for seven days with good brandy, after which a spirit will appear to tend the beans and the buried head. The next day the beans will sprout, and you must persuade a small girl to pick and shell them. One of these beans, placed in the mouth, will make you invisible.

This was tried, Aubrey says, by two Jewish merchants in London, who couldn’t acquire the head of a suicide victim and so used instead that of a poor cat killed ritualistically. They planted it with the beans in the garden of a gentleman named Wyld Clark, with his permission. Aubrey’s deadpan relish at the bathetic outcome suggests he was sceptical all along – for he explains that Clark’s rooster dug up the beans and ate them without consequence.

Despite the risk of such prosaic setbacks, the magical texts of the Middle Ages and the early Enlightenment exude confidence in their prescriptions, however bizarre they might be. Of course the magic will work, if you are bold enough to take the chance. This was not merely a sales pitch. The efficacy of magic was universally believed in those days. The common folk feared it and yearned for it, the clergy condemned it, and the intellectuals and philosophers, and a good many charlatans and tricksters, hinted that they knew how to do it.

It is among these fanciful recipes that the quest begins for the origins of invisibility as both a theoretical possibility and a practical technology in the real world. Making things invisible was a kind of magic – but what exactly did that mean?

Historians are confronted with the puzzle of why the tradition of magic lasted so long and laid roots so deep, when it is manifestly impotent. Some of that tenacity is understandable enough. The persistence of magical medicines, for example, isn’t so much of a mystery given that in earlier ages there were no more effective alternatives and that medical cause and effect has always been difficult to establish – people do sometimes get better, and who is to say why? Alchemy, meanwhile, could be sustained by trickery, although that does not solely or even primarily account for its longevity as a practical art: alchemists made much else besides gold, and even their gold-making recipes could sometimes change the appearance of metals in ways that might have suggested they were on the right track. As for astrology, its persistence even today testifies in part to how readily it can be placed beyond the reach of any attempts at falsification.

But how do you fake invisibility? Either you can see something or someone, or you can’t.

Well, one might think so. But that isn’t the case at all. Magicians have always possessed the power of invisibility. What has changed is the story they tell about how it is done. What has changed far less, however, is our reasons for wishing it to be done and our willingness to believe that it can be. In this respect, invisibility supplies one of the most eloquent testimonies to our changing view of magic – not, as some rationalists might insist, a change from credulous acceptance to hard-headed dismissal, but something far more interesting.

Let’s begin with some recipes. Here is a small selection from what was doubtless once a much more diverse set of options, many of which are now lost. It should give you some intimation of what was required.

John Aubrey provides another prescription, somewhat tamer than the previous one and allegedly from a Rosicrucian source (we’ll see why later):

Take on Midsummer night, at xii [midnight], Astrologically, when all the Planets are above the earth, a Serpent, and kill him, and skinne him: and dry it in the shade, and bring it to a powder. Hold it in your hand and you will be invisible.

If it is black cats you want, look to the notorious Grand Grimoire. Like many magical books, this is a fabrication of the eighteenth century (or perhaps even later), validated by an ostentatious pseudo-history. The author is said to be one ‘Alibeck the Egyptian’, who allegedly wrote the following recipe in 1522:

Take a black cat, and a new pot, a mirror, a lighter, coal and tinder. Gather water from a fountain at the strike of midnight. Then you light your fire, and put the cat in the pot. Hold the cover with your left hand without moving or looking behind you, no matter what noises you may hear. After having made it boil 24 hours, put the boiled cat on a new dish. Take the meat and throw it over your left shoulder, saying these words: “accipe quod tibi do, et nihil ampliùs.” [Accept my offering, and don’t delay.] Then put the bones one by one under the teeth on the left side, while looking at yourself in the mirror; and if they do not work, throw them away, repeating the same words each time until you find the right bone; and as soon as you cannot see yourself any more in the mirror, withdraw, moving backwards, while saying: “Pater, in manus tuas commendo spiritum meum.” [Father, into your hands I commend my spirit.] This bone you must keep.

Sometimes it was necessary to summon the help of demons, which was always a matter fraught with danger. A medieval manual of demonic magic tells the magician to go to a field and inscribe a circle on the ground, fumigate it and sprinkle it, and himself, with holy water while reciting Psalm 51:7 (‘Cleanse me with hyssop, and I shall be clean . . .’). He then conjures several demons and commands them in God’s name to do his bidding by bringing him a cap of invisibility. One of them will fetch this item and exchange it for a white robe. If the magician does not return to the same place in three days to retrieve his robe and burn it, he will drop dead within a week. In other words, this sort of invisibility was both heretical and hazardous. That is perhaps why instructions for invisibility in an otherwise somewhat quotidian fifteenth-century book of household management from Wolfsthurn Castle in the Tyrol have been mutilated by a censorious reader.

Demons are, after all, what you might expect to find in a magical grimoire. The Grimorium Verum (True Grimoire) is another eighteenth-century fake attributed to Alibeck the Egyptian; it was alternatively called the Secret of Secrets, an all-purpose title alluding to an encyclopaedic Arabic treatise popular in the Middle Ages. ‘Secrets’ of course hints alluringly at forbidden lore, although in fact the word was often also used simply to refer to any specialized knowledge or skill, not necessarily something intended to be kept hidden. This grimoire says that invisibility can be achieved simply by reciting a Latin prayer – largely just a list of the names of demons whose help is being invoked, and a good indication as to why magic spells came to be regarded as a string of nonsense words:

Athal, Bathel, Nothe, Jhoram, Asey, Cleyungit, Gabellin, Semeney, Mencheno, Bal, Labenenten, Nero, Meclap, Helateroy, Palcin, Timgimiel, Plegas, Peneme, Fruora, Hean, Ha, Ararna, Avira, Ayla, Seye, Peremies, Seney, Levesso, Huay, Baruchalù, Acuth, Tural, Buchard, Caratim, per misericordiam abibit ergo mortale perficiat qua hoc opus ut invisibiliter ire possim . . .

. . . and so on. The prescription continues in a rather freewheeling fashion using characters written in bat’s blood, before calling on yet more demonic ‘masters of invisibility’ to ‘perform this work as you all know how, that this experiment may make me invisible in such wise that no one may see me’.

A magic book was scarcely complete without a spell of invisibility. One of the most notorious grimoires of the Middle Ages, called the Picatrix and based on a tenth-century Arabic work, gives the following recipe.* You take a rabbit on the ‘24th night of the Arabian month’, behead it facing the moon, call upon the ‘angelic spirit’ Salmaquil, and then mix the blood of the rabbit with its bile. (Bury the body well – if it is exposed to sunlight, the spirit of the Moon will kill you.) To make yourself invisible, anoint your face with this blood and bile at nighttime, and ‘you will make yourself totally hidden from the sight of others, and in this way you will be able to achieve whatever you desire’.

‘Whatever you desire’ was probably something bad, because that was usually the way with invisibility. A popular trick in the eighteenth century, known as the Hand of Glory, involved obtaining (don’t ask how) the hand of an executed criminal and preserving it chemically, then setting light to a finger or inserting a burning candle between the fingers. With this talisman you could enter a building unseen and take what you liked, either because you are invisible or because everyone inside is put to sleep.

These recipes seem to demand a tiresome attention to materials and details. But really, as attested in The Book of Abramelin (said to be a system of magic that the Egyptian mage Abramelin taught to a German Jew in the fifteenth century), it was quite simple to make yourself invisible. You need only write down a ‘magic square’ – a small grid in which numbers (or in Abramelin’s case, twelve symbols representing demons) form particular patterns – and place it under your cap. Other grimoires made the trick sound equally straightforward, albeit messy: one should carry the heart of a bat, a black hen, or a frog under the right arm.

Perhaps most evocative of all were accounts of how to make a ring of invisibility, popularly called a Ring of Gyges. The twentieth-century French historian Emile Grillot de Givry explained in his anthology of occult lore how this might be accomplished:

The ring must be made of fixed mercury; it must be set with a little stone to be found in a lapwing’s nest, and round the stone must be engraved the words, “Jésus passant ✠ par le milieu d’eux ✠ s’en allait.” You must put the ring on your finger, and if you look at yourself in a mirror and cannot see the ring it is a sure sign that it has been successfully manufactured.

Fixed mercury is an ill-defined alchemical material in which the liquid metal is rendered solid by mixing it with other substances. It might refer to the chemical reaction of mercury with sulphur to make the blackish-red sulphide, for example, or the formation of an amalgam of mercury with gold. The biblical reference is to the alleged invisibility of Christ mentioned in Luke 4:30 (‘Jesus passed through the midst of them’) and John 8:59 (see page 155). And the lapwing’s stone is a kind of mineral – of which, more below. Invisibility is switched on or off at will by rotating the ring so that this stone sits facing outward or inward (towards the palm), just as Gyges rotated the collet.

Several other recipes in magical texts repeat the advice to check in a mirror that the magic has worked. That way, one could avoid embarrassment of the kind suffered by a Spaniard who, in 1582, decided to use invisibility magic in his attempt to assassinate the Prince of Orange. Since his spells could not make clothes invisible, he had to strip naked, in which state he arrived at the palace and strolled casually through the gates, unaware that he was perfectly visible to the guards. They followed the outlandish intruder until the purpose of his mission became plain, whereupon they seized him and flogged him.

Some prescriptions combined the alchemical preparation of rings with a necromantic invocation of spirits. One, appearing in an eighteenth-century French manuscript, explains how, if the name of the demon Tonucho is written on parchment and placed beneath a yellow stone set into a gold band while reciting an appropriate incantation, the demon is trapped in the ring and can be impelled to do one’s bidding.

Other recipes seem to refer to different qualities of invisibility. One might be unable to see an object not because it has vanished as though perfectly transparent, but because it lies hidden by darkness or mist, so that the ‘cloaking’ is apparent but what it cloaks is obscured. Or one might be dazzled by a play of light (see page 25), or experience some other confusion of the senses. There is no single view of what invisibility consists of, or where it resides. These ambiguities recur throughout the history of the invisible.

Partly for this reason, it might seem hard to discern any pattern in these prescriptions – any common themes or ingredients that might provide a clue to their real meaning. Some of them sound like the cartoon sorcery of wizards stirring bubbling cauldrons. Others are satanic, or else high-minded and allegorical, or merely deluded or fraudulent. They mix pious dedications to God with blasphemous entreaties to uncouthly named demons. That diversity is precisely what makes the tradition of magic so difficult to grasp: one is constantly wondering if it is a serious intellectual enterprise, a smokescreen for charlatans, or the credulous superstition of folk belief. The truth is that magic in the Western world was all of these things and for that very reason has been able to permeate culture at so many different levels and to leave traces in the most unlikely of places: in theoretical physics and pulp novels, the cults of modern mystics and the glamorous veils of cinema. The ever-present theme of invisibility allows us to follow these currents from their source.

*Appearing hard on the heels of an unrelated discussion of the Chaldean city of Adocentyn, it betrays the cut-and-paste nature of many such compendia.

“Making Magic”

Many of the recipes for invisibility from the early Renaissance onward therefore betray an ambiguous credo. They are often odd, sometimes ridiculous, and yet there are indications that they are not mere mumbo-jumbo dreamed up by lunatics or charlatans, but hint at a possible rationale within the system of natural magic.

It’s no surprise, for example, that eyes feature prominently among the ingredients. From a modern perspective the association might seem facile: you grind up an eyeball and therefore people can’t see you. But to an adept of natural magic there would have been a sound causative principle at work, operating through the occult network of correspondences: an eye for an eye, you might say. A medieval collection of Greek magical works from the fourth century AD known as the Cyranides contains some particularly grotesque recipes of this sort for ointments of invisibility. One involves grinding together the fat or eye of an owl, a ball of beetle dung and perfumed olive oil, and then anointing the entire body while reciting a selection of unlikely names. Another uses instead ‘the eye of an ape or of a man who had a violent death’, along with roses and sesame oil. An eighteenth-century text spuriously associated with Albertus Magnus (he was a favourite source of magical lore even in his own times) instructs the magician to ‘pierce the right eye of a bat, and carry it with you and you will be invisible’. One of the cruellest prescriptions instructs the magician to cut out the eyes of a live owl and bury them in a secret place.

A fifteenth-century Greek manuscript offers a more explicitly optical theme than Aubrey’s head-grown beans, stipulating that fava beans are imbued with invisibility magic when placed in the eye sockets of a human skull. Even though one must again call upon a pantheon of fantastically named demons, the principle attested here has a more naturalistic flavour: ‘As the eyes of the dead do not see the living, so these beans may also have the power of invisibility.’

Within the magic tradition of correspondences, certain plants and minerals were associated with invisibility. For example, the dust on brown patches of mature fern leaves was said to be a charm of invisibility: unlike other plants, they appeared to possess neither flowers nor seeds, but could nevertheless be found surrounded by their progeny.

The classical stone of invisibility was the heliotrope (sun-turner), also called bloodstone: a form of green or yellow quartz (chalcedony) flecked with streaks of a red mineral that is either iron oxide or red jasper. The name alludes to the stone’s tendency to reflect and disperse light, itself a sign of special optical powers. In his Natural History, Pliny says that magicians assert that the heliotrope can make a person invisible, although he scoffs at the suggestion:

In the use of this stone, also, we have a most glaring illustration of the impudent effrontery of the adepts in magic, for they say that, if it is combined with the plant heliotropium, and certain incantations are then repeated over it, it will render the person invisible who carries it about him.

The plant mentioned here, bearing the same name as the mineral, is a genus of the borage family, the flowers of which were thought to turn to face the sun. How a mineral is ‘combined’ with a plant isn’t clear, but the real point is that the two substances are again bound by a system of occult correspondence.

Agrippa repeated Pliny’s claim in the sixteenth century, minus the scepticism:

There is also another vertue of it [the bloodstone] more wonderfull, and that is upon the eyes of men, whose sight it doth so dim, and dazel, that it doth not suffer him that carries it to see it, & this it doth not do without the help of the Hearb of the same name, which also is called Heliotropium.

It is more explicit here that the magic works by dazzlement: the person wearing a heliotrope is ‘invisible’ because the light it reflects befuddles the senses. That is why kings wear bright jewels, explained Anselm Boetius, physician to the Holy Roman Emperor Rudolf II, in 1609: they wish to mask their features in brilliance. This use of gems that sparkle, reflect and disperse light to confuse and blind the onlooker is attributed by Ben Jonson to the Rosicrucians, who were often popularly associated with magical powers of invisibility (see pages 32–3). In his poem The Underwood, Jonson writes of

The Chimera of the Rosie-Crosse,
Their signs, their seales, their hermetique rings;
Their jemme of riches, and bright stone that brings
Invisibilitie, and strength, and tongues.

The bishop Francis Godwin indicates in his fantastical fiction The Man in the Moone (1634), an early vision of space travel, that invisibility jewels were commonly deemed to exist, while implying that their corrupting temptations made them subject to divine prohibition. Godwin’s space-voyaging hero Domingo Gonsales asks the inhabitants of the Moon

whether they had not any kind of Jewell or other means to make a man invisible, which mee thought had beene a thing of great and extraordinary use . . . They answered that if it were a thing faisible, yet they assured themselves that God would not suffer it to be revealed to us creatures subject to so many imperfections, being a thing so apt to be abused to ill purposes.

Other dazzling gemstones were awarded the same ‘virtue’, chief among them the opal. This is a form of silica that refracts and reflects light to produce rainbow iridescence, indeed called opalescence.

Whether opal derives from the Greek opollos, ‘seeing’ – the root of ‘optical’ – is disputed, but opal’s streaked appearance certainly resembles the iris of the eye, and it has long been associated with the evil eye. In the thirteenth-century Book of Secrets, yet again falsely attributed to Albertus Magnus, the mineral is given the Greek name for eye (ophthalmos) and is said to cause invisibility by bedazzlement:

Take the stone Ophthalmus, and wrap it in the leaf of the Laurel, or Bay tree; and it is called Lapis Obtalmicus, whose colour is not named, for it is of many colours. And it is of such virtue, that it blindeth the sights of them that stand about. Constantius [probably Constantine the Great] carrying this in his hand, was made invisible by it.

It isn’t hard to recognize this as a variant of Pliny’s recipe, complete with cognate herb. In fact it isn’t entirely clear that this Ophthalmus really is opal, since elsewhere in the Book of Secrets that mineral is called Quiritia and isn’t associated with invisibility. This reflects the way that the book was, like so many medieval handbooks and encyclopedias, patched together from a variety of sources.

Remember the ‘stone from the lapwing’s nest’ mentioned by Grillot de Givry? His source was probably an eighteenth-century text called the Petit Albert – a fabrication, with the grand full title of Marvelous Secrets of Natural and Qabalistic Magic, attributed to a ‘Little Albert’ and obviously trading once more on the authority of the ‘Great Albert’ (Magnus). The occult revivalist Arthur Waite gave the full account of this recipe from the Petit Albert in his Book of Ceremonial Magic (1913), which asserts that the bird plays a further role in the affair:

Having placed the ring on a palette-shaped plate of fixed mercury, compose the perfume of mercury, and thrice expose the ring to the odour thereof; wrap it in a small piece of taffeta corresponding to the colour of the planet, carry it to the peewit’s [lapwing’s] nest from which the stone was obtained, let it remain there for nine days, and when removed, fumigate it precisely as before. Then preserve it most carefully in a small box, made also of fixed mercury, and use it when required.

Now we can get some notion of what natural magic had become by the time the Petit Albert was cobbled together. It sounds straightforward enough, but who is going to do all this? Where will you find the lapwing’s nest with a stone in it in the first place? What is this mysterious ‘perfume of mercury’? Will you take the ring back and put it in the nest for nine days and will it still be there later if you do? The spell has become so intricate, so obscure and vexing, that no one will try it. The same character is evident in a nineteenth-century Greek manuscript called the Bernardakean Magical Codex, in which Aubrey’s instructions for growing beans with a severed head are elaborated beyond all hope of success: you need to bury a black cat’s head under an ant hill, water it with human blood brought every day for forty days from a barber (those were the days when barbers still doubled as blood-letters), and check to see if one of the beans has the power of invisibility by looking into a new mirror in which no one has previously looked. If the spell doesn’t work (and the need to check each bean shows that this is always a possibility), it isn’t because the magic is ineffectual but because you must have done something wrong somewhere along the way. In which case, will you find another black cat and begin over? Unlikely; instead, aspiring magicians would buy these books of ‘secrets’, study their prescriptions and incantations and thereby become an adept in a magical circle: someone who possesses powerful secrets, but does not, perhaps, place much store in actually putting them to use. Magical books thus acquired the same talismanic function as a great deal of the academic literature today: to be read, learnt, cited, but never used.

To read more about Invisible, click here.

 

Add a Comment
37. The AACM at 50

OLDAACM

2015 marks the 50th anniversary of the Association for the Advancement of Creative Musicians, Inc. (AACM), founded on Chicago’s South Side by pianist and composer Muhal Richard Abrams, pianist Jodie Christian, drummer Steve McCall, and composer Phil Cohran.

A recent piece in the New York Times by Nate Chinen offers a baseline summary of their achievements:

Over the half-century of its existence, the association has been one of this country’s great engines of experimental art, producing work with an irreducible breadth of scope and style. By now the organization’s significance derives not only from the example of its first wave—including Mr. Abrams, still formidable at 84—but also from an influence on countless uncompromising artists, many of whom are not even members of its chapters in Chicago and New York.

The AACM is legendary, well beyond—but also emphatically intertwined with—their Chicago origins. With an aim to “provide an atmosphere conducive to the development of its member artists and to continue the AACM legacy of providing leadership and vision for the development of creative music,” the AACM turned jazz on its head, rolled it sideways, stood it upright again, and then leaned on it with a combination of effortless grace and righteous pressure during the second half of the twentieth century and beyond.

Among the events organized around the anniversary are Free at First (currently on view at Chicago’s DuSable Museum of African American History, and running through September 6, 2015) and the forthcoming exhibition The Freedom Principle: Experiments in Art and Music, 1965 to Now, at the MCA Chicago (opening in mid-July), which builds around the aesthetics championed by the association and their legacy.

This YouTube playlist should woo you pretty hard: http://bit.ly/1EHQMid

9780226476964

Our own connection to the AACM, worth every plug one can work in, is George E. Lewis’s definitive history A Power Stronger than Itself: The AACM and American Experimental Music (one of my favorite non-fiction books we’ve published). Lewis, who joined the AACM in 1971 when he was still a teenager, chronicles the group’s communal history via the twin channels of jazz and experimental cultural production, from the AACM’s founding in 1965 to the present. Personal, political, filled with archival details—as well as theory, criticism, and reportage—the book is a must-read jazz ethnography for anyone interested in the trajectory of the AACM’s importance and influence, which, as the NYT piece notes, began from a place of “originality and self-determination,” and landed somewhere that, if nothing else, in the words of Jason Moran, the Kennedy Center’s artistic director for jazz, “shifted the cultural landscape.”

To read more about A Power Stronger than Itself, click here.

Add a Comment
38. Blood Runs Green: Your nineteenth-century Chicago true crime novel


What follows is a well-contextualized teaser, or a clue (depending on your penchant for genre), from Sharon Wheeler’s full-length review of Blood Runs Green: The Murder that Transfixed Gilded Age Chicago at Inside Higher Ed.

Blood Runs Green is that rarer beast—academic research in the guise of a true crime account. But it leaps off the page like the best fictional murder mystery. Mind you, any author presenting these characters to a publisher under the banner of a novel would probably be sent away to rein in their over-fertile imagination. As Gillian O’Brien says: “The story had everything an editor could want: conspiracy, theft, dynamite, betrayal, and murder.”

So this is far more than just a racy account of a murder in 1880s Chicago, a city built by the Irish, so the boast goes (by the late 1880s, 17 per cent of its population was Irish or Irish-American). At the book’s core is the story of Irish immigrants in the US, and the fight for Irish independence through the secret republican society Clan na Gael. In England, and running parallel to events in America, is the saga of Charles Stewart Parnell, a British MP and leading figure in the Home Rule movement.

Who got bumped off is an easy one to answer: Patrick Cronin, a Chicago doctor, Clan na Gael supporter, and a chap renowned for belting out God Save Ireland at fundraising events. Whodunnit? Ah, well, now you’re asking.

To read more about Blood Runs Green, click here.

39. Free e-book for March: Freud’s Couch, Scott’s Buttocks, Brontë’s Grave


Our free e-book for March is Freud’s Couch, Scott’s Buttocks, Brontë’s Grave by Simon Goldhill. Read more and download your copy below.

***

The Victorian era was the high point of literary tourism. Writers such as Charles Dickens, George Eliot, and Sir Walter Scott became celebrities, and readers trekked far and wide for a glimpse of the places where their heroes wrote and thought, walked and talked. Even Shakespeare was roped in, as Victorian entrepreneurs transformed quiet Stratford-upon-Avon into a combination shrine and tourist trap.

Stratford continues to lure the tourists today, as do many other sites of literary pilgrimage throughout Britain. And our modern age could have no better guide to such places than Simon Goldhill. In Freud’s Couch, Scott’s Buttocks, Brontë’s Grave, Goldhill makes a pilgrimage to Sir Walter Scott’s baronial mansion, Wordsworth’s cottage in the Lake District, the Brontë parsonage, Shakespeare’s birthplace, and Freud’s office in Hampstead. Traveling, as much as possible, by methods available to Victorians—and gamely negotiating distractions ranging from broken bicycles to a flock of giggling Japanese schoolgirls—he tries to discern what our forebears were looking for at these sites, as well as what they have to say to the modern mind. What does it matter that Emily Brontë’s hidden passions burned in this specific room? What does it mean, especially now that his fame has faded, that Scott self-consciously built an extravagant castle suitable for Ivanhoe—and star-struck tourists visited it while he was still living there? Or that Freud’s meticulous recreation of his Vienna office is now a meticulously preserved museum of itself? Or that Shakespeare’s birthplace features student actors declaiming snippets of his plays . . . in the garden of a house where he almost certainly never wrote a single line?

Goldhill brings to these inquiries his trademark wry humor and a lifetime’s engagement with literature. The result is a travel book like no other, a reminder that even today, the writing life still has the power to inspire.

To download a copy, click here.

 

40. Every student who studied with the Rev. Gary Davis


The Reverend Gary Davis was born in Piedmont, South Carolina, on April 30, 1896. He died in Hammonton, New Jersey, on May 5, 1972. In between, he became one of the most protean guitar players of the twentieth century, and his finger-picking style influenced everyone from Bob Dylan and the Grateful Dead to Keb’ Mo’ and Blind Boy Fuller.

Born partially blind as the sole surviving son of two sharecroppers in the Jim Crow South, Davis, ordained as a Baptist minister, was by the 1940s playing on Harlem streetcorners and storefronts, making his living as an itinerant, singing gospel preacher. By the beginning of the 1960s folk revival, he had moved in circles that included Lead Belly and Woody Guthrie, recorded a series of albums for the legendary Folkways label, and been embraced by a generation of educated, middle-class young people eager for fodder to fuel that revival. See his performance at the 1965 Newport Folk Festival for further illumination of this cultural congruence. Even before his death, he was the subject of two television documentaries. Davis’s legacy, however, still exists outside a canon that has acknowledged his peers, including Muddy Waters and Robert Johnson—his music, like his troubled life, is the stuff of myth, and as such, has charted a more intimate course through a series of covers and the musical offerings of his students, a group numbering in the dozens.


In concert with the publication of Say No to the Devil: The Life and Music of the Reverend Gary Davis, the first biography of Davis, written by Ian Zack, one of his former students, we’re putting together a special feature on every student who studied with the Reverend, complete with performances. In the meantime, here’s a teaser list of some names, just to give you a sense of the breadth of those who sought Davis out, and in whose own compositions the master player’s gospel still lingers.

Phil Allen
Roy Book Binder
Danny Birch
Rick Blaufeld
Rory Block
Larry Brezer
David Bromberg
Ian Buchanan
Harry Chapin
Ry Cooder
Bruce Cornforth
Dion DiMucci
John Dyer
Allen Evans (one of Davis’s final guitar pupils; a week or two before his death, Davis gave him a two-and-a-half hour lesson, then wanted to arm wrestle)
Joan Fenton
Janis Fink
Geno Forman
Blind Boy Fuller
John Gibbon
Stefan Grossman
Ernie Hawkins
Larry Johnson
Steve Katz
Nick Katzman
Jesse Lee Kincaid
Ken Kipnis
Barry Kornfeld
John Mankiewicz
Woody Mann
Alexander McEwen
Rory McEwen
Dean Meredith (drove Davis to visit Woody Guthrie at Brooklyn State Hospital in 1964)
North Peterson
Rick Ruskin
Alex Shoumatoff
Alan Smithline
John Townley
Dave Van Ronk
Bob Weir
Tom Winslow
Geoff Withers

As a bonus, here’s Bob Dylan playing Davis’s arrangement of the song “Candy Man,” from a 1961 Minneapolis hotel tape (it’s really good):

 

41. Excerpt: Seeing Green


An excerpt from Seeing Green: The Use and Abuse of American Environmental Images

by Finis Dunaway

***

“The Crying Indian”

It may be the most famous tear in American history. Iron Eyes Cody, an actor in native garb, paddles a birch bark canoe on water that seems at first tranquil and pristine but becomes increasingly polluted along his journey. He pulls his boat from the water and walks toward a bustling freeway. As the lone Indian ponders the polluted landscape and stares at vehicles streaming by, a passenger hurls a paper bag out a car window. The bag bursts on the ground, scattering fast-food wrappers all over his beaded moccasins. In a stern voice, the narrator comments: “Some people have a deep abiding respect for the natural beauty that was once this country. And some people don’t.” The camera zooms in closely on Iron Eyes Cody’s face to reveal a single tear falling, ever so slowly, down his cheek (fig. 5.1).

This tear made its television debut in 1971 at the close of a public service advertisement for the antilitter organization Keep America Beautiful. Appearing in languid motion on television, the tear would also circulate in other visual forms, stilled on billboards and print media advertisements to become a frame stopped in time, forever fixing the image of Iron Eyes Cody as the Crying Indian. Garnering many advertising accolades, including two Clio Awards, and still ranked as one of the best commercials of all time, the Crying Indian spot enjoyed tremendous airtime during the 1970s, allowing it to gain, in advertising lingo, billions of “household impressions” and achieve one of the highest viewer recognition rates in television history. After being remade multiple times to support Keep America Beautiful, and after becoming indelibly etched into American public culture, the commercial has more recently been spoofed by various television shows, including The Simpsons (always a reliable index of popular culture resonance), King of the Hill, and Penn & Teller: Bullshit. These parodies—together with the widely publicized reports that Iron Eyes Cody was actually born Espera De Corti, an Italian-American who literally played Indian in both his life and onscreen—may make it difficult to view the commercial with the same degree of moral seriousness it sought to convey to spectators at the time. Yet to appreciate the commercial’s significance, to situate Cody’s tear within its historical moment, we need to consider why so many viewers believed that the spot represented an image of pure feeling captured by the camera. As the television scholar Robert Thompson explains: “The tear was such an iconic moment. . . . Once you saw it, it was unforgettable. It was like nothing else on television. As such, it stood out in all the clutter we saw in the early 70s.”

FIGURE 5.1. The Crying Indian. Advertising Council / Keep America Beautiful advertisement, 1971. Courtesy of Ad Council Archives, University of Illinois, record series 13/2/203.

As a moment of intense emotional expression, Iron Eyes Cody’s tear compressed and concatenated an array of historical myths, cultural narratives, and political debates about native peoples and progress, technology and modernity, the environment and the question of responsibility. It reached back into the past to critique the present; it celebrated the ecological virtue of the Indian and condemned visual signs of pollution, especially the heedless practices of the litterbug. It turned his crying into a moment of visual eloquence, one that drew upon countercultural currents but also deflected the radical ideas of environmental, indigenous, and other protest groups.

At one level, this visual eloquence came from the tear itself, which tapped into a legacy of romanticism rekindled by the counterculture. As the writer Tom Lutz explains in his history of crying, the Romantics enshrined the body as “the seal of truth,” the authentic bearer of sincere emotion. “To say that tears have a meaning greater than any words is to suggest that truth somehow resides in the body,” he argues. “For [Romantic authors], crying is superior to words as a form of communication because our bodies, uncorrupted by culture or society, are naturally truthful, and tears are the most essential form of speech for this idealized body.”

Rather than being an example of uncontrolled weeping, the single tear shed by Iron Eyes Cody also contributed to its visual power, a moment readily aestheticized and easily reproduced, a drop poised forever on his cheek, seemingly suspended in perpetuity. Cody himself grasped how emotions and aesthetics became intertwined in the commercial. “The final result was better than anybody expected,” he noted in his autobiography. “In fact, some people who had been working on the project were moved to tears just reviewing the edited version. It was apparent we had something of a 60-second work of art on our hands.” The aestheticizing of his tear yielded emotional eloquence; the tear seemed to express sincerity, an authentic record of feeling and experience. Art and reality merged to offer an emotional critique of the environmental crisis.

That the tear trickled down the leathered face of a Native American (or at least someone reputed to be indigenous) made its emotionality that much more poignant, its critique that much more palpable. By designing the commercial around the imagined experience of a native person, someone who appears to have journeyed out of the past to survey the current landscape, Keep America Beautiful (KAB) incorporated the counterculture’s embrace of Indianness as a marker of oppositional identity.

Yet KAB, composed of leading beverage and packaging corporations and staunchly opposed to many environmental initiatives, sought to interiorize the environmentalist critique of progress, to make individual viewers feel guilty and responsible for the degraded environment. Deflecting the question of responsibility away from corporations and placing it entirely in the realm of individual action, the commercial castigated spectators for their environmental sins but concealed the role of industry in polluting the landscape. A ghost from the past, someone who returns to haunt the contemporary American imagination, the Crying Indian evoked national guilt for the environmental crisis but also worked to erase the presence of actual Indians from the landscape. Even as Red Power became a potent organizing force, KAB conjured a spectral Indian to represent the native experience, a ghost whose melancholy presence mobilized guilt but masked ongoing colonialism, whose troubling visitation encouraged viewers to feel responsible but to forget history. Signifying resistance and secreting urgency, his single tear glossed over power to generate a false sense of personal blame. For all its implied sincerity, many environmentalists would come to see the tear as phony and politically problematic, the liquid conclusion to a sham campaign orchestrated by corporate America.

Before KAB appropriated Indianness by making Iron Eyes Cody into a popular environmental symbol, the group had promoted a similar message of individual responsibility through its previous antilitter campaigns. Founded in 1951 by the American Can Company and the Owens-Illinois Glass Company, a corporate roster that later included the likes of Coca-Cola and the Dixie Cup Company, KAB gained the support of the Advertising Council, the nation’s preeminent public service advertising organization. Best known for creating Smokey Bear and the slogan “Only You Can Prevent Forest Fires” for the US Forest Service, the Ad Council applied the same focus on individual responsibility to its KAB advertising.

The Ad Council’s campaigns for KAB framed litter as a visual crime against landscape beauty and an affront to citizenship values. David F. Beard, a KAB leader and the director of advertising for Reynolds Metals Company, described the litter problem in feverish tones and sought to infuse the issue with a sense of crisis. “During this summer and fall, all media will participate in an accelerated campaign to help to curb the massive defacement of the nation by thoughtless and careless people,” he wrote in 1961. “The bad habits of littering can be changed only by making all citizens aware of their responsibilities to keep our public places as clean as they do their own homes.” The KAB fact sheet distributed to media outlets heightened this rhetoric of urgency by describing litter as an infringement upon the rights of American citizens who “derive much pleasure and recreation from their beautiful outdoors. . . . Yet their enjoyment of the natural and man-made attractions of our grand landscape is everywhere marred by the litter which careless people leave in their wake.” “The mountain of refuse keeps growing,” draining public coffers for continual cleanup and even posing “a menace to life and health,” the Ad Council concluded.

And why had this litter crisis emerged? The Ad Council acknowledged that “more and more products” were now “wrapped and packaged in containers of paper, metal and other materials”—the very same disposable containers that were manufactured, marketed, and used by the very same companies that had founded and directed KAB. Yet rather than critique the proliferation of disposables, rather than question the corporate decisions that led to the widespread use of these materials, KAB and the Ad Council singled out “individual thoughtlessness” as “the outstanding factor in the litter nuisance.”

Each year Beard’s rhetoric became increasingly alarmist as he began to describe the antilitter effort as the moral equivalent of war. “THE LITTERBUGS ARE ON THE LOOSE,” he warned newspapers around the nation, “and we’re counting on you to take up arms against them. . . . Your newspaper is a big gun in the battle against thoughtless littering.” Each year the campaign adopted new visuals to illustrate the tag line: “Bit by bit . . . every litter bit hurts.” “This year we are taking a realistic approach to the litter problem, using before-and-after photographs to illustrate our campaign theme,” Beard reported in 1963. “We think you’ll agree that these ads pack a real wallop.” These images showed a white family or a group of white teenagers enjoying themselves in one photograph but leaving behind unsightly debris in the next. The pictures focused exclusively on places of leisure—beaches, parks, and lakes—to depict these recreational environments as spaces treasured by white middle-class Americans, the archetypal members of the national community. The fight against litter thus appeared as a patriotic effort to protect the beauty of public spaces and to reaffirm the rights and responsibilities of citizenship, especially among the social group considered to exemplify the American way of life.

In 1964, though, Beard announced a shift in strategy. Rather than appealing to citizenship values in general, KAB would target parents in particular by deploying images of children to appeal to their emotions. “This year we are . . . reminding the adult that whenever he strews litter he is remiss in setting a good example for the kids—an appeal which should hit . . . with more emotional force than appealing primarily to his citizenship,” he wrote. The campaign against litter thus packaged itself as a form of emotional citizenship. Situating private feelings within public spaces, KAB urged fathers and mothers to see littering as a sign of poor parenting: “The good citizenship habits you want your children to have go overboard when they see you toss litter away.”

These new advertisements featured Susan Spotless, a young white girl who always wore a white dress—completely spotless, of course—together with white shoes, white socks, and a white headband. In the ads, Susan pointed her accusatory finger at pieces of trash heedlessly dropped by her parents (fig. 5.2). The goal of this campaign, Beard explained, was “to dramatize the message that ‘Keeping America Beautiful is a family affair’”—a concept that would later be applied not just to litter, but to the entire environmental crisis. Susan Spotless introduced a moral gaze into the discourse on litter, a gaze that used the wagging finger of a child to condemn individual adults for being bad parents, irresponsible citizens, and unpatriotic Americans. She played the part of a child who not only had a vested interest in the future but also appealed to private feelings to instruct her parents how to be better citizens. Launched in 1964, the same year that the Lyndon Johnson campaign broadcast the “Daisy Girl” ad, the Susan Spotless campaign also represented a young white girl as an emblem of futurity to promote citizenship ideals.

Throughout the 1960s and beyond, the Ad Council and KAB continued to present children as emotional symbols of the antilitter agenda. An ad from the late 1960s depicted a chalkboard with children’s antilitter sentiments scrawled across it: “Litter is not pretty. Litter is not healthy. Litter is not clean. Litter is not American.” What all these campaigns assumed was a sense of shared American values and a faith that the United States was fundamentally a good society. The ads did not attempt to mobilize resistant images or question dominant narratives of nationalism. KAB did not in any way attempt to appeal to the social movements and gathering spirit of protest that marked the 1960s.

With this background history in mind, the Crying Indian campaign appears far stranger, a surprising turn for the antilitter movement. KAB suddenly moved from its rather bland admonishments about litter to encompass a broader view of pollution and the environmental crisis. Within a few years it had shifted from Susan Spotless to the Crying Indian. Rather than signaling its commitment to environmentalism, though, this new representational strategy indicated KAB’s fear of the environmental movement.

FIGURE 5.2. “Daddy, you forgot . . . every litter bit hurts!” Advertising Council / Keep America Beautiful advertisement, 1964. Courtesy of Ad Council Archives, University of Illinois, record series 13/2/207.

The soft drink and packaging industries—composed of the same companies that led KAB—viewed the rise of environmentalism with considerable trepidation. Three weeks before the first Earth Day, the National Soft Drink Association (NSDA) distributed a detailed memo to its members, warning that “any bottling company” could be targeted by demonstrators hoping to create an “attention-getting scene.” The memo explained that in March, as part of a “‘dress rehearsal’” for Earth Day, University of Michigan students had protested at a soft drink plant by dumping a huge pile of nonreturnable bottles and cans on company grounds. Similar stunts, the memo cautioned, might be replicated across the nation on Earth Day.

And, indeed, many environmental demonstrations staged during the week surrounding Earth Day focused on the issue of throwaway containers. All these protests held industry—not consumers—responsible for the proliferation of disposable items that wasted natural resources and created a solid waste crisis. In Atlanta, for example, the week culminated with an “Ecology Trek”—featuring a pickup truck full of bottles and cans—to the Coca-Cola company headquarters. FBI surveillance agents, posted at fifty locations around the United States to monitor the potential presence of radicals at Earth Day events, noted that in most cases the bottling plants were ready for the demonstrators. Indeed, the plant managers heeded the memo’s advice: they not only had speeches prepared and “trash receptacles set up” for the bottles and cans hauled by participants, but also offered free soft drinks to the demonstrators. At these protests, environmental activists raised serious questions about consumer culture and the ecological effects of disposable packaging. In response, industry leaders in Atlanta and elsewhere announced, in effect: “Let them drink Coke.”

The NSDA memo combined snideness with grudging respect to emphasize the significance of environmentalism and to warn about its potential impact on their industry: If legions of consumers imbibed the environmentalist message, would their sales and profits diminish? “Those who are protesting, although many may be only semi-informed, have a legitimate concern for the environment they will inherit,” the memo commented. “From a business point of view, the protestors . . . represent the growing numbers of today’s and tomorrow’s soft drink consumers. An industry whose product sales are based on enjoyment of life must be concerned about ecological problems.” Placed on the defensive by Earth Day, the industry recognized that it needed to formulate a more proactive public relations effort.

KAB and the Ad Council would devise the symbolic solution that soft drink and packaging industries craved: the image of the Crying Indian. The conceptual brilliance of the ad stemmed from its ability to incorporate elements of the countercultural and environmentalist critique of progress into its overall vision in order to offer the public a resistant narrative that simultaneously deflected attention from industry practices. When Iron Eyes Cody paddled his birch bark canoe out of the recesses of the imagined past, when his tear registered shock at the polluted present, he tapped into a broader current of protest and, as the ad’s designers knew quite well, entered a cultural milieu already populated by other Ecological Indians.

In 1967 Life magazine ran a cover story titled “Rediscovery of the Red-man,” which emphasized how certain notions of Indianness were becoming central to countercultural identity. Native Americans, the article claimed, were currently “being discovered again—by the hippies. . . . Viewing the dispossessed Indian as America’s original dropout, and convinced that he has deeper spiritual values than the rest of society, hippies have taken to wearing his costume and horning in on his customs.” Even as the article revealed how the counterculture trivialized native culture by extracting symbols of imagined Indianness, it also indicated how the image of the Indian could be deployed as part of an oppositional identity to question dominant values.

While Life stressed the material and pharmaceutical accoutrements the counterculture ascribed to Indianness—from beads and headbands to marijuana and LSD—other media sources noted how many countercultural rebels found ecological meaning in native practices. In 1969, as part of a special issue devoted to the environmental crisis, Look magazine profiled the poet Gary Snyder, whose work enjoyed a large following among the counterculture. Photographed in the nude as he held his smiling young child above his head and sat along a riverbank, Snyder looked like the archetypal natural man, someone who had found freedom in nature, far away from the constraints and corruptions of modern culture. In a brief statement to the magazine he evoked frontier mythology to contrast the failures of the cowboy with the virtues of the Indian. “We’ve got to leave the cowboys behind,” Snyder said. “We’ve got to become natives of this land, join the Indians and recapture America.”

Although the image of the Ecological Indian grew out of longstanding traditions in American culture, it circulated with particular intensity during the late 1960s and early 1970s. A 1969 poster distributed by activists in Berkeley, California, who wanted to protect “People’s Park” as a communal garden, features a picture of Geronimo, the legendary Apache resistance fighter, armed with a rifle. The accompanying text contrasts the Indians’ reverence for the land with the greed of white men who turned the space into a parking lot. Likewise, a few weeks before Earth Day, the New York Times Magazine reported on Ecology Action, a Berkeley-based group. The author was particularly struck by one image that appeared in the group’s office. “After getting past the sign at the door, the visitor is confronted with a large poster of a noble, if somewhat apprehensive, Indian. The first Americans have become the culture heroes of the ecology movement.” Native Americans had become symbolically important to the movement, because, one of Ecology Action’s leaders explained, “‘the Indians lived in harmony with this country and they had a reverence for the things they depended on.’”

Hollywood soon followed suit. The 1970 revisionist Western Little Big Man, one of the most popular films of the era, portrayed Great Plains Indians living in harmony with their environment, respecting the majestic herds of bison that filled the landscape. While Indians killed the animals only for subsistence, whites indiscriminately slaughtered the creatures for profit, leaving their carcasses behind to amass, in one memorable scene, enormous columns of skins for the market. One film critic noted that “the ominous theme is the invincible brutality of the white man, the end of ‘natural’ life in America.”

In creating the image of the Crying Indian, KAB practiced a sly form of propaganda. Since the corporations behind the campaign never publicized their involvement, audiences assumed that KAB was a disinterested party. KAB documents, though, reveal the level of duplicity in the campaign. Disingenuous in joining the ecology bandwagon, KAB excelled in the art of deception. It promoted an ideology without seeming ideological; it sought to counter the claims of a political movement without itself seeming political. The Crying Indian, with its creative appropriation of countercultural resistance, provided the guilt-inducing tear KAB needed to propagandize without seeming propagandistic.

Soon after the first Earth Day, Marsteller agreed to serve as the volunteer ad agency for a campaign whose explicit purpose was to broaden the KAB message beyond litter to encompass pollution and the environmental crisis. Acutely aware of the stakes of the ideological struggle, Marsteller’s vice president explained to the Ad Council how he hoped the campaign would battle the ideas of environmentalists—ideas, he feared, that were becoming too widely accepted by the American public. “The problem . . . was the attitude and the thinking of individual Americans,” he claimed. “They considered everyone else but themselves as polluters. Also, they never correlated pollution with litter. . . . The ‘mind-set’ of the public had to be overcome. The objective of the advertising, therefore, would be to show that polluters are people—no matter where they are, in industry or on a picnic.” While this comment may have exaggerated the extent to which the American public held industry and industry alone responsible for environmental problems (witness the popularity of the Pogo quotation), it revealed the anxiety felt by corporate leaders who saw the environmentalist insurgency as a possible threat to their control over the means of production.

As outlined by the Marsteller vice president, the new KAB advertising campaign would seek to accomplish the following ideological objectives: It would conflate litter with pollution, making the problems seem indistinguishable from one another; it would interiorize the sense of blame and responsibility, making viewers feel guilty for their own individual actions; it would generalize and universalize with abandon, making all people appear equally complicit in causing pollution and the environmental crisis. While the campaign would still sometimes rely on images of young white children, images that conveyed futurity to condemn the current crisis, the Crying Indian offered instead an image of the past returning to haunt the present.

Before becoming the Crying Indian, Iron Eyes Cody had performed in numerous Hollywood films, all in roles that embodied the stereotypical, albeit contradictory, characteristics attributed to cinematic Indians. Depending on the part, he could be solemn and stoic or crazed and bloodthirsty; most of all, though, in all these films he appeared locked in the past, a visual relic of the time before Indians, according to frontier myth, had vanished from the continent.

The Crying Indian ad took the dominant mythology as prologue; it assumed that audiences would know the plotlines of progress and disappearance and would imagine its prehistoric protagonist suddenly entering the contemporary moment of 1971. In the spot, the time-traveling Indian paddles his canoe out of the pristine past. His long black braids and feather, his buckskin jacket and beaded moccasins—all signal his pastness, his inability to engage with modernity. He is an anachronism who does not belong in the picture.

The spectral Indian becomes an emblem of protest, a phantomlike figure whose untainted ways allow him to embody native ecological wisdom and to critique the destructive forces of progress. He confronts viewers with his mournful stare, challenging them to atone for their environmental sins. Although he has glimpsed various signs of pollution, it is the final careless act—the one passenger who flings trash at his feet—that leads him to cry. At the moment the tear appears, the narrator, in a baritone voice, intones: “People start pollution. People can stop it.” The Crying Indian does not speak. The voice-over sternly confirms his tearful judgment and articulates what the silent Indian cannot say: Industry and public policy are not to blame, because individual people cause pollution. The resistant narrative becomes incorporated into KAB’s propaganda effort. His tear tries to alter the public’s “mind-set,” to deflect attention away from KAB’s corporate sponsors by making individual Americans feel culpable for the environmental crisis.

Iron Eyes Cody became a spectral Indian at the same moment that actual Indians occupied Alcatraz Island—located, ironically enough, in San Francisco Bay, the same body of water in which the Crying Indian was paddling his canoe. As the ad was being filmed, native activists on nearby Alcatraz were presenting themselves not as past-tense Indians but as coeval citizens laying claim to the abandoned island. For almost two years—from late 1969 through mid-1971, a period that overlapped with both the filming and release of the Crying Indian commercial—they demanded that the US government cede control of the island. The Alcatraz activists, composed mostly of urban Indian college students, called themselves the “Indians of All Tribes” to express a vision of pan-Indian unity—an idea also expressed by the American Indian Movement (AIM) and the struggle for Red Power. On Alcatraz they hoped to create several centers, including an ecological center that would promote “an Indian view of nature—that man should live with the land and not simply on it.”

While the Crying Indian was a ghost in the media machine, the Alcatraz activists sought to challenge the legacies of colonialism and contest contemporary injustices—to address, in other words, the realities of native lives erased by the anachronistic Indians who typically populated Hollywood film. “The Alcatraz news stories are somewhat shocking to non-Indians,” the Indian author and activist Vine Deloria Jr. explained a few months after the occupation began. “It is difficult for most Americans to comprehend that there still exists a living community of nearly one million Indians in this country. For many people, Indians have become a species of movie actor periodically dispatched to the Happy Hunting Grounds by John Wayne on the ‘Late, Late Show.’” The Indians on Alcatraz, Deloria believed, could advance native issues and also potentially teach the United States how to establish a more sustainable relationship with the land. “Non-Indian society has created a monstrosity of a culture where . . . the sun can never break through the smog,” he wrote. “It just seems to a lot of Indians that this continent was a lot better off when we were running it.” While the Crying Indian and Deloria both upheld the notion of native ecological wisdom, they did so in diametrically opposed ways. Iron Eyes Cody’s tear, ineffectual and irrelevant to contemporary Indian lives, evoked only the idea of Indianness, a static symbol for polluting moderns to emulate. In contrast, the burgeoning Red Power movement demonstrated that native peoples would not be consigned to the past, and would not act merely as screens on which whites could project their guilt and desire.

A few weeks after the Crying Indian debuted on TV, the Indians of All Tribes were removed from Alcatraz. Iron Eyes Cody, meanwhile, repeatedly staked out a political position quite different from that of AIM, whose activists protested and picketed one of his films for its stereotypical and demeaning depictions of native characters. Still playing Indian in real life, Cody chastised the group for its radicalism. “The American Indian Movement (AIM) has some good people in it, and I know them,” he later wrote in his autobiography. “But, while the disruptions it has instigated helped put the Indians on the world map, its values and direction must change. AIM must work at encouraging Indians to work within the system if we’re to really improve our lives. If that sounds ‘Uncle Tom,’ so be it. I’m a realist, damn it! The buffalo are never coming back.” Iron Eyes Cody, the prehistoric ghost, the past-tense ecological Indian, disingenuously condemned AIM for failing to engage with modernity and longing for a pristine past when buffalo roamed the continent.

Even as AIM sought to organize and empower Indian peoples to improve present conditions, the Crying Indian appears completely powerless, unable to challenge white domination. In the commercial, all he can do is lament the land his people lost.

To read more about Seeing Green, click here.

42. 2015 PROSE Awards


Now in their 39th year, the PROSE Awards honor “the very best in professional and scholarly publishing by bringing attention to distinguished books, journals, and electronic content in over 40 categories,” as determined by a jury of peer publishers, librarians, and medical professionals.

As is usually the case with this kind of acknowledgement, we are honored and delighted to share several University of Chicago Press books that were singled out in their respective categories as winners or runners-up for the 2015 PROSE Awards.

***


Kurt Schwitters: Space, Image, Exile
By Megan R. Luke
Honorable Mention, Art History

***


House of Debt: How They (and You) Caused the Great Recession, and How We Can Prevent It from Happening Again
By Atif Mian and Amir Sufi
Honorable Mention, Economics

***


American School Reform: What Works, What Fails, and Why
By Joseph P. McDonald
Winner, Education Practice

***


The Public School Advantage: Why Public Schools Outperform Private Schools
By Christopher A. Lubienski and Sarah Theule Lubienski
Winner, Education Theory

***


Earth’s Deep History: How It Was Discovered and Why It Matters
By Martin J. S. Rudwick
Honorable Mention, History of STM

***


The Selected Poetry of Pier Paolo Pasolini: A Bilingual Edition
By Pier Paolo Pasolini
Edited and translated by Stephen Sartarelli
Honorable Mention, Literature

***


How Should We Live?: A Practical Approach to Everyday Morality
By John Kekes
Honorable Mention, Philosophy

***

Congrats to all of the winners, honorable mentions, and nominees!

To read more about the PROSE Awards, click here.

43. Excerpt: Renegade Dreams


An excerpt from Laurence Ralph’s Renegade Dreams: Living through Injury in Gangland Chicago

***

“Nostalgia, or the Stories a Gang Tells about Itself”

At the West Side Juvenile Detention Center, inmates hardly ever look you in the eyes. They almost never notice your face. Walk into a cell block at recreation time, for example, when young gang members are playing spades or sitting in the TV room watching a movie, and their attention quickly shifts to your shoes. They watch you walk to figure out why you came. I imagine what goes through their heads: Navy blue leather boots, reinforced steel toe, at least a size twelve. Must be a guard. That’s an easy one. Then the glass door swings open again. Expensive brown wingtips, creased khakis cover the tongue. A Northwestern law student come to talk about legal rights. Yep.

Benjamin Gregory wears old shoes, the kind a young affiliate wouldn’t be caught dead in. Still, the cheap patent leather shines, and, after sitting in the Detention Center’s waiting room for nearly an hour and a half, the squeak of his wingtips is a relief. It’s a muggy day, late in the spring of 2008. “I’ve been coming here for five years now,” he says. Mr. Gregory is a Bible-study instructor. “It’s a shame, but you can just tell which ones have their mothers and fathers, or someone who cares about them at home. Most of these kids don’t. Their pants gotta sag below their waist, even in prison garbs. All they talk about is selling drugs and gym shoes.”

Though I generally disagree with Mr. Gregory’s assessment of today’s young people—“hip hoppers,” as he calls them, not knowing I’m young enough to be counted in that group—his observations are, if not quite accurate, at least astute. The relationship between jail clothes and gym shoes is direct, with gang renegades—young gang affiliates that seasoned members claim don’t have the wherewithal to be in the gang—at the center. Until recently, Mr. Gregory couldn’t tell you what a gang renegade was; I educated him on the topic when he overheard inmates tossing the term around for sport. According to gang leaders, I tell him, renegades are to blame for gang underperformance. They are the chief instigators of “senseless” violence, say the leaders, and thus deserve any form of harm that befalls them, be it death, debility, or incarceration.

Ironically, Mr. Gregory’s generalized depiction of drug- and shoe-obsessed young inmates (shared by many prison guards, teachers, and even some scholars) can be compared to the way that gang members view renegades. Just as community leaders criticize the actions and affiliations of longtime Eastwoodians, older generations of gang members level critiques at young renegades. In what follows, I complicate the assumptions many have made about renegades by examining subjective versions of the Divine Knights’ contested—and contestable—history. Investigating the gang’s fraught past will help make clear the problems facing them at present. In the midst of unprecedented rates of incarceration, the anxieties that gang members harbor about the future of their organization are projected on to the youngest generation of gang members—and their gym shoes.

More precisely, in Eastwood gym shoes are emblems that embody historical consciousness. For gang members currently forty to sixty years old, the emergence of gym shoes signaled the end of an era in which affiliates pursued grassroots initiatives and involved themselves in local protest movements. Meanwhile, for the cohort of gang members who came of age in the “pre-renegade” era—those twenty-five to forty years old—gym shoes recall a time of rampant heroin trafficking, when battalions of young soldiers secured territories within a centralized leadership structure. As the younger of the two generations remembers it, this was the moment when loyalty began to translate into exorbitant profits. That these two elder generations of the Divine Knights hanker for a centralized and ordered system of governance places an enormous amount of pressure on the current generation, those gang members who are fifteen to twenty-five years old. We’ll see that just like the game of shoe charades that inmates play in jail, a renegade’s footwear can reveal his place in the world.

In the Divine Knights’ organization, wearing the latest pair of sneakers is considered the first status marker in the life and career of a gang member. For new members, having a fashionable pair of shoes signals one’s position as a legitimate affiliate. Later, in your teens and twenties, success is measured by whether you can afford a nice car or your own apartment. Because most of the teenagers referred to as “renegades” have yet to progress to that stage, however, a fashionable pair of gym shoes is the pinnacle of possession.

Even though gang leaders claim that nowadays fashion trends of young gang members are too beholden to mainstream dictates and don’t represent Divine Knights culture, gym shoes remain the badge of prestige most coveted by renegades. Exclusivity—whether or not the shoes can be easily purchased in ubiquitous commercial outlets like Foot Locker or only in signature boutiques—goes a long way in determining a shoe’s worth, as does pattern complexity: the more colors and textures that are woven onto the canvas of the shoe, the more valued that shoe becomes.

Over a two-year period during which I listened to gang members in informal settings and in facilitated focus groups with Divine Knights affiliates, I was able to sketch an outline of attributes concerning the five most popular gym shoes worn by young gang members in Eastwood. In some cases, the most popular brands and fashion trends evoke a past that has ceased to exist. Behold, the renegades’ “Top 5” (in ascending order of significance):

№ 5

“Tims,” or Timberland boots ($180), are not technically gym shoes. But in Chicago, the term is used as a catchall for various types of men’s footwear. The construction boot of choice to tackle Chicago’s harsh winters, Tims serve a functional purpose in addition to being appreciated aesthetically. The tan “butter-soft” suede atop a thick rubber sole with dark brown leather ankle supports are staples of any shoe collection (and are typically the first pair of boots a renegade purchases). If in addition to the tan suede variety a person has Tims in other colors, he or she is thought to be an adept hustler in any climate.

№ 4

“Recs,” or Creative Recreations ($150), are a relatively new brand of sneaker popular with young renegades because they are available in an array of bright colors. Multiple textures—metallics, suedes, rubbers, and plastics—are combined on the synthetic leather canvas of each shoe. Recs also have a distinctive Velcro strap that runs across the toe. Considered the trendy of-the-moment shoe, Recs are held in high esteem by young renegades because they can only be found in a select few of Chicago’s signature boutiques.

№ 3

As the Timberland boot is to winter, the Air Force One, commonly referred to as “Air Forces” or “Ones” ($90), is to the other three seasons. This shoe is a staple of the renegade’s collection. If a young gang member has only one pair of gym shoes, they will likely be Ones. Although they come in a variety of color combinations, most affiliates begin with either white or black, with the expectation that their collection will grow in colorfulness. Moderately priced and available in a vast number of different styles, these might be the most popular gym shoes in the Divine Knights society.

№ 2

Signature shoes ($165). Young renegades are also likely to purchase the signature shoe of their favorite basketball player. For some, that’s LeBron James; for others, Kobe Bryant or, perhaps, Chicago-local Derrick Rose. As a gang member, one’s affinity for a particular player can override the aesthetic judgment of his or her friends. Still, purchasing a signature shoe entails several calculations, including when the shoe was released, which company manufactures them, and the popularity of the player in question at the moment. Given the danger that one’s signature shoe may prove undeserving of the time and effort invested in its purchase, no current player’s footwear can surpass the model by which the success of his shoe will no doubt be measured: Michael Jordan’s.

№ 1

“Jordans” ($230) are the signature shoe. A pair of Jordans is valuable to the young renegade for a number of reasons, chief among them that Michael Jordan, considered the greatest basketball player of all time, made his name playing for the Chicago Bulls. Thus, a particular geographic pride is associated with his apparel. Second, the risks involved with purchasing this particular signature shoe are greatly reduced because Jordan’s legacy is cemented in history. Third, since the first pair of shoes one buys is not usually Jordans (because they are so expensive), there is a sense of achievement connected with finally being able to afford a pair.

Pre–renegade era Divine Knights can recall down to the year—sometimes even the day—that they purchased the same model of shoes currently being worn by young renegades. That older gang members hypocritically hassle renegades for the same consumer fetishes they themselves once held dear bolsters the point that gym shoes have accrued additional symbolic value. At once, they point to the past and the future, similar to Eastwood’s greystones. Recall that greystones reference the past, specifically an era of Great Migration during which blacks traveled from the South to the Midwest in search of manufacturing jobs. At the same time, greystones are the primary form of capital for governmental investment. Just as city planners project future tax revenues based on empty and abandoned domiciles, a young renegade speculates on his future by buying a pair of Jordans.

For the Divine Knights, this form of speculation has, historically, required a young affiliate to position himself as a noteworthy member, thereby attracting the attention of a gang leader. Ideally, that leader will take a young Knight under his wing, bestow that affiliate with responsibilities, and reward his hard work with a share of the organization’s profits. In such a climate, adorning oneself with the most fashionable pair of shoes is a precondition for a person to prove himself worthy of the gang’s investment. A symbol of speculative capital, gym shoes—like greystones—are endowed with a double quality: They express highly charged notions of social mobility for one generation; and for another, older generation, they evoke a sense of nostalgia.

To fully understand the way in which the renegade’s gym shoes trigger an idealized notion of the past, it’s productive to dwell for a moment on the idea of nostalgia itself. From the initial use of the term—in 1688, when Johannes Hofer, a Swiss doctor, coined the term in his medical dissertation—nostalgia has been used to connect forms of social injury to the physical reality of the body. Hofer combined two Greek roots to form the term for this newfound malady: nostos (return home) and algia (longing). It describes “a longing for a home that no longer exists or has never existed.” Among the first to become debilitated by and diagnosed with this disease were Swiss soldiers who had been hired to fight in the French Revolution. Upon returning home, these soldiers were struck with “nausea, loss of appetite, pathological changes in the lungs, brain inflammation, cardiac arrests, high fever, and a propensity for suicide.” One of nostalgia’s most persistent symptoms was an ability to see ghosts.

To cure nostalgia, doctors prescribed anything from a trip to the Swiss Alps to having leeches implanted and then pulled from the skin, to sizable doses of opium. Nothing seemed to work. The struggles of ensuing generations only confirmed the difficulty, if not impossibility, of a cure. By the end of the eighteenth century, the meaning of nostalgia had shifted from a curable, individual sickness to what literature scholar Svetlana Boym once called an incurable “historical emotion.” The burdens of nostalgia—the pressing weight of its historical emotion—are still very much with us. Interrupting the present with incessant flashes of the past, nostalgia retroactively reformulates cause and effect, and thus our linear notions of history.

“I love this walking stick,” Mr. Otis says to me. “And it’s not just ’cause I’m an old man, either.” He taps the stick on his stoop, adding, “I’ve had it since I was your age.”

Of all the Divine Knights symbols, the cane is Mr. Otis’s favorite. This is ironic, given that young gang members increasingly need canes as a consequence of the very violence Mr. Otis laments. Still, this seasoned gang veteran doesn’t associate his cane with injury but with pride and a masterful breadth of knowledge about his organization. When he was young, Mr. Otis tells me, canes, a symbol of gang unity, were hand-drawn on the custom-made shirts the Knights wore. Nowadays, Mr. Otis’s generation often contrasts the stability of the cane and its understated sophistication against the extravagance of sneakers. Why, I ask on a dusky October evening, is the cane his most cherished emblem? Mr. Otis clenches his hand into a fist, then releases one digit at a time, enumerating each of the gang’s symbols.

“Well, the top hat represents our ability to make things happen, like magicians do,” he says, wiggling his pinkie. Next comes the ring finger. “The dice represents our hustle. You know what they say: Every day as a Divine Knight is a gamble. The playboy rabbit,” he continues, “represents that we’re swift in thought, silent in movement, and sm-o-o-th in deliverance. Of course, the champagne glass represents celebration.” Mr. Otis pauses briefly. “You can probably tell that all of these symbols have the young boys thinking that gang life is about trying to be pimps and players. But the cane”—signified by the pointer finger—“the cane represents consciousness. The knowledge that you must rely on the wisdom from your elders. The cane represents that we have to support one another—and support the community—to survive.”

We can’t see much on nights like this, but that doesn’t stop us from sitting on the stoop and watching the corner. The lights on Mr. Otis’s street either don’t work or are never on. In fact, were it not for the lamppost at the street’s end that serves as a mount for a police camera, the streetlights wouldn’t serve any purpose at all. Residents dismissively refer to the camera as the “blue light.” The device, which rests in a white box topped with a neon blue half-sphere, lights up every few seconds. Stationed to surveil the neighborhood, the blue light fulfills another unintended purpose: in the absence of working streetlights, the intermittent flash nearly illuminates the entire street. It is a vague luminescence, but just enough to make clear the molded boards of the vacant houses across the street. You can also distinguish the occasional trash bag blowing in the wind, like urban tumbleweed.

And you can spot the T-shirts—all of the young Eastwoodians in white tees—but that’s about all the blue light at the end of the street can brighten for Mr. Otis and me. From where we sit, you can’t identify the owners of those shirts; their faces aren’t perceptible, not even their limbs—just clusters of white tees floating in the distance, ghost-like. Mr. Otis, a veteran both of Eastwood stoops and Eastwood’s oldest gang, sees the ghosts as fleeting images of the “good ol’ gang,” as he calls it—a gang about to sink into oblivion.

Mr. Otis watches the street intently, as if he’s being paid for the task. And in a sense, he is: central to Mr. Otis’s work at the House of Worship’s homeless shelter is the supervision of his neighborhood. His street credentials, however, are far more valuable than anything he can see from his stoop. Mr. Otis was one of the first members to join the nascent gang in the 1950s. This was during the second Great Migration, when African Americans moved from the South to Chicago, settling in European immigrant neighborhoods. Back then, black youths traveled in packs for camaraderie, and to more safely navigate streets whose residents resented their presence. Because they were known to fight their white peers over access to recreational spaces, the image of black gangs as groups of delinquents emerged.

Mr. Otis became a leader of the Divine Knights in the 1960s, around the age of twenty-six. For the next forty years, he was—and remains—prominent both in the gang and in the community. Nowadays, he speaks about his youth with a mix of fondness and disdain. The two great narratives of his life, community decline and gang devolution, are also interwoven. “ Things were different when we were on the block,” he says. “We did things for the community. We picked up trash, even had a motto: ‘Where there is glass, there will be grass.’ And white folks couldn’t believe it. The media, they were shocked. Channel Five and Seven came around here, put us on the TV screen for picking up bottles.”

In these lively recollections, Mr. Otis connects the Divine Knights’ community-service initiatives to the political struggles of the civil rights movement. As a youngster, Mr. Otis was part of a gang whose stated goal was to end criminal activity. Around this time, in the mid-1960s, a radical new thesis articulated by criminologists and the prison-reform movement gained momentum. These researchers argued that people turned to crime because social institutions had largely failed them. Major street gangs became recipients of private grants and public funds (most notably from President Johnson’s War on Poverty) earmarked for community organization, the development of social welfare programs, and profit-making commercial enterprises. The Divine Knights of the 1960s opened community centers, reform schools, and a number of small businesses and management programs. Such were the possibilities when Reverend Dr. Martin Luther King Jr. relocated his family to a home near Eastwood.

In local newspaper articles, King explained that his decision to live on the West Side was political as well as purposeful. “I don’t want to be a missionary in Chicago, but an actual resident in a slum section so that we can deal firsthand with the problems,” King said. “We want to be in a section that typifies all the problems that we’re seeking to solve in Chicago.”

King’s organization, the Southern Christian Leadership Conference (SCLC), geared up for a broad attack on racism in the North. Their first northern push focused on housing discrimination; they referred to it as “the open-housing campaign” because the SCLC wanted to integrate Chicago’s predominately white neighborhoods. As the SCLC gathered community support for their cause in May 1966, they developed relationships with Chicago’s street gangs. On Memorial Day, a riot broke out after a white man killed a black man with a baseball bat. Chaos ensued, resulting in the destruction of many local businesses. Gang members were rumored to be among the looters. Some civil rights leaders, in turn, feared that a spate of recent riots might jeopardize their campaign of nonviolence. When, during a rally at Soldier Field, a gang affiliate overheard a member of the SCLC state his reluctance to involve “gang fighters,” Chicago gang members (including many Knights) took this as a sign of disrespect and threatened to abandon King. A Chicago gang member was quoted as saying:

I brought it back to [a gang leader named] Pep and said if the dude feel this way and he’s supposed to be King’s number one man, then we don’t know how King feels and I believe we’re frontin’ ourselves off. Pep say there wasn’t no reason for us to stay there so we rapped with the other groups and when we gave our signal, all the [gang members] stood up and just split. When we left, the place was half empty and that left the King naked.

Days after the Soldier Field incident, in an effort to mend fences, King set up a meeting in his apartment and reassured gang members that he “needed the troops.” The Divine Knights were among the Chicago gangs to subsequently reaffirm their allegiance to King. After meeting with various gangs, top SCLC representatives were confident that gangs could not only be persuaded to refrain from rioting, but might also be convinced to help calm trouble that might arise on their respective turfs. Moreover, “the sheer numbers of youths loyal to these organizations made them useful to the Southern Christian Leadership Conference’s objective of amassing an army of nonviolent protesters—even if including them came with the additional challenge of keeping them nonviolent.”

In June 1966, the Divine Knights were persuaded to participate in the two marches that Dr. King led into all-white neighborhoods during the Chicago Freedom Movement’s open-housing campaign. Inspired by the movement’s demand that the Chicago City Council increase garbage collection, street cleaning, and building-inspection services in urban areas, the Knights organized their own platform for political action. They scheduled a press conference with local media outlets to unveil their agenda on April 4, 1968. But just before the reporters arrived, King was assassinated. Less than twenty-four hours later, Eastwood erupted in riots. The fires and looting following King’s murder destroyed many of the establishments along Murphy Road, Eastwood’s major commercial district at the time.

Many store owners left the neighborhood when insurance companies canceled their policies or prohibitively increased premiums, making it difficult to rebuild businesses in their previous location. This cycle of disinvestment, which peaked after King’s murder but had been steadily increasing since 1950, affected all of Eastwood’s retailers. By 1970, 75 percent of the businesses that had buoyed the community just two decades earlier were shuttered. There has not been a significant migration of jobs, or people, into Eastwood since World War II.

In the decades after the massive fires and looting, Mr. Otis and other gang elders maintain that the Divine Knights saw their power decline because they could do little to stop the other factions of the Knights from rioting. Neighborhood residents not affiliated with the gang were likewise dismayed. Here was evidence, with King’s murder, that the injustices allegedly being fought by the Divine Knights were, in fact, intractable. From Mr. Otis’s perspective, the disillusionment that accompanied King’s death, and the riots that followed—not to mention other assassinations, such as that of Black Panther leader Fred Hampton—all but ensured a downward spiral. The noble promise of the civil rights era was shattered, its decline as awful as its rise was glorious.

For Mr. Otis, the modern-day Divine Knights are as much about their forgotten history of activism as anything else. So on nights such as these—sitting on his stoop, watching the latest generation of gang members—he feels it his duty to share a finely honed civil rights legacy narrative with a novice researcher. “Take notes on that,” he says. “Write that down. We, the Divine Knights, got government money to build a community center for the kids. We were just trying to show ’em all: gangs don’t have to be bad, you know. Now these guys don’t have no history. They’re ‘Anonymous,’ ” Mr. Otis says sarcastically, referring to the name of one of many factions in this new renegade landscape, the Anonymous Knights.

Out in front of Mr. Otis’s stoop, ten or so gang members face each other like an offense about to break huddle. And then they do just that. The quarterback—Kemo Nostrand, the gang leader—approaches, retrieves a cell phone from his car, and then rejoins the loiterers. I ask Mr. Otis about Kemo and his crew: “Are they as disreputable as the younger gang members?”

“Look at ’em,” Mr. Otis says. “They’re all outside, ain’t they? Drinking, smoking, wasting their lives away. They’re all outside.”

Nostalgia for the politically oriented gang is a desire for a different present as much as it is a yearning for the past. In Mr. Otis’s lamentations about the contemporary state of the gang, structural changes in the American social order are reduced to poor decision making. Mr. Otis and gang members of his generation fail to acknowledge that the gang’s latter-day embrace of the drug economy was not a simple matter of choice. The riots also marked the end of financial assistance for street organizations wanting to engage in community programming. When drug dealing emerged as a viable economic alternative for urban youth in the late 1970s, politicians had more than enough ammunition to argue that the Knights would always be a criminal, rather than a political, organization. The fact that both the local and federal government feared gangs like the Divine Knights for their revolutionary potential is airbrushed out of the romantic histories that Mr. Otis tells, where he invokes civilized marches in criticism of the gang’s present-day criminal involvement. In his version, for example, there is no mention of the gang members who, even during the civil rights heyday, were not at all civic-minded.

Whether or not this glorious perception of a political gang persists (or if it ever existed in the way Mr. Otis imagines), it is deployed nevertheless. Like the shiny new surveillance technology responsible for transforming a person’s visage into a ghostly specter at night, the rosy civil rights lens through which Mr. Otis views the gang helps fashion the image that haunts him. Nostalgia, this historical emotion, reorders his memory.

The interview unfolds in a West Side Chicago barbershop, long since closed. Red Walker, the short, stocky, tattoo-covered leader of the Roving Knights—a splinter group of the Divine Knights—reminisces about what it has been like growing up in a gang. Walker has been a member of the Roving Knights for twenty years (since he was nine). Now, as a captain of the gang set, Red feels that the organization’s biggest problem is a lack of leadership. Comparing the gang of old to the one he now commands, he says, wistfully, “When I was growing up, we had chiefs. We had honor. There were rules that Knights had to follow, a code that gang members were expected to respect.”

A few of the Roving Knights’ strictures, according to Red: If members of the gang were shooting dice and somebody’s mom walked down the street, the Knights would move out of respect. When young kids were coming home from school, the Knights would temporarily suspend the sale of drugs. “We would take a break for a couple of hours,” Red says. “Everybody understood that. And plus, when I was coming up in the gang, you had to go to school. You could face sanctions if you didn’t. And nobody was exempt. Not even me.”

Red’s mother, he says, was a “hype”—the favored West Side term for drug addict. His father wasn’t present, and he didn’t have siblings. Red did, however, have a “soldier” assigned to him, whose responsibilities included taking him to school in the morning and greeting him when he got out. “Made sure I did my homework and everything,” Red says. “These kids don’t have that. There’s no structure now. They govern themselves, so we call them renegades.”

It’s likely I will meet a lot of renegades on the streets of Eastwood, Red warns. Most are proudly independent, boastful of their self-centered goals. Red says, “They’ll even tell you, ‘Yeah, I’m just out for self. I’m trying to get my paper. Fuck the gang, the gang is dead.’ They’ll tell you, straight up. But, you know what? They’re the ones that’s killing it, them renegades. I even had one in my crew.” Plopping down in the barber’s chair beside me, Red indicates that the story he’s about to tell is somewhat confidential, but he’s going to tell me anyway because he likes me—I’m a “studious motherfucker,” he jokes.

“You know how niggers be in here selling everything, right?” Red says. (He is referring to the daily transactions involving bootleg cable, DVDs, CDs, and candy.) “Well, back in the day, a long, long, long time ago, niggers used to sell something the police didn’t like us selling. We used to sell”—here Red searches for the right euphemism, settling on “muffins.” “Yeah, we had a bakery in this motherfucker. And cops, they hate muffins. So they would come up in here, try to be friendly, they’d snoop around, get they free haircut, and try to catch someone eating muffins or selling muffins, or whatever. But they could never catch nobody with muffin-breath around here. Never.”

One day, though, the police apprehended one of the “little shorties” working for Red, and the young man happened to have a muffin in his pocket. “Now, this wasn’t even an entire muffin. It was like a piece of a muffin—a crumb,” Red says. “Shorty wouldn’t have got in a whole lot of trouble for a crumb, you know? But this nigger sung. The nigger was singing so much, the cops didn’t have to turn on the radio. They let him out on the next block. He told about the whole bakery: the cooks, the clients. He told on everybody. And I had to do a little time behind that. That’s why in my new shop,” Red continues, glaring again at the recorder, “WE. DO. NOT. SELL. MUFFINS. ANY. MORE.”

Red pauses, seemingly satisfied by his disavowal of any current illegal muffin activity, then adds, “But, real talk: That’s how you know a renegade. No loyalty. They’ll sell you down the river for a bag of weed and a pair of Jordans.”

To read more about Renegade Dreams, click here.

44. A Show-Trial: An excerpt from Bengt Jangfeldt’s Mayakovsky

9780226056975

“A Show-Trial”

Excerpted from Mayakovsky: A Biography by Bengt Jangfeldt

***

Mayakovsky returned to Moscow on 17 or 18 September. The following day, Krasnoshchokov was arrested, accused of a number of different offenses. He was alleged to have lent money to his brother Yakov, head of the firm American–Russian Constructor, at too low a rate of interest, and to have arranged drink- and sex-fueled orgies at the Hotel Europe in Petrograd, paying the Gypsy girls who entertained the company with pure gold. He was also accused of having passed on his salary from the Russian–American Industrial Corporation ($200 a month) to his wife (who had returned to the United States), of having bought his mistress flowers and furs out of state funds, of renting a luxury villa, and of keeping no fewer than three horses. Lenin was now so ill that he could not have intervened on Krasnoshchokov’s behalf even if he had wanted to.

His arrest was a sensation of the first order. It was the first time that such a highly placed Communist had been accused of corruption, and the event cast a shadow over the whole party apparatus. Immediately after Krasnoshchokov’s arrest, and in order to prevent undesired interpretations of what had happened, Valerian Kuybyshev, the commissar for Workers’ and Peasants’ Inspection, let it be known that “incontrovertible facts have come to light which show Krasnoshchokov has in a criminal manner exploited the resources of the economics department [of the Industry Bank] for his own use, that he has arranged wild orgies with these funds, and that he has used bank funds to enrich his relatives, etc.” He had, it was claimed, “in a criminal manner betrayed the trust placed in him and must be sentenced to a severe punishment.”

Krasnoshchokov was, in other words, judged in advance. There was no question of any objective legal process; the intention was to set an example: “The Soviet power and the Communist Party will […] root out with an iron hand all sick manifestations of the NEP and remind those who ‘let themselves be tempted’ by the joys of capitalism that they live in a workers’ state run by a Communist party.” Krasnoshchokov’s arrest was deemed so important that Kuybyshev’s statement was printed simultaneously in the party organ Pravda and the government organ Izvestiya. Kuybyshev was a close friend of the prosecutor Nikolay Krylenko, who had led the prosecution of the Socialist Revolutionaries the previous year, and who in time would turn show trials and false charges into an art form.

When Krasnoshchokov was arrested, Lili and Osip were still in Berlin. In the letter that Mayakovsky wrote to them a few days after the arrest, the sensational news is passed over in total silence. He gives them the name of the civil servant in the Berlin legation who can give them permission to import household effects (which they had obviously bought in Berlin) into Russia; he tells them that the squirrel which lives with them is still alive and that Lyova Grinkrug is in the Crimea. The only news item of greater significance is that he has been at Lunacharsky’s to discuss Lef and is going to visit Trotsky on the same mission. But of the event which the whole of Moscow was talking about, and which affected Lili to the utmost degree—not a word.

Krasnoshchokov’s trial took place at the beginning of March 1924. Sitting in the dock, apart from his brother Yakov, were three employees of the Industry Bank. Krasnoshchokov, who was a lawyer, delivered a brilliant speech in his own defense, explaining that, as head of the bank, he had the right to fix lending rates in individual cases and that one must be flexible in order to obtain the desired result. As for the charges of immoral behavior, he maintained that his work necessitated a certain degree of official entertainment and that the “luxury villa” in the suburb of Kuntsevo was an abandoned dacha which in addition was his sole permanent dwelling. (It is one of the ironies of history that the house had been owned before the Revolution by the Shekhtel family and accordingly had often had Mayakovsky as a guest—see the chapter “Volodya.”) Finally, he pointed out that his private life was not within the jurisdiction of the law.

This opinion was not shared by the court, which ruled that Krasnoshchokov had lived an immoral life during a time when a Communist ought to have set a good example and not surrendered to the temptations offered by the New Economic Policy. Krasnoshchokov was also guilty of having used his position to “encourage his relatives’ private business transactions” and having caused the bank to lose 10,000 gold rubles. He was sentenced to six years’ imprisonment and, in addition, three years’ deprivation of citizens’ rights. Moreover, he was excluded from the Communist Party. His brother was given three years’ imprisonment, while the other three coworkers received shorter sentences.

Krasnoshchokov had in fact been a very successful bank director. Between January 1923 and his arrest in September he had managed to increase the Industry Bank’s capital tenfold, partly thanks to a flexible interest policy which led to large American investments in Russia. There is a good deal of evidence that the charges against him were initiated by persons within the Finance Commissariat and the Industry Bank’s competitor, the Soviet National Bank. Shortly before his arrest Krasnoshchokov had suggested that the Industry Bank should take over all the National Bank’s industrial–financial operations. Exactly the opposite happened: after Krasnoshchokov’s verdict was announced, the Industry Bank was subordinated to the Soviet National Bank.

There is little to suggest that the accusations of orgies were true. Krasnoshchokov was not known to be a rake, and his “entertainment expenses” were hardly greater than those of other highly placed functionaries. But he had difficulties defending himself, as he maintained not one mistress but two—although he had a wife and children. The woman who figured in the trial was not, as one might have expected, Lili, but a certain Donna Gruz—Krasnoshchokov’s secretary, who six years later would become his second wife. This fact undoubtedly undermined his credibility as far as his private life was concerned.

When Lili and Elsa showed Nadezhda Lamanova’s dresses in Paris in the winter of 1924, it attracted the attention of both the French and the British press, where this photograph was published with the caption “soviet sack fashion.—Because of the lack of textiles in Soviet Russia, Mme. Lamanoff, a Moscow fashion designer, had this dress made out of sackcloth from freight bales.”

By the time the judgment was announced, Lili had been in Paris for three weeks. She was there for her own amusement and does not seem to have had any particular tasks to fulfill. But she had with her dresses by the Soviet couturier Nadezhda Lamanova which she and Elsa showed off at two soirees organized by a Paris newspaper. She would like to go to Nice, she confided in a letter home to Moscow on 23 February, but her plans were frustrated by the fact that Russian emigrants were holding a congress there. She was thinking of traveling to Spain instead, or somewhere else in France, to “bake in the sun for a week or so.” But she remained in Paris, where she and Elsa went out dancing the whole time. Their “more or less regular cavaliers” were Fernand Léger (whom Mayakovsky had got to know in Paris in 1922) and an acquaintance from London who took them everywhere with him, “from the most chic of places to the worst of dives.” “It has been nothing but partying here,” she wrote. “Elsa has instituted a notebook in which she writes down all our rendezvous ten days in advance!” As clothes are expensive in Paris too, she asks Osip and Mayakovsky to send her a little money in the event of their managing to win “some mad sum of money” at cards.

When she was writing this letter, there were still two weeks to go before Krasnoshchokov’s trial. “How is A[lexander] M[ikhailovich]?” she asked, in the middle of reporting on the fun she was having. But she did not receive a reply, or if she did, it has not been preserved. On 26 March, after a month in Paris, she took the boat to England to visit her mother, who was in poor health, but that same evening she was forced to return to Calais after being stopped at passport control in Dover—despite having a British visa issued in Moscow in June 1923. What she did not know was that after her first visit to England in October 1922 she had been declared persona non grata, something which all British passport control points “for Europe and New York” had been informed of in a secret circular of 13 February 1923.

“You can’t imagine how humiliating it was to be turned back at the British border,” she wrote to Mayakovsky: “I have all sorts of theories about it, which I’ll tell you about when I see you. Strange as it may seem, I think they didn’t let me in because of you.” She guessed right: documents from the Home Office show that it was her relationship with Mayakovsky, who wrote “extremely libellous articles” in Izvestiya, which had proved her undoing. Strangely enough, despite being refused entry to Britain, she was able to travel to London three weeks later. The British passport authorities have no record of her entry to the country. Did she come in by an illegal route?

At the same time that Lili traveled to Paris, Mayakovsky set out on a recital tour in Ukraine. Recitals were an important source of income for him. During his stay in Odessa he mentioned in a newspaper interview that he was planning to set out soon on a trip round the world, as he had been invited to give lectures and read poems in the United States. Two weeks later he was back in Moscow, and in the middle of April he went to Berlin, where Lili joined him about a week later. According to one newspaper, Mayakovsky was in the German capital “on his way to America.”

The round-the-world trip did not come off, as Mayakovsky failed to obtain the necessary visas. It was not possible to request an American visa in Moscow, as the two countries lacked diplomatic ties. Mayakovsky’s plan was therefore to try to get into the United States via a third country. Britain’s first Labour government, under Ramsay MacDonald, had scarcely recognized the Soviet Union (on 1 February 1924) before Mayakovsky requested a British visa, on 25 March. From England he planned to continue his journey to Canada and India. In a letter to Ramsay MacDonald, Britain’s chargé d’affaires in Moscow, Mr. Hodgson, asked for advice about the visa application. Mayakovsky was not known to the mission, he wrote, but was “a member of the Communist party and, I am told, is known as a Bolshevik propagandist.” Hodgson would not have needed to do this if he had known that on 9 February the Home Office had also issued a secret circular about Mayakovsky, “one of the principal leaders of the ‘Communist’ propaganda and agitation section of the ‘ROSTA,’” who since 1921 had been writing propaganda articles for Izvestiya and “should not be given a visa or be allowed to land in the United Kingdom” or any of its colonies. In Mayakovsky’s case the circular was sent to every British port, consulate, and passport and military checkpoint, as well as to Scotland House and the India Office. But in the very place where people really ought to have known about it, His Majesty’s diplomatic mission in Moscow, they were completely unaware of it.

While he waited for an answer from the British, Mayakovsky made a couple of appearances in Berlin where he talked about Lef and recited his poems. Tired of waiting for a notification that never came, he traveled back to Moscow on 9 May in the company of Lili and Scotty, the Scotch terrier she had picked up in England. When he got to Moscow he found out that on 5 May London had instructed the British mission in Moscow to turn down his visa application.

VLADIMIR ILYICH

The preliminary investigation and subsequent trial of Krasnoshchokov caused a great stir, but it would certainly have got even more column inches if it had not been played out in the shadow of a significantly more important event. On 21 January 1924, Vladimir Lenin died after several years of illness.

Among the thousands of people jostling one another in the queues which snaked around in front of Trade Unions House, where the leader of the Revolution lay in state, were Mayakovsky, Lili, and Osip. Lenin’s death affected Mayakovsky deeply. “It was a terrible morning when he died,” Lili recalled. “We wept in the queue in Red Square where we were standing in the freezing cold to see him. Mayakovsky had a press card, so we were able to bypass the queue. I think he viewed the body ten times. We were all deeply shaken.”

Mayakovsky with Scotty, whom Lili bought in England. The picture was taken in the summer of 1924 at the dacha in Pushkino. Scotty loved ice cream, and, according to Rodchenko, Mayakovsky regarded “with great tenderness how Scotty ate and licked his mouth.” “He took him in his arms and I photographed them in the garden,” the photographer remembered. “I took two pictures. Volodya kept his tender smile, wholly directed at Scotty.” The photograph with Scotty is in fact one of the few where Mayakovsky can be seen smiling.

The feelings awakened by Lenin’s death were deep and genuine, and not only for his political supporters. Among those queuing were Boris Pasternak and Osip Mandelstam, who shared a far more lukewarm attitude to the Revolution and its leader. “Lenin dead in Moscow!” exclaimed Mandelstam in his coverage of the event. “How can one fail to be with Moscow in this hour! Who does not want to see that dear face, the face of Russia itself? The time? Two, three, four? How long will we stand here? No one knows. The time is past. We stand in a wonderful nocturnal forest of people. And thousands of children with us.”

Shortly after Lenin’s death Mayakovsky tackled his most ambitious project to date: a long poem about the Communist leader. He had written about him before, in connection with his fiftieth birthday in 1920 (“Vladimir Ilyich!”), and when Lenin suffered his first stroke in the winter of 1923 (“We Don’t Believe It!”), but those were shorter poems. According to Mayakovsky himself, he began pondering a poem about Lenin as early as 1923, but that may well have been a rationalization after the event. What set his pen in motion was in any case Lenin’s death in January 1924.

Mayakovsky had only a superficial knowledge of Lenin’s life and work and was forced to read up on him before he could write about him. His mentor, as on so many other occasions, was Osip, who supplied him with books and gave him a crash course in Leniniana. Mayakovsky himself had neither the time nor the patience for such projects. The poem was written during the summer and was ready by the beginning of October 1924. It was given the title “Vladimir Ilyich Lenin” and was the longest poem Mayakovsky ever wrote; at three thousand lines, it was almost twice as long as “About This.” In the autumn of 1924 he gave several poetry readings and fragments of the poem were printed in various newspapers. It came out in book form in February 1925.

The line to the Trade Unions’ House in Moscow, where Lenin was lying in state.

So the lyrical “About This” was followed by an epic poem, in accordance with the conscious or unconscious scheme that directed the rhythm of Mayakovsky’s writing. If even a propaganda poem like “To the Workers in Kursk” was dedicated to Lili, such a dedication was impossible in this case. “Vladimir Ilyich Lenin” was dedicated to the Russian Communist Party, and Mayakovsky explains why, with a subtle but unambiguous reference to “About This”:

I can write
about this,
about that,
but now
is not the time
for love–drivel.
All my
resounding power
as a poet
give to you,
attacking class.

In “Vladimir Ilyich Lenin” Lenin is portrayed as a Messiah–like figure, whose appearance on the historical scene is an inevitable consequence of the emergence of the working class. Karl Marx revealed the laws of history and, with his theories, “helped the working class to its feet.” But Marx was only a theoretician, who in the fullness of time would be replaced by someone who could turn theory into practice, that is, Lenin.

The poem is uneven, which is not surprising considering the format. From a linguistic point of view—the rhyme, the neologisms—it is undoubtedly comparable to the best of Mayakovsky’s other works, and the depiction of the sorrow and loss after Lenin’s death is no less than a magnificent requiem. But the epic, historical sections are too long and prolix. The same is true of the tributes to the Communist Party, which often rattle with empty rhetoric (which in turn can possibly be explained by the fact that Mayakovsky was never a member of the party):

I want
once more to make the majestic word
“PARTY”
shine.
One individual!
Who needs that?!
The voice of an individual
is thinner than a cheep.
Who hears it—
except perhaps his wife?

The party
is a hand with millions of fingers
clenched
into a single destroying fist.
The individual is rubbish,
the individual is zero  …
We say Lenin,
but mean
The Party.
We say
The Party,
but mean Lenin.

One of the few reviewers who paid any attention to the poem, the proletarian critic and anti–Futurist G. Lelevich, was quite right in pointing out that Mayakovsky’s “ultraindividualistic” lines in “About This” stand out as “uniquely honest” in comparison with “Vladimir Ilyich Lenin,” which “with few exceptions is rationalistic and rhetorical.” This was a “tragic fact” that Mayakovsky could only do something about by trying to “conquer himself.” The Lenin poem, wrote Lelevich, was a “flawed but meaningful and fruitful attempt to tread this path.”

Lelevich was right to claim that “About This” is a much more convincing poem than the ode to Lenin. But the “tragic” thing was not what Lelevich perceived as such, but something quite different, namely, Mayakovsky’s denial of the individual and his importance. In order to “conquer” himself, that is, the lyrical impulse within himself, he would have to take yet more steps in that direction—which he would in fact do, although it went against his innermost being.

If there is anything of lasting value in “Vladimir Ilyich Lenin,” it is not the paeans of praise to Lenin and the Communist Party—poems of homage are seldom good—but the warnings that Lenin, after his death, will be turned into an icon. The Lenin to whom Mayakovsky pays tribute was born in the Russian provinces as “a normal, simple boy” and grew up to be the “most human of all human beings.” If he had been “king–like and god–like” Mayakovsky would without a doubt have protested and taken a stance “opposed to all processions and tributes”:

I ought
to have found words
for lightning–flashing curses,
and while
I
and my yell
were trampled underfoot
I should have
hurled blasphemies
against heaven
and tossed
like bombs at the Kremlin
my: NO!

The worst thing Mayakovsky can imagine is that Lenin, like Marx, will become a “cooling plaster dotard imprisoned in marble.” This is a reference back to “The Fourth International,” in which Lenin is depicted as a petrified monument.

I am worried that
processions
and mausoleums,
celebratory statues
set in stone,
will drench
Leninist simplicity
in syrup–smooth balsam—

Mayakovsky warns, clearly blind to the fact that he himself is contributing to this development with his seventy-five-page-long poem.

The fear that Lenin would be canonized after his death was deeply felt—and well grounded. It did not take long before Gosizdat (!) began advertising busts of the leader in plaster, bronze, granite, and marble, “life–size and double life–size.” The busts were produced from an original by the sculptor Merkurov—whom Mayakovsky had apostrophized in his Kursk poem—and with the permission of the Committee for the Perpetuation of the Memory of V. I. Lenin. The target groups were civil–service departments, party organizations and trade unions, cooperatives, and the like.

After his return from Berlin in May 1924, Mayakovsky met with the Japanese author Tamisi Naito, who was visiting Moscow. Seated at the table next to Mayakovsky and Lili is Sergey Tretyakov’s wife, Olga. To the left of Naito (standing in the center) are Sergey Eisenstein and Boris Pasternak.

The Lef members’ tribute to the dead leader was of a different nature. The theory section in the first issue of Lef for 1924 was devoted to Lenin’s language, with contributions by leading Formalists such as Viktor Shklovsky, Boris Eikhenbaum, Boris Tomashevsky, and Yury Tynyanov—groundbreaking attempts to analyze political language by means of structuralist methods. Lenin was said to have “decanonized” the language, “cut down the inflated style,” and so on, all in the name of linguistic efficiency. This striving for powerful simplicity was in line with the theoretical ambitions of the Lef writers but stood in stark contrast to the canonization of Lenin which was set in train by his successors as soon as his corpse was cold.

This entire issue of Lef was in actual fact a polemic against this development—indirectly, in the essays about Lenin’s language, and in a more undisguised way in the leader article. In a direct reference to the advertisements for Lenin busts, the editorial team at Lef in their manifesto “Don’t Trade in Lenin!” sent the following exhortation to the authorities:

We insist:
Don’t make matrices out of Lenin.
Don’t print his portrait on posters, oilcloths, plates, drinking vessels, cigarette boxes.
Don’t turn Lenin into bronze.
Don’t take from him his living gait and human physiognomy, which he managed to preserve at the same time as he led history.
Lenin is still our present.
He is among the living.
We need him living, not dead.
Therefore:
Learn from Lenin, but don’t canonize him.
Don’t create a cult around a man who fought against all kinds of cults throughout his life.
Don’t peddle artifacts of this cult.
Don’t trade in Lenin.

In view of the extravagant cult of Lenin that would develop later in the Soviet Union, the text is insightful to the point of clairvoyance. But the readers of Lef were never to see it. According to the list of contents, the issue began on page 3 with the leader “Don’t Trade in Lenin!” But in the copies that were distributed, this page is missing and the pagination begins instead on page 5. The leadership of Gosizdat, which distributed Lef, had been incensed by the criticism of the advertisements for Lenin busts and had removed the leader. As if by some miracle, it has been preserved in a few complimentary copies which made it to the libraries before the censor’s axe fell.

To read more about Mayakovsky, click here.

45. Everything’s coming up Howie

Adam Gopnik, writing in the New Yorker, recently profiled eminent American sociologist Howard S. Becker (Howie, please: “Only my mother ever called me Howard”), one of the biggest names in the field for over half a century, yet still, as with so many purveyors of haute critique, better known in France. Becker is no wilting lily on these shores, however—since the publication of his pathbreaking Outsiders: Studies in the Sociology of Deviance (1963), he’s been presiding as grand doyen over methodological confrontations with the particularly slippery slopes of human existence, including our very notion of “deviance.” All this, a half dozen or so honorary degrees, a lifetime achievement award, a smattering of our most prestigious fellowships, and the 86-year-old Becker is still going strong, with his most recent book published only this past year.

From the New Yorker profile:

This summer, Becker published a summing up of his life’s method and beliefs, called “What About Mozart? What About Murder?” (The title refers to the two caveats or complaints most often directed against his kind of sociology’s equable “relativism”: how can you study music as a mere social artifact—what about Mozart? How can you consider criminal justice a mutable convention—what about Murder?) The book is both a jocular personal testament of faith and a window into Becker’s beliefs. His accomplishment is hard to summarize in a sentence or catchphrase, since he’s resolutely anti-theoretical and suspicious of “models” that are too neat. He wants a sociology that observes the way people act around each other as they really do, without expectations about how they ought to.

The provenances of that sociology have included: jazz musicians, marijuana users, art world enthusiasts, social science researchers, medical students, musicologists, murderers, and “youth,” to name a few.

9780226166490

As mentioned earlier, his latest book What About Mozart? What About Murder? considers the pull of two methodologies: one, more pragmatic, which addresses its subjects with caution and rigor on a case-by-case basis, and the other, which employs a more speculative approach (guesswork) by asking “killer questions” that force us to reposition our stance on hypothetical situations, such as whether or not, indeed, murder is always already (*Becker might in fact kill me for a foray into that particular theoretical shorthand*) “deviant.”

Via Gopnik:

His work is required reading in many French universities, even though it seems to be a model of American pragmatism, preferring narrow-seeming “How?” and “Who, exactly?” questions to the deeper “Why?” and “What?” supposedly favored by French theory. That may be exactly its appeal, though: for the French, Becker seems to combine three highly American elements—jazz, Chicago, and the exotic beauties of empiricism.

On the heels of his appearance in the New Yorker, Becker participated in a recent, brief sitdown with the New York Times, where he relayed thoughts on Charlie Hebdo and the French media, Nate Silver, and jazz trios, among other concerns.

From that New York Times Q & A:

I work out in a gym with a trainer twice a week. Oh, it’s pure torture, but I’m 86 so you’ve got to do something to stay in shape. I do a mixture of calisthenics, Pilates and yoga—a lot of work on balance. My trainer has this idea that every year on my birthday I should do the same number of push-ups as I have years old. We work up to it over the year. I was born on the anniversary of the great San Francisco earthquake and fire in 1906. It seems auspicious but I don’t know why.

Auspicious indeed.

To read more by Becker, click here.

46. Excerpt: How Many is Too Many?


Excerpted from

How Many Is Too Many?: The Progressive Argument for Reducing Immigration into the United States

by Philip Cafaro

***

How many immigrants should we allow into the United States annually, and who gets to come?

The question is easy to state but hard to answer, for thoughtful individuals and for our nation as a whole. It is a complex question, touching on issues of race and class, morals and money, power and political allegiance. It is an important question, since our answer will help determine what kind of country our children and grandchildren inherit. It is a contentious question: answer it wrongly and you may hear some choice personal epithets directed your way, depending on who you are talking to. It is also an endlessly recurring question, since conditions will change, and an immigration policy that made sense in one era may no longer work in another. Any answer we give must be open to revision.

This book explores the immigration question in light of current realities and defends one provisional answer to it. By exploring the question from a variety of angles and making my own political beliefs explicit, I hope to help readers come to their own well-informed conclusions. Our answers may differ, but as fellow citizens we need to keep talking to one another and try to come up with immigration policies that further the common good.

Why are immigration debates frequently so angry? People on one side often seem to assume it is just because people on the other are stupid, or immoral. I disagree. Immigration is contentious because vital interests are at stake and no one set of policies can fully accommodate all of them. Consider two stories from among the hundreds I’ve heard while researching this book.

* * *

It is lunchtime on a sunny October day and I’m talking to Javier, an electrician’s assistant, at a home construction site in Longmont, Colorado, near Denver. He is short and solidly built; his words are soft-spoken but clear. Although he apologizes for his English, it is quite good. At any rate much better than my Spanish.

Javier studied to be an electrician in Mexico, but could not find work there after school. “You have to pay to work,” he explains: pay corrupt officials up to two years’ wages up front just to start a job. “Too much corruption,” he says, a refrain I find repeated often by Mexican immigrants. They feel that a poor man cannot get ahead there, can hardly get started.

So in 1989 Javier came to the United States, undocumented, working various jobs in food preparation and construction. He has lived in Colorado for nine years and now has a wife (also here illegally) and two girls, ages seven and three. “I like USA, you have a better life here,” he says. Of course he misses his family back in Mexico. But to his father’s entreaties to come home, he explains that he needs to consider his own family now. Javier told me that he’s not looking to get rich, he just wants a decent life for himself and his girls. Who could blame him?

Ironically, one of the things Javier likes most about the United States is that we have rules that are fairly enforced. Unlike in Mexico, a poor man does not live at the whim of corrupt officials. When I suggest that Mexico might need more people like him to stay and fight “corruption,” he just laughs. “No, go to jail,” he says, or worse. Like the dozens of other Mexican and Central American immigrants I have interviewed for this book, Javier does not seem to think that such corruption could ever change in the land of his birth.

Do immigrants take jobs away from Americans? I ask. “American people no want to work in the fields,” he responds, or as dishwashers in restaurants. Still, he continues, “the problem is cheap labor.” Too many immigrants coming into construction lowers wages for everyone— including other immigrants like himself.

“The American people say, all Mexicans the same,” Javier says. He does not want to be lumped together with “all Mexicans,” or labeled a problem, but judged for who he is as an individual. “I don’t like it when my people abandon cars, or steal.” If immigrants commit crimes, he thinks they should go to jail, or be deported. But “that no me.” While many immigrants work under the table for cash, he is proud of the fact that he pays his taxes. Proud, too, that he gives a good day’s work for his daily pay (a fact confirmed by his coworkers).

Javier’s boss, Andy, thinks that immigration levels are too high and that too many people flout the law and work illegally. He was disappointed, he says, to find out several years ago that Javier was in the country illegally. Still he likes and respects Javier and worries about his family. He is trying to help him get legal residency.

With the government showing new initiative in immigration enforcement—including a well-publicized raid at a nearby meat-packing plant that caught hundreds of illegal workers—there is a lot of worry among undocumented immigrants. “Everyone scared now,” Javier says. He and his wife used to go to restaurants or stores without a second thought; now they are sometimes afraid to go out. “It’s hard,” he says. But: “I understand. If the people say, ‘All the people here, go back to Mexico,’ I understand.”

Javier’s answer to one of my standard questions—“How might changes in immigration policy affect you?”—is obvious. Tighter enforcement could break up his family and destroy the life he has created here in America. An amnesty would give him a chance to regularize his life. “Sometimes,” he says, “I dream in my heart, ‘If you no want to give me paper for residence, or whatever, just give me permit for work.’ ”

* * *

It’s a few months later and I’m back in Longmont, eating a 6:30 breakfast at a café out by the Interstate with Tom Kenney. Fit and alert, Tom looks to be in his mid-forties. Born and raised in Denver, he has been spraying custom finishes on drywall for twenty-five years and has had his own company since 1989. “At one point we had twelve people running three trucks,” he says. Now his business is just him and his wife. “Things have changed,” he says.

Although it has cooled off considerably, residential and commercial construction was booming when I interviewed Tom. The main “thing that’s changed” is the number of immigrants in construction. When Tom got into it twenty-five years ago, construction used almost all native-born workers. Today estimates of the number of immigrant workers in northern Colorado range from 50% to 70% of the total construction workforce. Some trades, like pouring concrete and framing, use immigrant labor almost exclusively. Come in with an “all-white” crew of framers, another small contractor tells me, and people do a double-take.

Tom is an independent contractor, bidding on individual jobs. But, he says, “guys are coming in with bids that are impossible.” After all his time in the business, “no way they can be as efficient in time and materials as me.” The difference has to be in the cost of labor. “They’re not paying the taxes and insurance that I am,” he says. Insurance, workmen’s compensation, and taxes add about 40% to the cost of legally employed workers. When you add the lower wages that immigrants are often willing to take, there is plenty of opportunity for competing contractors to underbid Tom and still make a tidy profit. He no longer bids on the big new construction projects, and jobs in individual, custom-built houses are becoming harder to find.

“I’ve gone in to spray a house and there’s a guy sleeping in the bathtub, with a microwave set up in the kitchen. I’m thinking, ‘You moved into this house for two weeks to hang and paint it, you’re gonna get cash from somebody, and he’s gonna pick you up and drive you to the next one.’ ” He seems more upset at the contractor than at the undocumented worker who labors for him.

In this way, some trades in construction are turning into the equivalent of migrant labor in agriculture. Workers do not have insurance or workmen’s compensation, so if they are hurt or worn out on the job, they are simply discarded and replaced. Workers are used up, while the builders and contractors higher up the food chain keep more of the profits for themselves. “The quality of life [for construction workers] has changed drastically,” says Tom. “I don’t want to live like that. I want to go home and live with my family.”

Do immigrants perform jobs Americans don’t want to do? I ask. The answer is no. “My job is undesirable,” Tom replies. “It’s dirty, it’s messy, it’s dusty. I learned right away that because of that, the opportunity is available to make money in it. That job has served me well”—at least up until recently. He now travels as far away as Wyoming and southern Colorado to find work. “We’re all fighting for scraps right now.”

Over the years, Tom has built a reputation for quality work and efficient and prompt service, as I confirmed in interviews with others in the business. Until recently that was enough to secure a good living. Now though, like a friend of his who recently folded his small landscaping company (“I just can’t bid ’em low enough”), Tom is thinking of leaving the business. He is also struggling to find a way to keep up the mortgage payments on his house.

He does not blame immigrants, though. “If you were born in Mexico, and you had to fight for food or clothing, you would do the same thing,” Tom tells me. “You would come here.”

* * *

Any immigration policy will have winners and losers. So claims Harvard economist George Borjas, a leading authority on the economic impacts of immigration. My interviews with Javier Morales and Tom Kenney suggest why Borjas is right.

If we enforce our immigration laws, then good people like Javier and his family will have their lives turned upside down. If we limit the numbers of immigrants, then good people in Mexico (and Guatemala, and Vietnam, and the Philippines …) will have to forgo opportunities to live better lives in the United States.

On the other hand, if we fail to enforce our immigration laws or repeatedly grant amnesties to people like Javier who are in the country illegally, then we forfeit the ability to set limits to immigration. And if immigration levels remain high, then hard-working men and women like Tom and his wife and children will probably continue to see their economic fortunes decline. Economic inequality will continue to increase in America, as it has for the past four decades.

In the abstract neither of these options is appealing. When you talk to the people most directly affected by our immigration policies, the dilemma becomes even more acute. But as we will see further on when we explore the economics of immigration in greater detail, these appear to be the options we have.

Recognizing trade-offs—economic, environmental, social—is indeed the beginning of wisdom on the topic of immigration. We should not exaggerate such conflicts, or imagine conflicts where none exist, but neither can we ignore them. Here are some other trade-offs that immigration decisions may force us to confront:

  • Cheaper prices for new houses vs. good wages for construction workers.
  • Accommodating more people in the United States vs. preserving wildlife habitat and vital resources.
  • Increasing ethnic and racial diversity in America vs. enhancing social solidarity among our citizens.
  • More opportunities for Latin Americans to work in the United States vs. greater pressure on Latin American elites to share wealth and opportunities with their fellow citizens.

The best approach to immigration will make such trade-offs explicit, minimize them where possible, and choose fairly between them when necessary.

Since any immigration policy will have winners and losers, at any particular time there probably will be reasonable arguments for changing the mix of immigrants we allow in, or for increasing or decreasing overall immigration, with good people on all sides of these issues. Whatever your current beliefs, by the time you finish this book you should have a much better understanding of the complex trade-offs involved in setting immigration policy. This may cause you to change your views about immigration. It may throw your current views into doubt, making it harder to choose a position on how many immigrants to let into the country each year; or what to do about illegal immigrants; or whether we should emphasize country of origin, educational level, family reunification, or asylum and refugee claims, in choosing whom to let in. In the end, understanding trade-offs ensures that whatever policies we wind up advocating for are more consciously chosen, rationally defensible, and honest. For such a contentious issue, where debate often generates more heat than light, that might have to suffice.

* * *

Perhaps a few words about my own political orientation will help clarify the argument and goals of this book. I’m a political progressive. I favor a relatively equal distribution of wealth across society, economic security for workers and their families, strong, well-enforced environmental protection laws, and an end to racial discrimination in the United States. I want to maximize the political power of common citizens and limit the influence of large corporations. Among my political heroes are the three Roosevelts (Teddy, Franklin, and Eleanor), Rachel Carson, and Martin Luther King Jr.

I also want to reduce immigration into the United States. If this combination seems odd to you, you are not alone. Friends, political allies, even my mother the social worker shake their heads or worse when I bring up the subject. This book aims to show that this combination of political progressivism and reduced immigration is not odd at all. In fact, it makes more sense than liberals’ typical embrace of mass immigration: an embrace shared by many conservatives, from George W. Bush and Orrin Hatch to the editorial board of the Wall Street Journal and the US Chamber of Commerce.

In what follows I detail how current immigration levels—the highest in American history—undermine attempts to achieve progressive economic, environmental, and social goals. I have tried not to oversimplify these complex issues, or mislead readers by cherry-picking facts to support pre-established conclusions. I have worked hard to present the experts’ views on how immigration affects US population growth, poorer workers’ wages, urban sprawl, and so forth. Where the facts are unclear or knowledgeable observers disagree, I report that, too.

This book is divided into four main parts. Chapters 1 and 2 set the stage for us to consider how immigration relates to progressive political goals. Chapter 2, “Immigration by the Numbers,” provides a concise history of US immigration policy. It explains current policy, including who gets in under what categories of entry and how many people immigrate annually. It also discusses population projections for the next one hundred years under different immigration scenarios, showing how relatively small annual differences in immigration numbers quickly lead to huge differences in overall population.

Part 2 consists of chapters 3–5, which explore the economics of immigration, showing how flooded labor markets have driven down workers’ wages in construction, meatpacking, landscaping, and other economic sectors in recent decades, and increased economic inequality. I ask who wins and who loses economically under current immigration policies and consider how different groups might fare under alternative scenarios. I also consider immigration’s contribution to economic growth and argue that unlike fifty or one hundred years ago America today does not need a larger economy, with more economic activity or higher levels of consumption, but rather a fairer economy that better serves the needs of its citizens. Here as elsewhere, the immigration debate can clarify progressive political aspirations; in this case, helping us rethink our support for endless economic growth and develop a more mature understanding of our economic goals.

Part 3, chapters 6–8, focuses on the environment. Mass immigration has increased America’s population by tens of millions of people in recent decades and is set to add hundreds of millions more over the twenty-first century. According to Census Bureau data our population now stands at 320 million people, the third-largest in the world, and at current immigration rates could balloon to over 700 million by 2100. This section examines the environmental problems caused by a rapidly growing population, including urban sprawl, overcrowding, habitat loss, species extinctions, and increased greenhouse gas emissions. I chronicle the environmental community’s historic retreat from population issues over the past four decades, including the Sierra Club’s failed attempts to adopt a consensus policy on immigration, and conclude that this retreat has been a great mistake. Creating an ecologically sustainable society is not just window dressing; it is necessary to pass on a decent future to our descendants and do our part to solve dangerous global environmental problems. Because sustainability is incompatible with an endlessly growing population, Americans can no longer afford to ignore domestic population growth.

Part 4, chapters 9–11, looks for answers. The chapter “Solutions” sketches out a comprehensive proposal for immigration reform in line with progressive political goals, focused on reducing overall immigration levels. I suggest shifting enforcement efforts from border control to employer sanctions—as several European nations have done with great success—and a targeted amnesty for illegal immigrants who have lived in the United States for years and built lives here (Javier and his wife could stay, but their cousins probably would not get to come). I propose changes in US trade and aid policies that could help people create better lives where they are, alleviating some of the pressure to emigrate. In these ways, Americans can meet our global responsibilities without doing so on the backs of our own poor citizens, or sacrificing the interests of future generations. A companion chapter considers a wide range of reasonable progressive “Objections” to this more restrictive immigration policy. I try to answer these objections honestly, focusing on the trade-offs involved. A short concluding chapter reminds readers of all that is at stake in immigration policy, and affirms that we will make better policy with our minds open.

How Many Is Too Many? shows that by thinking through immigration policy progressives can get clearer on our own goals. These do not include having the largest possible percentage of racial and ethnic minorities, but creating a society free of racial discrimination, where diversity is appreciated. They do not include an ever-growing economy, but feature an economy that works for the good of society as a whole. They most certainly do not include a crowded, cooked, polluted, ever-more-tamed environment, but instead a healthy, spacious landscape that supports us with sufficient room for wild nature. Finally our goals should include playing our proper role as global citizens, while still paying attention to our special responsibilities as Americans. Like it or not those responsibilities include setting US immigration policy.

* * *

Although I hope readers across the political spectrum will find this book interesting, I have written it primarily for my fellow progressives. Frankly, we need to think harder about this issue than we have been. Just because Rush Limbaugh and his ilk want to close our borders does not necessarily mean progressives should be for opening them wider. But this is not an easy topic to discuss and I appreciate your willingness to consider it with me. In fact I come to this topic reluctantly myself. I recognize immigration’s contribution to making the United States one of the most dynamic countries in the world. I also find personal meaning in the immigrant experience.

My paternal grandfather came to America from southern Italy when he was twelve years old. As a child I listened entranced to his stories, told in an accent still heavy after half a century in his adopted country. Stories of the trip over and how excited he was to explore everything on the big ship (a sailor, taking advantage of his curiosity, convinced him to lift some newspapers lying on deck, to see what was underneath …). Stories of working as a journeyman shoe repairman in cities and towns across upstate New York and Ohio (in one store, the foreman put my grandfather and his lathe in the front window so passers-by would stop to watch how fast and well he did his work). Stories of settling down and starting his own business, marrying Nana, raising a family.

I admired Grandpa’s adventurousness in coming to a new world, his self-reliance, his pride in his work, and his willingness to work hard to create a better future for himself and his family, including, eventually, me. Stopping by the store, listening to him chat with his customers, I saw clearly that he was a respected member of his community. When he and the relatives got together for those three-hour meals that grew ever longer over stories, songs, and a little wine, I felt part of something special, something different from my everyday life and beyond the experience of many of my friends.

So this book is not a criticism of immigrants! I know that many of today’s immigrants, legal and illegal, share my grandfather’s intelligence and initiative. The lives they are creating here are good lives rich in love and achievement. Nor is it an argument against all immigration: I favor reducing immigration into the United States, not ending it. I hope immigrants will continue to enrich America for many years to come. In fact, reducing current immigration levels would be a good way to ensure continued widespread support for immigration.

Still, Americans sometimes forget that we can have too much of a good thing. Sometimes when Nana passes the pasta, it’s time to say basta. Enough.

When to say enough, though, can be a difficult question. How do we know when immigration levels need to be scaled back? And do any of us, as the descendants of immigrants, have the right to do so?

Answering the first question, in detail, is one of the main goals of this book. Speaking generally I think we need to reduce immigration when it seriously harms our society, or its weakest members. The issues are complex, but I think any country should consider reducing immigration:

  • When immigration significantly drives down wages for its poorer citizens.
  • When immigrants are regularly used to weaken or break unions.
  • When immigration appears to increase economic inequality within a society.
  • When immigration makes the difference between stabilizing a country’s population and doubling it within the next century.
  • When immigration-driven population growth makes it impossible to rein in sprawl, decrease greenhouse gas emissions sufficiently, or take the other steps necessary to create an ecologically sustainable society.
  • When rapid demographic shifts undermine social solidarity and a sense of communal purpose.
  • When most of its citizens say that immigration should be reduced.

Of course, there may also be good reasons to continue mass immigration: reasons powerful enough to outweigh such serious social costs or the expressed wishes of a nation’s citizens. But they had better be important. And in the case at hand they had better articulate responsibilities that properly belong to the United States and its citizens—and not help our “sender” countries avoid their own problems and responsibilities. Reversing gross economic inequality and creating a sustainable society are the primary political tasks facing this generation of Americans. Progressives should think long and hard before we accept immigration policies that work against these goals.

But what about the second question: do Americans today have a right to reduce immigration? To tell Javier’s cousins, perhaps, that they cannot come to America and make better lives for themselves and their families?

Yes, we do. Not only do we have a right to limit immigration into the United States, as citizens we have a responsibility to do so if immigration levels get so high that they harm our fellow citizens, or society as a whole. Meeting this responsibility may be disagreeable, because it means telling good people that they cannot come to America to pursue their dreams. Still, it may need to be done.

Those of us who want to limit immigration are sometimes accused of selfishness: of wanting to hog resources or keep “the American way of life” for ourselves. There may be some truth in this charge, since many Americans’ interests are threatened by mass immigration. Still, some of those interests seem worth preserving. The union carpenter taking home $30 an hour who owns his own house, free and clear, or the outdoorsman walking quietly along the edge of a favorite elk meadow or trout stream, may want to continue to enjoy these good things and pass them on to their sons and daughters. What is wrong with that?

Besides, the charge of selfishness cuts both ways. Restaurant owners and software tycoons hardly deserve the Mother Teresa Self-Sacrifice Medal when they lobby Congress for more low-wage workers. The wealthy progressive patting herself on the back for her enlightened views on immigration probably hasn’t ever totaled up the many ways she and her family benefit from cheap labor.

In the end our job as citizens is to look beyond our narrow self-interest and consider the common good. Many of us oppose mass immigration not because of what it costs us as individuals, but because we worry about the economic costs to our fellow citizens, or the environmental costs to future generations. Most Americans enjoy sharing our country with foreign visitors and are happy to share economic opportunities with reasonable numbers of newcomers. We just want to make sure we preserve those good things that make this a desirable destination in the first place.

All else being equal, Americans would just as soon not interfere with other people’s decisions about where to live and work. In fact such a laissez-faire approach to immigration lasted for much of our nation’s history. But today all else is not equal. For one thing this is the age of jet airplanes, not tall-masted sailing ships or coal-fired steamers. It is much quicker and easier to come here than it used to be and the pool of would-be immigrants has increased by an order of magnitude since my grandfather’s day. (In 2006, there were more than 6 million applications for the 50,000 green cards available under that year’s “diversity lottery.”) For another, we do not have an abundance of unclaimed land for farmers to homestead, or new factories opening up to provide work for masses of unskilled laborers. Unemployment is high and projected to remain high for the foreseeable future. For a third, we recognize new imperatives to live sustainably and do our part to meet global ecological challenges. Scientists are warning that we run grave risks should we fail to do so.

Americans today overwhelmingly support immigration restrictions. We disagree about the optimal amount of immigration, but almost everyone agrees that setting some limits is necessary. Of course, our immigration policies should be fair to all concerned. Javier Morales came to America illegally, but for most of his time here our government just winked at illegal immigration. It also taxed his paychecks. After two and a half decades of hard work that has benefited our country, I think we owe Javier citizenship. But we also owe Tom Kenney something. Perhaps the opportunity to prosper, if he is willing to work hard. Surely, at a minimum, government policies that do not undermine his own attempts to prosper.

* * *

The progressive vision is alive and well in the United States today. Most Americans want a clean environment with flourishing wildlife, a fair economy that serves all its citizens, and a diverse society that is free from racism. Still, it will take a lot of hard work to make this vision a reality and success is not guaranteed. Progressives cannot shackle our hopes to an outmoded immigration policy that thwarts us at every turn.

Given the difficulties involved in getting 320 million Americans to curb consumption and waste, there is little reason to think we will be able to achieve ecological sustainability while doubling or tripling that number. Mass immigration ensures that our population will continue growing at a rapid rate and that environmentalists will always be playing catch up. Fifty or one hundred years from now we will still be arguing that we should destroy this area rather than that one, or that we can make the destruction a little more aesthetically appealing—instead of ending the destruction. We will still be trying to slow the growth of air pollution, water use, or carbon emissions—rather than cutting them back.

But the US population would quickly stabilize without mass immigration. We can stop population growth—without coercion or intrusive domestic population policies—simply by returning to pre-1965 immigration levels.

Imagine an environmentalism that was not always looking to meet the next crisis and that could instead look forward to real triumphs. What if we achieved significant energy efficiency gains and were able to enjoy those gains with less pollution, less industrial development on public lands, and an end to oil wars, because those efficiency gains were not swallowed up by growing populations?

Imagine if the push to develop new lands largely ended and habitat for other species increased year by year, with a culture of conservation developed around restoring and protecting that habitat. Imagine if our demand for fresh water leveled off and instead of fighting new dam projects we could actually leave more water in our rivers.

And what of the American worker? It is hard to see how progressives will succeed in reversing current powerful trends toward ever greater economic inequality in a context of continued mass immigration, particularly with high numbers of relatively unskilled and poorly educated immigrants. Flooded labor markets will harm poorer workers directly, by driving down wages and driving up unemployment. Mass immigration will also continue to harm workers indirectly by making it harder for them to organize and challenge employers, by reducing the percentage of poor workers who are citizens and thus able to vote for politicians who favor the poor, and by limiting sympathy between the haves and have-nots, since with mass immigration they are more likely to belong to different ethnic groups.

But it does not have to be this way. We can tighten labor markets and get them working for working people in this country. Combined with other good progressive egalitarian measures—universal health care; a living minimum wage; a more progressive tax structure—we might even reverse current trends and create a more economically just country.

Imagine meatpacking plants and carpet-cleaning companies competing with one another for scarce workers, bidding up their wages. Imagine unions able to strike those companies without having to worry about scabs taking their members’ jobs. Imagine college graduates sifting through numerous job offers, like my father and his friends did fifty years ago during that era’s pause in mass immigration, instead of having to wait tables and just hope for something better.

Imagine poor children of color in our inner cities, no longer looked on as a problem to be warehoused in failing schools, or jails, but instead seen as an indispensable resource: the solution to labor shortages in restaurants and software companies.

Well, why not? Why are we progressives always playing catch up? The right immigration policies could help lead us toward a more just, egalitarian, and sustainable future. They could help liberals achieve our immediate goals and drive the long-term political agenda. But we will not win these battles without an inspiring vision for a better society, or with an immigration policy that makes that vision impossible to achieve.

To read more about How Many Is Too Many?, click here.

Add a Comment
47. Sandra M. Gustafson on the State of the Union (2015)

President Obama Delivers State Of The Union Address

As in past years, we are fortunate to have scholar Sandra M. Gustafson contribute a post following Barack Obama’s annual State of the Union address, assessing the stakes of Obama’s rhetorical position in light of recent events in Ferguson, Missouri, and New York City (while pointing toward their more deeply embedded and disturbing legacies, respectively). Read Gustafson’s 2015 post in full after the jump below.

***

Lives that Matter: Reflections on the 2015 State of the Union Address

by Sandra M. Gustafson

 In his sixth State of the Union address, President Barack Obama summarized the major achievements of his administration to date–bringing the American economy back from the Great Recession, passing and implementing the Affordable Care Act, advancing civil rights, and winding down wars in Iraq and Afghanistan, while shifting the emphasis of US foreign policy toward diplomacy and multilateralism – and presented a framework for new initiatives that he called “middle class economics,” including affordable child care, a higher minimum wage, and free community college. Commentators compared the president’s emphasis on the successes of his six years in office to an athlete taking a victory lap. Some considered that tone odd in light of Republican midterm victories, while others speculated about his aspirations to shape the 2016 presidential election.  More and more, the president’s rhetoric and public actions inform an effort to shape his legacy, both in terms of the direction of his party and with regard to his historical reputation. The 2015 State of the Union address was a prime example of the narrative emerging from the White House.

The announcement earlier on the day of the address that the president will visit Selma, Alabama, to commemorate the fiftieth anniversary of Bloody Sunday and the movement to pass the Voting Rights Act was just one of many examples of how he has presented that legacy over the years: as an extension of the work of Martin Luther King, Jr. Community organizing, nonviolent protest, and political engagement are the central components of the route to social change that the president offered in The Audacity of Hope, his 2006 campaign autobiography. The need to nurture a commitment to progressive change anchored in an expanded electorate and an improved political system has been a regular theme of his time in office.

In the extended peroration that concluded this State of the Union address, the president alluded to his discussion of deliberative democracy in The Audacity of Hope. He called for “a better politics,” which he described as one where “we appeal to each other’s basic decency instead of our basest fears,” “where we debate without demonizing each other; where we talk issues and values, and principles and facts,” and “where we spend less time drowning in dark money for ads that pull us into the gutter, and spend more time lifting young people up with a sense of purpose and possibility.” He also returned to his 2004 speech to the Democratic National Convention in Boston, quoting a now famous passage, “there wasn’t a liberal America or a conservative America; a black America or a white America—but a United States of America.”

The president’s biracial background and his preference for “both/and” ways of framing conflicts have put him at odds with critics such as Cornel West and Tavis Smiley, who have faulted him for not paying sufficient attention to the specific problems of black America. The approach that Obama took in his address to the police killings of unarmed black men in Ferguson, Missouri, and New York City did not satisfy activists in the Black Lives Matter coalition, which issued a rebuttal to his address in the form of a State of the Black Union message. To the president’s claim that “The shadow of crisis has passed, and the State of the Union is strong,” the activists responded emphatically, offering a direct rebuttal in the subtitle of their manifesto: “The Shadow of Crisis has NOT Passed.” Rejecting his assertions of economic growth and social progress, they assembled a list of counterclaims.

The president came closest to engaging the concerns of the activists when he addressed the issue of violence and policing. “We may have different takes on the events of Ferguson and New York,” he noted, juxtaposing micronarratives of “a father who fears his son can’t walk home without being harassed” and “the wife who won’t rest until the police officer she married walks through the front door at the end of his shift.” By focusing on the concerns of a father and a wife, rather than the young man and the police officer at risk, he expanded the possibilities for identification in a manner that echoes his emphasis on family. The “State of the Black Union” extends the notion of difference in an alternative direction and responds with a macronarrative couched in terms of structural violence: “Our schools are designed to funnel our children into prisons. Our police departments have declared war against our community. Black people are exploited, caged, and killed to profit both the state and big business. This is a true State of Emergency. There is no place for apathy in this crisis. The US government has consistently violated the inalienable rights our humanity affords.”

To the president’s language of the nation as a family, and to his statement that “I want our actions to tell every child in every neighborhood, your life matters, and we are committed to improving your life chances[,] as committed as we are to working on behalf of our own kids,” the manifesto responds by rejecting his image of national solidarity and his generalization of the “black lives matter” slogan. Instead it offers a ringing indictment: “This corrupt democracy was built on Indigenous genocide and chattel slavery. And continues to thrive on the brutal exploitation of people of color. We recognize that not even a Black President will pronounce our truths. We must continue the task of making America uncomfortable about institutional racism. Together, we will re-imagine what is possible and build a system that is designed for Blackness to thrive.”  After presenting a list of demands and declaring 2015 “the year of resistance,” the manifesto concludes with a nod to Obama’s 2008 speech on race, “A More Perfect Union”: “We the People, committed to the declaration that Black lives matter, will fight to end the structural oppression that prevents so many from realizing their dreams. We cannot, and will not stop until America recognizes the value of Black life.”

This call-and-response between the first African American president and a coalition of activists has two registers.  One register involves the relationship between part and whole (e pluribus unum). President Obama responds to demands that he devote more attention to the challenges facing Black America by emphasizing that he is the president of the entire nation. What is at stake, he suggests, is the ability of an African American to represent a heterogeneous society.

The other register of the exchange exemplifies a persistent tension over the place of radicalism in relation to the institutions of democracy in the United States.  The Black Lives Matter manifesto draws on critiques of American democracy in Black Nationalist, Black radical, and postcolonial thought. As I discuss in Imagining Deliberative Democracy in the Early American Republic, these critiques have roots reaching back before the Civil War, to abolitionist leaders such as David Walker and Maria Stewart, and even earlier to the Revolutionary War veteran and minister Lemuel Haynes. The recently released film Selma, which portrays the activism leading to the passage of the 1965 Voting Rights Act, highlights the tactics of Dr. King and his associates as they pressure President Johnson to take up the matter of voting. The film characterizes the radical politics of Malcolm X and the threat of violence as a means to enhance the appeal of King’s nonviolent approach, an argument that Malcolm himself made. It then includes a brief scene in which Malcolm meets with Coretta Scott King in a tentative rapprochement that occurred shortly before his assassination. This tripartite structure of the elected official, the moderate or nonviolent activist, and the radical activist willing to embrace violence has become a familiar paradigm of progressive social change.

Aspects of this paradigm inform Darryl Pinckney’s “In Ferguson.” Reporting on the violence that followed the grand jury’s failure to indict Officer Darren Wilson for Michael Brown’s killing, Pinckney quotes the Reverend Osagyefo Sekou, one of the leaders of the Don’t Shoot coalition, on the limits of electoral politics. Voting is “an insider strategy,” Sekou says. “If it’s only the ballot box, then we’re finished.” Pinckney also cites Hazel Erby, the only black member of the seven-member county council of Ferguson, who explained the overwhelmingly white police force as a result of low voter turnout. Pinckney summarizes: “The city manager of Ferguson and its city council appoint the chief of police, and therefore voting is critical, but the complicated structure of municipal government is one reason many people have been uninterested in local politics.” This type of local narrative has played a very minor role in the coverage. It occupies a register between President Obama’s micronarratives focused on individuals and families, on the one hand, and the structural violence macronarrative of the Black Lives Matter manifesto on the other. This middle register is where specific local situations are addressed and grassroots change happens. It can also provide insight into broad structural problems that might otherwise be invisible.

The value of this middle register of the local narrative emerges in the light that Rachel Aviv shines on police violence in an exposé of the Albuquerque Police Department. In “Your Son is Deceased,” Aviv focuses on the ordeal of the middle class Torres family when Christopher Torres, a young man suffering from schizophrenia, is shot and killed by police in the backyard of the family home. Christopher’s parents, a lawyer and the director of human resources for the county, are refused information and kept from the scene of their son’s killing for hours. They learn what happened to Christopher only through news reports the following day. The parallels between the Torres and Brown cases are striking, as are the differences. Though the confrontation with the police that led to Torres’s death happened just outside his home, and though his parents knew and worked with city officials including the mayor, his death and the official response to it share haunting similarities with that of Brown. Aviv does not ignore the issue of race and ethnicity, mentioning the sometimes sharp conflicts in this borderlands region between Latino/as, Native Americans, and whites.  But in presenting her narrative, she highlights the local factors that foster the corruption that she finds to be endemic in the Albuquerque Police Department; she also foregrounds mental illness as a decisive element in a number of police killings–one that crosses racial and economic boundaries.

There is a scene in Selma, in which Dr. King invites his colleagues to explore the dimensions of the voter suppression problem. They begin listing the contributing factors—the literacy tests, the poll tax—and then one of the organizers mentions laws requiring that a sponsor who is a voter must vouch for someone who wishes to register. The sponsor must know the would-be voter and be able to testify to her or his character. In rural areas of the South, there might not be a registered black voter for a hundred miles, and so many potential voters could not find an acquaintance to sponsor them.  The organizers agree this should be their first target, since without a sponsor, a potential voter cannot even reach the downstream hurdles of the literacy test and the poll tax. This practice of requiring a sponsor was specifically forbidden in the Voting Rights Act. At present, there are attempts to revive a version of the voucher test.

*

Selma as a whole, and this scene in particular, exemplifies many of the central features of democratic self-governance that Danielle Allen describes in Our Declaration: A Reading of the Declaration of Independence in Defense of Equality. Allen, a classicist and political theorist at the Institute for Advanced Study in Princeton, develops what she calls a “slow reading” of the Declaration of Independence in order to draw out the meaning of equality, which she relates to political processes focused on democratic deliberation and writing. From the language of the Declaration, Allen draws five interconnected facets of the ideal of equality. Equality, she explains, involves freedom from domination, for both states and individuals. It also involves “recognizing and enabling the general human capacity for political judgment” coupled with “access to the tool of government.” She finds equality to be produced through the Aristotelian “potluck method,” whereby individuals contribute their special forms of knowledge to foster social good, and through reciprocity or mutual responsiveness, which contributes to equality of agency. And she defines equality as “co-creation, where many people participate equally in creating a world together.”[i]

Selma illustrates all of these features of equality at work in the Civil Rights Movement, and the discussion of how to prioritize different aspects of voter suppression is a compelling dramatization of the “potluck method.” Following Allen, what is called for now is the sharing of special knowledge among individuals and communities affected by violent policing, including representatives of the police.  The December killings of New York City police officers Wenjian Liu and Rafael Ramos further heightened the polarization between police and protestors. President Obama offered one strategy for defusing that polarization in his State of the Union address when he presented scenarios designed to evoke reciprocity and mutual responsiveness.  Christopher Torres’s killing introduces an additional set of issues about the treatment of people with mental illness that complicates the image of a white supremacist state dominating black bodies—as does the fact that neither Liu nor Ramos was white.

What is needed now is a forum to produce and publicize a middle register of knowledge that addresses both local circumstances, such as the overly complicated government structure in Ferguson or the corruption in the Albuquerque Police Department, and more systemic problems such as the legacy of racism, a weak system of mental health care, and ready access to guns. Such a forum would exemplify the potluck method and embody the ideals of deliberative democracy as President Obama described them in The Audacity of Hope. Noting the diffuse operations of power in the government of the United States, he emphasized the importance of building a deliberative democracy where, “all citizens are required to engage in a process of testing their ideas against an external reality, persuading others of their point of view, and building shifting alliances of consent.” The present focus on police violence offers an opportunity to engage in such a democratic deliberation. The issues are emotional, and the stakes are high. But without the social sharing that Aristotle compared to a potluck meal, we will all remain hungry for solutions.

[i] In “Equality as Singularity:  Rethinking Literature and Democracy,” I relate Allen’s treatment of equality to the approach developed by French theorist Pierre Rosanvallon and consider both in relation to literature. The essay appears in a forthcoming special issue of New Literary History devoted to political theory.

*

Sandra M. Gustafson is professor of English and American studies at the University of Notre Dame. She is writing a book on conflict and democracy in classic American fiction with funding from the National Endowment for the Humanities.

To read more about Imagining Deliberative Democracy in the Early American Republic, click here.

Add a Comment
48. Free e-book for February: Floating Gold

9780226430362

Our free e-book for February is Christopher Kemp’s idiosyncratic exegesis on the backstory of whale poop, Floating Gold: A Natural (and Unnatural) History of Ambergris.

***

“Preternaturally hardened whale dung” is not the first image that comes to mind when we think of perfume, otherwise a symbol of glamour and allure. But the key ingredient that makes the sophisticated scent linger on the skin is precisely this bizarre digestive by-product—ambergris. Despite being one of the world’s most expensive substances (its value is nearly that of gold and has at times in history been triple it), ambergris is also one of the world’s least known. But with this unusual and highly alluring book, Christopher Kemp promises to change that by uncovering the unique history of ambergris.

A rare secretion produced only by sperm whales, which have a fondness for squid but an inability to digest their beaks, ambergris is expelled at sea and floats on ocean currents for years, slowly transforming, before it sometimes washes ashore looking like a nondescript waxy pebble. It can appear almost anywhere but is found so rarely, it might as well appear nowhere. Kemp’s journey begins with an encounter on a New Zealand beach with a giant lump of faux ambergris—determined after much excitement to be nothing more exotic than lard—that inspires a comprehensive quest to seek out ambergris and its story. He takes us from the wild, rocky New Zealand coastline to Stewart Island, a remote, windswept island in the southern seas, to Boston and Cape Cod, and back again. Along the way, he tracks down the secretive collectors and traders who populate the clandestine modern-day ambergris trade.

Floating Gold is an entertaining and lively history that not only covers these precious gray lumps and those who covet them but also presents a highly informative account of the natural history of whales, squid, and ocean ecology, and even a history of the perfume industry. Kemp’s obsessive curiosity is infectious, and eager readers will feel as though they have stumbled upon a precious bounty of this intriguing substance.

Download your free copy of Floating Gold, here.

Add a Comment
49. Excerpt: Elena Conis’s Vaccine Nation

9780226923765

An excerpt from Vaccine Nation: America’s Changing Relationship with Immunization

by Elena Conis

(recent pieces featuring the book at the Washington Post and Bloomberg News)

***

“Mumps in Wartime”

Between 1963 and 1969, the nation‘s flourishing pharmaceutical industry launched several vaccines against measles, a vaccine against mumps, and a vaccine against rubella in rapid succession. The measles vaccine became the focus of the federally sponsored eradication campaign described in the previous chapter; the rubella vaccine prevented birth defects and became entwined with the intensifying abortion politics of the time. Both vaccines overshadowed the debut of the vaccine against mumps, a disease of relatively little concern to most Americans in the late 1960s. Mumps was never an object of public dread, as polio had been, and its vaccine was never anxiously awaited, like the Salk polio vaccine had been. Nor was mumps ever singled out for a high–profile immunization campaign or for eradication, as measles had been. All of which made it quite remarkable that, within a few years of its debut, the mumps vaccine would be administered to millions of American children with little fanfare or resistance.

The mumps vaccine first brought to market in 1968 was developed by Maurice Hilleman, then head of Virus and Cell Biology at the burgeoning pharmaceutical company Merck. Hilleman was just beginning to earn a reputation as a giant in the field of vaccine development; upon his death in 2005, the New York Times would credit him with saving “more lives than any other scientist in the 20th century.” Today the histories of mumps vaccine that appear in medical textbooks and the like often begin in 1963, when Hilleman‘s daughter, six–year–old Jeryl Lynn, came down with a sore throat and swollen glands. A widower who found himself tending to his daughter‘s care, Hilleman was suddenly inspired to begin work on a vaccine against mumps—which he began by swabbing Jeryl Lynn‘s throat. Jeryl Lynn‘s viral strain was isolated, cultured, and then gradually weakened, or attenuated, in Merck‘s labs. After field trials throughout Pennsylvania proved the resulting shot effective, the “Jeryl–Lynn strain” vaccine against mumps, also known as Mumpsvax, was approved for use.

But Hilleman was not the first to try or even succeed at developing a vaccine against mumps. Research on a mumps vaccine began in earnest during the 1940s, when the United States‘ entry into World War II gave military scientists reason to take a close look at the disease. As U.S. engagement in the war began, U.S. Public Health Service researchers began reviewing data and literature on the major communicable infections affecting troops during the First World War. They noted that mumps, though not a significant cause of death, was one of the top reasons troops were sent to the infirmary and absent from duty in that war—often for well over two weeks at a time. Mumps had long been recognized as a common but not “severe” disease of childhood that typically caused fever and swelling of the salivary glands. But when it struck teens and adults, its usually rare complications—including inflammation of the reproductive organs and pancreas—became more frequent and more troublesome. Because of its highly contagious nature, mumps spread rapidly through crowded barracks and training camps. Because of its tendency to inflame the testes, it was second only to venereal disease in disabling recruits. In the interest of national defense, the disease clearly warranted further study. PHS researchers estimated that during World War I, mumps had cost the United States close to 4 million “man days” from duty, contributing to more total days lost from duty than foreign forces saw.

The problem of mumps among soldiers quickly became apparent during the Second World War, too, as the infection once again began to spread through army camps. This time around, however, scientists had new information at hand: scientists in the 1930s had determined that mumps was caused by a virus and that it could, at least theoretically, be prevented through immunization. PHS surgeon Karl Habel noted that while civilians didn‘t have to worry about mumps, the fact that infection was a serious problem for the armed forces now justified the search for a vaccine. “To the military surgeon, mumps is no passing indisposition of benign course,” two Harvard epidemiologists concurred. Tipped off to the problem of mumps by a U.S. Army general and funded by the Office of Scientific Research and Development (OSRD), the source of federal support for military research at the time, a group of Harvard researchers began experiments to promote mumps virus immunity in macaque monkeys in the lab.

Within a few years, the Harvard researchers, led by biologist John Enders, had developed a diagnostic test using antigens from the monkey‘s salivary glands, as well as a rudimentary vaccine. In a subsequent set of experiments, conducted both by the Harvard group and by Habel at the National Institute of Health, vaccines containing weakened mumps virus were produced and tested in institutionalized children and plantation laborers in Florida, who had been brought from the West Indies to work on sugar plantations during the war. With men packed ten to a bunkhouse in the camps, mumps was rampant, pulling workers off the fields and sending them to the infirmary for weeks at a time. When PHS scientists injected the men with experimental vaccine, one man in 1,344 went into anaphylactic shock, but he recovered with a shot of adrenaline and “not a single day of work was lost,” reported Habel. To the researchers, the vaccine seemed safe and fairly effective—even though some of the vaccinated came down with the mumps. What remained, noted Enders, was for someone to continue experimenting until scientists had a strain infective enough to provoke a complete immune response while weak enough not to cause any signs or symptoms of the disease.

Those experiments would wait for well over a decade. Research on the mumps vaccine, urgent in wartime, became a casualty of shifting national priorities and the vagaries of government funding. As the war faded from memory, polio, a civilian concern, became the nation‘s number one medical priority. By the end of the 1940s, the Harvard group‘s research was being supported by the National Foundation for Infantile Paralysis, which was devoted to polio research, and no longer by OSRD. Enders stopped publishing on the mumps virus in 1949 and instead turned his full–time attention to the cultivation of polio virus. Habel, at the NIH, also began studying polio. With polio occupying multiple daily headlines throughout the 1950s, mumps lost its place on the nation‘s political and scientific agendas.

Although mumps received scant resources in the 1950s, Lederle Laboratories commercialized the partially protective mumps vaccine, which was about 50 percent effective and offered about a year of protection. When the American Medical Association‘s Council on Drugs reviewed the vaccine in 1957, they didn‘t see much use for it. The AMA advised against administering the shot to children, noting that in children mumps and its “sequelae,” or complications, were “not severe.” The AMA acknowledged the vaccine‘s potential utility in certain populations of adults and children—namely, military personnel, medical students, orphans, and institutionalized patients—but the fact that such populations would need to be revaccinated every year made the vaccine‘s deployment impractical. The little professional discussion generated by the vaccine revealed a similar ambivalence. Some observers even came to the disease‘s defense. Edward Shaw, a physician at the University of California School of Medicine, argued that given the vaccine‘s temporary protection, “deliberate exposure to the disease in childhood … may be desirable”: it was the only way to ensure lifelong immunity, he noted, and it came with few risks. The most significant risk, in his view, was that infected children would pass the disease to susceptible adults. But even this concern failed to move experts to urge vaccination. War had made mumps a public health priority for the U.S. government in the 1940s, but the resulting technology (imperfect as it was) generated little interest or enthusiasm in a time of peace, when other health concerns loomed larger.

After the war but before the new live virus vaccine was introduced, mumps went back to being what it long had been: an innocuous and sometimes amusing childhood disease. The amusing nature of mumps in the 1950s is evident even in seemingly serious documents from the time. When the New York State health department published a brochure on mumps in 1955, they adopted a light tone and a comical caricature of chipmunk–cheeked “Billy” to describe a brush with the disease. In the Chicago papers, health columnist and Chicago Medical Society president Theodore Van Dellen noted that when struck with mumps, “the victim is likely to be dubbed ‘moon–face.‘” Such representations of mumps typically minimized the disease‘s severity. Van Dellen noted that while mumps did have some unpleasant complications—including the one that had garnered so much attention during the war—“the sex gland complication is not always as serious as we have been led to believe.” The health department brochure pointed out that “children seldom develop complications,” and should therefore not be vaccinated: “Almost always a child is better off having mumps: the case is milder in childhood and gives him life–long immunity.”

Such conceptualizations helped shape popular representations of the illness. In press reports from the time, an almost exaggeratedly lighthearted attitude toward mumps prevailed. In Atlanta, papers reported with amusement on the oldest adult to come down with mumps, an Englishwoman who had reached the impressive age of ninety-nine. Chicago papers featured the sad but cute story of the boy whose poodle went missing when mumps prevented him from being able to whistle to call his dog home. In Los Angeles, the daily paper told the funny tale of a young couple forced to exchange marital vows by phone when the groom came down with mumps just before the big day. Los Angeles Times readers speculated on whether the word “mumps” was singular or plural, while Chicago Daily Defender readers got to laugh at a photo of a fat-cheeked matron and her fat-cheeked cocker spaniel, heads wrapped in matching dressings to soothe their mumps-swollen glands. Did dogs and cats actually get the mumps? In the interest of entertaining readers, newspapers speculated on that as well.

The top reason mumps made headlines throughout the fifties and into the sixties, however, was its propensity to bench professional athletes. Track stars, baseball players, boxers, football stars, and coaches all made the news when struck by mumps. So did Washington Redskins player Clyde Goodnight, whose story revealed a paradox of mumps at midcentury: the disease was widely regarded with casual dismissal and a smirk, even as large enterprises fretted over its potential to cut into profits. When Goodnight came down with a case of mumps in 1950, his coaches giddily planned to announce his infection to the press and then send him into the field to play anyway, where the Pittsburgh Steelers, they gambled, would be sure to leave him open for passes. But the plan was nixed before game time by the Redskins‘ public relations department, who feared the jubilant Goodnight might run up in the stands after a good play and give fans the mumps. Noted one of the team‘s publicists: “That‘s not good business.”

When Baltimore Orioles outfielder Frank Robinson came down with the mumps during an away game against the Los Angeles Angels in 1968, however, the tone of the team‘s response was markedly different. Merck‘s new Mumpsvax vaccine had recently been licensed for sale, and the Orioles‘ managers moved quickly to vaccinate the whole team, along with their entire press corps and club officials. The Orioles‘ use of the new vaccine largely adhered to the guidelines that Surgeon General William Stewart had announced upon the vaccine‘s approval: it was for preteens, teenagers, and adults who hadn‘t yet had a case of the mumps. (For the time being, at least, it wasn‘t recommended for children.) The Angels‘ management, by contrast, decided not to vaccinate their players—despite their good chances of having come into contact with mumps in the field.

Baseball‘s lack of consensus on how or whether to use the mumps vaccine was symptomatic of the nation‘s response as a whole. Cultural ambivalence toward mumps had translated into ambivalence toward the disease‘s new prophylactic, too. That ambivalence was well–captured in the hit movie Bullitt, which came out the same year as the new mumps vaccine. In the film‘s opening scene, San Francisco cop Frank Bullitt readies himself for the workday ahead as his partner, Don Delgetti, reads the day‘s headlines aloud. “Mumps vaccine on the market … the government authorized yesterday what officials term the first clearly effective vaccine to prevent mumps … ,” Delgetti begins—until Bullitt sharply cuts him off. “Why don‘t you just relax and have your orange juice and shut up, Delgetti.” Bullitt, a sixties icon of machismo and virility, has more important things to worry about than the mumps. So, apparently, did the rest of the country. The Los Angeles Times announced the vaccine‘s approval on page 12, and the New York Times buried the story on page 72, as the war in Vietnam and the race to the moon took center stage.

Also ambivalent about the vaccine—or, more accurately, the vaccine‘s use—were the health professionals grappling with what it meant to have such a tool at their disposal. Just prior to Mumpsvax‘s approval, the federal Advisory Committee on Immunization Practices at the CDC recommended that the vaccine be administered to any child approaching or in puberty; men who had not yet had the mumps; and children living in institutions, where “epidemic mumps can be particularly disruptive.” Almost immediately, groups of medical and scientific professionals began to take issue with various aspects of these national guidelines. For some, the vaccine‘s unknown duration was troubling: ongoing trials had by then demonstrated just two years of protection. To others, the very nature of the disease against which the shot protected raised philosophical questions about vaccination that had yet to be addressed. The Consumers Union flinched at the recommendation that institutionalized children be vaccinated, arguing that “mere convenience is insufficient justification for preventing the children from getting mumps and thus perhaps escorting them into adulthood without immunity.” The editors of the New England Journal of Medicine advised against mass application of mumps vaccine, arguing that the “general benignity of mumps” did not justify “the expenditure of large amounts of time, efforts, and funds.” The journal‘s editors also decried the exaggeration of mumps‘ complications, noting that the risk of damage to the male sex glands and nervous system had been overstated. These facts, coupled with the ever–present risk of hazards attendant with any vaccination program, justified, in their estimation, “conservative” use of the vaccine.

This debate over how to use the mumps vaccine was often coupled with the more generalized reflection that Mumpsvax helped spark over the appropriate use of vaccines in what health experts began referring to as a new era of vaccination. In contrast to polio or smallpox, the eradication of mumps was far from urgent, noted the editors of the prestigious medical journal the Lancet. In this “next stage” of vaccination, marked by “prevention of milder virus diseases,” they wrote, “a cautious attitude now prevails.” If vaccines were to be wielded against diseases that represented only a “minor inconvenience,” such as mumps, then such vaccines needed to be effective, completely free of side effects, long–lasting, and must not in any way increase more severe adult forms of childhood infections, they argued. Immunization officials at the CDC acknowledged that with the approval of the mumps vaccine, they had been “forced to chart a course through unknown waters.” They agreed that the control of severe illnesses had “shifted the priorities for vaccine development to the remaining milder diseases,” but how to prevent these milder infections remained an open question. They delineated but a single criterion justifying a vaccine‘s use against such a disease: that it pose less of a hazard than its target infection.

To other observers, this was not enough. A vaccine should not only be harmless—it should also produce immunity as well as or better than natural infection, maintained Oklahoma physician Harris Riley. The fact that the mumps vaccine in particular became available before the longevity of its protection was known complicated matters for many weighing in on the professional debate. Perhaps, said Massachusetts health officer Morton Madoff, physicians should be left to decide for themselves how to use such vaccines as “a matter of conscience.” His comment revealed a hesitancy to delineate policy that many displayed when faced with the uncharted territory the mumps vaccine had laid bare. It also hinted at an attempt to shift future blame in case mumps vaccination went awry down the line—a possibility that occurred to many observers given the still–unknown duration of the vaccine‘s protection.

Mumps was not a top public health priority in 1967—in fact, it was not even a reportable disease—but the licensure of Mumpsvax would change the disease‘s standing over the course of the next decade. When the vaccine was licensed, editors at the Lancet noted that there had been little interest in a mumps vaccine until such a vaccine became available. Similarly, a CDC scientist remarked that the vaccine had “stimulated renewed interest in mumps” and had forced scientists to confront how little they knew about the disease‘s etiology and epidemiology. If the proper application of a vaccine against a mild infection remained unclear, what was clear—to scientists at the CDC at least—was that such ambiguities could be rectified through further study of both the vaccine and the disease. Given a new tool, that is, scientists were determined to figure out how best to use it. In the process of doing so, they would also begin to create new representations of mumps, effectively changing how they and Americans in general would perceive the disease in the future.

A Changing Disease

Shortly after the mumps vaccine‘s approval, CDC epidemiologist Adolf Karchmer gave a speech on the infection and its vaccine at an annual immunization conference. In light of the difficulties that health officials and medical associations were facing in trying to determine how best to use the vaccine, Karchmer devoted his talk to a review of existing knowledge on mumps. Aside from the fact that the disease caused few annual deaths, peaked in spring, and affected mostly children, particularly males, there was much scientists didn‘t know about mumps. They weren‘t certain about the disease‘s true prevalence; asymptomatic cases made commonly cited numbers a likely underestimate. There was disagreement over whether the disease occurred in six– to seven–year cycles. Scientists weren‘t sure whether infection was truly a cause of male impotence and sterility. And they didn‘t know the precise nature of the virus‘s effects on the nervous system. Karchmer expressed a concern shared by many: if the vaccine was administered to children and teens, and if it proved to wear off with time, would vaccination create a population of non–immune adults even more susceptible to the disease and its serious complications than the current population? Karchmer and others thus worried—at this early stage, at least—that trying to control mumps not only wouldn‘t be worth the resources it would require, but that it might also create a bigger public health problem down the road.

To address this concern, CDC scientists took a two–pronged approach to better understanding mumps and the potential for its vaccine. They reinstated mumps surveillance, which had been implemented following World War I but suspended after World War II. They also issued a request to state health departments across the country, asking for help identifying local outbreaks of mumps that they could use to study both the disease and the vaccine. Within a few months, the agency had dispatched teams of epidemiologists to study mumps outbreaks in Campbell and Fleming Counties in Kentucky, the Colin Anderson Center for the “mentally retarded” in West Virginia, and the Fort Custer State Home for the mentally retarded in Michigan.

The Fort Custer State Home in Augusta, Michigan, hadn‘t had a single mumps outbreak in its ten years of existence when the CDC began to investigate a rash of 105 cases that occurred in late 1967. In pages upon pages of detailed notes, the scientists documented the symptoms (largely low–grade fever and runny noses) as well as the habits and behaviors of the home‘s children. They noted not only who slept where, who ate with whom, and which playgrounds the children used, but also who was a “toilet sitter,” who was a “drippy, drooley, messy eater,” who was “spastic,” who “puts fingers in mouth,” and who had “impressive oral–centered behavior.” The index case—the boy who presumably brought the disease into the home—was described as a “gregarious and restless child who spends most of his waking hours darting from one play group to another, is notably untidy and often places his fingers or his thumbs in his mouth.” The importance of these behaviors was unproven, remarked the researchers, but they seemed worth noting. Combined with other observations—such as which child left the home, for example, to go on a picnic with his sister—it‘s clear that the Fort Custer children were viewed as a petri dish of infection threatening the community at large.

Although the researchers‘ notes explicitly stated that the Fort Custer findings were not necessarily applicable to the general population, they were presented to the 1968 meeting of the American Public Health Association as if they were. The investigation revealed that mumps took about fifteen to eighteen days to incubate, and then lasted between three and six days, causing fever for one or two days. Complications were rare (three boys ages eleven and up suffered swollen testes), and attack rates were highest among the youngest children. The team also concluded that crowding alone was insufficient for mumps to spread; interaction had to be “intimate,” involving activities that stimulated the flow and spread of saliva, such as the thumb–sucking and messy eating so common among not only institutionalized children but children of all kinds.

Mumps preferentially strikes children, so it followed that children offered the most convenient population for studying the disease’s epidemiology. But in asking a question about children, scientists ipso facto obtained an answer—or series of answers—about children. Although mumps had previously been considered a significant health problem only among adults, the evidence in favor of immunizing children now began to accumulate. Such evidence came not only from studies like the one at Fort Custer, but also from local reports from across the country. When Bellingham and Whatcom Counties in Washington State made the mumps vaccine available in county and school clinics, for example, few adults and older children sought the shot; instead, five- to nine-year-olds were the most frequently vaccinated. This wasn’t necessarily a bad thing, said Washington health officer Phillip Jones, who pointed out that there were two ways to attack a health problem: you could either immunize a susceptible population or protect them from exposure. Immunizing children did both, as it protected children directly and in turn stopped exposure of adults, who usually caught the disease from kids. Immunizing children sidestepped the problem he had noticed in his own county. “It is impractical to think that immunization of adults and teen-agers against mumps will have any significant impact on the total incidence of adult and teen-age mumps. It is very difficult to motivate these people,” said Jones. “On the other hand, parents of younger children eagerly seek immunization of these younger children and there are numerous well-established programs for the immunization of children, to which mumps immunization can be added.”

Setting aside concerns regarding the dangers of giving children immunity of unknown duration, Jones effectively articulated the general consensus on immunization of his time. The polio immunization drives described in chapters 1 and 2 had helped forge the impression that vaccines were “for children” as opposed to adults. The establishment of routine pediatric care, also discussed in chapter 1, offered a convenient setting for broad administration of vaccines, as well as an audience primed to accept the practice. As a Washington, D.C., health officer remarked, his district found that they could effectively use the smallpox vaccine, which most “mothers” eagerly sought for their children, as “bait” to lure them in for vaccines against other infections. The vaccination of children got an added boost from the news that Russia, the United States‘ key Cold War opponent and foil in the space race, had by the end of 1967 already vaccinated more than a million of its youngsters against mumps.

The initial hesitation to vaccinate children against mumps was further dismantled by concurrent discourse concerning a separate vaccine, against rubella (then commonly known as German measles). In the mid-1960s, rubella had joined polio and smallpox in the ranks of diseases actively instilling fear in parents, and particularly mothers. Rubella, a viral infection that typically caused rash and a fever, was harmless in children. But when pregnant women caught the infection, it posed a risk of harm to the fetus. A nationwide rubella epidemic in 1963 and 1964 resulted in a reported 30,000 fetal deaths and the birth of more than 20,000 children with severe handicaps. In fact, no sooner had the nation’s Advisory Committee on Immunization Practices been formed, in 1964, than its members began to discuss the potential for a pending rubella vaccine to prevent similar outbreaks in the future. But as research on the vaccine progressed, it became apparent that while the shot produced no side effects in children, in women it caused a “rubella-like syndrome” in addition to swollen and painful joints. Combined with the fact that the vaccine’s potential to cause birth defects was unknown, and that the vaccination of women planning to become pregnant was perceived as logistically difficult, federal health officials concluded that “the widespread immunization of children would seem to be a safer and more efficient way to control rubella syndrome.” Immunization of children against rubella was further justified based on the observation that children were “the major source of virus dissemination in the community.” Pregnant women, that is, would be protected from the disease as long as they didn’t come into contact with it.

The decision to recommend the mass immunization of children against rubella marked the first time that vaccination was deployed in a manner that offered no direct benefit to the individuals vaccinated, as historian Leslie Reagan has noted. Reagan and, separately, sociologist Jacob Heller have argued that a unique cultural impetus was at play in the adoption of this policy: as an accepted but difficult-to-verify means of obtaining a therapeutic abortion at a time when all other forms of abortion were illegal, rubella infection was linked to the contentious abortion politics of the time. A pregnant woman, that is, could legitimately obtain an otherwise illegal abortion by claiming that she had been exposed to rubella, even if she had no symptoms of the disease. Eliminating rubella from communities through vaccination of children would close this loophole—or so some abortion opponents likely hoped. Eliminating rubella was also one means of addressing the growing epidemic of mental retardation, since the virus was known to cause birth defects and congenital deformities that led children to be either physically disabled or cognitively impaired. Rubella immunization promotion thus built directly upon the broader public's anxieties about abortion, the "crippling" diseases (such as polio), and mental retardation.

In its early years, the promotion of mumps immunization built on some of these same fears. Federal immunization brochures from the 1940s and 1950s occasionally mentioned that mumps could swell the brain or the meninges (the membranes surrounding the brain), but they never mentioned a risk of brain damage. In the late 1960s, however, such insinuations began to appear in reports on the new vaccine. Hilleman's early papers on the mumps vaccine trials opened with the repeated statement that "Mumps is a common childhood disease that may be severely and even permanently crippling when it involves the brain." When Chicago announced Mumps Prevention Day, the city's medical director described mumps as a disease that can "contribute to mental retardation." Though newspaper reporters focused more consistently on the risk that mumps posed to male fertility, many echoed the "news" that mumps could cause permanent damage to the brain. Such reports obscured substantial differentials of risk noted in the scientific literature. For unlike the link between mumps and testicular swelling, the relationship between mumps and brain damage or mental retardation was neither proven nor quantified, even though "benign" swelling of the meninges was documented to appear in 15 percent of childhood cases. In a nation just beginning to address the treatment of mentally retarded children as a social (instead of private) problem, however, any opportunity to prevent further potential cases of brain damage, no matter how small, was welcomed by both parents and cost-benefit-calculating municipalities.

The notion that vaccines protected the health (and, therefore, the productivity and utility) of future adult citizens had long been in place by the time the rubella vaccine was licensed in 1969. In addition to fulfilling this role, the rubella vaccine and the mumps vaccine—which, again, was most commonly depicted as a guard against sterility and "damage to the sex glands" in men—were also deployed to ensure the existence of future citizens, by protecting the reproductive capacities of the American population. The vaccination of children against both rubella and mumps was thus linked to cultural anxiety over falling fertility in the post–Baby Boom United States. In this context, mumps infection became nearly as much a cause for concern in the American home as it had been in army barracks and worker camps two decades before. This view of the disease was captured in a 1973 episode of the popular television sitcom The Brady Bunch, in which panic ensued when young Bobby Brady learned he might have caught the mumps from his girlfriend and put his entire family at risk of infection. "Bobby, for your first kiss, did you have to pick a girl with the mumps?" asked his father, who had made it to adulthood without a case of the disease. This cultural anxiety was also evident in immunization policy discussions. CDC scientists stressed the importance of immunizing against mumps given men's fears of mumps-induced impotence and sterility—even as they acknowledged that such complications were "rather poorly documented and thought to occur rarely, if at all."

As the new mumps vaccine was defining its role, the revolution in reproductive technologies, rights, and discourse that extended from the 1960s into the 1970s was reshaping American—particularly middle-class American—attitudes toward children in a manner that had direct bearing on the culture's willingness to accept a growing number of vaccines for children. The year 1967 saw more vaccines under development than ever before. Merck's own investment in vaccine research and promotion exemplified the trend; even as doctors and health officials were debating how to use Mumpsvax, Hilleman's lab was testing a combined vaccine against measles, rubella, and mumps that would ultimately help make the company a giant in the vaccine market. This boom in vaccine commodification coincided with the gradual shrinking of American families that new contraceptive technologies and the changing social role of women (among other factors) had helped engender.

The link between these two trends found expression in shifting attitudes toward the value of children, which were well captured by Chicago Tribune columnist Joan Beck in 1967. Beck predicted that 1967 would be a "vintage year" for babies, for the 1967 baby stood "the best chance in history of being truly wanted" and the "best chance in history to grow up healthier and brighter and to get a better education than his forebears." He'd be healthier—and smarter—thanks in large part to vaccines, which would enable him to "skip" mumps, rubella, and measles, with their attendant potential to "take the edge off a child's intelligence." American children might be fewer in number as well as costly, Beck wrote, but they'd be both deeply desired and ultimately well worth the tremendous investment. This attitude is indicative of the soaring emotional value that children accrued in the last half of the twentieth century. In the 1960s, vaccination advocates appealed directly to the parent of the highly valued child, by emphasizing the importance of vaccinating against diseases that seemed rare or mild, or whose complications seemed even rarer still. Noted one CDC scientist, who extolled the importance of vaccination against such diseases as diphtheria and whooping cough even as they became increasingly rare: "The disease incidence may be one in a thousand, but if that one is your child, the incidence is a hundred percent."

Discourse concerning the "wantedness" of individual children in the post–Baby Boom era reflected a predominantly white middle-class conceptualization of children. As middle-class birth rates continued to fall, reaching a nadir in 1978, vaccines kept company with other commodities—a suburban home, quality schooling, a good college—that shaped the truly wanted child's middle-class upbringing. From the late 1960s through the 1970s, vaccination in general was increasingly represented as both a modern comfort and a convenience of contemporary living. This portrayal dovetailed with the frequent depiction of mild infections, and mumps in particular, as "nuisances" Americans no longer needed to "tolerate." No longer did Americans of any age have to suffer the "variety of spots and lumps and whoops" that once plagued American childhood, noted one reporter. Even CDC publications commented on "the luxury and ease of health provided by artificial antigens" of the modern age.

And even though mumps, for one, was not a serious disease, remarked one magazine writer, the vaccination was there "for those who want to be spared even the slight discomfort of a case." Mumps vaccination in fact epitomized the realization of ease of modern living through vaccination. Because it kept kids home from school and parents home from work, "it is inconvenient, to say the least, to have mumps," noted a Massachusetts health official. "Why should we tolerate it any longer?" Merck aimed to capitalize on this view with ads it ran in the seventies: "To help avoid the discomfort, the inconvenience—and the possibility of complications: Mumpsvax," read the ad copy. Vaccines against infections such as mumps might not be perceived as absolutely necessary, but the physical and material comfort they provided was not to be undervalued.

To read more about Vaccine Nation, click here.

50. Excerpt: In Search of a Lost Avant-Garde

9780226173818

 

An excerpt from In Search of a Lost Avant-Garde: An Anthropologist Investigates the Contemporary Art Museum

by Matti Bunzl

***

“JEFF KOONS <3 CHICAGO”

I’m sitting in the conference room on the fifth floor of the MCA, the administrative nerve center, which is off limits to the public. It is late January and the temperatures have just plunged to near zero. But the museum staff is bustling with activity. With four months to go until the opening of the big Jeff Koons show, all hands are on deck. And there is a little bit of panic. Deadlines for the exhibit layout and catalogue are looming, and the artist has been hard to pin down. Everyone at the MCA knows why. Koons, who commands a studio that makes Warhol’s Factory look like a little workshop, is in colossal demand. For the MCA, the show has top priority. But for Koons, it is just one among many. In 2008 alone, he will have major exhibits in Berlin, New York, and Paris. The presentation at the Neue Nationalgalerie is pretty straightforward. Less so New York, where Koons is scheduled to take over the roof of the Metropolitan Museum, one of the city’s premier art destinations. But it may well be the French outing that most preoccupies the artist. With an invitation to present his work at Versailles, the stakes could not be higher. Indeed, when the show opens in the fall, the photos of Koons’s work at the Rococo palace, shockingly incongruous yet oddly at home, go around the world.

But on this morning, there is good news to share. As the marketing department reports, Koons has approved the publicity strategy for the MCA show. Most everyone in the group of curators, museum educators, and development staffers breathes a sigh of relief, not unlike the response of Don Draper’s team after a successful pitch. Mad Men, which made its widely hailed debut only a few months earlier, is on my mind, in fact. Sure, there is no smoking and drinking at the MCA. But the challenge faced by the museum is a lot like that of the fictional Sterling Cooper: how to take a product and fit it with an indelible essence, a singularity of feeling. Jeff Koons is hardly as generic as floor cleaner or facial cream. But given his ubiquity across the global art scene, the MCA presentation still needs a hook, something that can give it the luster of uniqueness.

Koons has history in Chicago, and that turns out to be the key. Yes, he may have been born and raised in Pennsylvania, graduated from the Maryland Institute College of Art in Baltimore, and settled in New York, where, after working as a commodities trader, he became a professional artist. But in between, for one year in the mid-1970s, Koons lived and studied in Chicago, taking courses at the School of the Art Institute and serving as an assistant to painter Ed Paschke. Enough to imagine the MCA show as a homecoming, and the second one at that. The first, it now appears, was the 1988 exhibit, which had paved his way to superstardom and cemented an enduring relationship between artist and city. No one in the room can be certain how Koons actually feels about his old stomping grounds. But the slogan stands: Jeff Koons <3 Chicago.

 ***

It’s a few weeks later. The group is back on the fifth floor. The mood is determined. On the agenda for today is the ad campaign. It will be a “communications blitz,” one of the staffers on the marketing team says. It will start with postcards sent to museum members urging them to save the date of the opening. “We need to communicate that it will be a real happening!”

“Koons is like a rock star,” someone seconds, “and we need to treat him like that.” Apparently, Justin Timberlake stopped by the MCA a few weeks ago, causing pandemonium among the school groups that happened to be touring the museum at the time. “Koons is just like that!” one of the marketers enthuses. “No, he’s not,” I’m thinking to myself. But for what seems like an endless few seconds, no one has the heart to burst the bubble. Finally, someone conjectures that, Koons’s art‑star status notwithstanding, people might not know what he looks like. It is suggested that the postcard feature a face shot.

The graphics for the ad campaign and catalogue cover are central to the conversation. We look at mockups, and the marketers share their excitement about the splashy images. There is much oohing and aahing. But, then, a minor hiccup. A curator notes that one of the pieces depicted in the copy will not actually be in the show, the loan request having been refused. Another image presents an issue as well. That piece will be on display, but it belongs to another museum. Maybe it, too, should be purged. No one is overly concerned, though. Given a virtually inexhaustible inventory of snazziness, Koons’s oeuvre is certain to throw up excellent replacements.

Splashy images would also be the focal point of an advertorial the marketing department is considering for a Condé Nast publication. Such a piece, to be run in either Vanity Fair or Architectural Digest, would be written by the MCA but laid out in the magazine’s style. It would complement the more conventional press strategies, like articles in local newspapers, and would be on message. This, as signs from the real world indicate that Koons may, in fact, truly like Chicago. He wants to attend a Bulls game with his kids while in town. From the standpoint of marketing, it’s a golden opportunity. “This artist likes Chicago sports,” one staffer gushes, something people would be “pleasantly surprised by.” The narrative that emerges is that of the local boy who made good. Indeed, when the official marketing copy arrives a few weeks later, it features sentences like these: “The kid who went to art school in Chicago and loved surrealism, dada, and Ed Paschke and the imagists—that kid made it big.”

 ***

A few weeks later still, back in the conference room on the fifth floor. Today’s topic: tie‑ins and merchandizing. Some of it is very straightforward, including a proposed relationship with Art Chicago, the local art fair. Other ideas are more outlandish, like a possible connection to The Incredible Hulk. The superhero movie, based on the Marvel Comics and starring Edward Norton, is set to open in mid‑June, and the staff member pitching the idea seems to be half joking. But as I look around the room, there are a lot of nods. The show, after all, will feature several pieces from Koons’s Hulk Elvis series, large paintings that juxtapose the cartoon character with such Americana as the Liberty Bell.

More concrete is a tie-in with Macy’s. This past fall, a large balloon version of Koons’s Rabbit made its debut in New York’s Thanksgiving Day Parade. For the MCA show, it could make a trip to Chicago, where it would be displayed at the Macy’s store in the Loop. I’m wondering if that might be a risky move. After all, the company is considered something of an interloper in the city, having taken over Marshall Field’s beloved flagship store in 2006. But the marketers are ecstatic about the opportunity. “This will really leverage the promotional aspects,” one of them exclaims. The word that keeps coming up is “cross-marketing,” and I take away that Jeff Koons stuff might soon be everywhere.

Stuff, in fact, is what really gets the group going today. Koons, it turns out, is a veritable merchandizing machine, which means that a lot of things can be sold in conjunction with the show. The list of products bearing his art ranges from the affordably populist (beach towels from Target) to the high-end luxurious (designs by Stella McCartney). But the news gets even better. Koons has given the MCA permission to manufacture a whole new line of T-shirts featuring Rabbit. We pass around production samples, and everyone agrees that the baby tees, in light blue and pink, are too cute for words.

Then it’s time for the food. As early as January, I heard about plans to delight museum patrons with Koons‑themed cuisine. Initially, there was some talk of cheesecake, but more recently, the word in the hallways has been cookies. Turns out, it’s a go. Koons just approved three separate designs. We’re back to oohing and aahing, when one of the marketers suggests that some of the cookies could be signed and sold as a limited edition. Another joke, I think. But he goes on, noting that some people would choose not to eat the cookies so they could sell them at Sotheby’s in a couple of decades. The atmosphere is jocund. So when one of the curators points out that worms would be crawling out of the cookies by then, it’s all taken as part of the general levity.

 ***

The concept of marketing is quite new to contemporary art museums. In the good old days, it was simply not seen as a necessity. Giving avant-garde art a place was the objective, which meant that institutions catered to a small group of cognoscenti and worried little about attracting the general public. All this changed once the spiral of growth started churning. Larger museums required larger audiences, not just to cover ever-increasing overhead but to validate contemporary art museums’ reinvention as major civic players.

The MCA is paradigmatic. Until the late 1970s, there was no marketing operation per se. What little advertising was done ran under the rubric of “public relations,” which was itself subsumed under “programming.” Public relations became a freestanding unit in the 1980s, and by the end of the decade marketing was officially added to its agenda. But it was not until the move into the new building in 1996 that a stand‑alone marketing department was added to the institution’s roster. The department made its presence felt immediately. Right away, its half dozen members began issuing a glossy magazine, initially called Flux and later renamed MCAMag, MCA Magazine, and eventually MCA Chicago. A few years later, a sprightly e‑news enterprise followed, keyed to the ever‑expanding website.

But marketing is much more pervasive in the contemporary art museum. Just before I arrive at the MCA, for example, its marketing department spearheads the museum’s fortieth‑anniversary celebration, a forty‑day extravaganza with numerous events and free admission. Some of the first meetings I attend at the institution are the postmortems, where the marketers take a lead in tallying the successes and failures of the initiative. There is much talk of “incentivizing membership.” Branding, too, is emphasized, particularly the ongoing need to “establish the museum’s point of view” by defining the contemporary. The latter is especially pressing in light of the imminent opening of the Art Institute’s Modern Wing. But the key word that recurs is gateway. The MCA, the marketers consistently argue, needs shows that appeal to “new and broader audiences” and signal that all Chicagoans are “welcome at the museum.”

Jeff Koons is their big hope for 2008. In a handout prepared for a meeting in February, they explain why: “Jeff Koons is by far one of the (if not the) most well known living artists today.” With the recent sale of Hanging Heart making “news even in InTouch Weekly” and his participation in the Macy’s Thanksgiving Day Parade, he “is doing what no artist has done since Andy Warhol.” He is becoming part of the “mainstream.” Even more importantly, “the art itself helps to make this a gateway.” The “images of pop culture icons and inflatable children’s toys democratize the art experience. Even the most novice of art viewers feel entitled to react to his work.”

With this, the marketing department takes a leading role in the preparations for the Koons show. Its members help organize and run the weekly meetings coordinating the museum‑wide efforts and rally the troops when morale is down. This is done corporate‑style, as in an exercise in which staffers go around the table to share what excites them personally about Jeff Koons. The curators can be conspicuously silent when marketing talk dominates the agenda. But that doesn’t mean there aren’t any tensions.

 ***

I’m having lunch with one of the curators. We’re sitting on the cafeteria side of Puck’s at the MCA, the vast, high‑ceilinged restaurant on the museum’s main exhibition floor. The conversation circles around a loaded topic, the frustrations with the marketing department. “I understand where they’re coming from,” she tells me, only to add that they may not “believe in the same things I do.” I ask for specifics and get a torrent that boils down to this: The curator sees the MCA as a space for adventure and experimentation where visitors encounter a contemporary culture they don’t already know. What marketing wants to do, she says, is to give people a pleasant experience amidst items they already like. “If they had their way, it would be Warhol all the time.” Individual viewpoints vary, of course. But I’m hearing similar things from other members of the curatorial department. Marketing, they tell me, can be fecklessly populist and insufficiently attuned to the intricacies of contemporary art and artists.

The feelings are mutual, or, to be more accurate, inverted. To the marketers, the curatorial department sometimes comes off as elitist and quixotic. When its members talk about some of the art they want to show, one of them tells me, it can “just sound crazy.” “Sometimes,” she continues, “I don’t even know what they are doing.” Even more exasperating, however, is the curators’ seeming disinterest in growing the museum’s audience. “They never think about how to attract more viewers” is a complaint I hear on more than one occasion.

If there is a convergence of views, it is only that the other side has too much power and influence.

***

For a while, I think that the fretting might be personal. Every institution, after all, breeds animosities, petty and otherwise, and the ever‑receptive anthropologist would seem to be the perfect outlet. But I am struck that the grievances are never ad hominem. The MCA’s employees, in fact, seem to genuinely like and respect one another. This is not surprising. The museum, after all, is a nonprofit whose staffers, no matter how “corporate” in orientation, could pursue eminently more lucrative careers elsewhere. The resulting feeling is that “we are all in this together,” a sentiment I hear expressed with equal regularity and conviction.

What, then, is it between curation and marketing? Over time, I come to see the tensions as intrinsic to the quest of bringing contemporary art to ever‑larger audiences. The issue, in other words, is structural.

 ***

When marketers look at contemporary art, they see a formidable challenge. Here is a product the general public knows little about, finds largely incomprehensible, and, occasionally, experiences as outright scary. This is as far as one can be from, say, introducing a new kind of soap. There, the relevance and purpose of the generic product are already well established, leaving marketers to work the magic of brand association. Maybe the campaign is all about fragrance or vitality or sex appeal—what it won’t be about is how soap itself is good for you.

Much of the marketing in the domain of high culture works in this very manner. When Chicago’s Lyric Opera advertises its new season, for example, it can safely assume that folks have a pretty accurate sense of the genre. What’s more, there is little need to justify the basic merits of the undertaking. Most people don’t go to the opera. But even those who find it boring or tedious are likely to accede to its edifying nature.

The same holds true for universal museums. In Chicago, that would be the Art Institute, whose holdings span the globe and reach from antiquity to the present. Marketing has relevance there, too. But much like at the Lyric, the value of the product is readily understood. So is its basic nature, particularly when it can take the form of such widely recognized icons as Georges Seurat’s La Grande Jatte or Grant Wood’s American Gothic.

At its most elemental, the marketing of a museum is orchestrated on the marquees at its entrance. With this in mind, the Art Institute’s advertising strategy is clear. It is the uncontested classics that get top billing, whether they are culled from the museum’s unparalleled collection or make an appearance as part of a traveling show. Mounted between the ornate columns at the majestic Michigan Avenue entrance, a typical tripartite display, such as the one from April 2007, looks like this: In the middle, bright red type on a blue banner advertises Cézanne to Picasso, a major show on the art dealer Ambroise Vollard, co-organized by the Art Institute and fresh from its stop at the Metropolitan Museum in New York. On the left, a bisected flag adds to the theme with apples by Cézanne and Picasso’s The Old Guitarist, one of the museum’s best-loved treasures. On the right, finally, a streamer with an ornately decorated plate—horse, rider, birds, and plants—publicizes Perpetual Glory, an exhibit of medieval Islamic ceramics. A couple of months down the road, the billboards announce a show of prints and drawings collected by prominent Chicago families, an exhibit on courtly art from West Africa, and free evening admission on Thursdays and Fridays. A little later still, it is an exhibit of sixteenth- and seventeenth-century drawings, a display of European tapestries, and the Art Institute’s logo.

What’s not on the Art Institute marquees is contemporary art. Sure, a canonized living master like my good friend Jasper Johns can make an occasional appearance. But the edgy fare served up by the museum’s contemporary department is absent. Nothing announces William Pope.L in 2007, Mario Ybarra Jr. in 2008, or Monica Bonvicini in 2009. The contemporary art market may be booming, but the Art Institute’s marketers assume that the general public cares only so much.

Their colleagues at the MCA don’t have that option. Tasked with marketing a contemporary art museum to an ever‑expanding audience, they have to find ways to engage the general public in their rarefied institution. It is an act of identification. “Often I, myself, don’t understand the art in the museum at first,” one marketer tells me, “but that gives me an advantage. I get where our audience is coming from.”

The issue goes far beyond marquees, then, although they are its perfect representation. For what’s at stake is the public imaginary of contemporary art. This is where marketing and curation are at loggerheads. The two departments ultimately seek to tell fundamentally different stories about the MCA and its contents. For the curators, the museum is a space for the new and therefore potentially difficult. For the marketers, that is precisely the problem. “People tend to spend leisure time doing something that is guaranteed to be a good use of their time,” they implore their colleagues. “That often means sticking with the familiar.” And so the stage is set for an uneasy dance, a perpetual pas de deux in which the partners are chained together while wearing repelling magnets.

***

To read more about In Search of a Lost Avant-Garde, click here.

