On December 3, 1933, a reporter for the New York Times named Shepard Stone tried to answer a question that had begun to puzzle many of his readers: How was it that in a single year, the nation that had brought the world Goethe and Bach, Hegel and Beethoven, had fallen so completely under the sway of a short, mustachioed dictator named Adolf Hitler? To some analysts, the answer was fundamentally social, as Stone acknowledged. Starvation, political chaos, violence in the streets—all had plagued the Weimar Republic that Hitler’s fascist state replaced. But neither Stone nor his editors thought such privations were enough to explain Hitler’s rise. Rather, wrote Stone, “something intangible was necessary to coordinate the resentments and hatreds which these forces engendered.”
That something was propaganda. Above an enormous photograph of a Nazi rally, with floodlit swastika banners towering two stories high and row upon row of helmeted soldiers leaning toward the lights, the article’s headline told its story: “Hitler’s Showmen Weave a Magic Spell: By a Vast Propaganda Aimed at Emotions, Germany’s Trance is Maintained.” For Stone and his editors, fascism was a fundamentally psychological condition. Its victims swayed in time, linked by fellow feeling, unable to reason. In part, they responded to Hitler’s charisma. But they also responded to the power of mass media. Hitler famously “hypnotized” the crowds at mass rallies until they roared with applause. His voice then traveled out from those arenas in radio waves, reaching Germans across the nation and inspiring in them the same hypnotic allegiance. As Stone suggested, Hitler’s personal appeal alone could not have transformed the mindset of the entire populace. Only mass media could have turned a nation famous for its philosophers into a land of unthinking automata: “With coordinated newspaper headlines overpowering him, with radio voices beseeching him, with news reels and feature pictures arousing him, and with politicians and professors philosophizing for him, the individual German has been unable to salvage his identity and has been engulfed in a brown wave. Today few Germans can separate the chaff from the wheat. They are living in a Nazi dream and not in the reality of the world.”
During and after World War II, this belief would drive many intellectuals and artists to imagine pro-democratic alternatives to authoritarian psyches and societies, and to the mass-mediated propaganda that seemed to produce them. But before we can explore those alternatives, we need to revisit the anxieties that made them so important to their makers. In the years leading up to the war, the fear of mass media and mass psychology that animated Stone’s account became ubiquitous among American intellectuals, politicians, and artists. When they gazed across the Atlantic to Hitler’s Germany and, to a lesser extent, Stalin’s Soviet Union and Mussolini’s Italy, American journalists and social scientists saw their longstanding anxieties about the power of mass media harden into a specific fear that newspapers, radio, and film were engines of fascist socialization.
Since the late nineteenth century, writers in Europe and the United States had dreaded the rise of mass industrial society. Such a society fractured the psyches of its members and rendered them vulnerable to collective fits of irrational violence, many feared. Now analysts worried that mass media drew individual citizens into protofascistic relationships with the centers of political and commercial power and with one another. In the one-to-many communication pattern of mass media they saw a model of political dictatorship. In mass media audiences, they saw the shadows of the German masses turning their collective eyes toward a single podium and a single leader. To enter into such a relationship with media, many worried, was to rehearse the psychology of fascism. The rise of National Socialism in Germany demonstrated that such rehearsals could transform one of the most cultured of nations—and perhaps even America itself—into a bastion of authoritarianism.
Could It Happen Here?
In the early 1930s, popular writers tended to see Hitler as an ordinary man who had somehow risen to extraordinary heights. Journalist Dorothy Thompson, who interviewed Hitler in 1931, characteristically described him as “formless, almost faceless, a man whose countenance is a caricature, a man whose framework seems cartilaginous, without bones. He is inconsequent and voluble, ill poised, insecure. He is the very prototype of the Little Man.” How was it that such a man should have acquired such power? she wondered.
As Shepard Stone had pointed out, part of the answer was surely political. In the chaos of the Weimar years, Hitler and his National Socialists promised national rejuvenation. They also threatened violent ends for any who opposed them. Yet these explanations found a comparatively small place in the American popular press and scholarship of the time, where more cultural and characterological explanations often held sway. In 1941, for instance, William McGovern, a professor of political science at Northwestern University, published a representative if long-winded analysis of the origins of National Socialism entitled From Luther to Hitler, which traced Nazi ideology to authoritarian ideals long embedded in German thought. Somehow Hitler had managed to harvest those ideals and so transform a German cultural trait into a principle of national unity. For McGovern and others, it was not only German politics that had produced National Socialism, but something in the German mindset.
This conclusion presented a problem: If German totalitarianism was rooted in German culture, how could Americans explain the apparent rise of fascism in the United States? Though few remember the fact today, in the late 1930s, uniformed fascists marched down American streets and their voices echoed over the radio airwaves. The Catholic demagogue Father Coughlin, for example—founder of the “Radio League of the Little Flower”—was a ubiquitous presence on American radio for much of the decade. He formed a political party to oppose Roosevelt in 1936, endorsed and helped publish the anti-Semitic tract known as the Protocols of the Elders of Zion and by 1938 could be heard spewing anti-Semitic and pro-fascist propaganda on the radio to a regular audience of some 3,500,000 listeners. A Gallup poll taken in January 1939 reported that some 67 percent of these listeners agreed with his views.
Alongside Father Coughlin, Americans could track the activities of William Dudley Pelley’s Silver Legion of America—an anti-Semitic paramilitary group formed in 1933 and modeled after Hitler’s brownshirts and Mussolini’s blackshirts. Though Pelley claimed to hear the voices of distant spirits, his group still attracted fifteen thousand members at its peak. Americans could also follow the Crusader White Shirts in Chattanooga, Tennessee; the American National-Socialist Party; and, of course, the Ku Klux Klan. For more than a few Americans in the 1930s, fascists were not merely threats from overseas. They lived next door.
The group that attracted the greatest notice of the American press in this period was the Amerikadeutscher Volksbund. The Bund had been created in 1936, when self-styled “American Führer” Fritz Kuhn, a German-born American citizen, was elected head of a German-American organization known as the Friends of New Germany. At its largest, the Bund probably had no more than twenty-five thousand members, most of them Americans of German extraction. Even so, on the night of February 20, 1939, they managed to bring twenty thousand people to Madison Square Garden for a pro-fascist rally. Though the event ostensibly celebrated George Washington’s birthday, the Garden was hung with anti-Semitic and pro-Nazi banners. Speakers wore uniforms clearly modeled on the military regalia of Nazi Germany. Three thousand uniformed men from the Bund’s pseudo–police force, the Ordnungsdienst, moved among the crowd, spotting and removing hecklers and soliciting donations. Throughout the rally, speakers and audience carefully proclaimed their pro-Americanism. They sang the “Star-Spangled Banner” and pledged “undivided” allegiance to the American flag. But speakers also launched a steady attack on Jews and the Roosevelt administration. One drew out the word “Roosevelt” in such a way that it sounded like “Rosenfeld.” Another tried to convince the audience that Judaism and communism were essentially the same social movement.
Outside the Garden, Mayor Fiorello La Guardia stationed 1,700 policemen to keep order. City leaders feared large and violent counterdemonstrations, but the mayor had refused to prevent the rally, arguing that permitting free speech was precisely what distinguished democratic America from fascist Germany. In the end, police counted approximately ten thousand mostly peaceful demonstrators and observers, some holding signs reading “Smash Anti-Semitism” and “Drive the Nazis Out of New York.” Journalists on the scene believed police estimates to be heavily exaggerated. Even if they were correct, pro-fascist rally-goers outnumbered protesters two to one. To reporters at the time, it seemed entirely plausible that the Bund enjoyed substantial support, at the very least among Americans of German origin, and perhaps among other communities as well.
Even before this rally, the Bund loomed large as an emblem of the threat fascism posed to the United States. On March 27, 1937, for instance, Life published a seven-page spread under the headline, “Like Communism It Masquerades as Americanism.” There on the first page of the piece, Americans could see a Bundist color guard at the Garden wearing imitations of Nazi brownshirt uniforms and standing in front of a massive portrait of George Washington. Another headline in the same feature underlined the visual point: “It Can Happen Here.”
The actual number of fascists in the United States never came anywhere near the critical mass required to challenge, let alone overthrow, the state. Yet in the late 1930s analysts across much of the political spectrum feared that it soon might. If it did, they reasoned, it would be because of one or both of two social forces. The first was a fascist fifth column inside the United States. In the 1930s, American journalists and politicians believed that Hitler’s Germany was engaging in a massive propaganda campaign inside the United States. Reporters noted that Germany had established active propaganda networks in European nations such as France, Norway, and the Netherlands, and suggested that they were exporting those tactics to American shores. In June of 1940, Life magazine announced, “These Are Signs of Fifth Columns Everywhere,” and published pictures of fascists congregating in South America, Asia, and Long Island. And despite the fact that Hitler’s regime had tried to distance itself from Fritz Kuhn, many Americans assumed that the Bund was as much as anything a front for Nazi interests in the United States.
German-American Bundists parade swastikas and American flags down East 86th Street, New York, October 30, 1939. Photograph from the Library of Congress, Prints and Photographs Division, NYWT&S Collection, LC-USZ62-117148.
The presence of Nazi agitators was only one part of the problem, though. The other was the power of language and of mass communication. Consider the national popularity of two groups that sought to challenge that power: the Institute for Propaganda Analysis and the General Semantics movement. Each presented a view of the individual psyche as vulnerable to irrational impulses and false beliefs. Each also suggested not only that communication could be manipulated by unscrupulous leaders, but that the media of communication—pictures, verbal language, symbols—were themselves naturally deceptive. Both agreed that the technologies of one-to-many communication amplified this power enormously. The individual American mind had become a battleground, and it was their mission to defend individual reason from the predations of fascism, of communication, and, potentially, of the individual’s own unconscious desires.
The Institute for Propaganda Analysis emerged in 1937 out of a class in “Education and Public Opinion” taught by Dr. Clyde Miller at Columbia’s Teacher’s College. Thanks to a $50,000 grant from Boston businessman Edward A. Filene, Miller, a number of New York-area colleagues, and a board of advisors that included leading sociologists Hadley Cantril, Leonard Doob, and Robert Lynd began creating study materials for a group of high schools in Illinois, New York, and Massachusetts. They also began publishing a monthly newsletter aimed primarily at teachers; it soon had almost six thousand subscribers.
The newsletter offered its readers a detailed training regime designed to help Americans achieve a heightened state of rational alertness. In the Institute’s materials the words and pictures of the mass media were scrims that obscured the motives and actions of distant powers. The source of their power to persuade lay primarily in their ability to stir up the emotions. The Institute implied that Americans could build up a psychological barrier to such manipulation by wrestling with newspaper stories and radio news accounts. An Institute-sponsored guide for discussion group leaders published in 1938 noted that propaganda analysis should proceed in four stages: “1) survey the contents 2) search for evidence of the statements or claims 3) study the propagandist’s motive [and] 4) estimate the content’s persuasive force.” This work could be done alone or in groups, and it was a species of intellectual calisthenics. Much as members might exercise their bodies to ward off disease, so might they also exercise their reason so as to ward off the inflammation of their unconscious desires and its potentially authoritarian consequences.
For the members of the General Semantics movement, the fight against propaganda depended on decoupling symbols and words from their objects of reference. If “semantics” referred to the study of meaning, “general semantics” referred to the more specific and, in the minds of its practitioners, scientific study of language and reference. The term “general semantics” was coined by Polish philosopher and mathematician Alfred Korzybski in the early 1920s. Korzybski had published a series of articles and books in which he argued that human beings’ ability to pass knowledge down through time via language was what made them unique as a species. In 1933 he published an exceptionally influential extension of his early theories, entitled Science and Sanity. At its core, the book argued that much human unhappiness in both the psychological and social realms could be traced to our inability to separate the pictures in our heads and the communicative processes that put them there from material reality itself. To solve this problem, Korzybski offered a course in close scientific reasoning and linguistic analysis. To alleviate the power that symbols and their makers have over us, he argued, human beings needed to parse the terms in which language presented the world to them. Having done so, they could begin to recognize the world as it was and thus to experience some degree of mental health.
General Semantics enjoyed a three-decade vogue among American intellectuals and the general public. In the years immediately before World War II, it seemed to offer new tools with which to confront not only the psychological threats posed by propaganda but a whole panoply of social and psychological ills. In his popular 1938 volume The Tyranny of Words, the writer Stuart Chase catalogued those ills: “Finally, when a little headway has been made against economic disaster, the peoples of Europe, more civilized than any other living group, prepare solemnly and deliberately to blow one another to molecules. . . . Confusions persist because we have no true picture of the world outside, and so cannot talk to one another about how to stop them.”
To be able to understand the world and change it, Chase argued, Americans needed to break down language itself, to dissolve its terms from their material-world referents, and so distinguish the pictures in their heads from reality. And nothing made the importance of that work clearer than the omnipresence of mass communication, propaganda, and the threat of a second world war. In 1941, linguist and future Senator S. I. Hayakawa’s volume Language in Action brought Chase’s argument and Korzybski’s theories into the public eye. Like Chase, Hayakawa argued that “we live in an environment shaped and partially created by hitherto unparalleled semantic influences: commercialized newspapers, commercialized radio programs, ‘public relations counsels,’ and the propaganda technique of nationalistic madmen.” To survive this onslaught, citizens needed scientific techniques for interpreting and resisting semantic assaults.
They especially needed techniques for disabling their immediate emotional responses to individual symbols. Hayakawa argued that human nervous systems tended to translate flows of experience into static pictures. Without training in General Semantics, they did so automatically. This in turn led quite literally to individual and collective madness. That is, words like “Nazi” and “Jew” conjured instant emotional responses; individuals lost track of the fact that the terms lacked immediate referents and were in fact so general as to be practically meaningless. Moreover, in their rush to emotional judgment, Hayakawa feared that citizens would rush to war as well. The only solution was a deep study of language and, with it, of our own roles in the communication process. As Hayakawa put it, “Men react to meaningless noises, maps of non-existent territories, as if they stood for actualities, and never suspect that there is anything wrong with the process. . . . To cure these evils, we must first go to work on ourselves. . . . [We must] understand how language works, what we are doing when we open these irresponsible mouths of ours, and what it is that happens, or should happen, when we listen or read.”
In Terror and Wonder, Pulitzer Prize–winning Chicago Tribune architecture critic Blair Kamin assembled his most memorable writing from the past decade, as well as some polemical observations on the changing context of the built environment. Among them are two that have taken on a new life in the past couple of weeks: “The Donald’s Dud: Trump’s Skyscraper, Shortened by the Post-9/11 Fear of Heights, Reaches Only for Mediocrity” and “A Skyscraper of Many Faces: In Trump’s Context-Driven Chicago Skyscraper, Beauty Is in the Eye—and the Vantage Point—of the Beholder.” The first piece decries the original design, leaving little room for ambivalence; the other considers the finished construction and, on the whole, praises it.
Fast forward. Trump’s skyscraper has now been branded unequivocally as part of Trump’s real estate empire, in twenty-foot-tall block letters that spell out his eponym. Kamin unleashed some sharp criticism of the sign in a Chicago Tribune column last week, laying the blame on a city government whose obscure politicking has allowed this particular type of self-aggrandizement to continue:
“It’s a lack of sophisticated design guidelines as well as the teeth to enforce them. Trump’s sign isn’t the only offender — it’s just the most egregious — in a city where skyline branding has run amok.”
“The mayor thinks the sign is awful,” Bill McCaffrey, a mayoral spokesman, told the Tribune on Wednesday. “It’s in very poor taste and scars what is otherwise an architecturally accomplished building.”
“Whatever the outcome in the Trump-Emanuel faceoff, Chicago needs to take the opposite tack, discouraging signs along its riverfront lest more Trump-style incursions defile what promises to be a great public space.”
Things escalated on Twitter, where Trump has long been known to broadcast his many opinions. Then Kamin was invited to appear on the Today Show, followed by a live call-in from Trump. It should be noted that Kamin’s three columns for the Tribune (here, here, and here) covering Trump’s skyscraper and the foibles of its branding are as prescient, intelligent, and thorough as the rest of his body of work, for which, again, he won a Pulitzer Prize. That didn’t stop Trump from trying to dismiss him.
Here’s a direct quote from Trump’s phone call:
“This was started by a third-rate architectural critic for the Chicago Tribune, who I thought got fired. He was gone for a long period of time. Most people thought he got fired. All of a sudden he re-emerges, and to get a little publicity, he started this campaign.”
This past weekend, on June 8th to be exact, the Ocean Project and the World Ocean Network celebrated World Oceans Day. The event recognizes that there is “one world ocean” connecting the planet, and to this end, was known as “World Ocean Day” until 2009, when the “s” was added in accordance with the resolution passed by the United Nations General Assembly, which officially designated the annual date as “World Oceans Day.” Even this semantic quandary attests to the passion of those who champion and protect our hydrosphere—with that in mind, we’re revisiting The Deep, a project that launched new endeavors in “tidal” acquisitions for the Press, and has led to a remarkable list in the oceanic sciences (under the helm of Christie Henry, editorial director of the Sciences and Social Sciences).
The Deep explores the deepest realms of the ocean, revealing a cast of more than 200 sometimes terrifying, often mesmerizing creatures in crystalline detail, some photographed for the very first time. The website associated with the book features an image gallery, an animated sampler, and beautiful pages, including the below profile of the glowing sucker octopus, one of the world’s few bioluminescent octopuses, native to the North Atlantic:
In the wake of The Deep (pun intended), Chicago’s recent books that focus on oceans and the life thriving within them include:
From Lawrence H. Summers, former Secretary of the Treasury and president emeritus of Harvard University, in the Financial Times:
“Atif Mian and Amir Sufi’s House of Debt, despite some tough competition, looks likely to be the most important economics book of 2014; it could be the most important book to come out of the 2008 financial crisis and subsequent Great Recession. Its arguments deserve careful attention, and its publication provides an opportunity to reconsider policy choices made in 2009 and 2010 regarding mortgage debt.”
House of Debt takes a complicated premise—unraveling the threads of the 2008 financial crisis from a tangle of Federal Reserve policies, insolvent investment banks, predatory mortgage lenders, and private label securities—and delivers a clean-cut conclusion: the Great Recession and Great Depression, as well as the current economic malaise in Europe, were caused by a large run-up in household debt followed by a significantly large drop in household spending. Recently, in addition to Summers’s endorsement in today’s Financial Times, the book has been profiled at the New York Times, the Wall Street Journal, the Atlantic, and the Economist, among others; Paul Krugman, writing for the NYT, noted that its associated House of Debt blog has “instantly become must reading.”
How do we move forward and break the cycle? With a direct attack on debt, say Mian and Sufi. More aggressive debt forgiveness after the crash helps, but as they illustrate, we can be rid of painful bubble-and-bust episodes only if the financial system moves away from its reliance on inflexible debt contracts.
To follow developments in global policy at the House of Debt blog, click here.
June 6, 2014, marks the 70th anniversary of the Allied invasion of Normandy, France: one of the most iconic moments of World War II, which resulted in an unprecedented loss of life and decisively shifted the tide in the Allies’ favor, leading to a restoration of the French Republic. Yesterday, to commemorate the event, we ran an excerpt from historian Mary Louise Roberts’s D-Day through French Eyes: Normandy 1944, which approaches the battle for Normandy from the perspective of French civilians, bearing witness from their homes and in the course of their everyday lives. Today, we’re following up with a brief Q & A, in which Roberts expands on how our understanding of that single day in history—June 6, 1944—has changed a much larger story.
You can read more from Roberts on revisiting the other side of D-Day’s history at Medium here, and check out more from her book here.
On the 70th anniversary of D-Day, how would you say our perspective of the event has shifted since June 6, 1944?
The memory of any important event like D-Day undergoes change over time. If you read a novel such as Joseph Heller’s Catch-22, you’ll see an image of the American GI in Europe which is not flattering. Writers like Heller (and also, arguably, Kurt Vonnegut) used the American GI to register their protest against arbitrary authority in the 1960s—a broad cultural theme of that era. In the 1990s, which marked the fiftieth anniversary of the landings, we became used to thinking about the GIs in a brighter light. Thanks to Tom Brokaw’s notion of the “Greatest Generation,” as well as films such as Saving Private Ryan, we came to remember American soldiers in France almost exclusively as strong, self-sacrificing heroes.
The problem with this image of D-Day is that it ignores other agents who contributed greatly to the victory. Chief among them were the British, Canadian, and French armies, whose enormous efforts in Normandy should never be forgotten. Also overlooked by certain historians (Stephen Ambrose, for example) were the contributions of French civilians. In the days following the landings, these civilians committed small but substantial acts of courage. They guided the Americans through the woods and terrain, and gave them valuable intelligence concerning German military installations. In addition, they sheltered and cared for wounded paratroopers who had landed on the morning of the invasion.
So much of your work engages with reparative, reconstructive, and alternative accounts of history, especially those surrounding sex, gender, and war: what was the impetus to start researching D-Day as seen by the Normans?
My previous book, What Soldiers Do, began with a simple question: What were relations like between the American GIs and French women during the years from 1944 to 1946? To answer that question, I began by looking at the US trench journal Stars and Stripes. By way of the pages of that journal, I quickly realized two things. First, the American stereotype of the French woman was that she was not only seductive but easy to seduce. And second, the GIs were getting “sold” on the invasion through an old gender narrative: the knight who comes to rescue the damsel in distress. The GIs were fed an image of France as a nation of women awaiting American rescue. If you put those two things together, you have the version of the war presented to the average Joe in Stars and Stripes: if you fight bravely and liberate the women of France, you will be rewarded with kisses, embraces, and possibly more.
Many of the testimonies and first-person accounts of French civilians in D-Day through French Eyes rely on rich, sensory details and deeply personal narratives of terror and euphoria: What did it feel like to uncover these writings in the archive?
When I was researching What Soldiers Do, I traveled to many municipal and departmental archives in Normandy and Brittany in order to consult documents. Many of the archivists in these regions had collected unpublished memoirs from the summer of 1944. They were stunning. Not only did they demonstrate the efforts of the French to liberate themselves, but also they presented a newly intimate look at the American soldier that summer as he fought in the Norman bocage. I wanted very much to share these memoirs with the American public, and that is why I wrote D-Day through French Eyes.
What should we take away from D-Day through French Eyes?
The most important take-away from the book is just how much the French suffered for their freedom. The statistics are sobering: about three thousand Normans died in the first few days of the invasion—the same as the GI death toll in those days. Before the summer was over, about 19,000 Frenchmen had lost their lives. Hundreds of thousands more watched their homes reduced to rubble, or came back to a hometown that had been completely destroyed. Death—the bodies of soldiers and animals—became an everyday sight for children, as well as for adults.
As World War II continued to rage, and though they yearned for liberation, by late spring 1944, the French in Normandy nonetheless steeled themselves for war, knowing that their homes and land and fellow citizens would have to bear the brunt of any incoming attack. The events that took place that June 6th—the largest seaborne invasion in history—led to a restoration of the French Republic and, in a story familiar to many, shifted the tide in favor of the Allied Forces. In D-Day through French Eyes, historian Mary Louise Roberts turns those usual stories of D-Day around, taking readers across the Channel to view the invasion from a range of gripping first-person accounts as seen by French citizens throughout the region. And as we approach the 70th anniversary of one of the most iconic military events of the twentieth century, we’ll be running an excerpt from the book (today) accompanied by a Q & A with Roberts (tomorrow), to honor, expand upon, and reinvigorate the story we thought we knew.
THE NIGHT OF ALL NIGHTS
For Normans, the invasion began with noise. Just before midnight on Monday night, the fifth of June, hundreds of airplanes could be heard flying south over the Cotentin Peninsula. The constant rumble of plane engines and the distant roar of artillery—these two sounds combined to create what one witness called “a ceaseless storm.” Together they awakened thousands of Normans from their deepest sleep of the night. They rose from their beds, ran outside in their nightclothes, peered at the sky, and tried to figure out what was happening. Is this it? they wondered, overcome with fear and excitement.
The sound of airplanes was by no means a novel phenomenon. In the past months, civilians had grown accustomed to planes flying overhead—hundreds of them—almost every night. Allied bombing of strategic sites throughout northern France had become a common event. But this night was different; something new was happening. The aircraft were flying close to the ground and reaching targets. In response, German machine guns and artillery were firing furiously, contributing to the din. Soon the Norman night was filled with strange sights as well as sounds: the landing of parachutes and gliders, the dancing lights of artillery, the red glow of villages in flames. These sights were terrible, frightening, but also oddly beautiful.
In her memoir, Madame Hamel-Hateau, a schoolteacher in Neuville-au-Plain, near Sainte-Mère-Église, captures the dreamlike magic of the night of June 5–6. Hamel-Hateau lived close to the village school and spoke some English. The paratrooper she meets is a pathfinder sent to illuminate the landing areas for thousands of paratroopers who would soon land in Normandy to begin the invasion. He is one of the very first American servicemen to arrive in France.
In the month of June, the days no longer have an end and the night is really just a long twilight because the darkness is never complete. Around 10:00 p.m. this Monday, the ﬁfth of June, I have just gone to bed next to my mother. We are both sleeping on a daybed that we open up every night in the common room. Since the evacuation of Cherbourg, we have given our bedroom to my grandparents. The daybed faces the window, itself wide open on the night. In this way, from my bed, I am taking a moment to reﬂect on the end of this beautiful day. With sadness I think of a similar June night in 1940 when my boyfriend, Jean, had left to join the Free French. I had received news that he had landed in North Africa, so perhaps he was now in Italy? Perhaps it will be soon . . . I thought, but then refused to let my mind wander further. It was time to go to sleep.
Abruptly, the noise of airplanes breaks the night’s silence. We have gotten used to that sound. Since there are no military targets here and the railway is more than five miles away, we normally do not pay much attention. But the noise gets louder, and the sky begins to light up and get red. I rise out of bed, and soon the whole family is up as well. We go out into the courtyard. There everything seems calm. The only thing you can hear is the distant murmur of a bombardment in the direction of Quinéville. Yet there seems to be an endless number of planes mysteriously roaming about; their engines create an incessant hum. Then the noise decreases and becomes vague and distant. “It’s just like the last time,” says my mother, “when they had to bomb the blockhouse on the coast.” And we all go back to bed.
Mama goes to sleep right away. But I sit on my bed and continue to study the rectangle of cloudless night carved out by the window. The need to sleep slowly overwhelms me, but my eyes remain wide open. It is in this sort of half sleep that I begin to see fantastic shadows, somber shapes against the clear blackness of the sky. Like big black umbrellas, they rain down on the ﬁelds across the way, and then disappear behind the black line of the hedges.
No, I am not dreaming. Grandma was also not sleeping, and saw them from the window of the bedroom. I wake up Mama and my aunt. We hurriedly get dressed and go out into the courtyard. Once again, the sky is ﬁlled with a continuous, ever-intensifying hum. The hedgerows are alive with a strange crackling sound. Monsieur Dumont, the neighbor across the street, a widower who lives with his three children, has also come out of his house. He comes toward us and shows me, hanging on the edge of the roof courtyard, a parachute. The Dumont kids follow their father and join us in the school courtyard. But the night has not yet revealed its secret.
An impatient curiosity is stronger than the fear that grips me. I leave the courtyard and make my way onto the road. At the fence of a neighboring ﬁeld, a man is sitting on the edge of the embankment. He is harnessed with big bags and armed from head to foot: riﬂe, pistol, and some sort of knife. He makes a sign for me to approach him. In English I ask him if his plane was shot down. He negates that and in a low voice shoots back the incredible news: “It’s the big invasion. . . . Thousands and thousands of paratroopers are landing in this countryside tonight.” His French is excellent. “I am an American soldier, but I speak your language well; my mother is a Frenchwoman of the Basse Pyrénées.” . . . I ask him, “What is going on along the coast? Are there landings? And what about the Germans?” I was babbling; my emotions were overwhelming my thoughts. Ignoring my questions, he asked me about the proximity of the enemy and its relative presence in the area. I reassured him: “There are no Germans here; the closest troops are stationed in Sainte-Mère-Église, almost two kilometers from here.”
The American tells me he would like to look at his map in a place where the light of his electrical torch will not be easily spotted. I propose that he come inside our house. He hesitates because he fears, he says, in the event that the Germans unexpectedly appear, he will put us in danger. I insist and reassure him: “Monsieur Dumont and my old aunt are going to watch the area around the school, one in front and the other in back.” Then the soldier follows us, limping; he explains to me that he sprained his ankle on landing. But he would not let me care for him; there are many things more important. . . . In the classroom, to which Grandmama, Mama, and the Dumont children follow us, he takes off one of his three or four satchels, tears off the sticky little bands that sealed it, and takes out the maps. He spreads one out on a desk; it is a map of the region. He asks me to show him his precise location. He is astonished to discover how far he is from his targets: the railway tracks and the little river called the Merderet bordering the Neuville swamp toward the west. I show him the road to follow in order to arrive there, where he is supposed to meet his comrades. He looks at his watch. Without thinking, I do as well. It is 11:20 p.m. He folds up his map, removes any trace of his presence, and after taking some chocolate out of his pocket which he gives to the children, so ﬂabbergasted they forget to eat, he leaves us. He is perfectly calm and self-controlled, but the hand I shake is a little sweaty and stiff. I wish him luck in a voice that tries to be cheerful. And he adds in English—so that only I can understand—“The days to come are going to be terrible. Good luck, mademoiselle, thank you, I will not forget you for the rest of my life.” And he disappears like a vision in a dream.
Once again, the mystery of the night deepens. We stay outside waiting for who-knows-what, keeping our voices low. Suddenly, there is an extraordinary blaze of light. The horizon in the direction of the sea lights up as if reﬂecting an immense ﬁre that has been lit over the ocean. The formidable growl of marine artillery can be heard even here, although mufﬂed by and submerged in a multitude of other inchoate sounds. The black silhouettes of airplanes arrive in the clouds and turn around in the sky. One of them passes just above our little school; it puts on its lights and releases . . . what? For an instant, we think it’s a stick of bombs. But we are only starting to throw ourselves on the ground when parachutes open and ﬂoat down like a mass of bubbles in the clear night. Then they scatter before disappearing in the confusion of the nocturnal countryside.
Another airplane passes over and releases its cargo. At ﬁrst, the parachutes seem carried by the wake of the plane; then they drop vertiginously downward; ﬁnally the silk domes open. The descent gets slower and slower as they approach the ground. Those men whose dangling legs can clearly be seen get there a little more rapidly than those who hold bags of foodstuffs, equipment, ammunition. In a few moments, the sky is nothing more than an immense ballet of parachutes.
The spectacle on the earth is no less extraordinary. From all corners of the countryside shoot bursts of multicolored rockets as if thrown by invisible jugglers. In the ﬁelds all around us, big black planes slide silently toward the earth. Like ﬂying Dutchmen, they land as if in a dream. These are the ﬁrst groups of gliders. Our parachutist had been part of a group of scouts sent to signal the descent and landing zones.
If you’re knowledgeable about the world of book design in Chicago, you’re likely familiar with the work of Isaac Tobin, senior designer at the University of Chicago Press, an Art Directors Club Young Gun winner, and a frequent mention on New City’s “Lit 50” annual list. As one profiler at Chicagoist remarked, “[Tobin’s] work is simple yet striking, satiric at times and always beautiful. We appreciate his ability to work from all angles and truly capture the essence of what the containing work is all about. If Tobin’s on the project, definitely judge a book by its cover.”
If unfamiliar, don’t miss an opportunity to see “a long row” of Tobin’s covers, among the decade-spanning work of more than 100 other Chicagoans at CHGO DSGN, an exhibit curated by Rick Valicenti for the City’s Department of Cultural Affairs and Special Events, on view at the Cultural Center through November 2, 2014.
As a bit of a teaser, some candid images from the show’s opening follow below:
Photo by: Jill Shimabukuro.
Isaac Tobin, with adjacent design work. Photo by: Jill Shimabukuro.
More people watched his nationally syndicated television show between 1953 and 1955 than followed I Love Lucy. Decades after his death, the attendance records he set at Madison Square Garden, the Hollywood Bowl, and Radio City Music Hall still stand. Arguably the most popular entertainer of the twentieth century (check out the applause greeting his appearance on a 1984 episode of the David Letterman Show in the video clip below; also, “What do you do when you get Crisco on those rings?”), this very public figure nonetheless kept more than a few secrets. Darden Asbury Pyron leads us through the life of America’s foremost showman with his fresh, provocative, and definitive portrait of Liberace, an American boy.
Liberace’s career follows the trajectory of the classic American dream. Born in the Midwest to Polish-Italian immigrant parents, he was a child prodigy who, by the age of twenty, had performed with the Chicago Symphony Orchestra. Abandoning the concert stage for the lucrative and glittery world of nightclubs, celebrities, and television, Liberace became America’s most popular entertainer. While wildly successful and good-natured in public, Liberace, Pyron reveals, was a complicated man whose political, social, and religious conservatism existed side by side with a lifetime of secretive homosexuality. His swishy persona, meanwhile, belied an inner life of ferocious aggression and ambition. Pyron relates this private man to his public persona and places this remarkable life in the rapidly changing cultural landscape of twentieth-century America.
Pyron presents Liberace’s life as a metaphor, for both good and ill, of American culture, with its shopping malls and insatiable hunger for celebrity. In this fascinating biography, Pyron complicates and celebrates our image of the man for whom the streets were paved with gold lamé.
Who is Burt Hooton? Your guess is as good as mine, or more likely, it’s better than mine. My answer is he’s no Mickey Lolich, but that’s because I grew up in Detroit—though, as Susan Sontag would say, Under the Sign of Jack Morris. But back to your guess—if you’re schooled in Cubs lore, come to the Wrigley Centennial Trivia Showdown on Wednesday, May 28th, at the Harold Washington Library, in celebration of the year that brought you the births of Sun Ra, Julio Cortázar, and a certain stadium. Your hosts are Stuart Shea, doyen of Cubs history, and the Chicago Tribune’s Rick Kogan, and you can win t-shirts, plates, commemorative posters, and gift certificates to Birrieria Zaragoza, Clark Street Sports, Girl and the Goat, The People’s Garment Company, & Tales, Taverns, and Towns.
May 22, 2014, is Sun Ra’s centennial—the day the otherworldly, interstellar traveler, cosmic philosopher, and avant-jazz musician would have turned 100, had he not returned to his “Angel Race” (“I am not of this Earth.”) on the planet Saturn when he died in 1993. Sun Ra, along with his Arkestra, was a pioneering voice in afrofuturism, a fan of the improvised manifesto in music and verse, and a prolific (and versatile—his compositions mastered, then undermined, then regenerated almost every form of that very American medium: jazz) artist and performer. We are lucky enough to publish (or distribute) four books that touch on his contributions to twentieth-century culture, including three edited by Sun Ra curator-archivists John Corbett, Anthony Elms, and Terri Kapsalis: The Wisdom of Sun Ra: Sun Ra’s Polemical Broadsheets and Streetcorner Leaflets; Traveling the Spaceways: Sun Ra, the Astro Black, and Other Solar Myths; and Pathways to Unknown Worlds: Sun Ra, El Saturn, and Chicago’s Afro-Futurist Underground, 1954–1968. In addition, fellow experimental jazz legend George Lewis’s award-winning A Power Stronger than Itself: The AACM and American Experimental Music captures much of the legacy of Ra’s Chicago years on the AACM, a key period (1945 to 1961) in his evolution, when his sound changed from big-band jazz to the “cosmically oriented” sounds for which he would gain acclaim, notoriety, and influence on new communities of improvisers. This video clip showcases an interview with Ra from that time, when the Arkestra was on an international tour and shortly after Ra’s tenure in Egypt—it does a fine job of capturing some of the magic—the drive to iconoclastic innovation and new kinds of communicative and transcendental experiences—evident in Ra’s astounding body of work. Godspeed.
Today, we’re pleased to run the final installment of a conversation between Barbara J. King and Jessica Pierce, two of our most established experts on animal-human behavior. You can read Part I and Part II of their dialogue, on questions about animal confinement, evolution, and appropriate companionship, here and here. Below, they take on a particular ethical dilemma: in light of evolution and morality, what should we and our animal companions eat for dinner?
PIERCE: Now, two questions for you:
1. Should we also “honor the evolutionary path” of humans, when it comes to food? And what exactly would this mean? Perhaps I am hypocritical: I honor the “natural” diet of my cat, but I don’t buy into arguments that there is some “natural” way of eating for humankind (and I am particularly skeptical of arguments that meat-eating is “natural” and therefore justified).
2. Which animals can we eat without too heavy a moral cost? Are there some?
KING: My cats are relieved that their species now joins dogs on the conditionally acceptable list! Seriously though, thanks for a good back-and-forth on that issue. As to your evolutionary question, I think there’s a distinction—a difference that makes a difference, if you will—between cats’ and humans’ evolutionary trajectories when it comes to food. As far as I know, cats have always been carnivores, and that won’t be changing anytime soon. The evolutionary trajectory of humans is more complicated. For one thing, we’re omnivores. This means that the contribution of meat to the diet of contemporary populations varies greatly, and indeed the same was true for ancestral populations in prehistory. As a species, there’s no question that meat-eating played a role in our evolution, yet at the same time, we are flexible and facultative when it comes to what we eat, and we can stay healthy on many different kinds of diets. There isn’t, then, a single right or natural way to eat—which, when I’ve written about it in my blogging for NPR, outrages the fiercer advocates of the Paleo diet!
Even more important, to my mind, is the fact that our evolution is not only about our teeth and our guts, but also about our cognition, our emotional connection to other life around us, and our sense of ethics. Unlike cats, we can think critically, as you do all the time, Jessica, about our responsibility towards other creatures. Global inequities and global hunger mean that of course millions of people eat meat because other animals are a key protein source for them. For anyone across the world fortunate enough to have economic choices in the matter, “honoring our evolutionary path” could well, in my view, embrace vegetarianism or veganism.
I’m just now grabbing serious hold of the question of which animals we can eat without too heavy a moral cost. The way I’m approaching my next book is not to provide an across-the-board answer, because I’m uninterested in prescribing what people should or shouldn’t eat—that’s not my approach at all. At the moment, I’m myself a pescatarian, and I’m grappling with what that means and whether I want to continue to eat fish and, if so, which fish. Instead I’m interested in writing, in what I hope are fresh ways, about animals lots of people do eat—ranging from octopus to chickens and even to insects. I’ve become keenly intrigued by entomophagy, even experimenting just a bit with insect-eating, in order to think about what it may mean in the future for issues of global hunger and animal welfare.
In a way, we’re back where we started. It’s clear to me that you feel, too, a constant engagement in your own life with the issues you write about, yes?
PIERCE: Yes, and I suppose that is why I both love and sometimes hate my chosen career: the constant sense of engagement. Why I love it is obvious—it makes life, and work, a seamless web, and it keeps life interesting. Still, it feels as if I can never escape the moral questions, and can never reach a point of complete comfort with some of the most important choices I make from day to day: sharing my life with non-human animals and deciding what to eat.
To read more about the work of King and Pierce, click here and here.
We’re back with Part II of a conversation between anthropologist Barbara J. King and bioethicist Jessica Pierce on the lives of animals—and how our relationships with them correspond with certain philosophical and ethical ideals. King’s current project takes a nuanced look at the ethical questions raised by eating (or not eating) animals; its working title is Animals We Eat. Pierce, too, has a book in the works: Run, Spot, Run, a scientifically and philosophically grounded exploration of the ethics of pet ownership that seriously questions whether we are good for our pets. Here, their dialogue takes up the question of animal confinement—and the implications of our own food ethics for the choices we make for our pets. You can read yesterday’s post here; be sure to join us tomorrow for the conversation’s final installment.
PIERCE: So let me ask you about cats, since it sounds like you share your life with several feline companions. I think cats pose an interesting and challenging case. Although I have cats in the “maybe” category, I don’t feel confident that this is the right place for them. It’s possible that they belong on the “yes” list.
One of the big issues, for me, is the inside-outside dilemma. I have a cat (Thor) who is an inside cat. The reasons I keep him inside seem pretty convincing: 1) Thor probably wouldn’t live very long if he roamed free. I live on a hillside populated by coyotes, eagles, foxes, and mountain lions. Thor has already been hunted by an eagle and a fox—both on the same day!—during one of his rare escapes. 2) Thor would undoubtedly hunt the birds and small critters that also live on the hillside, and I hate to depopulate the wild. BUT, I can’t get over the feeling that Thor is being held captive against his will, like a slave. He yearns for the wild. All day, every day he sits by the glass doors to the back and watches. He darts out whenever he can. As a compromise, I give Thor supervised outside time, where I basically follow him around like a helicopter mom, until I get tired of dodging cactus. But these little ten-minute excursions just seem to whet his appetite. Thor has a pretty happy life, I think, but it seems to me that he’s a happy slave. I would say that nearly every day, I think about just opening the door and saying to Thor, “Go and have fun and take your chances. Hopefully I’ll see you tonight for dinner.”
What are your own thoughts on the ethics of cats as pets?
And, to bring things around to the topic of your own next book, what should we be feeding our dogs and cats? I’ve chosen veganism, for my own “food ethic,” but I don’t feel right about making Thor or my two dogs, Maya and Bella, vegan. Nevertheless, I cringe every time I go into the store to buy meat for the pets. How do you think about this question?
KING: Well, “which animals can we confine without too heavy a moral cost” is an extremely important question, and I’m so glad you are taking this on. In a way, it’s a parallel to the question I’m asking in my own new writing, which could be phrased as “which animals can we eat without too heavy a moral cost.” But right now, sure, let’s talk about cats! At the moment we live with five rescued cats in our house (they are kept entirely indoors), and we care also for eleven former feral cats that we rescued from a threatening situation (created by humans), who live in a spacious pen in our yard, and two semi-feral cats who come and go in our yard. All eighteen, of course, we paid to have spayed or neutered. Other than that commonality, it’s a vast range: even after years of gentle care, we can’t touch quite a few of the formerly feral cats at all—they are still too wary. So is that pet-keeping or animal rescue? The two overlap, but not completely, I think.
But, yes, we grapple with ethical questions. We do feed all our cats meat, and for me this is honoring a particular evolutionary path: cats are carnivores. Since we don’t buy or eat meat for ourselves, it’s not our favorite thing to be bringing into the house, but I see this as just one of those fallouts from a cross-species friendship. Having said that, I do find it very difficult when one of the yard cats hunts, captures, and kills a bird. I’m all the time being told that feral-cat rescue work is morally unacceptable because these cats take a great toll on birds and other wildlife. It’s not an easy situation, because I care about all animals. But I can’t see that cats’ hunting behaviors should doom them to inhumane treatment or even death any more than coyotes’ hunting our cats (which in Virginia they most certainly do) should condemn the coyotes to that fate. And our own species’ negative impacts on bird and other wildlife populations are immense; we would do well to work on those aspects. For me, another key, as I’ve indicated, is spay-neuter: to work seriously to reduce cat populations.
I see that some of this is a bit muddled: I’m tacking between inside-the-house pets and out-of-the-house rescued cats. For me the bottom line is that all our cats seem, insofar as we can assess this with any degree of accuracy, content. Our indoor cats don’t visibly yearn to be outside. Among the five of them, they have well-defined favorite napping places, routines, cat play partners, cat allies and sometimes cat enemies, with a fairly complex social system going on to occupy their minds. All eighteen—indoor, pen, and yard—are dear to us, dear personalities, but I want very much to say that we aim to enhance their lives as much as we know they enhance ours. At this point, Jessica, let me throw it back to you. What do you think? Do cats still make it to the “conditionally acceptable” list, and if not, what do we do with the millions of pet cats?
PIERCE: I like your idea of “honoring the evolutionary path” of cats, by feeding them meat. And I use a similar logic with Thor: as I understand it, cats need animal protein to be optimally healthy, and it seems unfair to deprive them of this. But every time I feed Thor (and my dogs—they get meat, too), I feel sorry for the suffering of the animal that has become their dinner.
And yes, cats are on my “conditionally acceptable” list. I think there are lots of different contexts in which cats can and do co-exist with humans—as pampered indoor cats, as barn cats, and as feral colonies. Cats can probably thrive in all of these environments. Cats and humans can certainly form mutual attachments and can bond very deeply. I also think that veterinarians, behaviorists, and cat owners are paying a lot more attention to making sure that cats have stimulating and enriching experiences and aren’t suffering from boredom. The fact that millions of unwanted cats languish in shelters is simply awful, and with a certain moral trepidation I endorse active spay/neuter campaigns. I agree with you that the impact of feral cat colonies on birds and small mammals is a drop in the bucket compared to our own human impacts (cars certainly kill more birds and small critters than cats do), and it seems unfair to blame the cats for doing what comes naturally.
Join us tomorrow for the final installment—
And, in the meantime, to read more about the work of King and Pierce, click here and here.
To those interested in the ethical and philosophical issues surrounding our attachment to—and fascination with—our companion species, Barbara J. King and Jessica Pierce need no introduction. From her initial anthropological observations of wild monkeys in Kenya and the plight of captive apes to her pathbreaking work on animal emotion and cognition, King has become one of our most trusted commentators on the lives of animals (Just this week, the research that informed her most recent book How Animals Grieve was cited by television’s Cesar Millan, better known as “the Dog Whisperer.”). Pierce, author of The Last Walk: Reflections on Our Pets at the End of Their Lives and coauthor (with Marc Bekoff) of Wild Justice: The Moral Lives of Animals, has spent the past two decades writing at the intersection of bioethics and human-animal interaction, defining the field of environmental bioethics along the way. Both writers are at work on new books (more on that tomorrow), so naturally, we thought to put them in conversation about the issues and personal stakes surrounding the work they do—the resulting dialogue is both touching and elucidating, and we’ll be running it on the blog for the rest of this week: stay tuned.
UCP: Barbara, while your next book is about eating (and/or not eating) animals, Jessica’s questions the very premise of pet ownership, asking whether we really are good for our pets. Both books should generate some controversy, in part because both seem likely to hit people powerfully in parts of their lives where emotion tends to trump rationality. I’m curious about whether Jessica’s previous book, The Last Walk, had that effect on you, as a pet owner (and animal rescuer)—does the scientist fight with the individual pet lover in a situation where the fate of your pet may be at stake?
KING: These days, there’s really no conflict between my science-y self and my animal-lover self. I’ve always felt a deep interest in, and emotional connection to, animals as individuals who express distinctive personalities as they go about their everyday lives. And happily, the study of animal behavior nowadays embraces this approach. I don’t mean that we scientists now project human emotions uncritically onto animals, but rather, we realize we aren’t the only emotional beings in the equation: we are feeling, thinking creatures observing and trying to tune into the behavior of other feeling, thinking creatures, ranging in my case from the wild baboons I studied in Kenya to the domestic or feral cats I now rescue, live with, and, when the time comes at the very end of life, help to die a good death.
And it was precisely in this framework that Jessica’s The Last Walk grabbed me and moved me so much. Jessica’s choice to place at the center of the book’s narrative her aging dog, Ody, and her emotions about Ody’s impending death and her desire to ensure it be a good death, was, for me, perfect, and complemented by hugely helpful material from science and medicine about how we may best approach our pets’ last years. The result in the book was a tangle of emotion and reason that mirrors what we feel in real life for our pets: not a tangle as in a fight, or a confusion, but a tangle as in a connected web that emerges naturally and with beautiful honesty.
Jessica, I’m intrigued by your book-in-progress. In recent years, I’ve been keenly participating in efforts to really see and stop what we do to wild animals like elephants and dolphins when we keep them captive, using them (in zoos and marine theme parks, for example) for our own entertainment. But our pets are domesticated animals, not wild ones—or should be, as wild animals don’t make good pets. I don’t think of them as being “held captive” for selfish reasons. How are you, though, thinking about this issue?
PIERCE: I feel like I’ve undergone several (many!) shifts in perspective as I’ve researched the book on pet-keeping, partly depending on whether I’m working on a section about the beauty of the human-animal bond or a section about the widespread abuse of pet animals. When I started thinking about this book (originally entitled Confessions of a Reformed Pet Addict), I had an inkling that pet keeping had some dark sides and I wasn’t entirely comfortable with it—despite being an active consumer of pets as my daughter was growing up. (I subscribed to the “children learn important things from animals” school of thought.) I would say that overall, the more I’ve read and thought about it, the more uncomfortable I’ve become, and I’ve largely concluded that the whole pet-keeping enterprise is a very, very bad deal for the animals, despite how pleasing we humans might find it.
Usually the question “which animals make the best pets?” means something like this: Which animal will entertain me (or my child) the best? Which animal will be inexpensive and easy to care for? Which animal is least likely to bite my child’s finger off? I am asking the same question, from a different point of view. Which kinds of animals can be held captive by humans without causing undue physical harm or emotional suffering? Which animals can we confine without too heavy a moral cost?
One thing is clear to me—and you’ve already suggested it in your first email: Some animals make more appropriate companions than others, and wild animals in particular do not make good pets. Also on my list of “not so good pets” are birds and reptiles, and on the “morally problematic but perhaps acceptable” list are small rodents like rats and hamsters, rabbits, fish, and cats. Dogs are the only animal on my “conditionally acceptable” list.
Join us tomorrow for more of this conversation—
And, in the meantime, to read more about the work of King and Pierce, click here and here.
Born in Pottsville, Pennsylvania, Becker earned his MA (1953) and PhD (1955) from the University of Chicago, where he studied with the economist Milton Friedman. He began teaching as an assistant professor in 1954, left Chicago in 1957 for Columbia University, where he also conducted research at the National Bureau of Economic Research, and returned to Chicago in 1970, where he would spend the rest of his career.
Becker, who held a joint appointment as University Professor in the Departments of Economics and Sociology, remained active well into his eighties; his acute stance on the role of human capital in labor economics, his free-market orientation, and his commentary on the economic dimensions of social phenomena helped earn his reputation as “an original, prolific, and sometimes provocative” scholar.
In 1992, Becker won the Nobel Memorial Prize in Economic Sciences “for having extended the domain of microeconomic analysis to a wide range of human behavior and interaction, including non-market behavior.” In 2007, he was awarded the Presidential Medal of Freedom. That same year, he was among the founding Board of Editors for the Journal of Human Capital.
Modern economics too often seems to devolve into statistics and mathematical formulas, which is only one of the reasons the world will miss Gary Becker, who died on Saturday at age 83. The Nobel laureate always put the study of humanity first and foremost, applying the principles of his discipline to human capital and how it can best be utilized for the common good.
“He just pushed economics in so many different directions,” said [Kevin] Murphy, who collaborated with Mr. Becker in research on human capital, education, addiction and the economics of the family. “He believed that economics was helpful to understanding and improving people’s lives and that’s how he did his research and that’s how he taught.”
Murphy said Mr. Becker rarely ever talked about anything besides economics and his family.
“His commitment to his family and his commitment to economics were the two biggest things in his life and he liked it that way,” Murphy said. “He really loved economics and he loved the University of Chicago and he loved even more the combination of those two things.”
To visit Becker’s University of Chicago Press author’s page, click here.
To read the University of Chicago’s official memorial, click here.
Each year, the University of Chicago Press awards the Gordon J. Laing Prize, “to the faculty author, editor, or translator of a book published in the previous three years that has brought great distinction to the Press.”
“Tracing the cultural and scientific history of our understanding of memory, Winter introduces readers to innovative scientists and sensationalistic seekers. She draws on evidence ranging from scientific papers to diaries to movies in order to explore the way that new understandings from the laboratory have seeped out into psychiatrists’ offices, courtrooms and the culture at large. Along the way, she investigates the sensational battles over the validity of repressed memories and shows us how changes in technology—such as the emergence of recording devices and computers—have again and again altered the way we conceptualize and even try to study, the ways we remember.”
Winter, in turn, was kind enough to let us publish her remarks from the Laing Prize reception earlier this month; read them in full after the jump below.
From left: Alison Winter, associate professor of history; Garrett P. Kiely, director of the University of Chicago Press; and University President Robert J. Zimmer celebrate the University of Chicago Press awarding Winter the Gordon J. Laing Prize for 2014. Photo by: Robert Kozloff. Courtesy of: UChicagoNews.
Thank you. First of all, I want to say that it matters enormously that this decision was made by my peers. I have won a few prizes for previous work, but this is the one that means the most.
I want to thank my editor at the University of Chicago Press, Christie Henry, and also to remember the person who introduced me to the Press, the wonderful editor of my first book, Susan Abrams, who sadly is no longer with us. This award gives me a great opportunity to underscore just how engaged the Press is with the university, and with the intellectual life of the university. I was involved in a conference on the history of science two weeks ago, and several editors from the Press not only came to the conference but asked questions. People attending the conference from other universities, all with their own presses, remarked on how unusual and impressive that level of engagement and attention was.
The rest of what I’d like to say on this occasion is about the University of Chicago itself:
I came to the University of Chicago with this book project in the works. I had some notes, but nothing actually written. I had a topic, but I also had enormous deficits of expertise. My background was in Victorian history of science, while the material that would eventually become Memory was of a different period, a different country, and, to understand, required expertise in numerous areas about which I knew essentially nothing.
But the University of Chicago is the perfect place to be if that’s your problem. It is the ideal place in which to act according to what my husband, Adrian Johns, calls the “principle of hot pursuit”—the term comes, of course, from police chases that cross state lines, but what he means is a license for academics to stray into other disciplinary turf in pursuit of a hot topic. That was where I found myself: pursuing many fascinating topics that pulled me, in hot pursuit, over various disciplinary lines.
Finding myself for the first time in a large history department, I knew I could draw on the advice of my Americanist colleagues like Jim Sparrow, Kathy Conzen, and Jane Dailey. But I also found it amazingly easy to connect with faculty in farther-flung fields, in a way that would have been much more difficult at some other institutions. When I began to encounter a lot of legal cases in my research, I found my way to Geoffrey Stone, and then to Emily Buss, who gave me a lot of guidance over the next several years. I got extensive advice from several colleagues in psychology, including Howard Nussbaum, Amanda Woodward, and Susan Goldin-Meadow, while Dan Margoliash helped me with many questions relating to the neurosciences. And when I wanted to figure out how to explore the history of moving images and recording devices as a central part of my project, James Chandler, and, especially, Tom Gunning, helped me in ways that changed the way I thought not only about the history of cinema and media, but about the history of technology, and the history of the human sciences.
That intellectual assistance is indicative of the kind of multidisciplinary engagement and collegiality I found at Chicago, a type of engagement that is fundamental to the way this university works. It thrives partly because the culture of the institution is in a general sense intellectually welcoming, or at least I found it to be. But there is also a structural reason: the committees, centers, and workshops that thrive in the interstices of departments intensify cross-disciplinary connections, in a way that I think is incredibly healthy intellectually. It certainly has been for me. I don’t think I could have written Memory without it.
I’d like to close by expressing an appreciation of one of those committees in particular: the Committee on Conceptual and Historical Studies of Science, with which I have been affiliated since I came here. As I looked over the list of Laing Prizes that have been awarded in recent years, I was struck by the fact that three out of the last four have gone to CHSS faculty. CHSS is perhaps the smallest graduate program in the university, but one of the things I love about it is that it embodies one of the biggest distinctions between Chicago and some other universities, namely, this commitment to meaningful engagement among faculty from many different fields. I feel very grateful and lucky to be here.
Christopher A. Lubienski and Sarah Theule Lubienski’s The Public School Advantage: Why Public Schools Outperform Private Schools takes on a daunting task: disputing the assertion that markets can solve our social problems, as evidenced by performances of private, voucher-based, and charter schools.
Since the first charter school was established in Minnesota in 1992, and in the wake of No Child Left Behind, the practice of public agencies funding private and semi-private educational institutions has remained controversial, as funding for capital improvements in our public schools (especially those in inner cities) continues to drop.
The case made by the Lubienskis is simple: drawing on two recent, large-scale, and nationally representative databases, they show that any benefit seen in private school performance now is more than explained by demographics. Private schools perform better because their students come from backgrounds of privilege, and are able to access support at many levels unfathomable for their public school counterparts. Despite this, as the Lubienskis demonstrate, gains in student achievement at public schools are at least as great and often greater than those at private ones.
In response to a recent piece published by Education Next, Chris Lubienski defended the arguments made in the book, and we excerpt a portion of his response, directed at claims of bias with regard to achievement tests and datasets, below:
The main shortcoming of Wolf and the bloggers’ efforts to refute our findings is that, for evidence, they simply point to evaluations of voucher programs, usually ones that they have conducted. As we have noted before, even if we accept the validity of their studies, these evaluations of local voucher programs simply do not address the issue at hand — the larger question of achievement in different types of schools. They are purporting to measure the impact of vouchers in what are actually small, non-representative samples of public and private schools. We are drawing from large, nationally representative datasets. Their studies are actually program evaluations in local contexts, and do not address the larger question of the relative effectiveness of U.S. public and private schools, despite what they claim. Wolf studies the effects of vouchers on students who are attempting to leave a specific public school for a private school that appears more desirable on some measure, whether it be peer demographics, instructional quality, or the use of uniforms. One cannot generalize to all public and private schools from such studies. Thus, it is simply silly to claim — as does a blog post from the libertarian Cato Institute — that “private schools beat public schools” based on those studies, especially when they provide absolutely no evidence that this is generally true. . . .
Where Wolf really misses the mark is in his concern that we use “tests that align more closely with public school than with private school curricula.” This claim almost comes across as a suggestion of some kind of conspiracy on our part to use only measures that arrive at a particular finding. Yet these NAEP and ECLS-K tests have been used by Wolf and his colleagues with no such prior complaint. Indeed, these are federal datasets whose construction and administration is overseen by bi-partisan panels of experts, including professionals in testing and assessment, curriculum and learning, US Governors, state superintendents, teachers, businesspeople, and parents, as well as representation from the private school sector. One school choice advocate (at that time) called NAEP the “gold standard” because “the federal program tries to align its performance standards with international education standards.” These tests are constructed by experts to measure the most important content for students in today’s world, not to align with curriculum in one type of school or another.
Nevertheless, Wolf’s criticism that public schools are doing a better job in the areas measured by these tests misses the fact that this is not a weakness of our study but instead one of our major findings: that private schools are less willing to adopt current curricular standards and more likely to employ unqualified teachers who use dated instructional practices. And, as the data show, students in private schools are more likely to sit in rows, do worksheets, and see math as memorization. These factors point to the dangers of deregulation and autonomy that people like Wolf champion. As we discuss in the book, this reflects the fact that public schools have more often embraced reform models based on professional understandings of how children most effectively learn mathematics rather than the market models in many private schools that endorse many parents’ preferences for back-to-basics, and outdated methods of teaching mathematics. Wolf’s criticism is akin to saying that scores are not a good measure of football teams because, now that the sport has evolved away from the traditional 3-yards-and-a-cloud-of-dust-based ground game, teams that have a good passing game score more often.
The Oldest Living Things in the World was a labor of love for artist and photographer Rachel Sussman—the project, to document and photograph continuously living organisms 2,000 years old and older, has been around in one form or another since 2004. The result is a stunning collection of images that function as much more than eye candy in the realm of flora and fauna—Sussman’s work quietly, and with unimpeachable integrity, makes a case for the living history of our planet: where we’ve come since year zero, what we stand to lose in the future if we don’t change our ways, and why we should commit to a more intuitive relationship with the natural world.
Above you can view a trailer for the book, which hints at the spectacular flora with which Sussman comes into contact: an 80,000-year-old colony of aspen in Utah and a 43,600-year-old self-propagating shrub in Tasmania, among them. Sussman continues to make a name for herself as part of a new wave of interdisciplinary artist-researchers, and was recently named a 2014 Guggenheim Fellow, as well as an inaugural Art + Technology Lab awardee from LACMA.
The durable mystery of longevity makes the species in this book all the more precious, and all the more worthy of being preserved. Looking at an organism that has endured for thousands of years is an awesome experience, because it makes us feel like mere gastrotrichs. But it is an even more awesome experience to recognize the bond we share with a 13,000-year-old Palmer’s oak tree, and to wonder how we evolved such different lifespans on this Earth.
To read more about The Oldest Living Things in the World, click here.
*Caveat: I realize it is part of my job to endorse Norman Maclean, but this is wholly sincere. Maclean’s fascination with toughness was couched under two veils of redemption: his prose is pained in its evocation of loss and its struggle to both narrate and literate the tragic confines of human behavior; and what comes through a work such as Young Men and Fire (which is a World Book Night selection this April 23rd), is the bored patience and cautiously learned excavation of a natural teacher, of someone who cares to rescind the relationship between art and life, and then recast it in a more vigilant if forgiving light. That book is spectacular.
Anyhow, Dexter’s profile is weird and narratively disjointed—it reads like a Barry Hannah short story without the lustful reproach and booze, which I think Maclean would probably appreciate. It’s not a coincidence that Dexter went on to renown as a fiction writer. It’s very much worth reading.
Elsewhere, we recently saw a testament to Maclean’s stature as a teacher: Justice John Paul Stevens did a Q & A with the New York Times and proclaimed Maclean—once a professor who taught a course in poetry at the University of Chicago, in a town he would love all his life—”the teacher to whom [he] is the most indebted.”
To read more about the work of Norman Maclean, click here.
Ted Cohen, legendary professor at the University of Chicago and scholar of aesthetic philosophy, whose expertise included “jokes, baseball, television, photography, painting and sculpture, as well as the philosophy of language and formal logic,” passed away last Friday at age 74.
While some philosophers aim to construct large-scale theories, others “look with a very fine, acute eye at specific phenomena and work from the example outwards, beginning with the ordinary and exposing the extraordinary within it,” said Cohen’s longtime friend and colleague Josef Stern. “Ted was that kind of philosopher.”
Many students remembered him as an expert in his field and an excellent professor, always welcoming others’ insight and connecting his rambling anecdotes back to the text. The “classic image” of him smoking outside of Harper Memorial Library wearing a red beret will also be a part of that memory, said fourth-year Julie Huh. “His presence exuded such nonchalance, and he always took his time with his cigarette outside Harper.”
The detection of a slight swirling by scientists at the South Pole using the BICEP2 telescope makes a case for the existence of gravitational waves—and that, in turn, would point to the cosmic inflation of the Universe, support the theory of the Big Bang, and confirm another facet of Albert Einstein’s theory of gravity and general relativity. Though these observations are not yet confirmed, scholar and expert Harry Collins, author of Gravity’s Ghost and Big Dog: Scientific Discovery and Social Analysis in the Twenty-First Century, was kind enough to elaborate on the process, as well as what the experimental results might mean—and what then is at stake for different scientific communities. You can read his post after the jump.
Gravitational waves and discoveries at the South Pole
On March 17, 2014, there was a huge fuss about the discovery of primordial gravitational waves that could tell us something about the Big Bang’s first tiny fraction of a second. Since I have spent most of my academic life studying the sociology of the—so far fruitless—direct search for gravitational waves, I received a lot of emails asking me about whether this was the real thing at last. I had to answer “no.” Let me take this opportunity to explain.
There’s not much sociology here: only an attempt to explain the science that provides the context for my professional studies. I have to point out that I do not represent the gravitational wave detection community, among whom there are many different opinions, including some revealing much more enthusiasm for and engagement with these findings than are expressed here.
The biggest and best-known direct detection devices are two interferometers, each with two four-kilometer arms at right angles. They are located in Washington and Louisiana, and together comprise the American “Laser Interferometer Gravitational-Wave Observatory,” or “LIGO.” The 3-kilometer Italian-French device (“Virgo”), the 600-meter German-British device (“GEO”), and a few others under construction are scattered around the world. Gravitational waves are often described as ripples in space time; they are incredibly weak. If LIGO finally “sees” a wave, its effect will be to change the relative length of its two arms. The change in length of a four-kilometer arm will be equivalent to the rise in the water level of one-square-mile Cardiff Bay caused by adding 1/100,000th of a drop. It is a hard science!
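The scale of that analogy can be put in rough numbers. This is a minimal back-of-envelope sketch, assuming a strain sensitivity of about one part in 10^21, a commonly cited target figure for such interferometers rather than a number taken from this post:

```python
# Back-of-envelope check of how tiny the arm-length change is.
# Assumption (not from the post): target strain h ~ 1e-21.
arm_length_m = 4_000.0            # one LIGO arm, 4 km
strain = 1e-21                    # dimensionless fractional length change (assumed)
delta_L = strain * arm_length_m   # absolute change in arm length, in metres

print(f"arm-length change: {delta_L:.1e} m")
# That is roughly 4e-18 m, a small fraction of the diameter of a proton,
# which is why the detection is such "hard science."
```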
Since gravitational waves are so weak, their expected sources are huge events in the heavens, such as the explosion or collision of stars, or anything else that shifts stellar amounts of mass around in an asymmetrical way. The direct search community is split into four groups. The “burst group” looks for ill-defined packets of energy, such as might be emitted by a supernova or maybe an earthquake on a neutron star; the “inspiral group” looks for the well-defined waveforms emitted by binary-star systems at the very end of their life when they “inspiral” together and coalesce; the “continuous wave group” looks for well-defined long-duration waves emitted by asymmetric pulsars or the like (these waves are especially weak but their effect can be integrated over years); the “stochastic group” looks for random waves coming from, among other places, the Big Bang—this is the gravitational equivalent of the cosmic microwave background. So far, there has been no confirmed detection of any kind, but assuming no one has made a terrible error, there are reasons to hope that with a more sensitive generation of detectors coming on air, binary-star inspirals might begin to be detected a few years from now.
Matters get complicated because there are other ways to detect gravitational waves. Waves can be detected because of their influence on matter, such as the way they change the length of the interferometers’ arms. This is referred to as “direct” detection even though those changes have to be measured by electromagnetic means. But gravitational waves also affect the matter of stars. They have already been detected in this way by Hulse and Taylor—winners of the 1993 Nobel Prize in physics—who observed for a decade the slow decay of a widely separated binary system’s orbit, and showed it was consistent with the energy emitted by gravitational waves. Given that this observation concerns changes in the separation of lumps of matter (stars) detected by electromagnetic means, it could be argued that this detection is no more indirect than the potential detections that will be made by the interferometers. Maybe that’s a bit too philosophically cute, but maybe not; it can depend on whether you own a telescope or an interferometer (and that’s sociology). What is certain is that when (if) LIGO and the international network of interferometers start observing, they will be looking in different wavebands than did Hulse and Taylor, and they will be able to see many more of many different kinds of phenomena. The observation of a binary inspiral, or a supernova, or a neutron starquake will take seconds or less, not decades, and there should be many per year once full sensitivity is reached. The true justification for the interferometers is then gravitational astronomy—including our first look into the heart of colliding black holes—with the direct discovery of gravitational waves exciting but not so surprising as it once would have been.
Now, if it is confirmed, BICEP has observed gravitational waves in another indirect way. The group has inferred their existence from the polarization patterns of electromagnetic waves (the microwave background). Once more there is scope for arguing that this too is no more indirect than the interferometric detections that may one day be made by the stochastic group; for some, what one calls “direct” and “indirect” seems like a matter of taste. What also seems likely is that the interferometers may one day be able to see primordial gravitational waves at different frequencies and with different kinds of resolution from those seen by BICEP—in other words, a combination of both techniques seems likely to give the best information about the first moments of the universe.
The direct detection community is excited by the BICEP result, because apart from its cosmological importance, it shows that the phenomena that they are looking for are there to be found one day. In the same way, they were pleased by the Hulse-Taylor observation, given that at one time there was doubt whether gravitational waves could be detected even in principle. Speaking now purely as my unprofessional self—a citizen with a schoolboy interest in science, but one who is perhaps biased by lengthy contact with these groups—I think building mind-bogglingly fine gossamer webs that can capture exquisitely ephemeral waves is more exciting than inferring their existence from the movement of stars or from patterns in the much stronger electromagnetic spectrum. This is because it leads to more than new understanding: it demonstrates unprecedented control over nature and a heroic extension of our means to uncover its secrets.
“I have great faith in a seed,” Thoreau wrote. “Convince me that you have a seed there, and I am prepared to expect wonders.”
The story of seeds, in a nutshell, is a tale of evolution. From the tiny sesame that we sprinkle on our bagels to the forty-five-pound double coconut borne by the coco de mer tree, seeds are a perpetual reminder of the complexity and diversity of life on earth. With An Orchard Invisible, Jonathan Silvertown presents the oft-ignored seed with the natural history it deserves, one nearly as varied and surprising as the earth’s flora itself.
Beginning with the evolution of the first seed plant from fernlike ancestors more than 360 million years ago, Silvertown carries his tale through epochs and around the globe. In a clear and engaging style, he delves into the science of seeds: How and why do some lie dormant for years on end? How did seeds evolve? The wide variety of uses that humans have developed for seeds of all sorts also receives a fascinating look, studded with examples, including foods, oils, perfumes, and pharmaceuticals. An able guide with an eye for the unusual, Silvertown is happy to take readers on unexpected—but always interesting—tangents, from Lyme disease to human color vision to the Salem witch trials. But he never lets us forget that the driving force behind the story of seeds—its theme, even—is evolution, with its irrepressible habit of stumbling upon new solutions to the challenges of life.
Hillary L. Chute spent a significant portion of the past decade studying, hanging out with, and interviewing many of the artists whose iconic images have helped define contemporary graphic arts. In Outside the Box: Interviews with Contemporary Cartoonists, Chute collects these interviews in book form for the first time, delivering in-depth discussions with twelve of the most prominent and accomplished artists and writers in comics today, and revealing a creative community that is richly interconnected yet fiercely independent. The interviewees include Lynda Barry and Alison Bechdel, Charles Burns and Joe Sacco, and even a never-before published conversation between Art Spiegelman and Chris Ware.
In addition to unparalleled access into the cartooning world, Outside the Box also puts narrative power into the hands of this cast of masters—without whom our eyes (and ears) would not take in such gripping stories.
For Chicagoans, Chute will talk about the book and her experiences as documentarian and scholar of the cartooning community at two upcoming events:
Where the North Sea Touches Alabama is a strange book—I’ve been describing it to strangers (note the relationship between adjective and noun) as an ethnography of mourning, but really it’s a peculiar hybrid of sociological exegesis, lyric essay, and phantasmagorical travelogue. I believe author Allen C. Shelton might consider it a novel, just as Walter Benjamin certainly must have plucked a term from the atmosphere to describe the Arcades Project as he carried its pages in a suitcase like fake currency.
The book considers the tragic life and death of the artist Patrik Keim, a friend of the author’s, and a theoretical muse or Betelgeuse ostensibly traveling between this world and another. That’s the stuff of Western philosophy in the wake of Hegel, or a battered Platonic ideal we repeat to ourselves—the absolute idealism that marks being as an all-inclusive whole: not subject without object, and vice-versa. Shelton takes on this canon—Marx, Foucault, Weber, and especially, Benjamin—and arrives at someplace not entirely recognizable. Maybe that’s because the rest of the landscape he renders—via an epistolary immersion in northeastern Alabama—is so unavoidably specific. Anyhow: not to give too much away. The above trailer should be enough to get you started—like the book, it’s a well-made and unconventional narrative.
My inner Walter Mitty belongs to a small collective of social science writers.
We call ourselves the Professors Higgin. We commiserate, critique and urge each other to confess our literary sins, our endless little murders of the English tongue. We comprise a teacher, a pragmatist, a printmaker, a contrarian, a recovering atheist, an agnostic, a believer with no object of belief, a jaded millenarian, a Luddite, a backsliding Marxist and, depending on academic circumstances, either an anthropologist or a sociologist—an erstwhile Whitman’s Sampler.
We help each other, endlessly contradict, chide, commiserate and condemn colleagues’ writing. We laugh at our phobias, strain for 12-step clarity and all too rarely acknowledge the debt we owe our students. With ease, we blame them for our petty insanities, resent their ability to absorb our time and in the end know our better selves in their reflections.
We read Where the North Sea Touches Alabama in sustained awe. Inspired. Heartened. Daunted.
To read more about Where the North Sea Touches Alabama, click here.
Last week, we were humbled to learn that we received the inaugural International Academic and Professional Publisher Award from the London Book Fair, among a ridiculously esteemed group of nominees across multiple categories. The award, part of a new industry-wide pool of honors, furthers the LBF’s mission to “celebrate the role of the book and the written word at the heart of creative content across all formats.”
More from the press release:
These unique new awards, celebrating achievement across the entire business of publishing, will provide a truly global industry vision. They represent the UK’s recognition of international publishing industry excellence, and take place within the calendar’s most important global publishing event.
LBF and The Publishers Association have selected a group of UK judges, working at the heart of each category, whose international or discipline-specific expertise qualifies them to judge their peers’ work.
The global book industry saw the birth of something new on Tuesday night, something that will surely grow to become a fixture on the international publishing calendar, something that seemed so right one wondered why it had never existed before.
Again, we’re humbled and honored—congrats to the other winners and all the nominees (excitedly: a truly global list).