The Chicago Blog
Publicity news from the University of Chicago Press, including news tips, press releases, reviews, and intelligent commentary.
1. House of Debt on the Independent’s Best of 2014

Atif Mian and Amir Sufi’s House of Debt, a polemic about the Great Recession and a call to action about the borrowing and lending practices that led us down the fiscal pits, already made a splash on the shortlist for the Financial Times’s Best Business Book of 2014. Now, over at the Independent, the book tops another Best of 2014 list, this time proclaimed “the jewel of 2014.” From Ben Chu’s review, which also heralds another university press title—HUP’s blockbuster Capital by Thomas Piketty (“the asteroid”):

As with Capital, House of Debt rests on some first-rate empirical research. Using micro data from America, the professors show that the localities where the accumulation of debt by households was the most rapid were also the areas that cut back on spending most drastically when the bubble burst. Mian and Sufi argue that policymakers across the developed world have had the wrong focus over the past half decade. Instead of seeking to restore growth by encouraging bust banks to lend, they should have been writing down household debts. If the professors are correct—and the evidence they assemble is powerful indeed—this work will take its place in the canon of literary economic breakthroughs.

We’ve blogged about the book previously here and here, and no doubt it will appear on more “Best of” lists for business and economics—it’s a read with teeth and legs, and the advice it offers for avoiding future crises points its finger at criminal lending practices, greedy sub-prime investments, and our failure to share risk-taking—financially and conceptually—in our monetary practices.

You can read more about House of Debt here.
2. The Hoarders

David Drummond’s cover for The Hoarders, one of Paste Magazine’s 30 Best Book Covers of 2014.

This past week, New Yorker critic Joan Acocella profiled Scott Herring’s The Hoarders, a foray into the history of material culture from the perspective of clutter fetish and our fascination with the perils surrounding the urge to organize. The question Herring asks, namely, “What counts as an acceptable material life—and who decides?,” takes on a gradient of meaning for Acocella, who confronts the material preferences of her ninety-three-year-old mother, which prove to be in accord with the DSM V’s suggestion that “hoarding sometimes begins in childhood, but that by the time the hoarders come to the attention of the authorities they tend to be old.”

In The Hoarders, Herring tells the tale of Homer and Langley Collyer, two brothers to whom we can trace a legend (um, legacy?) of modern hoarding, whose eccentricity and ill health (Langley took care of Homer, who was both rheumatic and blind) led to a lion’s den of accrual, and a rather unfortunate end. As Acocella explains:

In 1947, a caller alerted the police that someone in the Collyer mansion may have died. After a day’s search, the police found the body of Homer, sitting bent over, with his head on his knees. But where was Langley? It took workers eighteen days to find him. The house contained what, in the end, was said to have been more than a hundred and seventy tons of debris. There were toys, bicycles, guns, chandeliers, tapestries, thousands of books, fourteen grand pianos, an organ, the chassis of a Model T Ford, and Dr. Collyer’s canoe. There were also passbooks for bank accounts containing more than thirty thousand dollars, in today’s money.

As Herring describes it, the rooms were packed almost to the ceilings, but the mass, like a Swiss cheese, was pierced by tunnels, which Langley had equipped with booby traps to foil burglars. It was in one of those tunnels that his corpse, partly eaten by rats, was finally discovered, only a few feet away from where Homer’s body had been found. He was apparently bringing Homer some food when he accidentally set off one of his traps and entombed himself. The medical examiner estimated that Langley had been dead for about a month. Homer seems to have died of starvation, waiting for his dinner.

The New Yorker piece also confronts the grand dames of American hoarding, Little Edie and Big Edie Bouvier Beale, cousins to Jacqueline Kennedy Onassis, and the subjects of Albert and David Maysles’ cult classic documentary Grey Gardens. Acocella positions the Beales as camped-out, if charming, odd fellows, but also points to an underlying class-based assumption in our embrace of their peculiarities:

If they crammed twenty-eight rooms with junk, that’s in part because they had a house with twenty-eight rooms. And if they declined to do the dishes, wouldn’t you, on many nights, have preferred to omit that task?

This is only slightly out of step with the stance Herring adopts in the book: as material culture changes, so too do our interactions with it—and each other. You can read much more from Acocella in her profile, but her take-home—we are what we stuff, except what we stuff is subject to the scrutinies and perversions of our social order (class and race among them)—is worth mentioning: we should get out from under it now, because it’s only going to get worse.

Read more about The Hoarders here.
3. Forthcoming: The Big Jones Cookbook

It’s unconventional, to say the least, for a university press to publish a cookbook. But an exception to this rule, coming in Spring 2015, is Paul Fehribach’s Big Jones Cookbook, which expands upon the southern Lowcountry cuisine of the eponymous Chicago restaurant. As mentioned in the book’s catalog copy, “from its inception, Big Jones has focused on cooking with local and sustainably grown heirloom crops and heritage livestock, reinvigorating southern cooking through meticulous technique and the unique perspective of its Midwest location.” More expansively, Fehribach’s restaurant positions the social and cultural inheritances involved in regional cooking at the forefront, while the cookbook expands upon the associated recipes by situating their ingredients (and the culinary alchemy involved in their joining!) within a rich tradition invigorated by a kind of heirloom sociology and a sustainable farm-to-table ethos.

This past week, as part of the University of Chicago Press’s Spring 2015 sales conference, much of the Book Division took to a celebratory meal at Big Jones, and the photos below, by editorial director Alan Thomas, both show Fehribach in his element and commemorate the occasion:

[Photos: Paul Fehribach and the Big Jones celebration, by Alan Thomas]

To read more about The Big Jones Cookbook, forthcoming in Spring 2015, click here.
4. Citizen: Jane Addams and the labor movement

On this day in 1931, Jane Addams became the first woman to win the Nobel Peace Prize. Read an excerpt from Louise W. Knight’s Citizen: Jane Addams and the Struggle for Democracy, about the ethics and deeply held moral beliefs permeating the labor movement—and Addams’s own relationship to it—after the jump.

***

From Chapter 13, “Claims” (1894)

On May 11 Addams, after giving a talk at the University of Wisconsin and visiting Mary Addams Linn in Kenosha, wrote Alice that their sister’s health was improving. The same day, a major strike erupted at the Pullman Car Works, in the southernmost part of Chicago. The immediate cause of the strike was a series of wage cuts the company had made in response to the economic crisis. Since September the company had hired back most of the workers it had laid off at the beginning of the depression, but during the same period workers’ wages had also fallen an average of 30 percent. Meanwhile, the company, feeling pinched, was determined to increase its profits from rents. In addition to the company’s refusing to lower the rent rate to match the wage cuts, its foremen threatened to fire workers living outside of Pullman who did not relocate to the company town. The result was that two-thirds of the workforce was soon living in Pullman. By April, many families were struggling to pay the rents and in desperate straits; some were starving. The company’s stance was firm. “We just cannot afford in the present state of commercial depression to pay higher wages,” Vice President Thomas H. Wickes said. At the same time, the company continued to pay its stockholders dividends at the rate of 8 percent per annum, the same rate it had paid before the depression hit.

The workers had tried to negotiate. After threatening on May 5 to strike if necessary, leaders of the forty-six-member workers’ grievance committee met twice with several company officials, including, at the second meeting, George Pullman, the company’s founder and chief executive, to demand that the company reverse the wage cuts and reduce the rents. The company refused, and on May 11, after three of the leaders of the grievance committee had been fired and a rumor had spread that the company would lock out all employees at noon, twenty-five hundred of the thirty-one hundred workers walked out. Later that day, the company laid off the remaining six hundred. The strike had begun. “We struck at Pullman,” one worker said, “because we were without hope.”

For Addams, the coincidental timing of the strike and Mary’s illness, both of which would soon worsen, made each tragedy, if possible, a greater sorrow. The strike was a public crisis. Its eruption raised difficult questions for Addams about the ethics of the industrial relationship. What were George Pullman’s obligations to his employees? And what were his employees’ to him? Was it disloyal of him to treat his workers as cogs in his economic machine? Or was it disloyal of his workers to strike against an employer who supplied them with a fine town to live in? Who had betrayed whom? Where did the moral responsibility lie? Mary’s illness was Addams’s private crisis. Mary was the faithful and loving sister whose affection Addams had always relied on and whose life embodied the sacrifices a good woman made for the sake of family. Mary had given up her chance for further higher education for her family’s sake and had been a devoted wife to a husband who had repeatedly failed to support her and their children. The threat of her death stirred feelings of great affection and fears of desperate loss in Addams.

As events unfolded, the two crises would increasingly compete for Addams’s loyalty and time. She would find herself torn, unsure whether she should give her closest attention to her sister’s struggle against death or to labor’s struggle against the capitalist George Pullman. It was a poignant and unusual dilemma; still, it could be stated in the framework she had formulated in “Subjective Necessity”: What balance should she seek between the family and the social claim?

The causes of the Pullman Strike went deeper than the company’s reaction to the depression. For the workers who lived in Pullman, the cuts in wages and the high rents of 1893–94 were merely short-term manifestations of long-term grievances, all of them tied to company president George Pullman’s philosophy of industrial paternalism. These included the rules regarding life in Pullman, a privately owned community located within the city of Chicago. Pullman had built the town in 1880 to test his theory that if the company’s workers lived in a beautiful, clean, liquor- and sin-free environment, the company would prosper. Reformers, social commentators, and journalists across the country were fascinated by Pullman’s “socially responsible” experiment. Addams would later recall how he was “dined and feted throughout Europe . . . as a friend and benefactor of workingmen.” The workers, however, thought the Pullman Company exercised too much control. Its appointees settled community issues that elsewhere would have been dealt with by an elected government, company policy forbade anyone to buy a house, the town newspaper was a company organ, labor meetings were banned, and company spies were everywhere. Frustrated by this as well as by various employment practices, workers organized into unions according to their particular trades (the usual practice), and these various unions repeatedly struck Pullman Company in the late 1880s and early 1890s. The May 1894 strike was the first that was companywide.

Behind that accomplishment lay the organizing skills of George Howard, vice president of the American Railway Union (ARU), the new cross-trades railroad union that Eugene Debs, its president, had founded the previous year. To organize across trades was a bold idea. Howard had been in Chicago since March signing up members, and by early May he was guiding the workers in their attempted negotiations with the company. The ARU’s stated purpose was to give railroad employees “a voice in fixing wages and in determining conditions of employment.” Only one month earlier it had led railroad workers at the Great Northern Railroad through a successful strike. Thanks to the ARU as well as to the mediating efforts of some businessmen from St. Paul, Minnesota, voluntary arbitration had resolved the strike, and three-fourths of the wage cut of 30 percent had been restored. Impressed, 35 percent of Pullman’s workers joined the ARU in the weeks that followed, hoping that the new union could work the same magic on their behalf.

At first, the prospects for a similar solution at Pullman did not look promising. After the walkout, George Pullman locked out all employees and, using a business trip to New York as his excuse, removed himself from the scene. Meanwhile, a few days after the strike began, Debs, a powerful orator, addressed the strikers to give them courage. He had the rare ability to elevate a controversy about wages into a great moral struggle. The arguments he used that day, familiar ones in the labor movement, would be echoed in Jane Addams’s eventual interpretation of the Pullman Strike. “I do not like the paternalism of Pullman,” he said. “He is everlastingly saying, what can we do for the poor workingmen? . . . The question,” he thundered, “is what can we do for ourselves?”

At this point, the Civic Federation of Chicago decided to get involved. Its president, Lyman Gage, an enthusiast for arbitration, appointed a prestigious and diverse conciliation board to serve as a neutral third party to bring the disputing sides before a separate arbitration panel. Made up partly of members of the federation’s Industrial Committee, on which Addams sat, it was designed to be representative of various interests, particularly those of capital, labor, academia, and reform. It included bank presidents, merchants, a stockbroker, an attorney, presidents of labor federations, labor newspaper editors, professors, and three women civic activists: Jane Addams, Ellen Henrotin, and Bertha Palmer.

The board divided itself into five committees. In the early phase of the strike it would meet nightly, in Addams’s words, to “compare notes and adopt new tactics.” Having had some success in arranging arbitrations in the Nineteenth Ward, Addams was eager to see the method tried in the Pullman case. She would soon emerge as the driving force and the leading actor in the initiative.

The first question the board discussed was whether the Pullman workers wanted the strike to be arbitrated. Addams investigated the question by visiting the striking workers in Pullman, eating supper with some of the women workers, touring the tenement housing, and asking questions. Afterwards, she asked the president of the local ARU chapter, Thomas Heathcoate, and ARU organizer George Howard to allow the conciliation board to meet with the Strike Committee. Refusing her request, Howard told her that the ARU was willing to have the committee meet with the board but that first the Pullman Company would have to state its willingness to go to arbitration.

Meanwhile, three men from the conciliation board were supposed to try to meet with the Pullman Company. The board’s president, A. C. Bartlett, a businessman, was to arrange the meeting but, as of May 30, two weeks into the strike, he had done nothing. Frustrated, Addams stepped in. On June 1 she arranged for Bartlett, Ralph Easley (the Civic Federation’s secretary), and herself to meet with Vice President Wickes and General Superintendent Brown. At the meeting, which Bartlett failed to attend, Wickes merely repeated the company’s well-known position: that it had “nothing to arbitrate.”

Thwarted, Addams decided, with the board’s support, to try again to arrange for the board to meet with the Strike Committee. At a Conciliation Board meeting, Lyman Gage suggested that she propose that rent be the first issue to be arbitrated. Agreeing, Addams decided that, instead of taking the idea to the uncooperative Howard, she would take it over his head to Debs. Persuaded by Addams, Debs immediately arranged for members of the board to speak that night to the Strike Committee about the proposal. Once again, however, Addams’s colleagues failed to follow through. She was the only board member to turn up.

At the meeting, the strike leaders were suspicious, believing that arbitration was the company’s idea. No report survives of how Addams made her case to them, but one can glean impressions from a description of Addams that a reporter published in a newspaper article in June 1894. She described Addams as a “person of marked individuality[;] she strikes one at first as lacking in suavity and graciousness of manner but the impression soon wears away before [her] earnestness and honesty.” She was struck, too, by Addams’s paleness, her “deep” eyes, her “low and well-trained voice,” and the way her face was “a window behind which stands her soul.”

Addams must have made a powerful presentation to the Strike Committee. After she spoke, it voted to arbitrate not only the rents but any point. It was the breakthrough Addams had been hoping for. “Feeling that we had made a beginning toward conciliation,” Addams remembered, she reported her news to the board.

Meanwhile, with the workers and their families’ hunger and desperation increasing, tensions were mounting. Wishing to increase the pressure on the company, Debs had declared on June 1 that the ARU was willing to organize a nationwide sympathy boycott of Pullman cars among railroad employees generally if the company did not negotiate. The Pullman Company’s cars, though owned and operated by the company, were pulled by various railroads. A national boycott of Pullman cars could bring the nation’s already devastated economy to a new low point. Meanwhile, the ARU opened its national convention in Chicago on June 12. Chicago was nervous. Even before the convention began, Addams commented to William Stead, in town again for a visit, that “all classes of people” were feeling “unrest, discontent, and fear. We seem,” she added, “to be on the edge of some great upheaval but one can never tell whether it will turn out a tragedy or a farce.”

Several late efforts at negotiation were made. On June 15 an ARU committee of twelve, six of them Pullman workers, met with Wickes to ask again whether the company would arbitrate. His answer was the same: there was nothing to arbitrate and the company would not deal with a union. Soon afterward George Pullman returned to town. He agreed to meet with the conciliation board but, perhaps sensing the danger that the sincere and persuasive Jane Addams posed, only with its male members. At the meeting he restated his position: no arbitration. At this point, Addams recalled, the board’s effort collapsed in “failure.” The strike was now almost two months old. Addams had done everything she could to bring about arbitration. Resourceful, persistent, even wily, she had almost single-handedly brought the workers to the table, but because she was denied access to George Pullman on the pretext of her gender, she had failed to persuade the company. Her efforts, however, had made something very clear to herself and many others—that George Pullman’s refusal to submit the dispute to arbitration was the reason the strike was continuing.

The situation now became graver. At the ARU convention, the delegates voted on June 22 to begin a national boycott of Pullman cars on June 26 if no settlement were reached. Abruptly, on the same day as the vote, a powerful new player, the General Managers Association (GMA), announced its support for the company. The GMA had been founded in 1886 as a cartel to consider “problems of management” shared by the twenty-four railroad companies serving Chicago; it had dabbled in wage-fixing and had long been opposed to unions. George Pullman’s refusal to arbitrate had been, among other things, an act of solidarity with these railroad companies, his business partners. Disgusted with the outcome of the Great Northern Strike, they were determined to break the upstart ARU, which threatened to shrink the profits of the entire industry. Pullman departed the city again in late June for his vacation home in New Jersey, leaving the GMA in charge of the antistrike strategy. It announced that any railroad worker who refused to handle Pullman cars would be fired.

The ARU was undaunted. On June 26 the boycott began. Within three days, one hundred thousand men had stopped working and twenty railroads were frozen. Debs did not mince words in his message to ARU members and their supporters. This struggle, he said, “has developed into a contest between the producing classes and the money power of this country.” Class warfare was at hand.

Jane Addams was not in the city when the ARU voted for the boycott. She had gone to Cleveland to give a commencement speech on June 19 at the College for Women of Western Reserve University. But she was in Chicago when the boycott began. Chicago felt its impact immediately. There was no railroad service into or out of the city, and public transportation within the city also ceased as the streetcar workers joined the boycott. With normal life having ground to a halt, the city’s mood, which had been initially sympathetic to the workers, began to polarize along class lines. Working people’s sympathies for the railroad workers and hostility toward capitalists rose to a fever pitch while many people in the middle classes felt equally hostile toward the workers; some thought that the strikers should be shot. In Twenty Years Addams writes, “During all those dark days of the Pullman strike, the growth of class bitterness was most obvious.” It shocked her. Before the strike, she writes, “there had been nothing in my experience [that had] reveal[ed] that distinct cleavage of society which a general strike at least momentarily affords.”

The boycott quickly spread, eventually reaching twenty-seven states and territories and involving more than two hundred thousand workers. It had become the largest coordinated work stoppage in the nation’s history and the most significant exercise of union strength the nation had ever witnessed. The workers were winning through the exercise of raw economic power. Virtually the only railcars moving were the federal mail cars, which the boycotting railroad workers continued to handle, as required by federal law and as Debs had carefully instructed them. The railroad yards in the city of Chicago were full of striking workers and boycotters determined to make sure that other railroad cars did not move and to protect them from vandalism.

Now the GMA took aggressive steps that would change the outcome of the strike. On June 30 it used its influence in Washington to arrange for its own lawyer, Edwin Walker, to be named a U.S. Special Attorney. Walker then hired four hundred unemployed men, deputized them as U.S. Marshals, armed them, and sent them to guard the federal mail cars in the railroad yards to be sure the mail got through. In the yards, the strikers and marshals eyed each other nervously.

Meanwhile, on June 29, Jane Addams’s family crisis worsened. Jane had visited Mary on June 28 but returned to Chicago the same day. That night she received word that her sister’s condition suddenly had become serious, and the following day she rushed back to Kenosha, accompanied by Mary’s son Weber Linn (apparently they traveled in a mail car thanks to Addams’s ties to the strikers). She deeply regretted having been gone so much. “My sister is so pleased to have me with her,” she wrote Mary Rozet Smith, “that I feel like a brute when I think of the days I haven’t been here.” As Addams sat by Mary’s bed in Kenosha, the situation in Chicago remained relatively calm. Nevertheless, the GMA now took two more steps that further drew the federal government into the crisis. Claiming that the strikers were blocking the movement of the federal mails (although the subsequent federal investigation produced no evidence that this was true), the GMA asked the U.S. attorney general to ask President Grover Cleveland to send federal troops to shut down the strike. Cleveland agreed, and on July 3 the first troops entered the city. The same day the attorney general ordered Special Attorney Walker to seek an injunction in federal court against the ARU in order to block the union from preventing workers from doing their work duties. The injunction was immediately issued.

By July 1 it was clear that Mary Addams Linn was dying. Addams wired her brother-in-law John and the two younger children, Esther, thirteen, and Stanley, who had recently turned eleven, to come from Iowa to Kenosha. By July 3 they had somehow reached Chicago but, because of the boycott, they could not find a train for the last leg of their trip. At last John signed a document relieving the railroad of liability; then, he, Esther, and Stanley boarded a train (probably a mail train) and within hours had arrived in Kenosha, protected, or so Esther later believed, by the fact that they were relatives of Jane Addams, “who was working for the strikers.” Mary’s family was now all gathered around her except for her oldest son, John, who was still in California. Unconscious by the time they arrived, she died on July 6.

Illinois National Guard troops in front of the Arcade Building in Pullman during the Pullman Strike. [Neg. i21195aa.tif, Chicago Historical Society.]

While Jane Addams’s private world was crumbling, so was Chicago’s civic order. On July 4, one thousand federal troops set up camp around the Post Office and across the city, including the Pullman headquarters. On July 5 and 6 thousands of unarmed strikers and boycotters crowded the railroad yards, joined by various hangers-on—hungry, angry, unemployed boys and men. Many were increasingly outraged by the armed marshals and the troops’ presence. Suddenly a railroad agent shot one of them, and they erupted into violence. Hundreds of railroad cars burned as the troops moved in. Now the strikers were fighting not only the GMA but also the federal government. This had been the GMA’s aim all along. In the days to come, thousands more federal troops poured into the city.

After attending Mary’s funeral in Cedarville Jane Addams returned on July 9 to find Chicago an armed camp and class warfare on everyone’s minds. In working-class neighborhoods such as the Nineteenth Ward, people wore white ribbons in support of the strike and the boycott. Across town, middle-class people were greeted at their breakfast tables by sensational newspaper headlines claiming that the strikers were out to destroy the nation. One Tribune headline read, “Dictator Debs versus the Federal Government.” The national press echoed the theme of uncontrolled disorder. Harper’s Weekly called the strikers “anarchists.” And the nation remained in economic gridlock. Farmers and producers were upset that they could not move their produce to market. Passengers were stranded. Telegrams poured into the White House.

Like the strikers’ reputation, Hull House’s was worsening daily. Until the strike took place, Addams later recalled, the settlement, despite its radical Working People’s Social Science Club, had been seen as “a kindly philanthropic undertaking whose new form gave us a certain idealistic glamour.” During and after the strike, the situation “changed markedly.” Although Addams had tried to “maintain avenues of intercourse with both sides,” Hull House was now seen as pro-worker and was condemned for being so. Some of the residents were clearly pro-worker. Florence Kelley and one of her assistant factory inspectors, Alzina Stevens, befriended Debs during the strike and its aftermath. Stevens sheltered him for a time in her suburban home when authorities were trying to arrest him; Kelley tried to raise money for his bail after he was arrested later in July.

Addams and Hull House began to be severely criticized. Donors refused to give. Addams told John Dewey, who had come to town to take up his new position at the University of Chicago, that she had gone to meet with Edward Everett Ayer, a Chicago businessman with railroad industry clients who had often supported Hull House’s relief work, to ask him for another gift. Dewey wrote his wife, “[Ayer] turned on her and told her that she had a great thing and now she had thrown it away; that she had been a trustee for the interests of the poor, and had betrayed it [sic]—that like an idiot she had mixed herself in something which was none of her business and about which she knew nothing, the labor movement and especially Pullman, and had thrown down her own work, etc., etc.” That autumn Addams had “a hard time financing Hull-House,” a wealthy friend later recalled. “Many people felt she was too much in sympathy with the laboring people.” Addams merely notes in Twenty Years that “[in] the public excitement following the Pullman Strike Hull House lost many friends.”

And there were public criticisms as well. Some middle- and upper-class people attacked Addams, one resident remembered, as a “traitor to her class.” When Eugene Debs observed that “epithets, calumny, denunciation . . . have been poured forth in a vitriolic tirade to scathe those who advocated and practiced . . . sympathy,” one suspects that he had in mind the treatment Jane Addams received. Meanwhile, the workers were angry that Addams would not more clearly align herself with their cause. Her stance—that she would take no side—guaranteed that nearly everyone in the intensely polarized city would be angry with her.

Standing apart in this way was extremely painful. She was “very dependent on a sense of warm comradeship and harmony with the mass of her fellowmen,” a friend, Alice Hamilton, recalled. “The famous Pullman strike” was “for her the most painful of experiences, because . . . she was forced by conviction to work against the stream, to separate herself from the great mass of her countrymen.” The result was that Addams “suffered from . . . spiritual loneliness.” In these circumstances, no one could mistake Addams’s neutrality for wishy-washiness. Practicing neutrality during the Pullman Strike required integrity and courage. In being true to her conscience, she paid a tremendous price.

Of course, the strike was not the only reason she was lonely. Mary’s death was the other. And if, as we may suspect, Mary’s passing evoked the old trauma for Jane of their mother Sarah’s passing, not to mention the later losses of their sister Martha and their father, then the loneliness Addams felt in the last days of the strike and the boycott was truly profound.

She does not describe these feelings when she writes about the strike and Mary’s death in Twenty Years, but in the chapter about Abraham Lincoln, she conveys her feelings well enough. She tells about a walk she took in the worst days of the strike. In that “time of great perplexity,” she writes, she decided to seek out Lincoln’s “magnanimous counsel.” In the sweltering heat, dressed in the long skirt and long-sleeved shirtwaist that were then the fashion, Addams walked—because the streetcars were on strike—four and a half “wearisome” miles to St. Gaudens’s fine new statue of Lincoln, placed at the entrance to Lincoln Park just two years earlier, and read the words cut in stone at the slain president’s feet: “With charity towards all.” And then, still bearing on her shoulders the burden of public hatred that Lincoln had also borne, she walked the four and a half miles home.

Although the deployment of troops had broken the strike’s momentum, the government needed to put the strike’s leader behind bars to bring the strike to an end. On July 10 Debs was indicted by a grand jury for violating the injunction and arrested. Bailed out two days later, he was arrested again on July 17 to await trial in jail. However, when the government prosecutor, the ubiquitous Edwin Walker, became ill, the trial was postponed, and Debs went home to Indiana, where he collapsed gratefully into bed. The trial was held in November 1894; Debs would begin serving his six-month sentence in January 1895.

With Debs removed from leadership and fourteen thousand armed troops, police, and guardsmen bivouacked in Chicago, the strike and the boycott soon collapsed. On August 2 the ARU called off the strike, and on the same day the Pullman Company partially reopened. The railroads were soon running again. The anti-labor forces had won. Private industry and the federal government had shown that, united and with the power of the law on their side, no one, not even the hundreds of thousands of workers who ran the nation’s most crucial industry, could defeat them. If the strike had been successful, it would have turned the ARU into the nation’s most powerful union. Given that the strike failed, the opposite result took place. As the GMA had intended, the ARU died. After Debs was released from jail, he did not resurrect the union.

Although the strike was over, innumerable questions remained unanswered. For the country as a whole, whose only sources of information had been sensational news stories and magazine articles, the first question was: What were the facts? To sort these out, President Grover Cleveland appointed a three-person fact-finding commission to investigate and issue a report. Jane Addams would testify before the United States Strike Commission in August, as would George Pullman.

Meanwhile, for Addams and other labor and middle-class reformers in Chicago, the question was how to prevent or resolve future strikes. The Conciliation Board’s effort to promote voluntary arbitration had been promising, but its failure revealed, Addams believed, certain “weaknesses in the legal structure,” that is, in state and federal laws. On July 19, two days after Debs’ second arrest, as the troops began slowly to withdraw from the city, the Central Council of the Civic Federation met at the Commerce Club. At the meeting, M. C. Carroll, editor of a labor magazine and a member of the Conciliation Board, proposed that the federation host a conference “on arbitration or conciliation” to seek ideas about ways to avert “strikes and boycotts in the future.” The Central Council “enthusiastically endorsed” the proposal and appointed a committee to devise a plan. The hope was to do something immediately, while interest was high, to increase public support for arbitration legislation in Illinois and across the nation.

Addams missed the meeting because she was assisting at the Hull House Summer School at Rockford College, which began on July 10. But she was back in Chicago by the second week in August and had soon joined the arbitration conference committee. It devised a three-part strategy. First, it would convene “capital and labor” at a national conference titled “Industrial Conciliation and Arbitration” in Chicago in November to provide a forum for “calm discussion” of the questions raised by the strike and bring together information about methods of arbitration and conciliation. Second, conference participants from Illinois would press the Illinois General Assembly to pass a law creating a state board of arbitration. Third, a national commission would be named at the end of the conference to press for federal legislation. Elected as secretary to the committee, Jane Addams threw herself into organizing the event.

At the same time, she took on new family responsibilities. With Mary’s death, Jane Addams, at thirty-three, became the guardian and mother of the two younger Linn children. Their father had decided he could not afford to keep them. For the fall, she and Alice agreed that Stanley would live at Hull House and Esther would attend the preparatory boarding school that was affiliated with Rockford College. Weber, nineteen, was still a student at the University of Chicago. He would spend his vacations at Hull House. The oldest son, John, twenty-two, having returned from California, was once again a resident at Hull House and studying for the Episcopalian priesthood. Esther remembered Addams as taking “me and my brothers in as her own children. . . . [She] was a wonderful mother to us all.” Addams was particularly close to Stanley, who, according to Alice’s daughter Marcet, “became . . . Aunt Jane’s very own little boy[;] . . . he was always like a son to her.”

Jane Addams would honor this family claim for the rest of her life. Her niece and nephews, later joined by their children, would gather with her for holidays, live with her at Hull House at various times in their lives, and rely on her for advice, as well as for a steady supply of the somewhat shapeless sweaters that she would knit for them. Because few letters between Addams and the Linn children have survived, the historical record is mostly silent about the affectionate bonds that linked them and the faithfulness with which she fulfilled the maternal role. Her devotion arose from a deep understanding of what it felt like for a child to lose its mother and from a deep gratitude that she could give to Mary’s children the gift Mary had given her.

The Pullman Strike was a national tragedy that aroused fierce passions and left many scars. For many in the middle classes, including Jane Addams, some of the most painful scars were the memories of the intense hatred the strike had evoked between the business community and the workers. Was such class antagonism inevitable? Many were saying so, but Addams, committed as she was to Tolstoyan and Christian nonviolence, social Christian cooperation, and Comtean societal unity, found it impossible to accept the prevailing view. That fall she and John Dewey, now the first chair of the Department of Philosophy at the University of Chicago, discussed this question. In a letter to his wife Alice, Dewey reported telling Addams that conflict was not only inevitable but possibly a good thing. Addams disagreed. She “had always believed and still believed,” he wrote, that “antagonism was not only useless and harmful, but entirely unnecessary.” She based her claim on her view that antagonisms were not caused by, in Dewey’s words, “objective differences, which would always grow into unity if left alone, but [by] a person’s mixing in his own personal reactions.” A person was antagonistic because he took pleasure in opposing others, because he desired not to be a “moral coward,” or because he felt hurt or insulted. These were all avoidable and unnecessary reactions. Only evil, Addams said, echoing Tolstoy, could come from antagonism.

During their conversation, she asked Dewey repeatedly what he thought. Dewey admitted that he was uncomfortable with Addams’s theory. He agreed that personal reactions often created antagonism, but as for history, he was enough of a social Darwinist and a Hegelian to believe that society progressed via struggle and opposition. He questioned her. Did she not think that, in addition to conflict between individuals, there were conflicts between ideas and between institutions, for example, between Christianity and Judaism and between “Labor and Capital”? And was not the “realization of . . . antagonism necessary to an appreciation of the truth and to a consciousness of growth”?

Again she disagreed. To support her case Addams gave two examples of apparently inevitable conflicts involving ideas or institutions that she interpreted differently. When Jesus angrily drove the moneychangers out of the temple, she argued, his anger was personal and avoidable. He had “lost his faith,” she said, “and reacted.” Or consider the Civil War. Through the antagonism of war, we freed the slaves, she observed, but they were still not free individually, and in addition we have had to “pay the costs of war and reckon with the added bitterness of the Southerner besides.” The “antagonisms of institutions,” Dewey told Alice, summarizing Addams’s response, “were always” due to the “injection of the personal attitude and reaction.”

Dewey was stunned and impressed. Addams’s belief struck him as “the most magnificent exhibition of intellectual & moral faith” that he had ever seen. “[W]hen you think,” he wrote Alice, “that Miss Addams does not think this as a philosophy, but believes it in all her senses & muscles—Great God.” Dewey, gripped by the power of Addams’s grand vision, told Alice, “I never had anything take hold of me so.”

But his intellect lagged behind. Struggling to find a way to reconcile his and Addams’s views, Dewey attempted a formulation that honored Addams’s devotion to unity, which he shared, while retaining the principle of antagonistic development that Addams rejected but he could not abandon. “[T]he unity [is not] the reconciliation of opposites,” he explained to his wife. Rather, “opposites [are] the unity in its growth.” But he knew he had avoided a real point of disagreement between them. He admitted to Alice, “[M]y pride of intellect . . . revolts at thinking” that conflict between ideas or institutions “has no functional value.” His and Addams’s disagreement—was it an antagonism?—was real, and in discovering it, the two had taken each other’s measure. Addams’s principled vision and spiritual charisma had met their match in the cool machinery of John Dewey’s powerful mind.

Two days later Dewey sent Addams a short note in which he retracted part of what he had said. He was now willing to agree, he wrote, that a person’s expectation of opposition was in and of itself not good and even that it caused antagonism to arise. “[T]he first antagonism always come[s] back to the assumption that there is or may be antagonism,” he wrote, and this assumption is “bad.” In other words, he was agreeing with Addams’s points that antagonism was evil and that it always began in the feelings or ideas of the individual. Dewey did not, however, retract his claim that conflict had its historical uses. These were, as he had said, to appreciate truth and to be conscious of its growth, that is, its spread. He was speaking as the Christian idealist he still was—someone who saw truth as God’s revelation. Antagonism, in other words, helped bring man to see the truth, and this was its value.

When Dewey agreed with Addams that opposition originated in individual feelings, he was joining her in rejecting the usual view that objective differences justified antagonism. This was the view that unions held. Workers believed that the antagonism between themselves and employers arose because workers lacked something real and necessary: sufficient negotiating power in the relationship. In denying this, Dewey and Addams were being, in the simplest sense, determinedly apolitical. Addams, despite her recent involvement with strikes and politics, still refused to believe that actual conditions could provide legitimate grounds for opposition. Her idealism, expressed in her fierce commitment to cooperation, Christian love, nonresistance, and unity, stood like a wall preventing her from seeing that power, as much as personal feelings, soured human relations. A strong mind is both an asset and a liability.

That fall, Hull House, returning to normalcy, resumed its rich schedule of classes, club meetings, lectures, and exhibits. As usual, Addams was seriously worried about the settlement’s finances. The size of the total deficit for the year is unknown, but her awareness that the household operating account was $888 in arrears surfaced in a letter to Mary. As she had in previous years, Addams paid for part of the debt herself (how much is unclear; the documentation does not survive). Mary Rozet Smith, among others, sent a generous check. “It gives me a lump in my throat,” Addams wrote her in appreciation, “to think of the dollars you have put . . . into the . . . prosaic debt when there are so many more interesting things you might have done and wanted to do.” Aware of the delicacy of asking a close friend for donations, Addams sounded a note of regret. “It grieves me a little lest our friendship should be jarred by all these money transactions.”

As before, the residents were in the dark about the state of Hull House’s finances. Despite her intentions to keep them informed, Addams had convened no Residents’ Meeting between April and October, perhaps because the strike and Mary’s illness had absorbed so much of her attention. Finally, in early November she and the residents had “a long solemn talk,” as she wrote Mary. She had laid “before folks” the full situation and asked them “for help and suggestions.” And she had vowed that she would “never . . . let things get so bad again” before she consulted them. “I hope,” she told Mary, “we are going to be more intimate and mutually responsible on the financial side.”

Addams was renewed in her determination for two reasons. First, there was the problem of her own worsening finances. Since July she had assumed the new financial burden, apparently without any help from Alice, of raising Mary’s two younger children. Second, there was her increasing fear, as the depression deepened and donations dropped because of Hull House’s involvement with the Pullman Strike, that her personal liability for Hull House’s debts could literally put her in the Dunning poorhouse. Meanwhile, she pushed herself to speak as often as she could to earn lecture fees. In October she reported to Alice that she had given five talks in one week. In November, she gave lectures in three states—Illinois, Wisconsin, and Michigan. It was all that she could think of to do: to work harder.

Hull House was doing well enough by other measures. The residents’ group continued to grow. Despite the house’s recently stained reputation and the risky state of its finances, five new residents arrived, all women, bringing the total to twenty. For 1894–95, the residents had decided, probably at Addams’s urging, to limit the size of the residents’ group to that number. There was now a good mix of old and new, with the majority, like Starr, Lathrop, and Kelley, having been there two years or more. The number of men had shrunk from seven to two, but in a few years it would be back to five. Addams, as always, took her greatest pleasure in the effervescent dailiness of it all. The settlement was first and foremost something “organic,” a “way of life,” she told an audience at the University of Chicago that fall.

Furthermore, the residents’ book of maps was moving toward completion. Conceived originally as a way to publicize some of the data about the neighborhood from the Department of Labor study, it had expanded to include a collection of essays on various related subjects and had acquired a sober New York publisher, Thomas Y. Crowell and Company, and a glorious title, Hull-House Maps and Papers: A Presentation of Nationalities and Wages in a Congested District of Chicago, Together with Comments and Essays on Problems Growing Out of the Social Conditions. The byline, it was agreed, would read “Residents of Hull-House.” It would be published in March 1895. Five of the essays, those by Kelley, Lathrop, Starr, and Addams, were much-expanded versions of the presentations they had made at the Congress on Social Settlements the previous year. Five others rounded out the collection. These dealt with the Bohemians, the Italians, and the Jews of the neighborhood, the maps, and the wages and expenses of cloakmakers in Chicago and New York. The maps were the book’s original inspiration and its most extravagant feature. Printed on oiled paper, folded and tucked into special slots in the book’s front and back covers, they displayed, block by block and in graphic, color-coded detail, where people of different nationalities lived in the ward and the range of wages they earned.

Addams, happy to be back in the editor’s chair, wrote the prefatory note, edited essays, and wrote the meaty appendix that described the settlement’s activities and programs. The book’s title was likely also her handiwork. Descriptive, indeed, exhaustive, it was the sort of title in which she specialized. As she once admitted to Weber Linn, “I am very poor at titles.” The book was very close to her heart. When she wrote Henry Demarest Lloyd on December 1 to thank him for sending the house a copy of his Wealth Against Commonwealth, she observed, “I have a great deal of respect for anyone who writes a good book.” After Maps was published she noted to those to whom she sent copies, “We are very proud of the appearance of the child.”

Jane Addams’s contribution to Maps was her essay “The Settlement as a Factor in the Labor Movement.” Her intention was to give a history of Hull House’s relations with unions as a sort of case study and to examine why and how settlements should be engaged with the labor movement. The piece is straightforward in tone, nuanced, not polemical. In it she settles fully into the even-handed interpretive role she had first attempted in her speech on domestic servants eighteen months earlier.

But the essay also burns with the painful knowledge she gained from the Pullman Strike. She wrestles with the tension between the labor movement’s loyalty to its class interests and her own vision of a classless, universalized, democratic society. And she probes the philosophical question she and Dewey had been debating: Are (class) antagonisms inevitable? Are antagonisms useful? The resulting essay was the most in-depth exploration of the subject of class that Addams would ever write. She was trying to find her way back from the edge of the cliff—class warfare—to which the Pullman Strike had brought her and the nation.

On the question of what the strike accomplished, her thoughts had shifted somewhat. Although she had told Dewey that antagonism was always useless, she argues in “The Settlement as a Factor” that strikes, which certainly were a form of antagonism, can be useful and necessary. Strikes are often “the only method of arresting attention to [the workers’] demands”; they also offer the permanent benefits of strengthening the ties of “brotherhood” among the strikers and producing (at least when successful) a more “democratic” relation between workers and their employer. Perhaps Dewey had been more persuasive than he realized.

She still felt, however, that personal emotion was the main cause of antagonisms, including strikes. She admits that labor has a responsibility to fight for the interests of the working people (that is, more leisure and wealth) but only because achieving them would help the workingman feel less unjustly treated. She charges labor with storing up “grudges” against “capitalists” and calls this “selfish.” She ignores the question of whether low wages and long hours are fair. Social justice is not a touchstone for her arguments in this essay.

Instead, Addams stresses the ideal she had emphasized since coming to Chicago: that of a society united by its sense of common humanity. She writes prophetically of “the larger solidarity which includes labor and capital” and that is based on a “notion of universal kinship” and “the common good.” One might read into her argument the conclusion of social justice, yet the principle remains uninvoked. Instead, Addams stays focused on feelings. She is calling for sympathy for others’ suffering, not for a change in workers’ physical condition.

Addams disapproves of capitalism but not because of its effects on the workers. The moral failings of the individual capitalist trouble her. She slips in a rather radical quotation by an unnamed writer: “The crucial question of the time is, ‘In what attitude stand ye toward the present industrial system? Are you content that greed . . . shall rule your business life, while in your family and social life you live so differently? Shall Christianity have no play in trade?’” In one place, although only one place, she takes the workers’ perspective and refers to capitalists as “the power-holding classes.” (Here at last was a glancing nod toward power.) The closest she comes to making a social justice argument is in a sentence whose Marxist flavor, like the previous phrase, suggests Florence Kelley’s influence, yet it, too, retains Addams’s characteristic emphasis on feelings. She hopes there will come a time “when no factory child in Chicago can be overworked and underpaid without a protest from all good citizens, capitalist and proletarian.” While Debs had wanted to arouse middle-class sympathies as a way to improve the working conditions of the Pullman laborers, Addams wanted the labor movement to cause society to be more unified in its sympathies. Their means and ends were reversed.

Addams found the idea that labor’s organizing efforts could benefit society compelling. “If we can accept” that possibility, she adds, then the labor movement is “an ethical movement.” The claim was a startling one for her to make. It seems the strike had shown her at least one moral dimension to the workers’ struggle. The negative had become the potentially positive. Instead of seeing labor’s union organizing as a symptom of society’s moral decay, as she once had and many other middle-class people still did, she was considering the hypothesis that labor organizing was a sign of society’s moral redemption.

The Pullman Strike also cracked her moral absolutism. In “The Settlement as a Factor” she argues for the first time that no person or group can be absolutely right or absolutely wrong. “Life teaches us,” she writes, that there is “nothing more inevitable than that right and wrong are most confusingly mixed; that the blackest wrong [can be] within our own motives.” When we triumph, she adds, we bear “the weight of self-righteousness.” In other words, no one—not unions and working-class people, not businesses and middle-class people, not settlement workers and other middle-class reformers—could claim to hold or ever could hold the highest moral ground. The absolute right did not exist.

For Addams, rejecting moral absolutism was a revolutionary act. She had long believed that a single true, moral way existed and that a person, in theory, could find it. This conviction was her paternal inheritance (one recalls her father’s Christian perfectionism) and her social-cultural inheritance. Moral absolutism was the rock on which her confident Anglo-American culture was grounded. (It is also the belief that most sets the nineteenth century in the West apart from the twenty-first century.) Now she was abandoning that belief. In the territory of her mind, tectonic plates were shifting and a new land mass of moral complexity was arising.

In the fall of 1894, as she was writing “The Settlement as a Factor,” this new perspective became her favorite theme. In October she warned the residents of another newly opened settlement, Chicago Commons, “not to be alarmed,” one resident recalled, “if we found our ethical standards broadening as we became better acquainted with the real facts of the lives of our neighbors.” That same month, speaking to supporters of the University of Chicago Settlement, she hinted again at the dangers of moral absolutism. Do not, she said, seek “to do good.” Instead, simply try to understand life. And when a group of young men from the neighborhood told her they proposed to travel to New York City that fall to help end political corruption and spoke disdainfully of those who were corrupt, she admonished them against believing that they were purer than others and asked them if they knew what harm they did in assuming that they were right and others were wrong.

What had she seen during the Pullman Strike that led to this new awareness? She had seen the destructive force of George Pullman’s moral self-righteousness. It seemed to her that his lack of self-doubt, that is, his unwillingness to negotiate, had produced a national tragedy; his behavior and its consequences had revealed the evil inherent in moral absolutism. In Twenty Years she writes of how, in the midst of the strike’s worst days, as she sat by her dying sister’s bedside, she was thinking about “that touch of self-righteousness which makes the spirit of forgiveness well-nigh impossible.”

She grounded her rejection of absolute truth in her experience. “Life teaches us,” she wrote. This was as revolutionary for her as the decision itself. In “Subjective Necessity” she had embraced experience as a positive teacher in a practical way. Here she was allowing experience to shape her ethics. The further implication was that ethics might evolve, but the point is not argued in “The Settlement as a Factor.” Still, in her eyes ideas no longer had the authority to establish truth that they once had. Her pragmatism was strengthening, but it had not yet blossomed into a full-fledged theory of truth.

The Pullman Strike taught her in a compelling way that moral absolutism was dangerous, but she had been troubled by its dangers before. She had made her own mistakes and, apparently, a whole train of them related to self-righteousness. The details have gone unrecorded, but they made her ready to understand, and not afterwards forget, something James O. Huntington, the Episcopal priest who had shared the podium with her at the Plymouth conference, had said in a speech at Hull House the year before the strike. “I once heard Father Huntington say,” she wrote in 1901, that it is “the essence of immorality to make an exception of one’s self.” She elaborated. “[T]o consider one’s self as . . . unlike the rank and file is to walk straight into the pit of self-righteousness.” As Addams interpreted Huntington, he meant there was no moral justification for believing in one’s superiority, not even a belief that one was right and the others wrong.

A deeply held, central moral belief is like a tent pole: it influences the shape of the entire tent that is a person’s thought. A new central belief is like a taller or shorter tent pole; it requires the tent to take a new shape. The tent stakes must be moved. Jane Addams had decided there was no such thing as something or someone that was purely right or purely wrong, but the rest of her thought had yet to be adjusted. Among other things, she still believed that a person of high culture was superior to those who lacked it; that is, she still believed that cultural accomplishment could justify self-righteousness.

Some hints of this can be found in the adjectives Addams attaches to democracy in “The Settlement as a Factor.” After proposing that the workers might lead the ethical movement of democracy, she anticipates the fear her readers might feel at this idea. “We must learn to trust our democracy,” she writes, “giant-like and threatening as it may appear in its uncouth strength and untried applications.” Addams was edging toward trusting that working-class people, people without the cultural training in “the best,” could set their own course. Such trust, should she embrace it, would require her to go beyond her old ideas—her enthusiasm for egalitarian social etiquette, for the principle of cooperation, and for the ideal of a unified humanity. Not feeling such trust yet, she was unable to give working people’s power a ringing endorsement. The essay is therefore full of warnings about the negative aspects of the labor movement.

These radical claims—that the labor movement was or could become ethical, that the movement was engaged in a struggle that advanced society morally, that capitalists were greedy and ethically compromised, and that there was no absolute right or wrong—opened up a number of complicated issues. Addams decided she needed to write a separate essay—would it be a speech?—to make these points more fully and to make them explicitly, as honesty compelled her to do, about the Pullman strike. Sometime in 1894, she began to write it. A page from the first draft, dated that year, survives with the title “A Modern Tragedy.” In its first paragraph she writes that, because we think of ourselves as modern, “it is hard to remember that the same old human passions persist” and can often lead to “tragedy.” She invited her readers to view “one of these great tragedies” from “the historic perspective,” to seek an “attitude of mental detachment” and “stand aside from our personal prejudices.” Still grieving over what had happened, Addams was hoping that the wisdom of culture, of the humanities, of Greek and Shakespearean tragedy could give her the comfort of emotional distance. But she had pulled too far back. The opening was so blandly vague and philosophical that no one could tell what the essay was about. She set the piece aside.

To read more about Citizen, click here.

Add a Comment
5. Free e-book for December: Swordfish

9780226922904

Our free e-book for December is renowned marine biologist Richard Ellis’s Swordfish: A Biography of the Ocean Gladiator.
***
A perfect fish in the evolutionary sense, the broadbill swordfish derives its name from its distinctive bill—much longer and wider than the bill of any other billfish—which is flattened into the sword we all recognize. And though the majesty and allure of this warrior fish have commanded much attention—from adventurous sportfishers eager to land one to ravenous diners eager to taste one—no one has yet been bold enough to truly take on the swordfish as a biographer. Who better to do so than Richard Ellis, a master of marine natural history? Swordfish: A Biography of the Ocean Gladiator is his masterly ode to this mighty fighter.
The swordfish, whose scientific name means “gladiator,” can take on anyone and anything, including ships, boats, sharks, submarines, divers, and whales, and in this book Ellis regales us with tales of its vitality and strength. Ellis makes it easy to understand why it has inspired so many to take up the challenge of epic sportfishing battles as well as the longline fishing expeditions recounted by writers such as Linda Greenlaw and Sebastian Junger. Ellis shows us how the bill is used for defense—contrary to popular opinion it is not used to spear prey, but to slash and debilitate, like a skillful saber fencer. Swordfish, he explains, hunt at the surface as well as thousands of feet down in the depths, and like tuna and some sharks, have an unusual circulatory system that gives them a significant advantage over their prey, no matter the depth in which they hunt. Their adaptability enables them to swim in waters the world over—tropical, temperate, and sometimes cold—and the largest ever caught on rod and reel was landed in Chile in 1953, weighing in at 1,182 pounds (and this heavyweight fighter, like all the largest swordfish, was a female).
Ellis’s detailed and fascinating, fact-filled biography takes us behind the swordfish’s huge, cornflower-blue eyes and provides a complete history of the fish from prehistoric fossils to its present-day endangerment, as our taste for swordfish has had a drastic effect on their population the world over. Throughout, the book is graced with many of Ellis’s own drawings and paintings, which capture the allure of the fish and bring its splendor and power to life for armchair fishermen and landlocked readers alike.
To download your free copy, click here.

Add a Comment
6. Marked: Race, Crime, and Finding Work in an Era of Mass Incarceration

9780226644844

An excerpt from Marked: Race, Crime, and Finding Work in an Era of Mass Incarceration

by Devah Pager

***

Introduction

At the start of the 1970s, incarceration appeared to be a practice in decline. Critical of its overuse and detrimental effects, practitioners and reformers looked to community-based alternatives as a more promising strategy for managing criminal offenders. A 1967 report published by the President’s Commission on Law Enforcement and Administration of Justice concluded: “Life in many institutions is at best barren and futile, at worst unspeakably brutal and degrading. The conditions in which [prisoners] live are the poorest possible preparation for their successful reentry into society, and often merely reinforces in them a pattern of manipulation or destructiveness.” The commission’s primary recommendation involved developing “more extensive community programs providing special, intensive treatment as an alternative to institutionalization for both juvenile and adult offenders.” Echoing this sentiment, a 1973 report by the National Advisory Commission on Criminal Justice Standards and Goals took a strong stand against the use of incarceration. “The prison, the reformatory, and the jail have achieved only a shocking record of failure. There is overwhelming evidence that these institutions create crime rather than prevent it.” The commission firmly recommended that “no new institutions for adults should be built and existing institutions for juveniles should be closed.” Following what appeared to be the current of the time, historian David Rothman in 1971 confidently proclaimed, “We have been gradually escaping from institutional responses and one can foresee the period when incarceration will be used still more rarely than it is today.”

Quite opposite to the predictions of the time, incarceration began a steady ascent, with prison populations expanding sevenfold over the next three decades. Today the United States boasts the highest rate of incarceration in the world, with more than two million individuals currently behind bars. Characterized by a rejection of the ideals of rehabilitation and an emphasis on “tough on crime” policies, the practice of punishment over the past thirty years has taken a radically different turn from earlier periods in history. Reflecting the stark shift in orientation, the U.S. Department of Justice released a report in 1992 stating “there is no better way to reduce crime than to identify, target, and incapacitate those hardened criminals who commit staggering numbers of violent crimes whenever they are on the streets.” Far removed from earlier calls for decarceration and community supervision, recent crime policy has emphasized containment and harsh punishment as a primary strategy of crime control.

The revolving door

Since the wave of tough on crime rhetoric spread throughout the nation in the early 1970s, the dominant concern of crime policy has been getting criminals off the streets. Surprisingly little thought, however, has gone into developing a longer-term strategy for coping with criminal offenders. With more than 95 percent of those incarcerated eventually released, the problems of offender management do not end at the prison walls. According to one estimate, there are currently more than twelve million ex-felons in the United States, representing roughly 9 percent of the male working-age population. The yearly influx of returning inmates is double the current number of legal immigrants entering the United States from Mexico, Central America, and South America combined.

Despite the vast numbers of inmates leaving prison each year, little provision has been made for their release; as a result, many do not remain out for long. Of those recently released, nearly two-thirds will be charged with new crimes, and more than 40 percent will return to prison within three years. In fact, the revolving door of the prison has now become its own source of growth, with the faces of former inmates increasingly represented among annual admissions to prison. By the end of the 1990s, more than a third of those entering state prison had been there before.

The revolving door of the prison is fueled, in part, by the social contexts in which crime flourishes. Poor neighborhoods, limited opportunities, broken families, and overburdened schools each contribute to the onset of criminal activity among youth and its persistence into early adulthood. But even beyond these contributing factors, evidence suggests that experience with the criminal justice system in itself has adverse consequences for long-term outcomes. In particular, incarceration is associated with limited future employment opportunities and earnings potential, which themselves are among the strongest predictors of desistance from crime. Given the immense barriers to successful reentry, it is little wonder that such a high proportion of those released from prison quickly make their way back through the prison’s revolving door.

The criminalization of young, black men

As the cycle of incarceration and release continues, an ever greater number of young men face prison as an expected marker of adulthood. But the expansive reach of the criminal justice system has not affected all groups equally. More than any other group, African Americans have felt the impact of the prison boom, comprising more than 40 percent of the current prison population while making up just 12 percent of the U.S. population. At any given time, roughly 12 percent of all young black men between the ages of twenty-five and twenty-nine are behind bars, compared to less than 2 percent of white men in the same age group; roughly a third are under criminal justice supervision. Over the course of a lifetime, nearly one in three young black men–and well over half of young black high school dropouts–will spend some time in prison. According to these estimates, young black men are more likely to go to prison than to attend college, serve in the military, or, in the case of high school dropouts, be in the labor market. Prison is no longer a rare or extreme event among our nation’s most marginalized groups. Rather it has now become a normal and anticipated marker in the transition to adulthood.

There is reason to believe that the consequences of these trends extend well beyond the prison walls, with widespread assumptions about the criminal tendencies among blacks affecting far more than those actually engaged in crime. Blacks in this country have long been regarded with suspicion and fear; but unlike progressive trends in other racial attitudes, associations between race and crime have changed little in recent years. Survey respondents consistently rate blacks as more prone to violence than any other American racial or ethnic group, with the stereotype of aggressiveness and violence most frequently endorsed in ratings of African Americans. The stereotype of blacks as criminals is deeply embedded in the collective consciousness of white Americans, irrespective of the perceiver’s level of prejudice or personal beliefs.

While it would be impossible to trace the source of contemporary racial stereotypes to any one factor, the disproportionate growth of the criminal justice system in the lives of young black men–and the corresponding media coverage of this phenomenon, which presents an even more skewed representation–has likely played an important role. Experimental research shows that exposure to news coverage of a violent incident committed by a black perpetrator not only increases punitive attitudes about crime but further increases negative attitudes about blacks generally. The more exposure we have to images of blacks in custody or behind bars, the stronger our expectations become regarding the race of assailants or the criminal tendencies of black strangers.

The consequences of mass incarceration then may extend far beyond the costs to the individual bodies behind bars, and to the families that are disrupted or the communities whose residents cycle in and out. The criminal justice system may itself legitimate and reinforce deeply embedded racial stereotypes, contributing to the persistent chasm in this society between black and white.

The credentialing of stigma

The phenomenon of mass incarceration has filtered into the public consciousness through cycles of media coverage and political debates. But a more lasting source of information detailing the scope and reach of the criminal justice system is generated internally by state courts and departments of corrections. For each individual processed through the criminal justice system, police records, court documents, and corrections databases detail dates of arrest, charges, conviction, and terms of incarceration. Most states make these records publicly available, often through on-line repositories, accessible to employers, landlords, creditors, and other interested parties. With increasing numbers of occupations, public services, and other social goods becoming off-limits to ex-offenders, these records can be used as the official basis for eligibility determination or exclusion. The state in this way serves as a credentialing institution, providing official and public certification of those among us who have been convicted of wrongdoing. The “credential” of a criminal record, like educational or professional credentials, constitutes a formal and enduring classification of social status, which can be used to regulate access and opportunity across numerous social, economic, and political domains.

Within the employment domain, the criminal credential has indeed become a salient marker for employers, with increasing numbers using background checks to screen out undesirable applicants. The majority of employers claim that they would not knowingly hire an applicant with a criminal background. These employers appear less concerned about specific information conveyed by a criminal conviction and its bearing on a particular job, but rather view this credential as an indicator of general employability or trustworthiness. Well beyond the single incident at its origin, the credential comes to stand for a broader internal disposition.

The power of the credential lies in its recognition as an official and legitimate means of evaluating and classifying individuals. The negative credential of a criminal record represents one such tool, offering formal certification of the offenders among us and official notice of those demographic groups most commonly implicated. To understand fully the impact of this negative credential, however, we must rely on more than speculation as to when and how these official labels are invoked as the basis for enabling or denying opportunity. Because credentials are often highly correlated with other indicators of social status or stigma (e.g., race, gender, class), we must examine their direct and independent impact. In addition, credentials may affect certain groups differently than others, with the official marker of criminality carrying more or less stigma depending on the race of its bearer. As increasing numbers of young men are marked by their contact with the criminal justice system, it becomes a critical priority to understand the costs and consequences of this now prevalent form of negative credential.

What do we know about the consequences of incarceration?

Despite the vast political and financial resources that have been mobilized toward prison expansion, very little systematic attention has been focused on the potential problems posed by the large and increasing number of inmates being released each year. A snapshot of ex-offenders one year after release reveals a rocky path of reintegration, with rates of joblessness in excess of 75 percent and rates of rearrest close to 45 percent. But one simple question remains unanswered: Are the employment problems of ex-offenders caused by their offender status, or does this population simply comprise a group of individuals who were never very successful at mainstream involvement in the first place? This question is important, for its answer points to one of two very different sets of policy recommendations. To the extent that the problems of prisoner reentry reflect the challenges of a population poorly equipped for conventional society, our policies would be best targeted toward some combination of treatment, training, and, at the extreme, containment. If, on the other hand, the problems of prisoner reentry are to some degree caused by contact with the criminal justice system itself, then a closer examination of the (unintended) consequences of America’s war on crime may be warranted. Establishing the nature of the relationship between incarceration and subsequent outcomes, then, is critical to developing strategies best suited to address this rapidly expanding ex-offender population.

In an attempt to resolve the substantive and methodological questions surrounding the consequences of incarceration, this book provides both an experimental and an observational approach to studying the barriers to employment for individuals with criminal records. The first stage observes the experiences of black and white job seekers with criminal records in comparison to equally qualified nonoffenders. In the second stage, I turn to the perspectives of employers in order to better understand the concerns that underlie their hiring decisions. Overall, this study represents an accounting of trends that have gone largely unnoticed or underappreciated by academics, policy makers, and the general public. After thirty years of prison expansion, only recently has broad attention turned to the problems of prisoner reentry in an era of mass incarceration. By studying the ways in which the mark of a criminal record shapes and constrains subsequent employment opportunities, this book sheds light on a powerful, emergent mechanism of labor market stratification. Further, this analysis recognizes that an investigation of incarceration in the contemporary United States would be inadequate without careful attention to the dynamics of race. As described earlier, there is a strong link between race and crime, both real and perceived, and yet the implications of this relationship remain poorly understood. This study takes a hard look at the labor market experiences of young black men, both with and without criminal pasts. In doing so, we gain a close-up view of the powerful role race continues to play in shaping the labor market opportunities available to young men. The United States remains sharply divided along color lines. Understanding the mechanisms that perpetuate these divisions represents a crucial step toward their resolution.

To read more about Marked, click here.

Add a Comment
7. Excerpt: Top 40 Democracy

9780226896182

To follow up on yesterday’s post, here’s an excerpt from Eric Weisbard’s Top 40 Democracy: The Rival Mainstreams of American Music.

***

“The Logic of Formats”

Nearly every history of Top 40 launches from an anecdote about how radio station manager Todd Storz came up with the idea sometime between World War II and the early 1950s, watching with friends in a bar in Omaha as customers repeatedly punched up the same few songs on the jukebox. A waitress, after hearing the tunes for hours, paid for more listens, though she was unable to explain herself. “When they asked why, she replied, simply: ‘I like ’em.’ ” As Storz said on another occasion, “Why this should be, I don’t know. But I saw waitresses do this time after time.” He resolved to program a radio station following the same principles: the hits and nothing but the hits.

Storz’s aha moment has much to tell about Top 40’s complicated relationship to musical diversity. He might be seen as an entrepreneur with his ear to the ground, like the 1920s furniture salesman who insisted hillbilly music be recorded or the 1970s Fire Island dancer who created remixes to extend the beat. Or he could be viewed as a schlockmeister lowering standards for an inarticulate public, especially women—so often conceived as mass-cultural dupes. Though sponsored broadcasting had been part of radio in America, unlike much of the rest of the world, since its beginnings, Top 40 raised hackles in a postwar era concerned about the numbing effects of mass culture. “We become a jukebox without lights,” the Radio Advertising Bureau’s Kevin Sweeney complained. Time called Storz the “King of the Giveaway” and complained of broadcasting “well larded with commercials.”

Storz and those who followed answered demands that licensed stations serve a communal good by calling playlist catholicity a democracy of sound: “If the public suddenly showed a preference for Chinese music, we would play it . . . I do not believe there is any such thing as better or inferior music.” Top 40 programmer Chuck Blore, responding to charges that formats stifled creative DJs, wrote, “He may not be as free to inflict his musical taste on the public, but now, and rightfully, I think, the public dictates the popular music of the day.” Mike Joseph boasted, “When I first go into a market, I go into every record store personally. I’ll spend up to three weeks doing interviews, with an average of forty-five minutes each. And I get every single thing I can get: the sales on every configuration, every demo for every single, the gender of every buyer, the race of every buyer. . . . I follow the audience flow of the market around the clock.” Ascertaining public taste became a matter of extravagant claim for these professional intermediaries: broadcasting divided into “dayparts” to impact commuters, housewives, or students.

Complicating the tension between seeing formats as pandering or as deferring to popular taste was a formal quality that Top 40 also shared with the jukebox: it could encompass many varieties of hits or group a subset for a defined public. This duality blurred categories we often keep separate. American show business grew from blackface minstrelsy and its performative rather than innate notion of identity—pop as striking a pose, animating a mask, putting on style or a musical. More folk and genre-derived notions of group identity, by contrast, led to the authenticity-based categories of rock, soul, hip-hop, and country. Top 40 formats drew on both modes, in constantly recalibrated proportions. And in doing so, the logic of formats, especially the 1970s format system that assimilated genres, unsettled notions of real and fake music.

Go back to Storz’s jukebox. In the late 1930s, jukeboxes revived a record business collapsed by free music on radio and the Great Depression. Jack Kapp in particular, working for the US branch of British-owned Decca, tailored the records he handled to boom from the pack: swing jazz dance beats, slangy vernacular from black urban culture, and significant sexual frankness. This capitalized on qualities inherent in recordings, which separated sound from its sources in place, time, and community, allowing both new artifice—one did not know where the music came from, exactly—and new realism: one might value, permanently, the warble of a certain voice, suggesting a certain origin. Ella Fitzgerald, eroticizing the nursery rhyme “A-Tisket, A-Tasket” in 1938 on Decca, with Chick Webb’s band behind her, could bring more than a hint of Harlem’s Savoy Ballroom to a place like Omaha, as jukeboxes helped instill a national youth culture. Other jukeboxes highlighted the cheating songs of honky-tonk country or partying R&B: urban electrifications of once-rural sounds. By World War II, pop was as much these brash cross-genre jukebox blends as it was the Broadway-Hollywood-network radio axis promoting Irving Berlin’s genteel “White Christmas.”

Todd Storz’s notion of Top 40 put the jukebox on the radio. Records had not always been a radio staple. Syndicated network stations avoided “canned music”; record labels feared the loss of sales and often stamped “Not Licensed for Radio Broadcast” on releases. So the shift that followed television’s taking original network programming was twofold: local radio broadcasting that relied on a premade consumer product. Since there were many more records to choose from than network shows, localized Top 40 fed a broader trend that allowed an entrepreneurial capitalism—independent record-label owners such as Sam Phillips of Sun Records, synergists such as American Bandstand host Dick Clark, or station managers such as Storz—to compete with corporations like William Paley’s Columbia Broadcasting System, the so-called Tiffany Network, which included Columbia Records. The result, in part, was rock and roll, which had emerged sonically by the late 1940s but needed the Top 40 system to become dominant with young 45 RPM singles buyers by the end of the 1950s.

An objection immediately presents itself, one that will recur throughout this study: Was Top 40 rock and roll at all, or a betrayal of the rockabilly wildness that Sam Phillips’s roster embodied for the fashioning of safe teen idols by Dick Clark? Did the format destroy the genre? The best answer interrogates the question: Didn’t the commerce-first pragmatism of formatting, with its weak boundaries, free performers and fans inhibited by tighter genre codes? For Susan Douglas, the girl group records of the early 1960s made possible by Top 40 defy critics who claim that rock died between Elvis Presley’s army induction and the arrival of the Beatles. Yes, hits like “Leader of the Pack” were created by others, often men, and were thoroughly commercial. Yes, they pulled punches on gender roles even as they encouraged girls to identify with young male rebels. But they “gave voice to all the warring selves inside us struggling.” White girls admired black girls, just as falsetto harmonizers like the Beach Boys allowed girls singing along to assume male roles in “nothing less than musical cross-dressing.” Top 40’s “euphoria of commercialism,” Douglas argues, did more than push product; “tens of millions of young girls started feeling, at the same time, that they, as a generation, would not be trapped.” Top 40, like the jukebox before it and MTV afterward, channeled cultural democracy: spread it but contained it within a regulated, commercialized path.

We can go back further than jukebox juries becoming American Bandstands. Ambiguities between democratic culture and commodification are familiar within cultural history. As Jean-Christophe Agnew points out in his study Worlds Apart, the theater and the marketplace have been inextricable for centuries, caught up as capitalism developed in “the fundamental problematic of a placeless market: the problems of identity, intentionality, accountability, transparency, and reciprocity that the pursuit of commensurability invariably introduces into that universe of particulate human meanings we call culture.” Agnew’s history ranges from Shakespeare to Melville’s Confidence Man, published in 1857. At that point in American popular culture, white entertainers often performed in blackface, jumping Jim Crow and then singing a plaintive “Ethiopian” melody by Stephen Foster. Eric Lott’s book on minstrelsy gives this racial mimicry a handy catchphrase: Love and Theft. Tarred-up actors, giddy with the new freedoms of a white man’s democracy but threatened by industrial “wage slavery,” embodied cartoonish blacks for social comment and anti-bourgeois rudeness. Amid vicious racial stereotyping could be found performances that respectable theater disavowed. Referring to a popular song of the era, typically performed in drag, the New York Tribune wrote in 1853, “‘Lucy Long’ was sung by a white negro as a male female danced.” And because of minstrelsy’s fixation on blackness, African Americans after the Civil War found an entry of sorts into entertainment: as songwriter W. C. Handy unceremoniously put it, “The best talent of that generation came down the same drain. The composers, the singers, the musicians, the speakers, the stage performers—the minstrel shows got them all.” If girl groups showcase liberating possibility in commercial constraints, minstrelsy challenges unreflective celebration.

Entertainment, as it grew into the brashest industry of modernizing America, fused selling and singing as a matter of orthodoxy. The three-act minstrel show stamped formats on show business early on, with its song-and-dance opening, variety-act olio, and dramatic afterpiece, its interlocutors and end men. Such structures later migrated to variety, vaudeville, and Broadway. After the 1890s, tunes were supplied by Tin Pan Alley sheet-music publishers, who professionalized formula songwriting and invented “payola”—ethically dubious song plugging. These were song factories, unsentimental about creativity, yet the evocation of cheap tinniness in the name was deliberately outrageous, announcing the arrival of new populations—Siberian-born Irving Berlin, for example, the Jew who wrote “White Christmas.” Tin Pan Alley’s strictures of form but multiplicity of identity paved the way for the Brill Building teams who wrote the girl group songs, the Motown Records approach to mainstreaming African American hits, and even millennial hitmakers from Korean “K-Pop” to Sweden’s Cheiron Studios. Advertisers, Timothy Taylor’s history demonstrates, used popular music attitude as early as they could—sheet-music parodies, jingles, and the showmanship of radio hosts like crooner Rudy Vallee designed to give products “ginger, pep, sparkle, and snap.”

The Lucky Strike Hit Parade, a Top 40 forerunner with in-house vocalists performing the leading tunes, was “music for advertising’s sake,” its conductor said in 1941.

Radio, which arrived in the 1920s, was pushed away from a BBC model and toward what Thomas Streeter calls “corporate liberalism” by leaders like Herbert Hoover, who declared as commerce secretary, “We should not imitate some of our foreign colleagues with governmentally controlled broadcasting supported by a tax upon the listener.” In the years after the 1927 Radio Act, the medium consolidated around sponsor-supported syndicated network shows, successfully making radio present by 1940 in 86 percent of American homes and some 6.5 million cars, with average listening of four hours a day. The programming, initially local, now fused the topsy-turvy theatrics of vaudeville and minstrelsy—Amos ’n’ Andy ranked for years with the most popular programs—with love songs and soap operas aimed at the feminized intimacy of the bourgeois parlor. Radio’s mass orientation meant immigrants used it to embrace a mainstream American identity; women confessed sexual feelings for the likes of Vallee as part of the bushels of letters sent to favored broadcasters; and Vox Pop invented the “man on the street” interview, connecting radio’s commercialized public with more traditional political discourse and the Depression era’s documentary impulse. While radio scholars have rejected the view of an authoritarian, manipulative “culture industry,” classically associated with writers such as the Frankfurt School’s Theodor Adorno, historian Elena Razlogova offers an important qualification: “by the 1940s both commercial broadcasters and empirical social scientists . . . shared Adorno’s belief in expert authority and passive emotional listening.” Those most skeptical of mass culture often worked inside the beast.

Each network radio program had a format. So, for example, Kate Smith, returning for a thirteenth radio season in 1942, offered a three-act structure within each broadcast: a song and comedy slot, ad, drama, ad, and finally a segment devoted to patriotism—fitting for the singer of “God Bless America.” She was said by Billboard, writing with the slangy prose that characterized knowing and not fully genteel entertainment professionals, to have a show that “retains the format which, tho often heavy handed and obvious, is glovefit to keep the tremendous number of listeners it has acquired and do a terrific selling job for the sponsor”—General Foods. The trade journal insisted, “Next to a vocal personality, a band on the air needs a format—an idea, a framework of showmanship.”

Top 40 formats addressed the same need to fit broadcast, advertiser, and public, but through a different paradigm: what one branded with an on-air jukebox approach was now the radio station itself, to multiple sponsors. Early on, Top 40s competed with nonformat stations, the “full service” AM’s that relied on avuncular announcers with years of experience, in-house news, community bulletins, and songs used as filler. As formats came to dominate, with even news and talk stations formatted for consistent sound, competing sonic configurations hailed different demographics. But no format was pure: to secure audience share in a crowded market, a programmer might emphasize a portion of a format (Quiet Storm R&B) or blur formats (country crossed with easy listening). Subcategories proliferated, creating what a 1978 how-to book called “the radio format conundrum.” The authors, listing biz slang along the lines of MOR, Good Music, and Chicken Rock, explained, “Words are coined, distorted and mutilated, as the programmer looks for ways to label or tag a format, a piece of music, a frame of mind.”

A framework of showmanship in 1944 had become a frame of mind in 1978. Formats began as theatrical structures but evolved into marketing devices—efforts to convince sponsors of the link between a mediated product and its never fully quantifiable audience. Formats did not idealize culture; they sold it. They structured eclecticism rather than imposing aesthetic values. It was the customer’s money—a democracy of whatever moved people.

The Counterlogic of Genres

At about the same time Todd Storz watched the action at a jukebox in Omaha, sociologist David Riesman was conducting in-depth interviews with young music listeners. Most, he found, were fans of what was popular—uncritical. But a minority of interviewees disliked “name bands, most vocalists (except Negro blues singers), and radio commercials.” They felt “a profound resentment of the commercialization of radio and musicians.” They were also, Riesman reported, overwhelmingly male.

American music in the twentieth century was vital to the creation of what Grace Hale’s account calls “a nation of outsiders.” “Hot jazz” adherents raved about Louis Armstrong’s solos in the 1920s, while everybody else thought it impressive enough that Paul Whiteman’s orchestra could syncopate the Charleston and introduce “Rhapsody in Blue.” By the 1930s, the in-crowd were Popular Front aligned, riveted at the pointedly misnamed cabaret Café Society, where doormen had holes in their gloves and Billie Holiday made the anti-lynching, anti-minstrelsy “Strange Fruit” stop all breathing. Circa Riesman’s study, the hipsters whom Norman Mailer and Jack Kerouac would celebrate redefined hot as cool, seeding a 1960s San Francisco scene that turned hipsters into hippie counterculture.

But the urge to value music as an authentic expression of identity appealed well beyond outsider scenes and subcultures. Hank Williams testified, “When a hillbilly sings a crazy song, he feels crazy. When he sings, ‘I Laid My Mother Away,’ he sees her a-laying right there in the coffin. He sings more sincere than most entertainers because the hillbilly was raised rougher than most entertainers. You got to know a lot about hard work. You got to have smelt a lot of mule manure before you can sing like a hillbilly. The people who has been raised something like the way the hillbilly has knows what he is singing about and appreciates it.” Loretta Lynn reduced this to a chorus: “If you’re looking at me, you’re looking at country.” Soul, rock, and hip-hop offered similar sentiments. An inherently folkloric valuation of popular music, Karl Miller has written, “so thoroughly trounced minstrelsy that historians rarely discuss the process of its ascendance. The folkloric paradigm is the air that we breathe.”

For this study, I want to combine subcultural outsiders and identity-group notions of folkloric authenticity into a single opposition to formats: genres. If entertainment formats are an undertheorized category of analysis, though a widely used term, genres have been highly theorized. By sticking with popular music, however, we can identify a few accepted notions. Music genres have rules: socially constructed and accepted codes of form, meaning, and behavior. Those who recognize and are shaped by these rules belong to what pioneering pop scholar Simon Frith calls “genre worlds”: configurations of musicians, listeners, and figures mediating between them who collectively create a sense of inclusivity and exclusivity. Genres range from highly specific avant-gardes to scenes, industry categories, and revivals, with large genre “streams” to feed subgenres. If music genres cannot be viewed—as their adherents might prefer—as existing outside of commerce and media, they do share a common aversion: to pop shapelessness.

Deconstructing genre ideology within music can be as touchy as insisting on minstrelsy’s centrality: from validating Theft to spitting in the face of Love. Producer and critic John Hammond, progressive in music and politics, gets rewritten as the man who told Duke Ellington that one of his most ambitious compositions featured “slick, un-negroid musicians,” guilty of “aping Tin Pan Alley composers for commercial reasons.” A Hammond obsession, 1930s Mississippi blues guitarist Robert Johnson has his credentials to be called “King of the Delta Blues” and revered by the likes of Bob Dylan, Eric Clapton, and the Rolling Stones questioned by those who want to know why Delta blues, as a category, was invented and sanctified after the fact and how that undercut more urban and vaudeville-inflected, not to mention female, “classic” blues singers such as Ma Rainey, Mamie Smith, and Bessie Smith.

The tug-of-war between format and genre, performative theatrics and folkloric authenticity, came to a head with rock, the commercially and critically dominant form of American music from the late 1960s to the early 1990s. Fifties rock and roll had been the music of black as much as white Americans, southern as much as northern, working class far more than middle class. Rock was both less inclusive and more ideological: what Robert Christgau, aware of the politics of the shift from his first writing as a founding rock critic, called “all music deriving primarily from the energy and influence of the Beatles—and maybe Bob Dylan, and maybe you should stick pretensions in there someplace.” Ellen Willis, another pivotal early critic, centered her analysis of the change on the rock audience’s artistic affiliations: “I loved rock and roll, but I felt no emotional identification with the performers. Elvis Presley was my favorite singer, and I bought all his records; just the same, he was a stupid, slicked-up hillbilly, a bit too fat and soft to be really good-looking, and I was a middle-class adolescent snob.” Listening to Mick Jagger of the Rolling Stones was a far different process: “I couldn’t condescend to him — his ‘vulgarity’ represented a set of social and aesthetic attitudes as sophisticated as mine.”

The hippies gathered at Woodstock were Riesman’s minority segment turned majority, but with a difference. They no longer esteemed contemporary versions of “Negro blues singers”: only three black artists played Woodstock. Motown-style format pop was dismissed as fluff in contrast to English blues-rock and other music with an overt genre lineage. Top 40 met disdain, as new underground radio centered on “freeform”—meaning free of format. Music critics like Christgau, Willis, and Frith challenged these assumptions at the time, with Frith’s Sound Effects the strongest account of rock’s hypocritical “intimations of sincerity, authenticity, art—noncommercial concerns,” even as “rock became the record industry.” In a nation of outsiders, rock ruled, or as a leftist history, Rock ’n’ Roll Is Here to Pay, snarked, “Music for Music’s Sake Means More Money.” Keir Keightley elaborates, “One of the great ironies of the second half of the twentieth century is that while rock has involved millions of people buying a mass-marketed, standardized commodity (CD, cassette, LP) that is available virtually everywhere, these purchases have produced intense feelings of freedom, rebellion, marginality, oppositionality, uniqueness and authenticity.” In 1979, rock fans led by a rock radio DJ blew up disco records; as late as 2004, Kelefa Sanneh felt the need to deconstruct rockism in the New York Times.

Yet it would be simplistic to reduce rockism to its disproportions of race, gender, class, and sexuality. What fueled and fuels such attitudes toward popular music, ones hardly limited to rock alone, is the dream of music as democratic in a way opposite to how champions of radio formats justified their playlists. Michael Kramer, in an account of rock far more sympathetic than most others of late, argues that the countercultural era refashioned the bourgeois public sphere for a mass bohemia: writers and fans debated in music publications, gathered with civic commitment at music festivals, and shaped freeform radio into a community instrument. From the beginning, “hip capitalism” battled movement concerns, but the notion of music embodying anti-commercial beliefs, of rock as revolutionary or at least progressive, was genuine. The unity of the rock audience gave it more commercial clout: not just record sales, but arena-sized concerts, the most enduring music publication in Rolling Stone, and ultimately a Rock and Roll Hall of Fame to debate rock against rock and roll or pop forever. Discursively, if not always in commercial reality, this truly was the Rock Era.

The mostly female listeners of the Top 40 pop formats bequeathed by Storz’s jukebox thus confronted, on multiple levels, the mostly male listeners of a rock genre that traced back to the anti-commercial contingent of Riesman’s interviewees. A democracy of hit songs, limited by its capitalist nature, was challenged by a democracy of genre identity, limited by its demographic narrowness. The multi-category Top 40 strands I will be examining were shaped by this enduring tension.

Pop Music in the Rock Era

Jim Ladd, a DJ at the Los Angeles freeform station KASH-FM, received a rude awakening in 1969 when a new program director laid down some rules. “We would not be playing any Top 40 bullshit, but real rock ’n’ roll; and there was no dress code. There would, however, be something known as ‘the format.’” Ladd was now told what to play. He writes bitterly about those advising stations. “The radio consultant imposed a statistical grid over the psychedelic counterculture, and reduced it to demographic research. Do you want men 18–24, adults 18–49, women 35–49, or is your target audience teens? Whatever it may be, the radio consultant had a formula.” Nonetheless, the staff was elated when, in 1975, KASH beat Top 40 KHJ, “because to us, it represented everything that we were trying to change in radio. Top 40 was slick, mindless pop pap, without one second of social involvement in its format.” Soon, however, KAOS topped KASH with a still tighter format: “balls-out rock ’n’ roll.”

Ladd’s memoir, for all its biases, demonstrates despite itself why it would be misleading to view rock/pop or genre/format dichotomies as absolute divisions. By the mid-1970s, album-oriented rock (AOR) stations, like soul and country channels, pursued a format strategy as much as Top 40 or AC, guided by consultants and quarterly ratings. Rock programmers who used genre rhetoric of masculine rebellion (“balls-out rock ’n’ roll”) still honored Storz’s precept that most fans wanted the same songs repeated. Stations divided listeners explicitly by age and gender and tacitly by race and class. The division might be more inclusive: adults, 18–49; or less so: men, 18–34. The “psychedelic counterculture” ideal of dropping out from the mass had faded, but so had some of the mass: crossover appeal was one, not always desirable, demographic. And genre longings remained, with Ladd’s rockist disparagement of Top 40 symptomatic: many, including those in the business, quested for “social involvement” and disdained format tyranny. If AOR was formatted à la pop, pop became more like rock and soul, as seen in the power ballad, which merged rock’s amplification of sound and self with churchy and therapeutic exhortation.

Pop music in the rock era encompassed two strongly appealing, sometimes connected, but more often opposed impulses. The logic of formats celebrated the skillful matching of a set of songs with a set of people: its proponents idealized generating audiences, particularly new audiences, and prided themselves on figuring out what people wanted to hear. To believe in formats could mean playing it safe, with the reliance on experts and contempt for audiences that Razlogova describes in an earlier radio era: one cliché in radio was that stations were never switched off for the songs they didn’t play, only the ones they did. But there were strong business reasons to experiment with untapped consumer segments, to accentuate the “maturation” of a buying group with “contemporary”—a buzzword of the times—music to match. To successfully develop a new format, like the urban contemporary approach to black middle-class listeners, marked a great program director or consultant, and market-to-market experimentation in playlist emphasis was constant. Record companies, too, argued that a song like “Help Me Make It through the Night,” Kris Kristofferson’s explicit 1971 hit for Sammi Smith, could attract classier listeners for the country stations that played it.

By contrast, the logic of genres—accentuated by an era of counterculture, black power, feminism, and even conservative backlash—celebrated the creative matching of a set of songs and a set of ideals: music as artistic expression, communal statement, and coherent heritage. These were not necessarily anti-commercial impulses. Songwriters had long since learned the financial reasons to craft a lasting Broadway standard, rather than cash in overnight with a disposable Tin Pan Alley ditty. As Keightley shows, the career artist, steering his or her own path, was adult pop’s gift to the rock superstars. Frank Sinatra, Chairman of the Board, did not only symbolically transform into Neil Young, driving into the ditch if he chose. Young actually recorded for Reprise Records, the label that Sinatra had founded in 1960, whose president, Mo Ostin, went on to merge it with, and run, the artist-friendly and rock-dominated major label Warner Bros. Records.

Contrast Ladd’s or Young’s sour view of formatting with Clive Davis, who took over as president of Columbia Records during the rise of the counterculture. Writing just after the regularizing of multiple Top 40 strands, Davis found the mixture of old-school entertainment and new-school pop categories he confronted, the tensions between format and genre, endlessly fascinating. He was happy to discourse on the reasons why an MOR release by Ray Conniff might outsell an attention-hogging album by Bob Dylan, then turn around and explain why playing Las Vegas had tainted the rock group Blood, Sweat & Tears by rebranding them as MOR. Targeting black albums, rather than singles, to music buyers intrigued him, and here he itemized how he accepted racial divisions as market realities, positioning funk’s Earth, Wind & Fire as “progressive” to white rockers while courting soul nationalists too. “Black radio was also becoming increasingly militant; black program directors were refusing to see white promotion men. . . . If a record is ripe to be added to the black station’s play list, but is not quite a sure thing, it is ridiculous to have a white man trying to convince the program director to put it on.”

The incorporation of genre by formats proved hugely successful from the 1970s to the 1990s. Categories of mainstream music multiplied, major record labels learned boutique approaches to rival indies in what Timothy Dowd calls “decentralized” music selling, and the global sounds that Israeli sociologist Motti Regev sums up as “pop-rock” fused national genres with a common international structure of hitmaking, fueled by the widespread licensing in the 1980s of commercial radio channels in countries formerly limited to government broadcasting. In 2000, I was given the opportunity, for a New York Times feature, to survey a list of the top 1,000 selling albums and top 200 artists by total US sales, as registered by SoundScan’s barcode-scanning process since the service’s introduction in 1991. The range was startling: twelve albums on the list by Nashville’s Garth Brooks, but also twelve by the Beatles and more than twenty linked to the gangsta rappers in N.W.A. Female rocker Alanis Morissette topped the album list, with country and AC singer Shania Twain not far behind. Reggae’s Bob Marley had the most popular back-catalogue album, with mammoth total sales for pre-rock vocalist Barbra Streisand and jazz’s Miles Davis. Even “A Horse with No Name” still had fans: America’s Greatest Hits made a top 1,000 list that was 30 percent artists over forty years old in 2000 and one-quarter 1990s teen pop like Backstreet Boys. Pop meant power ballads (Mariah Carey, Celine Dion), rock (Pink Floyd, Metallica, Pearl Jam), and Latin voices (Selena, Marc Anthony), five mellow new age Enya albums, and four noisy Jock Jams compilations.

Yet nearly all this spectrum of sound was owned by a shrinking number of multinationals, joined as the 1990s ended by a new set of vast radio chains like Clear Channel, allowed by a 1996 Telecommunications Act in the corporate liberal spirit of the 1927 policies. The role of music in sparking countercultural liberation movements had matured into a well-understood range of scenes feeding into mainstreams, or train-wreck moments by tabloid pop stars appreciated with camp irony by omnivorous tastemakers. The tightly formatted world that Jim Ladd feared and Clive Davis coveted had come to pass. Was this true diversity, or a simulation? As Keith Negus found when he spoke with those participating in the global pop order, genre convictions still pressed against format pragmatism. Rock was overrepresented at record labels. Genre codes shaped the corporate cultures that framed the selling of country music, gangsta rap, and Latin pop. “The struggle is not between commerce and creativity,” Negus concluded, “but about what is to be commercial and creative.” The friction between competing notions of how to make and sell music had resulted in a staggering range of product, but also intractable disagreements over that product’s value within cultural hierarchies.

To read more about Top 40 Democracy, click here.

Add a Comment
8. Top 40 Democracy

9780226896182

Eric Weisbard’s Top 40 Democracy: The Rival Mainstreams of American Music considers the shifting terrain of the pop music landscape, in which FM radio (once an indisputably dominant medium) constructed multiple mainstreams, tailoring each to target communities built on race, gender, class, and social identity. Charting (no pun intended) how categories rivaled and pushed against each other in their rise to reach American audiences, the book posits a counterintuitive notion: when even the blandest incarnation of a particular sub-group (the Isley Brothers version of R & B, for instance) rose to the top of the charts, so too did the visibility of that group’s culture and perspective, making musical formatting one of the master narratives of late-twentieth-century identity.

In a recent piece for the Sound Studies blog, Weisbard wrote about the rise of both Taylor Swift and, via mid-term elections, the Republican Party:

The genius, and curse, of the commercial-cultural system that produced Taylor Swift’s Top 40 democracy win in the week of the 2014 elections, is that its disposition is inherently centrist. Our dominant music formats, rival mainstreams engaged in friendly combat rather than culture war, locked into place by the early 1970s. That it happened right then was a response to, and recuperation from, the splintering effects of the 1960s. But also, a moment of maximum wealth equality in the U.S. was perfect to persuade sponsors that differing Americans all deserved cultural representation.

And, as Weisbard concludes:

Pop music democracy too often gives us the formatted figures of diverse individuals triumphing, rather than collective empowerment. It’s impressive what Swift has accomplished; we once felt that about President Obama, too. But she’s rather alone at the top.

To read more about Top 40 Democracy, click here.

 

Add a Comment
9. #UPWeek: FF is really TBT

Today is the last day of #UPWeek—and with it goes another successful tour of university press blogs. On that note, Friday’s theme is one of following: What are your must reads on the internet? Whom do you follow on social media? Which venues and scholars are doing right? University of Illinois Press tracks the geopolitics of imagination, University of Minnesota Press (hi, Maggie!) author John Hartigan explains the foibles of scholars on social media, University of Nebraska Press delivers another social media primer, NYU Press teaches us Key Words in Cultural Studies, Island Press tracks the interests of its editors, and Columbia University Press talks their University Press Round-Up.

Us? We’re running with the idea that history and progress aren’t synonymous. The way forward with media is often the way back or the way through, or at least a trip to the past demonstrates that the seeds of new forms of mediation are (apologies for this) always already planted. I realize this makes Follow Friday a bit of a Throwback Thursday, but here’s a great photo from UCP’s Alan Thomas, making the rounds on Twitter, of the very first e-book we published. Richard A. Lanham’s The Electronic Word required 2 MB of RAM and a floppy disk reader, yet in its “out-of-timeness,” we can already see the othering of the book-as-object and our desire to store information in as portable (and small) a form as possible. Kindle Fire quivers. We keep moving.

B2RJdBXIEAAT819

 

For more on #UPWeek, follow the hashtag on Twitter.

Add a Comment
10. UPWeek Day 2: Irina Baronova launch in pictures

Today is day two of #UPWeek, which considers the past, present, and future of scholarly publishing through pictures. Among posts dotting the web, you’ll find: a photographic history of Indiana University Press, documentation of 1950s and ’60s print publishing at Stanford University Press, a photo collage from Fordham University Press, a Q & A with art director Martha Sewell and short film of author and illustrator Val Kells at Johns Hopkins University Press, and images of the University Press of Florida through the years. With these surveys in mind, we’re happy to share a few snapshots from our own recent launch of Victoria Tennant’s Irina Baronova and the Ballets Russes de Monte Carlo at Peter Fetterman’s Gallery in Santa Monica, California (including a cameo by Norman Lear). Don’t forget to follow #UPWeek on Twitter to keep up with the AAUP’s celebration of university presses’ blogging culture.

***

IMG_0004

 

IMG_0097

 

IMG_0022

 

To read more about Irina Baronova and the Ballets Russes de Monte Carlo, click here.

Add a Comment
11. #UPWeek: Turabian Teacher Collaborative

9780226816319

 

Welcome to the third annual #UPWeek blog tour—we’re excited to contribute under Monday’s umbrella theme, “Collaboration,” with a post on the Turabian Teacher Collaborative. To get the ball rolling and further the mission, here’s where you can find other university presses, big and small, far and wide, posting on similarly synergetic projects today: the University Press of Colorado on veterinary immunology, the University of Georgia Press on the New Georgia Encyclopedia Project, Duke University Press on Eben Kirksey’s The Multispecies Salon, the University of California Press on Dr. Paul Farmer and Dr. Jim Yong Kim’s work on the Ebola epidemic in West Africa, the University of Virginia Press on their project Chasing Shadows (a special e-book and website devoted to Watergate-era Oval Office conversations), McGill-Queen’s University Press on the online gallery Landscape Architecture in Canada, Texas A & M University Press on a new consumer health advocacy series, Project MUSE on their history of collaboration, and Yale University Press on their Museum Quality Books series. Remember to follow #UPWeek on Twitter, and read on after the jump for the story of the Turabian Teacher Collaborative’s first two years.

***

One of the foundational principles of Kate Turabian’s classic writing guides is that research creates a community between writers and readers. Professors Joseph Williams and Gregory Colomb put the principle of a community into action when they collaborated several years ago to adapt Turabian’s guides for a new generation of student researchers. During their writing process, they circulated and reworked each other’s contributions so much that, “by the end of the process, no one could quite remember who had drafted what.”

Channeling the spirit of this “rotational” writing process, the Turabian Teacher Collaborative adds high school teachers and a university press into the mix of colleagues working to bring Turabian’s principles to a new audience. The University of Chicago Press developed this project with University of Iowa English education professors Bonnie Sunstein and Amy Shoultz, after determining that much in Turabian’s Student’s Guide to Writing College Papers aligns with the Common Core State Standards for English Language Arts. Sunstein and Shoultz suggested that the Press begin by inviting high school teachers to test the effectiveness of Turabian’s book, both at helping high schools meet the Common Core standards and at helping students become college ready.

To strategize for the project’s pilot year, participating teachers—from urban, rural, and suburban high schools in California, Illinois, Massachusetts, and Iowa—convened for a workshop at the Press in the summer of 2013. They all left equipped with a set of books and free classroom resources drawn from the book, including topic sheets and ELA Common Core–aligned lesson plans. Following the workshop, this team of teachers integrated these materials into their curricula and exchanged resources and insights on their experiences throughout the year. Later this month, several members of the Turabian Teacher Collaborative will share what they have learned with teachers from across the country at a workshop following the NCTE annual convention in Washington, DC.

And, of course, high school students are now part of the collaboration and its community of researchers, as they envision the needs of readers by engaging in peer review at every step of the writing process. As participating teacher Deb Aldrich of Kennedy High School in Cedar Rapids, Iowa, said of her students’ response to the book: “[They] acted as sounding boards, polite disagree-ers, questioners, cheerleaders, and empathizers. They would come to class and ask if we were meeting in our research groups today, which showed how much they valued participating in a real shared research conversation, not just an imaginary one in their heads. They acted and felt like academic researchers!”

The Press plans to use feedback like this to develop a teachers’ resource guide this year, as well as additional resources for research writing in future high school classrooms. As the collaborative moves into its second year, it is expanding to include high school teachers from across the disciplines who teach research and academic writing skills. Are you one of them? For more information, e-mail turabianteacher@press.uchicago.edu.

(in the spirit of #UPWeek, this post was collaboratively generated by University of Chicago Press staff members working with the TTC)

To learn more about the TTC project, click here.

Stay tuned for more from #UPWeek’s blog tour!

 

Add a Comment
12. Free e-book for November: Mr. Jefferson and the Giant Moose

9780226169149

 

Lee Alan Dugatkin’s Mr. Jefferson and the Giant Moose, our free e-book for November, reconsiders the crucial supporting role played by a moose carcass in Jeffersonian democracy.

***

Thomas Jefferson—author of the Declaration of Independence, US president, and ardent naturalist—spent years countering the French naturalist Buffon’s charge of American degeneracy. His Notes on Virginia systematically and scientifically dismantled Buffon’s case through a series of tables and equally compelling writing on the nature of his home state. But the book did little to counter the arrogance of the French and hardly satisfied Jefferson’s quest to demonstrate that his young nation was every bit the equal of a well-established Europe. Enter the giant moose.

The American moose, which Jefferson claimed was so enormous a European reindeer could walk under it, became the cornerstone of his defense. Convinced that the sight of such a magnificent beast would cause Buffon to revise his claims, Jefferson had the remains of a seven-foot ungulate shipped first class from New Hampshire to Paris. Unfortunately, Buffon died before he could make any revisions to his Histoire Naturelle, but the legend of the moose makes for a fascinating tale about Jefferson’s passion to prove that American nature deserved prestige.

In Mr. Jefferson and the Giant Moose, Lee Alan Dugatkin vividly recreates the origin and evolution of the debates about natural history in America and, in so doing, returns the prize moose to its rightful place in American history.

To download your free copy, click here.

 

 

Add a Comment
13. On the Run: Best Nonfiction of 2014

1610826_10152856990916202_7367643060238594808_n

 

On the Run: Fugitive Life in an American City chronicles the toll the War on Drugs has taken on one inner-city Philadelphia neighborhood and its largely African American population. Based on Alice Goffman’s six years of ethnographic fieldwork as a participant-observer in the community, the book considers how a cycle of presumed criminality, engendered by pervasive policing, transforms the friendships and associations of a group of residents (small-time drug dealers and everyday persons alike) and the lives they lead into nodes in a network of surveillance operating 24 hours a day, and it reckons the very human costs involved. The book was recently named to Publishers Weekly’s Best Nonfiction of 2014 list, after garnering praise from both the New Yorker and the New York Times Book Review.

You can read an excerpt from the book, “The Art of Running,” here.

To read more, click here.

 

 

Add a Comment
14. Excerpt: Serving the Reich

9780226204574

“Physics Must Be Rebuilt”

from Serving the Reich: The Struggle for the Soul of Physics under Hitler by Philip Ball

***

Quantum theory, with its paradoxes and uncertainties, its mysteries and challenges to intuition, is something of a refuge for scoundrels and charlatans, as well as a fount of more serious but nonetheless fantastic speculation. Could it explain consciousness? Does it undermine causality? Everything from homeopathy to mind control and manifestations of the paranormal has been laid at its seemingly tolerant door.

Mostly that represents a blend of wishful thinking, misconception and pseudoscience. Because quantum theory defies common sense and ‘rational’ expectation, it can easily be hijacked to justify almost any wild idea. The extracurricular uses to which quantum theory has been put tend inevitably to reflect the preoccupations of the times: in the 1970s parallels were drawn with Zen Buddhism; today alternative medicine and theories of mind are in vogue.

Nevertheless, the fact remains that fundamental aspects of quantum physics are still not fully understood, and it has genuinely profound philosophical implications. Many of these aspects were evident to the early pioneers of the field – indeed, in the transformation of scientific thought that quantum theory compelled, they were impossible to ignore. Yet while several of the theory’s persistent conundrums were identified in its early days, one can’t say that the physicists greatly distinguished themselves in their response. This is hardly surprising: neither scientists nor philosophers in the early twentieth century had any preparation for thinking in the way quantum physics demands, and if the physicists tended to retreat into vagueness, near-tautology and mysticism, the philosophers and other intellectuals often just misunderstood the science.

This penchant for pondering the deeper meanings of quantum theory was particularly evident in Germany, proud of its long tradition of philosophical enquiry into nature and reality. The British, American and Italian physicists, in contrast, tended to conform to their stereotypical national pragmatism in dealing with quantum matters. But even if they were rather more content to apply the mathematics and not wonder too hard about the ontology, these other scientists relied strongly on the Germanic nations for those theoretical formulations in the first place. Germany, more than any other country, showed how to turn the microscopic fragmentation of nature into a useful, predictive, quantitative and explanatory science. If you were a theoretical physicist in Germany, it was hard to resist the gravitational pull of quantum theory: where Planck and Einstein led, Arnold Sommerfeld, Peter Debye, Werner Heisenberg, Max Born, Erwin Schrödinger, Wolfgang Pauli and others followed.

This being so, it was inevitable that the philosophical aspects of quantum physics should have been coloured by the political and social preoccupations of Germany. As we shall see, it was not the only part of physics to become politicized. These tendencies rocked the ivory tower: the kind of science you pursued became a statement about the sort of person you were, and the sympathies you harboured.

Unpeeling the atom

The realization that light and energy were granular had profound implications for the emerging understanding of how atoms are constituted. In 1907 New Zealander Ernest Rutherford, working at Manchester University in England, found that most of the mass of an atom is concentrated in a small, dense nucleus with a positive electrical charge. He concluded that this kernel was surrounded by a cloud of electrons, the particles found in 1897 to be the constituents of cathode rays by J. J. Thomson at Cambridge. Electrons possess a negative electrical charge that collectively balances the positive charge of the nucleus. In 1911 Rutherford proposed that the atom is like a solar system in miniature, a nuclear sun orbited by planetary electrons, held there not by gravity but by electrical attraction.

But there was a problem with that picture. According to classical physics, the orbiting electrons should radiate energy as electromagnetic rays, and so would gradually relinquish their orbits and spiral into the nucleus: the atom should rapidly decay. In 1913 the 28-year-old Danish physicist Niels Bohr showed that the notion of quantization – discreteness of energy – could solve this problem of atomic stability, and at the same time account for the way atoms absorb and emit radiation. The quantum hypothesis gave Bohr permission to prohibit instability by fiat: if the electron energies can only take discrete, quantized values, he said, then this gradual leakage of energy is prevented: the particles remain orbiting indefinitely. Electrons can lose energy, but only by making a hop (‘quantum jump’) to an orbit of lower energy, shedding the difference in the form of a photon of a specific wavelength. By the same token, an electron can gain energy and jump to a higher orbit by absorbing a photon of the right wavelength. Bohr went on to postulate that each orbit can accommodate only a fixed number of electrons, so that downward jumps are impossible unless a vacancy arises.

It was well established experimentally that atoms do absorb and emit radiation at particular, well-defined wavelengths. Light passing through a gas has ‘missing wavelengths’ – a series of dark, narrow bands in the spectrum. The emission spectrum of the same vapour is made up of corresponding bright bands, accounting for example for the characteristic red glow of neon and the yellow glare of sodium vapour when they are stimulated by an electrical discharge. These photons absorbed or emitted, said Bohr, have energies precisely equal to the energy difference between two electron orbits.

By assuming that the orbits are each characterized by an integer ‘quantum number’ related to their energy, Bohr could rationalize the wavelengths of the emission lines of hydrogen. This idea was developed by Arnold Sommerfeld, professor of theoretical physics at the University of Munich. He and his student Peter Debye worked out why the spectral emission lines are split by a magnetic field – an effect discovered by the Dutch physicist Pieter Zeeman in work that won him the 1902 Nobel Prize. (This Zeeman effect is the magnetic equivalent of the line-splitting by an electric field discovered by the German physicist Johannes Stark – see page 88.)
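(A note from the blog, not part of Ball’s text: the arithmetic behind Bohr’s rationalization of the hydrogen lines can be sketched in a few lines of Python, using the standard textbook value of 13.6 eV for the binding energy of hydrogen’s lowest orbit. The constants and function names below are ours, chosen for illustration.)

```python
# A rough sketch of the Bohr arithmetic for hydrogen: each orbit n has energy
# E_n = -13.6 eV / n^2, and a jump from a higher orbit to a lower one emits a
# photon carrying exactly the energy difference.

RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, in electron volts
HC_EV_NM = 1239.842      # Planck's constant times the speed of light, in eV*nm

def energy_level(n):
    """Energy of the nth Bohr orbit of hydrogen, in eV (negative = bound)."""
    return -RYDBERG_EV / n ** 2

def emission_wavelength(n_hi, n_lo):
    """Wavelength (in nm) of the photon emitted in a jump from orbit n_hi to n_lo."""
    photon_energy = energy_level(n_hi) - energy_level(n_lo)  # in eV
    return HC_EV_NM / photon_energy

# The visible Balmer series: jumps that land on the n = 2 orbit.
for n in (3, 4, 5):
    print(f"n = {n} -> 2 : {emission_wavelength(n, 2):.0f} nm")
# Prints roughly 656, 486, and 434 nm -- hydrogen's red, blue-green, and violet lines.
```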

But this was still a rather ad hoc picture, justified only because it seemed to work. What are the rules that govern the energy levels of electrons in atoms, and the jumps between them? In the early 1920s Max Born at the University of Göttingen set out to address those questions, assisted by his brilliant students Wolfgang Pauli, Pascual Jordan and Werner Heisenberg.

Heisenberg, another of Sommerfeld’s protégés, arrived from Munich in October 1922 to become Born’s private assistant, looking as Born put it ‘like a simple farm boy, with short fair hair, clear bright eyes, and a charming expression’. He and Born sought to apply Bohr’s empirical description of atoms in terms of quantum numbers to the case of helium, the second element in the periodic table after hydrogen. Given Bohr’s prescription for how quantum numbers dictate electron energies, one could in principle work out what the energies of the various electron orbits are, assuming that the electrons are held in place by their electrostatic attraction to the nucleus. But that works only for hydrogen, which has a single electron. With more than one electron in the frame, the mathematical elegance is destroyed by the repulsive electrostatic influence that electrons exert on each other. This is not a minor correction: the force between electrons is about as strong as that between electron and nucleus. So for any element aside from hydrogen, Bohr’s appealing model becomes too complicated to work out exactly.

In trying to go beyond these limitations, however, Born was not content to fit experimental observations to improvised quantum hypotheses as Bohr had done. Rather, he wanted to calculate the disposition of the electrons using principles akin to those that Isaac Newton used to explain the gravitationally bound solar system. In other words, he sought the rules that governed the quantum states that Bohr had adduced.

It became clear to Born that what he began to call a ‘quantum mechanics’ could not be constructed by a minor amendment of classical, Newtonian mechanics. ‘One must probably introduce entirely new hypotheses’, Heisenberg wrote to Pauli – another former pupil of Sommerfeld in Munich, where the two had become friends – in early 1923. Born agreed, writing that summer that ‘not only new assumptions in the usual sense of physical hypotheses will be necessary, but the entire system of concepts of physics must be rebuilt from the ground up’.

That was a call for revolution, and the ‘new concepts’ that emerged over the next four years amounted to nothing less. Heisenberg began formulating quantum mechanics by writing the energies of the quantum states of an atom as a matrix, a kind of mathematical grid. One could specify, for example, a matrix for the positions of the electrons, and another for their momenta (mass times velocity). Heisenberg’s version of quantum theory, devised with Born and Jordan in 1925, became known as matrix mechanics.

It wasn’t the only way to set out the problem. From early 1926 the Austrian physicist Erwin Schrödinger, working at the University of Zurich, began to explicate a different form of quantum mechanics based not on matrices but on waves. Schrödinger postulated that all the fundamental properties of a quantum particle such as an electron, or a collection of such particles, can be expressed as an equation describing a wave, called a wavefunction. The obvious question was: a wave of what? The wave itself is a purely mathematical entity, incorporating ‘imaginary numbers’ derived from the square root of -1 (denoted i), which, as the name implies, cannot correspond to any observable quantity. But if one calculates the square of a wavefunction – that is, if one multiplies this mathematical entity by itself (more strictly, by its so-called complex conjugate: a wavefunction identical except that the imaginary parts have opposite signs, +i and -i) – then the imaginary numbers go away and only real ones remain, which means that the result may correspond to something concrete that can be measured in the real world. At first Schrödinger thought that the square of the wavefunction produces a mathematical expression describing how the density of the corresponding particle varies from one place to another, rather as the density of air varies through space in a sound wave. That was already weird enough: it meant that quantum particles could be regarded as smeared-out waves, filling space like a gas. But Born – who, to Heisenberg’s dismay, was enthusiastic about Schrödinger’s rival ‘wave mechanics’ – argued that the squared wavefunction denoted something even odder: the probability of finding the particle at each location in space.
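(A note from the blog, not part of Ball’s text: in modern notation, Born’s reading of the wavefunction ψ is usually written as follows: the probability of finding the particle near a point x is the squared magnitude of ψ there, that is, ψ multiplied by its complex conjugate, and the probabilities over all space must add up to one.)

```latex
P(x)\,\mathrm{d}x \;=\; |\psi(x)|^{2}\,\mathrm{d}x \;=\; \psi^{*}(x)\,\psi(x)\,\mathrm{d}x,
\qquad
\int_{-\infty}^{\infty} |\psi(x)|^{2}\,\mathrm{d}x \;=\; 1 .
```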

Think about that for a moment. Schrödinger was asserting that the wavefunction says all that can be said about a quantum system. And apparently, all that can be said is not where the particle is, but what the chance is of finding it here or there. This is not a question of incomplete knowledge – of knowing that a friend might be at the cinema or the restaurant, but not knowing which. In that case she is one place or another, and you are forced to talk of probabilities just because you lack sufficient information. Schrödinger’s wave-based quantum mechanics is different: it insists that there is no answer to the question beyond the probabilities. To ask where the particle really is has no physical meaning. At least, it doesn’t until you look – but that act of looking doesn’t then disclose what was previously hidden, it determines what was previously undecided.

Whereas Heisenberg’s matrix mechanics was a way of formalizing the quantum jumps that Bohr had introduced, Schrödinger’s wave mechanics seemed to do away with them entirely. The wavefunction made everything smooth and continuous again. At least, it seemed to. But wasn’t that just a piece of legerdemain? When an electron jumps from one atomic orbit to another, the initial and the final states are both described by wavefunctions. But how did one wavefunction change into the other? The theory didn’t specify that – you had to put it in by hand. And you still do: there remains no consensus about how to build quantum jumps into quantum theory. All the same, Schrödinger’s description has prevailed over Heisenberg’s – not because it is more correct, but because it is more convenient and useful. What’s more, Heisenberg’s quantum matrices were abstract, giving scant purchase to an intuitive understanding, while Schrödinger’s wave mechanics offered more sustenance to the imagination.

The probabilistic view of quantum mechanics is famously what disconcerted Einstein. His scepticism eventually isolated him from the evolution of quantum theory and left him unable to contribute further to it. He remained convinced that there was some deeper reality below the probabilities that would rescue the precise certainties of classical physics, restoring a time and a place for everything. This is how it has always been for quantum theory: those who make great, audacious advances prove unable to reconcile them to the still more audacious notions of the next generation. It seems that one’s ability to ‘suppose’ – ‘understanding’ quantum theory is largely a matter of reconciling ourselves to its counter-intuitive claims – is all too easily exhausted by the demands that the theory makes.

Schrödinger wasn’t alone in accepting and even advocating indeterminacy in the quantum realm. Heisenberg’s matrix mechanics seemed to insist on a very strange thing. If you multiply together the matrices describing the position and the momentum of a particle, you get a different result depending on which matrix you put first in the arithmetic. In the classical world the order of multiplication of two quantities is irrelevant: two times three is the same as three times two, and an object’s momentum is the same expressed as mass times velocity or velocity times mass. For some pairs of quantum properties, such as position and momentum, that was evidently no longer the case.
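(A toy illustration from the blog, not from the book: the order-dependence Heisenberg stumbled on is a general feature of matrix multiplication, as two small made-up matrices show. His actual position and momentum matrices are infinite arrays whose difference of orderings, XP minus PX, works out to i times Planck’s constant over 2π times the identity.)

```python
# A toy demonstration that matrix products depend on the order of multiplication.
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)          # [[1 0]
                      #  [0 0]]
print(B @ A)          # [[0 0]
                      #  [0 1]]
print(A @ B - B @ A)  # nonzero: the two orderings disagree
# Heisenberg's (infinite) position and momentum matrices disagree in the same way;
# their difference XP - PX equals i times Planck's constant (over 2*pi) times the identity.
```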

This might seem an inconsequential quirk. But Heisenberg discovered that it had the most bizarre corollary, as foreshadowed in the portentous title of the paper he published in March 1927: ‘On the perceptual content of quantum-theoretical kinematics and mechanics’. Here he showed that the theory insisted on the impossibility of knowing at any instant the precise position and momentum of a quantum particle. As he put it, ‘The more precisely we determine the position, the more imprecise is the determination of momentum in this instant, and vice versa.’

This is Heisenberg’s uncertainty principle. He sought to offer an intuitive rationalization of it, explaining that one cannot make a measurement on a tiny particle such as an electron without disturbing it in some way. If it were possible to see the particle in a microscope (in fact it is far too small), that would involve bouncing light off it. The more accurately you wish to locate its position, the shorter the wavelength of light you need (crudely speaking, the finer the divisions of the ‘ruler’ need to be). But as the wavelength of photons gets shorter, their energy increases – that’s what Planck had said. And as the energy goes up, the more the particle recoils from the impact of a photon, and so the more you disturb its momentum.

This thought experiment is of some value for grasping the spirit of the uncertainty principle. But it has fostered the misconception that the uncertainty is a result of the limitations of experimentation: you can’t look without disturbing. The uncertainty is, however, more fundamental than that: again, it’s not that we can’t get at the information, but that this information does not exist. Heisenberg’s uncertainty principle has also become popularly interpreted as imputing fuzziness and imprecision to quantum mechanics. But that’s not quite right either. Rather, it places very precise limits on what we can know. Those limits, it transpires, are determined by Planck’s constant, which is so small that the uncertainty becomes significant only at the tiny scale of subatomic particles.
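(For the record, and again not part of the excerpt, the limit Heisenberg found is nowadays written with the reduced Planck constant, whose tiny size is why the trade-off never shows up at everyday scales.)

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\hbar \;=\; \frac{h}{2\pi} \;\approx\; 1.055 \times 10^{-34}\ \mathrm{J\,s}.
```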

Political science

Both Schrödinger’s wavefunction and Heisenberg’s uncertainty principle seemed to be insisting on aspects of quantum theory that verged on the metaphysical. For one thing, they placed bounds on what is knowable. This appeared to throw causality itself – the bedrock of science – into question. Within the blurred margins of quantum phenomena, how can we know what is cause and what is effect? An electron could turn up here, or it could instead be there, with no apparent causal principle motivating those alternatives.

Moreover, the observer now intrudes ineluctably into the previously objective, mechanistic realm of physics. Science purports to pronounce on how the world works. But if the very act of observing it changes the outcome – for example, because it transforms the wavefunction from a probability distribution of situations into one particular situation, commonly called ‘collapsing’ the wavefunction – then how can one claim to speak about an objective world that exists before we look?

Today it is generally thought that quantum theory offers no obvious reason to doubt causality, at least at the level at which we can study the world, although the precise role of the observer is still being debated. But for the pioneers of quantum theory these questions were profoundly disturbing. Quantum theory worked as a mathematical description, but without any consensus about its interpretation, which seemed to be merely a matter of taste. Many physicists were content with the prescription devised between 1925 and 1927 by Bohr and Heisenberg, who visited the Dane in Copenhagen. Known now as the Copenhagen interpretation, this view of quantum physics demanded that centuries of classical preconceptions be abandoned in favour of a capitulation to the maths. At its most fundamental level, the physical world was unknowable and in some sense indeterminate. The only reality worthy of the description is what we can access experimentally – and that is all that quantum theory prescribes. To look for any deeper description of the world is meaningless. To Einstein and some others, this seemed to be surrendering to ignorance. Beneath the formal and united appearance of the Solvay group in 1927 lies a morass of contradictory and seemingly irreconcilable views.

These debates were not limited to the physicists. If even they did not fully understand quantum theory, how much scope there was then for confusion, distortion and misappropriation as they disseminated these ideas to the wider world. Much of the blame for this must be laid at the door of the scientists themselves, including Bohr and Heisenberg, who threw caution to the wind when generalizing the narrow meaning of the Copenhagen interpretation in their public pronouncements. For Bohr, a crucial part of this picture was the notion of complementarity, which holds that two apparently contradictory descriptions of a quantum system can both be valid under different observational circumstances. Thus a quantum entity, be it an insubstantial photon or an electron graced with mass, can behave at one time as a particle, at another as a wave. Bohr’s notion of complementarity is scarcely a scientific theory at all, but rather, another characteristic expression of the Copenhagen affirmation that ‘this is just how things are’: it is not that there is some deeper behaviour that sometimes looks ‘wave-like’ and sometimes ‘particle-like’, but rather, this duality is an intrinsic aspect of nature. However one feels about Bohr’s postulate, there was little justification for his enthusiastic extension of the complementarity principle to biology, law, ethics and religion. Such claims made quantum physics a political matter.

The same is true for Heisenberg’s insistence that, via the uncertainty principle, ‘the meaninglessness of the causal law is definitely proved’. He tried to persuade philosophers to come to terms with this abolition of determinism and causality, as though this had moreover been established not as an (apparent) corollary of quantum theory but as a general law of nature.

This quasi-mystical perspective on quantum theory that the physicists appeared to encourage was attuned to a growing rejection, during the Weimar era, of what were viewed as the maladies of materialism: commercialism, avarice and the encroachment of technology. Science in general, and physics in particular, were apt to suffer from association with these supposedly degenerate values, making it inferior in the eyes of many intellectuals to the noble aspirations of art and ‘higher culture’. While it would be too much to say that an emphasis on the metaphysical aspects of quantum mechanics was cultivated in order to rescue physics from such accusations, that desideratum was not overlooked. Historian Paul Forman has argued that the quantum physicists explicitly accommodated their interpretations to the prevailing social ethos of the age, in which ‘the concept – or the mere word – “causality” symbolized all that was odious in the scientific enterprise’. In his 1918 book Der Untergang des Abendlandes (The Decline of the West), the German philosopher and historian Oswald Spengler more or less equated causality with physics, while making it a concept deserving of scorn and standing in opposition to life itself. Spengler saw in modern physicists’ doubts about causality a symptom of what he regarded as the moribund nature of science itself. Here he was thinking not of quantum theory, which was barely beginning to reach the public consciousness at the end of the First World War, but of the probabilistic microscopic theory of matter developed by the Scottish physicist James Clerk Maxwell and the Austrian Ludwig Boltzmann, which had already renounced claims to a precise, deterministic picture of atomic motions.

Spengler’s book was read and discussed throughout the German academic world. Einstein and Born knew it, as did many of the other leading physicists, and Forman believes that it fed the impulse to realign modern physics with the spirit of the age, leading theoretical physicists and applied mathematicians to ‘denigrat[e] the capacity of their discipline to attain true, or even valuable, knowledge’. They began to speak of science as an essentially spiritual enterprise, unconnected to the demands and depredations of technology but, as Wilhelm Wien put it, arising ‘solely from an inner need of the human spirit’. Even Einstein, who deplored the rejection of causality that he saw in many of his colleagues, emphasized the roles of feeling and intuition in science.

In this way the physicists were attempting to reclaim some of the prestige that science had lost to the neo-Romantic spirit of the times. Causality was a casualty. Only once we have ‘liberation from the rooted prejudice of absolute causality’, said Schrödinger in 1922, would the puzzles of atomic physics be conquered. Bohr even spoke of quantum theory having an ‘inherent irrationality’. And as Forman points out, many physicists seemed to accept these notions not with reluctance or pain but with relief and with the expectation that they would be welcomed by the public. He does not see in all this simply an attempt to ingratiate physics with a potentially hostile audience, but rather, an unconscious adaptation to the prevailing culture, made in good faith. When Einstein expressed his reservations about the trend in a 1932 interview with the Irish writer James Gardner Murphy, Murphy responded that even scientists surely ‘cannot escape the influence of the milieu in which they live’. And that milieu was anti-causal.

Equally, the fact that both quantum theory and relativity were seen to be provoking crises in physics was consistent with the widespread sense that crises pervaded Weimar culture – economically, politically, intellectually and spiritually. ‘The idea of such a crisis of culture’, said the French politician Pierre Viénot in 1931, ‘belongs today to the solid stock of the common habit of thought in Germany. It is a part of the German mentality.’ The applied mathematician Richard von Mises spoke of ‘the present crisis in mechanics’ in 1921; another mathematician, Hermann Weyl (one of the first scientists openly to question causality) claimed there was a ‘crisis in the foundations of mathematics’, and even Einstein wrote for a popular audience on ‘the present crisis in theoretical physics’ in 1922. (Experimental physicist Johannes Stark’s 1921 book The Present Crisis in German Physics used the same trope but spoke to a very different perception: that his kind of physics was being eclipsed by an abstract, degenerate form of theoretical physics – see page 91.) One has the impression that these crises were not causing much dismay, but rather, reassured physicists that they were in the same tumultuous flow as the rest of society.

This was, however, a dangerous game. Some outsiders drew the conclusion that quantum mechanics pronounced on free will, and it was only a matter of time before the new physics was being enlisted for political ends. Some even managed to claim that it vindicated the policies of the National Socialists.

Moreover, if physics was being in some sense shaped to propitiate Spenglerism, it risked seeming to endorse also Spengler’s central thesis of relativism: that not only art and literature but also science and mathematics are shaped by the culture in which they arise and are invalid and indeed all but incomprehensible outside that culture. It is tempting to find here a presentiment of the ‘Aryan physics’ propagated by Nazi sympathizers in the 1930s (see Chapter 6), which contrasted healthy Germanic science with decadent, self-serving Jewish science. And given Spengler’s nationalism, rejection of Weimar liberalism, support for authoritarianism and belief in historical destiny, it is no surprise that he was initially lauded by the Nazis, especially Joseph Goebbels, nor that he voted for Hitler in 1932. (Spengler was too much of an intellectual for his advocacy to survive close contact. After meeting Hitler in 1933, he distanced himself from the Nazis’ vulgar posturing and anti-Semitism, and was no favourite of the Reich by the time he died in Munich in 1936.)

One way or another, then, by the 1920s physics was becoming freighted with political implications. Without intending it, the physicists themselves had encouraged this. But they hadn’t grasped – were perhaps unable to grasp – what it would soon imply.

To read more about Serving the Reich, click here.

Add a Comment
15. Excerpt: Versions of Academic Freedom

9780226064314

***

“Academic Freedom Studies: The Five Schools”

In 2009 Terrence Karran published an essay with the title “Academic Freedom: In Justification of a Universal Ideal.” Although it may not seem so at first glance, the title is tendentious, for it answers in advance the question most often posed in the literature: How does one justify academic freedom? One justifies academic freedom, we are told before Karran’s analysis even begins, by claiming for it the status of a universal ideal.

The advantage of this claim is that it disposes of one of the most frequently voiced objections to academic freedom: Why should members of a particular profession be granted latitudes and exemptions not enjoyed by other citizens? Why, for example, should college and university professors be free to criticize their superiors when employees in other workplaces might face discipline or dismissal? Why should college and university professors be free to determine and design the condition of their workplace (the classroom) while others must adhere to a blueprint laid down by a supervisor? Why should college and university professors be free to choose the direction of their research while researchers who work for industry and government must go down the paths mandated by their employers? We must ask, says Frederick Schauer (2006), “whether academics should, by virtue of their academic employment and/or profession, have rights (or privileges, to be more accurate) not possessed by others” (913).

The architects of the doctrine of academic freedom were not unaware of these questions, and, in anticipation of others raising them, raised them themselves. Academic freedom, wrote Arthur O. Lovejoy (1930), might seem “peculiar chiefly in that the teacher is . . . a salaried employee and that the freedom claimed for him implies a denial of the right of those who provide or administer the funds from which he is paid to control the content of his teaching” (384). But this denial of the employer’s control of the employee’s behavior is peculiar only if one assumes, first, that college and university teaching is a job like any other and, second, that the college or university teacher works for a dean or a provost or a board of trustees. Those assumptions are directly challenged and rejected by the American Association of University Professors’ 1915 Declaration of Principles on Academic Freedom and Academic Tenure, a founding document (of which Lovejoy was a principal author) and one that is, in many respects, still authoritative. Here is a key sentence:

The responsibility of the university teacher is primarily to the public itself, and to the judgment of his own profession; and while, with respect to certain external conditions of his vocation, he accepts a responsibility to the authorities of the institution in which he serves, in the essentials of his professional activity his duty is to the wider public to which the institution itself is morally amenable.

There are four actors and four centers of interest in this sentence: the public, the institution of the academy, the individual faculty member, and the individual college or university. The faculty member’s allegiance is first to the public, an abstract entity that is not limited to a particular location. The faculty member’s secondary allegiance is to the judgment of his own profession, but since, as the text observes, the profession’s responsibility is to the public, it amounts to the same thing. Last in line is the actual college or university to which the faculty member is tied by the slightest of ligatures. He must honor the “external conditions of his vocation”—conditions like showing up in class and assigning grades, and holding office hours and teaching to the syllabus and course catalog (although, as we shall see, those conditions are not always considered binding)— but since it is a “vocation” to which the faculty member is responsible, he will always have his eye on what is really essential, the “universal ideal” that underwrites and justifies his labors.

Here in 1915 are the seeds of everything that will flower in the twenty- first century. The key is the distinction between a job and a vocation. A job is defined by an agreement (often contractual) between a worker and a boss: you will do X and I will pay you Y; and if you fail to perform as stipulated, I will discipline or even dismiss you. Those called to a vocation are not merely workers; they are professionals; that is, they profess something larger than the task immediately at hand— a religious faith, a commitment to the rule of law, a dedication to healing, a zeal for truth— and in order to become credentialed professors, as opposed to being amateurs, they must undergo a rigorous and lengthy period of training. Being a professional is less a matter of specific performance (although specific performances are required) than of a continual, indeed lifelong, responsiveness to an ideal or a spirit. And given that a spirit, by definition, cannot be circumscribed, it will always be possible (and even thought mandatory and laudable) to expand the area over which it is said to preside.

The history of academic freedom is in part the history of that expansion as academic freedom is declared to be indistinguishable from, and necessary for, the flourishing of every positive value known to humankind. Here are just a few quotations from Karran’s essay:

Academic freedom is important to everyone’s well-being, as well as being particularly pertinent to academics and their students. (The Robbins Committee on Higher Education in the UK, 1963)

Academic freedom is but a facet of freedom in the larger society. (R. M. O. Pritchard, “Academic Freedom and Autonomy in the United Kingdom and Germany,” 1998)

A democratic society is hardly conceivable . . . without academic freedom. (S. Bergan, “Institutional Autonomy: Between Myth and Responsibility,” 2002)

In a society that has a high regard for knowledge and universal values, the scope of academic freedom is wide. (Wan Manan, “Academic Freedom: Ethical Implications and Civic Responsibilities,” 2000)

The sacred trust of the universities is to carry the torch of freedom. (J. W. Boyer, “Academic Freedom and the Modern University: The Experience of the University of Chicago,” 2002)

Notice that in this last statement, freedom is not qualified by the adjective academic. Indeed, you can take it as a rule that the larger the claims for academic freedom, the less the limiting force of the adjective academic will be felt. In the taxonomy I offer in this book, the movement from the most conservative to the most radical view of academic freedom will be marked by the transfer of emphasis from academic, which names a local and specific habitation of the asserted freedom, to freedom, which does not limit the scope or location of what is being asserted at all.

Of course, freedom is itself a contested concept and has many possible meanings. Graeme C. Moodie sorts some of them out and defines the freedom academics might reasonably enjoy in terms more modest than those suggested by the authors cited in Karran’s essay. Moodie (1996) notes that freedom is often understood as the “absence of constraint,” but that, he argues, would be too broad an understanding if it were applied to the activities of academics. Instead he would limit academic freedom to faculty members who are “exercising academic functions in a truly academic matter” (134). Academic freedom, in his account, follows from the nature of academic work; it is not a personal right of those who choose to do that work. That freedom— he calls it an “activity freedom” because it flows from the nature of the job and not from some moral abstraction— “can of course only be exercised by persons, but its justification, and thus its extent, must clearly and explicitly be rooted in its relationship to academic activities rather than (or only consequentially) to the persons who perform them” (133). In short, he concludes, “the special freedom(s) of academics is/are conditional on the fulfillment of their academic obligations” (134).

Unlike those who speak of a universal ideal and of the torch of freedom being carried everywhere, Moodie is focused on the adjective academic. He begins with it and reasons from it to the boundaries of the freedom academics can legitimately be granted. To be sure, the matter is not so cut and dried, for academic must itself be defined so that those boundaries can come clearly into view and that is no easy matter. No one doubts that classroom teaching and research and scholarly publishing are activities where the freedom in question is to be accorded, at least to some extent. But what about the freedom to criticize one’s superiors; or the freedom to configure a course in ways not standard in the department; or the freedom to have a voice in the building of parking garages, or in the funding of athletic programs, or in the decision to erect a student center, or in the selection of a president, or in the awarding of honorary degrees, or in the inviting of outside speakers? Is academic freedom violated when faculty members have minimal input into, or are shut out entirely from, the consideration of these and other matters?

To that question, Mark Yudof, who has been a law school dean and a university president, answers a firm “no.” Yudof (1988) acknowledges that “there are many elements necessary to sustain the university,” including “salaries,” “library collections,” a “comfortable workplace,” and even “a parking space” (1356), but do academics have a right to these things or a right to participate in discussions about them (a question apart from the question of whether it is wise for an administration to bring them in)? Only, says Yudof, if you believe “that any restrictions, however indirectly linked to teaching and scholarship, will destroy the quest for knowledge” (1355). And that, he observes, would amount to “a kind of unbridled libertarianism for academicians,” who could say anything they liked in a university setting without fear of reprisal or discipline (1356).

Better, Yudof concludes, to define academic freedom narrowly, if only so those who are called upon to defend it can offer a targeted, and not wholly diffuse, rationale. Academic freedom, he declares, “is what it is” (of course that’s the question; what is it?), and it is “not general liberty, pleasant working conditions, equality, self-realization, or happiness,” for “if academic freedom is thought to include all that is desirable for academicians, it may come to mean quite little to policy makers and courts” (1356). Moodie (1996) gives an even more pointed warning: “Scholars only invite ridicule, or being ignored, when they seem to suggest that every issue that directly affects them is a proper sphere for academic rule” (146). (We shall revisit this issue when we consider the relationship between academic freedom, shared governance, and public employee law.)

So we now have as a working hypothesis an opposition between two views of academic freedom. In one, freedom is a general, overriding, and ever-expanding value, and the academy is just one of the places that house it. In the other, the freedom in question is peculiar to the academic profession and limited to the performance of its core duties. When performing those duties, the instructor is, at least relatively, free. When engaged in other activities, even those that take place within university precincts, no such freedom or special latitude obtains. This modest notion of academic freedom is strongly articulated by J. Peter Byrne (1989): “The term ‘academic freedom’ should be reserved for those rights necessary for the preservation of the unique functions of the university” (262).

These opposed accounts of academic freedom do not exhaust the possibilities; there are extremes to either side of them, and in the pages that follow I shall present the full range of the positions currently available. In effect I am announcing the inauguration of a new field— Academic Freedom Studies. The field is still in a fluid state; new variants and new theories continue to appear. But for the time being we can identify five schools of academic freedom, plotted on a continuum that goes from right to left. The continuum is obviously a political one, but the politics are the politics of the academy. Any correlation of the points on the continuum with real world politics is imperfect, but, as we shall see, there is some. I should acknowledge at the outset that I shall present these schools as more distinct than they are in practice; individual academics can be members of more than one of them. The taxonomy I shall offer is intended as a device of clarification. The inevitable blurring of the lines comes later.

As an aid to the project of sorting out the five schools, here is a list of questions that would receive different answers depending on which version of academic freedom is in place:

Is academic freedom a constitutional right?
What is the relationship between academic freedom and the First Amendment?
What is the relationship between academic freedom and democracy?
Does academic freedom, whatever its scope, attach to the individual faculty member or to the institution?
Do students have academic freedom rights?
What is the relationship between academic freedom and the form of governance at a college or university?
In what sense, if any, are academics special?
Does academic freedom include the right of a professor to criticize his or her organizational superiors with impunity?
Does academic freedom allow a professor to rehearse his or her political views in the classroom?
What is the relationship between academic freedom and political freedom?
What views of education underlie the various positions on academic freedom?

As a further aid, it would be good to have in mind some examples of incidents or controversies in which academic freedom has been thought to be at stake.

In 2011, the faculty of John Jay College nominated playwright Tony Kushner to be the recipient of an honorary degree from the City University of New York. Normally approval of the nomination would have been pro forma, but this time the CUNY Board of Trustees tabled, and thus effectively killed, the motion supporting Kushner’s candidacy because a single trustee objected to his views on Israel. After a few days of outrage and bad publicity the board met again and changed its mind. Was the board’s initial action a violation of academic freedom, and if so, whose freedom was being violated? Or was the incident just one more instance of garden-variety political jockeying, a tempest in a teapot devoid of larger implications?

In the same year Professor John Michael Bailey of Northwestern University permitted a couple to perform a live sex act at an optional session of his course on human sexuality. The male of the couple brought his naked female partner to orgasm with the help of a device known as a “fucksaw.” Should Bailey have been reprimanded and perhaps disciplined for allowing lewd behavior in his classroom or should the display be regarded as a legitimate pedagogical choice and therefore protected by the doctrine of academic freedom?

In 2009 sociology professor William Robinson of the University of California at Santa Barbara, after listening to a tape of a Martin Luther King speech protesting the Vietnam War, sent an e-mail to the students in his sociology of globalization course that began:

If Martin Luther King were alive on this day of January 19th, there is no doubt that he would be condemning the Israeli aggression against Gaza along with U.S. military and political support for Israeli war crimes, or that he would be standing shoulder to shoulder with the Palestinians.

The e-mail went on to compare the Israeli actions against Gaza to the Nazi actions against the Warsaw ghetto, and to characterize Israel as “a state founded on the negation of a people.” Was Robinson’s e-mail an intrusion of his political views into the classroom or was it a contribution to the subject matter of his course and therefore protected under the doctrine of academic freedom?

As the 2008 election approached, an official communication from the administration of the University of Illinois listed as prohibited political activities the wearing of T-shirts or buttons supporting candidates or parties. Were faculty members being denied their First Amendment and academic freedom rights?

BB&T, a bank holding company, funds instruction in ethics on the condition that the courses it supports include as a required reading Ayn Rand’s Atlas Shrugged (certainly a book concerned with issues of ethics). If a university accepts this arrangement (as Florida State University did), has it traded its academic freedom for cash or is it (as the dean at Florida State insisted) merely accepting help in a time of financial exigency?

In 1996, the state of Virginia passed a law forbidding state employees from accessing pornographic materials on state-owned computers. The statute included a waiver for those who could convince a supervisor that the viewing of pornographic material was part of a bona fide research project. Was the academic freedom of faculty members in the state university system violated because they were prevented from determining for themselves and without government monitoring the course of their research?

Just as my questions would be answered differently by proponents of different accounts of academic freedom, so would these cases be assessed differently depending on which school of academic freedom a commentator belongs to.

Of course I have yet to name the schools, and I will do that now.

(1)— The “It’s just a job” school. This school (which may have only one member and you’re reading him now) rests on a deflationary view of higher education. Rather than being a vocation or holy calling, higher education is a service that offers knowledge and skills to students who wish to receive them. Those who work in higher education are trained to impart that knowledge, demonstrate those skills and engage in research that adds to the body of what is known. They are not exercising First Amendment rights or forming citizens or inculcating moral values or training soldiers to fight for social justice. Their obligations and aspirations are defined by the distinctive task— the advancement of knowledge— they are trained and paid to perform, defined, that is, by contract and by the course catalog rather than by a vision of democracy or world peace. College and university teachers are professionals, and as such the activities they legitimately perform are professional activities, activities in which they have a professional competence. When engaged in those activities, they should be accorded the latitude— call it freedom if you like— necessary to their proper performance. That latitude does not include the performance of other tasks, no matter how worthy they might be. According to this school, academics are not free in any special sense to do anything but their jobs.

(2)— The “For the common good” school. This school has its origin in the AAUP Declaration of Principles (1915), and it shares some arguments with the “It’s just a job” school, especially the argument that the academic task is distinctive. Other tasks may be responsible to market or political forces or to public opinion, but the task of advancing knowledge involves following the evidence wherever it leads, and therefore “the first condition of progress is complete and unlimited freedom to pursue inquiry and publish its results.” The standards an academic must honor are the standards of the academic profession; the freedom he enjoys depends on adherence to those standards: “The liberty of the scholar . . . to set forth his conclusions . . . is conditioned by their being conclusions gained by a scholar’s method and held in a scholar’s spirit.” That liberty cannot be “used as a shelter . . . for uncritical and intemperate partisanship,” and a teacher should not inundate students with his “own opinions.”

With respect to pronouncements like these, the “For the common good” school and the “It’s just a job” school seem perfectly aligned. Both paint a picture of a self-enclosed professional activity, a transaction between teachers, students, and a set of intellectual questions with no reference to larger moral, political, or societal considerations. But the opening to larger considerations is provided, at least potentially, by a claimed connection between academic freedom and democracy. Democracy, say the authors of the 1915 Declaration, requires “experts . . . to advise both legislators and administrators,” and it is the universities that will supply them and thus render a “service to the right solution of . . . social problems.” Democracy’s virtues, the authors of the Declaration explain, are also the source of its dangers, for by repudiating despotism and political tyranny, democracy risks legitimizing “the tyranny of public opinion.” The academy rides to the rescue by working “to help make public opinion more self-critical and more circumspect, to check the more hasty and unconsidered impulses of popular feeling, to train the democracy.” By thus offering an external justification for an independent academy— it protects us from our worst instincts and furthers the realization of democratic principles— the “For the common good” school moves away from the severe professionalism of the “It’s just a job” school and toward an argument in which professional values are subordinated to the higher values of democracy or justice or freedom; that is, to the common good.

(3)— The “Academic exceptionalism or uncommon beings” school. This school is a logical extension of the “For the common good” school. If academics are charged not merely with the task of adding to our knowledge of natural and cultural phenomena, but with the task of providing a counterweight to the force of common popular opinion, they must themselves be uncommon, not only intellectually but morally; they must be, in the words of the 1915 Declaration, “men of high gift and character.” Such men (and now women) not only correct the errors of popular opinion, they escape popular judgment and are not to be held accountable to the same laws and restrictions that constrain ordinary citizens.

The essence of this position is displayed by the plaintiff’s argument in Urofsky v. Gilmore (2000), a Fourth Circuit case revolving around Virginia’s law forbidding state employees from accessing explicitly sexual material on state-owned computers without the permission of a supervisor. The phrase that drives the legal reasoning in the case is “matter of public concern.” In a series of decisions the Supreme Court had ruled that if public employees speak out on a matter of public concern, their First Amendment rights come into play and might outweigh the government’s interest in efficiency and organizational discipline. (A balancing test is triggered.) If, however, the speech is internal to the operations of the administrative unit, no such protection is available. The Urofsky court determined that the ability of employees to access pornography was not a matter of public concern. The plaintiffs, professors in the state university system, then detached themselves from the umbrella category of “public employees” and claimed a special status. They argued that “even if the Act is valid as to the majority of state employees, it violates the . . . academic freedom rights of professors . . . and thus is invalid as to them.” In short, we’re exceptional.

(4)— The “Academic freedom as critique” school. If academics have the special capacity to see through the conventional public wisdom and expose its contradictions, exercising that capacity is, when it comes down to it, the academic’s real job; critique— of everything— is the continuing obligation. While the “It’s just a job” school and the “For the common good” school insist that the freedom academics enjoy is exercised within the norms of the profession, those who identify academic freedom with critique (because they identify education with critique) object that this view reifies and naturalizes professional norms which are themselves the products of history, and as such are, or should be, challengeable and revisable. One should not rest complacently in the norms and standards presupposed by the current academy’s practices; one should instead interrogate those norms and make them the objects of critical scrutiny rather than the baseline parameters within which critical scrutiny is performed.

Academic freedom is understood by this school as a protection for dissent and the scope of dissent must extend to the very distinctions and boundaries the academy presently enforces. As Judith Butler (2006a) puts it, “as long as voices of dissent are only admissible if they conform to accepted professional norms, then dissent itself is limited so that it cannot take aim at those norms that are already accepted” (114). One of those norms enforces a separation between academic and political urgencies, but, Butler contends, they are not so easily distinguishable and the boundaries between them blur and change. Fixing boundaries that are permeable, she complains, has the effect of freezing the status quo and of allowing distinctions originally rooted in politics to present themselves as apolitical and natural. The result can be “a form of political liberalism that is coupled with a profoundly conservative intellectual resistance to . . . innovation” (127). From the perspective of critique, established norms are always conservative and suspect and academic freedom exists so that they can be exposed for what they are. Academic freedom, in short, is an engine of social progress and is thought to be the particular property of the left on the reasoning (which I do not affirm but report) that conservative thought is anti-progressive and protective of the status quo. It’s only a small step, really no step at all, from academic freedom as critique to the fifth school of thought.

(5)— The “Academic freedom as revolution” school. With the emergence of this school the shift from academic as a limiting adjective to freedom as an overriding concern is complete and the political agenda implicit in the “For the common good” school and the “Academic freedom as critique” school is made explicit. If Butler wants us to ask where the norms governing academic practices come from, the members of this school know: they come from the corrupt motives of agents who are embedded in the corrupt institutions that serve and reflect the corrupt values of a corrupt neoliberal society. (Got that?) The view of education that lies behind and informs this most expansive version of academic freedom is articulated by Henry Giroux (2008). The “responsibilities that come along with teaching,” he says, include fighting for

an inclusive and radical democracy by recognizing that education in the broadest sense is not just about understanding, . . . but also about providing the conditions for assuming the responsibilities we have as citizens to expose human misery and to eliminate the conditions that produce it. (128)

In this statement the line between the teacher as a professional and the teacher as a citizen disappears. Education “in the broadest sense” demands positive political action on the part of those engaged in it. Adhering to a narrow view of one’s responsibilities in the classroom amounts to a betrayal both of one’s political being and one’s pedagogical being. Academic freedom, declares Grant Farred (2008–2009), “has to be conceived as a form of political solidarity”; and he doesn’t mean solidarity with banks, corporations, pharmaceutical firms, oil companies or, for that matter, universities (355). When university obligations clash with the imperative of doing social justice, social justice always trumps. The standard views of academic freedom, members of this school complain, sequester academics in an intellectual ghetto where, like trained monkeys, they perform obedient and sterile routines. It follows, then, that one can only be true to the academy by breaking free of its constraints.

The poster boy for the “Academic freedom as revolution” school is Denis Rancourt, a physics professor at the University of Ottawa (now removed from his position) who practices what he calls “academic squatting”— turning a course with an advertised subject matter and syllabus into a workshop for revolutionary activity. Rancourt (2007) explains that one cannot adhere to the customary practices of the academy without becoming complicit with the ideology that informs them: “Academic squatting is needed because universities are dictatorships, devoid of real democracy, run by self-appointed executives who serve private capital interests.”

To read more about Versions of Academic Freedom, click here.

Add a Comment
16. Excerpt: Packaged Pleasures

9780226121277
by Gary S. Cross and Robert N. Proctor

 ***

“The Carrot and the Candy Bar”

Our topic is a revolution—as significant as anything that has tossed the world over the past two hundred years. Toward the end of the nineteenth century, a host of often ignored technologies transformed human sensual experience, changing how we eat, drink, see, hear, and feel in ways we still benefit (and suffer) from today. Modern people learned how to capture and intensify sensuality, to preserve it, and to make it portable, durable, and accessible across great reaches of social class and physical space. Our vulnerability to such a transformation traces back hundreds of thousands of years, but the revolution itself did not take place until the end of the nineteenth century, following a series of technological changes altering our ability to compress, distribute, and commercialize a vast range of pleasures.

Strangely, historians have neglected this transformation. Indeed, behind this astonishing lapse lies a common myth—that there was an age of production that somehow gave rise to an age of consumption, with historians of the former exploring industrial technology, while historians of the latter stress the social and symbolic meaning of goods. This artificial division obscures how technologies of production have transformed what and how we actually consume. Technology does far more than just increase productivity or transform work, as historians of the Industrial Revolution so often emphasize. Industrial technology has also shaped how and how much we eat, what we wear and why, and how and what (and how much!) we hear and see. And myriad other aspects of how we experience daily life—or even how we long for escape from it.

Bound to such transformations is a profound disruption in modern life, a breakdown of the age-old tension between our bodily desires and the scarcity of opportunities for fulfillment. New technologies—from the rolling of cigarettes to the recording of sound—have intensified the gratification of desires but also rendered them far more easily satisfied, often to the point of grotesque excess. An obvious example is the mechanized packaging of highly sugared foods, which began over a century ago and has led to a health and moral crisis today. Lots of media attention has focused on the irresponsibility of the food industry and the rise of recreational and workplace sedentism—but there are other ways to look at this.

It should be obvious that technology has transformed how people eat, especially with regard to the ease and speed with which it is now possible to ingest calories. Roots of such transformations go very deep: the Neolithic revolution ten-plus thousand years ago brought with it new methods of regularizing the growing of food and the world’s first possibility of elite obesity. The packaged pleasure revolution in the nineteenth century, however, made such excess possible for much larger numbers of “consumers”—a word only rarely used prior to that time. Industrial food processors learned how to pack fat, sugar, and salt into concentrated and attractive portions, and to manufacture these cheaply and in packages that could be widely distributed. Foods that were once luxuries thus became seductively commonplace. This is the first thing we need to understand.

We also need to appreciate that responsibility for the excesses of today’s consumers cannot be laid entirely at the doors of modern technology and the corporations that benefit from it. We cannot blame the food industry alone. No one is forced to eat at McDonald’s; people choose Big Macs with fries because they satisfy with convenience and affordability, just as people decide to turn on their iPods rather than listen to nature or go to a concert. But why would we make such a choice—and is it entirely a “free choice”? This brings us to a second crucial point: humans have evolved to seek high-energy foods because in prehistoric conditions of scarcity, eating such foods greatly improved their ancestors’ chances of survival. This has limited, but not entirely eliminated, our capacity to resist these foods when they no longer are scarce. And if we today crave sugar and fat and salt, that is partly because these longings must have once promoted survival, deep in the pre-Paleolithic and Paleolithic. Our taste buds respond gleefully to sugars because we are descended from herbivores and especially frugivores for whom sweet-tasting plants and fruits were neuro-marked as edible and nutritious. Poisonous plants were more often bitter-tasting. Pleasure at least in this sensory sense was often a clue to what might help one survive.

But here again is the rub. Thanks to modern industrialism, high-calorie foods once rare are now cheap and plentiful. Industrial technology has overwhelmed and undercut whatever balance may have existed between the biological needs of humans and natural scarcity. We tend to crave those foods that before modern times were rare; cravings for fat and sugar were no threat to health; indeed, they improved our chances of survival. Now, however, sugar, especially in its refined forms, is plentiful, and as a result makes us fat and otherwise unhealthy. And what is true for sugar is also true for animal fat. In our prehistoric past fat was scarce and valuable, accounting for only 2 to 4 percent of the flesh of deer, rabbits, and birds, and early humans correctly gorged whenever it was available. Today, though, factory-farmed beef can consist of 36 percent fat, and most of us expend practically no energy obtaining it. And still we gorge.

And so the candy bar, a perfect example of the engineered pleasure, wins out over the carrot and even the apple. More sugar and seemingly more varied flavors are packed into the confection than the unprocessed fruit or vegetable. In this sense our craving for a Snickers bar is partly an expression of the chimp in us, insofar as we desire energy-packed foods with maximal sugars and fat. The concentration, the packaging, and the ease of access (including affordability) all make it possible—indeed enticingly easy—to ingest far more than we know is good for us. Our biological desires have become imperfect guides for good behavior: drives born in a world of scarcity do not necessarily lead to health and happiness in a world of plenty.

But food is not the only domain where such tensions operate. Indeed, a broader historical optic reveals tensions in our response to the packaged provisioning of other sensations, and this broader perspective invites us to go beyond our current focus on food, as important as that may be.

As biological creatures we are naturally attracted to certain sights and sounds, even smells and motion, insofar as we have evolved in environments where such sensitivities helped our ancestors prevail over myriad threats to human existence. The body’s perceptual organs are, in a sense, some of our oldest tools, and much of the pleasure we take in bright colors, combinations of particular shapes, and certain kinds of movement must be rooted in prehistoric needs to identify food, threats, or mates from a distance. Today we embrace the recreational counterparts, filling our domestic spaces with visual ornaments, fixed or in motion, reminding ourselves of landscapes, colors, or shapes that provoke recall or simulate absent or even impossible worlds.

What has changed, in other words, is our access to once-rare sensations, including sounds but especially imagery. The decorated caves of southern France, once rare and ritualized space, are now tourist attractions, accessible to all through electronic media. Changes in visual technology have made possible a virtual orgy of visual culture; a 2012 count estimated over 348,000,000,000 images on the Internet, with a growth rate of about 10,000 per second. The mix and matrix of information transfer has changed accordingly: orality (and aurality) has been demoted to a certain extent, first with the rise of typography (printing) and then the published picture, and now the ubiquitous electronic image on screens of different sorts. “Seeing is believing” is an expression dating only from about 1800, signaling the surging primacy of the visual. Civilization itself celebrates the light, the visual sense, as the darkness of the night and the narrow street gradually give way to illuminated interiors, light after dark, and ever broader visual surveillance.

Humans also have preferences for certain smells, of course, even if we are (far) less discriminating than most other mammals. Technologies of odor have never been developed as intensively as those of other senses, though we should not forget that for tens of thousands of years hunters have employed dogs—one of the oldest human “tools”—to do their smelling. Smell has also sometimes marked differences between tribes and classes, rationalizing the isolation of slaves or some other subject group. The wealthy are known to have defined themselves by their scents (the ancient Greeks used mint and thyme oils for this purpose), and fragrances have been used to ward off contagions. Some philosophers believed that the scent of incense could reach and please the gods; and of course the devil smelled foul—as did sin.

Still, the olfactory sense lost much of its acuity in upright primates, and it is the rare philosopher who would base an epistemology on odor. Philosophers have always privileged sight over all other senses—which makes sense given how much of our brain is devoted to processing visual images (canine epistemology and agnotology would surely be quite different). Optico-centricity was further accentuated with the rise of novel ways of extending vision in the seventeenth century (microscopes, telescopes) and still more with the rise of photography and moving pictures. Industrial societies have continued to devalue scent, with some even trying to make the world smell-free. Pasteur’s discovery of germs meant that foul air (think miasma) lost its role in carrying disease, but efforts to remove the germs that caused such odors (especially the sewage systems installed in cities in the nineteenth century) ended up mollifying much of the stink of large urban centers. Bodily perfuming has probably been around for as long as humans have been human, but much of recent history has involved a process of deodorizing, further reducing the value of the sensitive nose.

Modern people may well gorge on sight, but we certainly remain sound-sensitive and long for music, “the perfume of hearing” in the apt metaphor of Diane Ackerman. Music has always aroused a certain spiritual consciousness and may even have facilitated social bonding among early humans. Stringed and drum instruments date back only to about 5,500 years ago (in Mesopotamia), but unambiguous flutes date back to at least 40,000 years ago; the oldest known so far is made from vulture and swan bones found in southern Germany. Singing, though, must be far older than whatever physical evidence we have for prehistoric music.

There is arguably a certain industrial utility to music, insofar as “moving and singing together made collective tasks far more efficient” (so claims historian William McNeill). As a mnemonic aid, a song “hooks onto your subconscious and won’t let go.” Music carries emotion and preserves and transports feelings when passed from one person or generation to another—think of the “Star Spangled Banner” or “La Marseillaise.” And music also marks social differences in stratified societies. In Europe by the eighteenth century, for example, people of rank had abandoned participation in the sounds and music of traditional communal festivals and spectacles. To distinguish themselves from the masses, the rich and powerful came to favor the orderly stylized sounds of chamber music—and even demanded that audiences keep silent during performances. One of the signal trends of this particular modernity is the withdrawal of elites from public festivals, creating space instead for their own exclusive music and dance to eliminate the unruly/unmanaged sounds of the street and work. Music helps forge social bonds, but it can also work to separate and to isolate, facilitating escape from community (think earbuds).

We humans also of course crave motion and bodily contact, flexing our muscles in the manner of our ancestors exhilarating in the chase. And even if we no longer chase mammoth herds with spears, we recreate elements of this excitement in our many sports, testing strength against strength or speed against speed, forcing projectiles of one sort or another into some kind of target. Dance is an equally ancient expression of this thrill of movement, with records of ritual motion appearing already on cave and rock walls of early humans. The emotion-charged dance may be diminished in elite civilized life, but it clearly reappears in the physicality of amusement park throngs at the end of the nineteenth century, and more recently in the rhythmic motions of crowds at sporting events and rock concert moshing where strangers slam and grind into each other.

Sensual pleasure is thus central to the “thick tapestry of rewards” of human evolutionary adaptation, rewards wired into the complex circuitry of the brain’s pleasure centers. Pursuit of pleasure (and avoidance of pain) was certainly not an evil in our distant past; indeed, it must have had obvious advantages in promoting evolutionary fitness. Along with other adaptive emotions (fear, surprise, and disgust, for example), pleasure and its pursuit must also have helped create capacities to bond socially—and perhaps even to use and to understand language. The joy that motivates babies to delight in rhythmic and consonant sounds, bright colors, friendly faces, and bouncing motion helps build brain connections essential for motor and cognitive maturity.

Of course the biological propensity to gorge cannot be new; that much we know from the relative constancy of the human genetic constitution over many millennia. We also know that efforts to augment or intensify sensual pleasure long predate industrial civilization. This should come as no surprise, given that, as already noted, our longings for rare delights of taste, sight, smell, sound, and motion are rooted in our prehistoric past. Humans—like wolves—have been bred to binge. But in the past, at least, nature’s parsimony meant that gorging was generally rare and its impact on our bodies, psyches, and sociability limited.

This leads us again to a critical point: pleasure is born in its paucity and scarcity sustains it. And scarcity has been a fact of life for most of human history; in fact, it is very often a precondition for pleasure. Too much of any good can lead to boredom—that is as true for music or arcade games as for ice cream or opera. Most pleasures seem to require a context of relative scarcity. Amongst our prehistoric ancestors this was naturally enforced through the rarity of honey and the all-too-infrequent opportunity for the chase. Humans eventually developed the ability, however, to create and store surpluses of pleasure-giving goods, first by cooking and preserving foods and drinks and eventually by transforming even fleeting sensory experiences into reproducible and transmissible packets of pleasure. Think about candy bars, soda pop, and cigarettes, but also photography, phonography, and motion pictures—all of which emerged during the packaged pleasure revolution.

Of course, in certain respects the defeat of scarcity has a much older history, having to do with techniques of containerization. Prior to the Neolithic, circa ten thousand years ago, humans had little in the way of either technical means or social organization to store any kind of sensual surplus (though meats may have been stashed the way some nonhuman predators do). Farming and its associated technics changed this. After hundreds of thousands of years of scavenging and predation, people in this new era began to grow their own food—and then to save and preserve it in containers, especially in pots made from clay but also in bags made from skins or fibers from plants. Agriculture seems to have led to the world’s first conspicuous inequalities in wealth, but also the first routine encounters with obesity and other sins of the flesh (drunkenness, for example). Of course the rich—the rulers and priests of ancient city-states and empires or the lords and abbots of religious centers in the Middle Ages—were able to satisfy sensual longings more often, and in some cases continually.

While Christianity was in part a reaction to this sensual indulgence, being originally a religion of the excluded slave and the appalled rich, medieval aristocrats returned to the ancient love of sweet and sour dishes, favoring roasted game (a throwback to the preagricultural era) and the absurd notion that torturing animals before killing them made for the tastiest meats. Medieval European nobility mixed sex, smell, and taste in their large midday meals and frequent evening banquets. Christian church fathers banned perfumes and roses as Roman decadence, but treatments of this sort—along with passions for pungent flavors and scents—were revived with the Crusades and intimate contact with the Orient.

Until recently, pursuit of pleasure on such an opulent scale was confined to those tiny minorities with regular access to the resources to contain and intensify nature. Since antiquity, in fact, the powerful have often been snobbish killjoys, trying to restrict what the poor were allowed to eat, wear, and enjoy. Sometimes this made economic (if invidious) sense—as when England’s Edward III rationed the diet of servants during shortages that followed the Black Death. In the sixteenth century, French law prohibited the eating of fish and meat at the same meal in hopes of preserving scarce supplies. And given the low output of agriculture, there was a certain logic underlying the rationing of access to “luxuries.” But the powerful sometimes seem to have relished denying pleasure to others. How else do we explain sumptuary laws that prohibited the commoner from wearing colorful and costly clothing reserved for aristocrats?

Access to pleasure has long been an expression of privilege and power, but much can be made with little, and rarely has pleasurable display been totally suppressed in any culture. Think of the ceremonies surrounding seasonal festivals, especially the gathering of harvest surplus, when humans drenched themselves in the senses that seemed almost to ache for expression. Think of the Bacchanalia of the Greeks, the Saturnalia of the Romans, the Mardi Gras of medieval Europeans, or the orgies of feasting, dancing, music, and colorful costumes of any society whose everyday world of scarcity is forgotten in bingeing after harvest. Agriculture produced cycles of carnival and Lent, “a self-adjusting gastric equilibrium,” in the words of one historian.

Of course there are many examples of ancient philosophers and sages seeking to limit the hedonism of the privileged (and the festival culture of the poor). Certainly there are ancients who embraced the virtues of moderation, as in Aristotle’s “golden mean” or Confucian ideals of restrained desire. Hebrew prophets, Puritans, Jesuits, and countless Asian ascetics likewise attempted to rein in the fêtes of the senses. Medieval authorities in Europe forbade the eating of meat on Wednesdays, Fridays, and numerous fast days that added up to more than 150 days a year. The classical ideal of moderation was revived, and the moral superiority of grain-based foods was defended. Gluttony was condemned along with lust. Pleasure was to be regulated even in the afterlife, insofar as the Christian heaven was not for pleasure but for self-improvement. These and other ascetic moralities arguably helped people cope with uncertain supplies, putting a brake also on the rapacious greed of the rich and powerful. Curbing of excess extended to all manner of “pleasures of the flesh,” including those that, like sex, were not necessarily even scarce.

Dance came under suspicion in this regard, especially in its ecstatic form. European explorers frowned on the gesticulations of “possessed natives” whom they encountered in Africa and the Americas in the sixteenth and seventeenth centuries. At the same time, European elites smothered social dancing in the towns and villages of their own societies. The reasons were many. Clergy demanded that their holy days and rituals be protected from defilement by the boisterous and even sacrilegious customs of the frolicking crowd; the rich also chose to withdraw from—and then suppressed—the emotional intensity of common people’s celebrations, retiring instead to the confines of their private gatherings and sedate dances. The military also needed a new type of soldier and new ways of preparing men for war: the demand was no longer to fire up the emotions of soldiers to prepare them for hand-to-hand combat; the new need was to drill and discipline troops to march unflinching into musket and cannon fire, with individual fighters acting as precision components in a machine. The regular rhythms of the military march served this purpose better than the ecstatic dance.

Even when people found ways of intensifying sensation (as in the distillation of alcoholic spirits), state and church authorities were often able to enforce limits, sometimes by harsh means. In London in the 1720s, authorities repressed the widespread and addictive use of gin (a juniper-flavored liquor). At the beginnings of the Industrial Revolution, just as unleashing desire was becoming respectable, philosophers such as Adam Smith and David Hume still mused about the need for personal restraint and moral sympathies.

By this time, and increasingly over the course of the nineteenth century, especially between about 1880 and 1910, these traditional calls for moderation and self-control were starting to face a new kind of challenge, thanks to new techniques of containerization and intensification that would culminate in the packaged pleasure revolution. New kinds of machines brought new sensations to ordinary people, producing goods that for the first time could be made quite cheap and easily storable and portable. Canned food defeated the seasons, extending the availability of fruits and vegetables to the entirety of the year. Candy bars purchased at any newsstand or convenience store replaced the rare encounter with the honeycomb or wild strawberry. And while our more immediate predecessors may have enjoyed a pipe of tobacco or a draft of warm beer, the deadly convenience of the cigarette and the refreshing coolness of the chilled beverage came within the grasp of the masses only toward the end of the nineteenth century. And this revolution in the range and intensity of sensation radically upset the traditional relationship between desire and scarcity.

A similar process occurred with other sensory delights. While early-nineteenth-century Americans and Europeans thrilled at the sight of painted dioramas and magic lantern shows, nothing compared to the spectacle of fast-paced police chases in the one-reel movies viewable after 1900. Opera was a privileged treat of the few in lavish public places, but imagine the revolution wrought by the 1904 hard wax cylinder phonograph, when Caruso could be called upon to sing in the family parlor whenever (and however often) one wanted. Daredevils in Vanuatu dove from high places holding vines long before bungee jumping became a fad; even so, there was nothing like the mass-market calibrated delivery of physical thrills before the roller coaster, popularized in the 1890s. We find something similar even with binge partying: while peoples had long celebrated surpluses in festivals, they typically did so only on those rare days designated by the authorities. By the end of the nineteenth century, however, festive pleasures of a more programmed sort had become widely available on demand in the modern commercial amusement park.

Especially important is how the packaged pleasure intensified (certain aspects of) human sensory experience. An extreme example is when opium, formerly chewed, smoked, or drunk as tea, was transformed through distillation into morphine and eventually heroin—and then injected directly into the bloodstream with the newly invented syringe in the 1850s. The creation of a wide variety of “tubes” like the syringe for delivering chemically purified, intense sensation was characteristic of much of this new technology—which we shall describe in terms of “tubularization.” The cigarette is another fateful example: tobacco smoking was made cheap, convenient, and “mild” (i.e., deadly) with the advent of James Bonsack’s automated cigarette rolling machine (in the 1880s) and new methods of curing tobacco. Bonsack’s machine lowered the cost of manufacturing by an order of magnitude, and new methods of chemical processing (such as flue curing) allowed a milder, less alkaline smoke to be drawn deep into the lungs. A new mass-market consumer “good” was born, accompanied by mass addiction and mass death from maladies of the heart and lungs.

The “tubing” of tobacco into cigarettes was closely related to techniques used in packing and packaging many other commercial products. Think of mechanized canning—culminating in the double-seamed cylinder of the “sanitary” can-making machinery of 1904—and mechanized bottle and cap making from the late 1890s. New forms of sugar consumption appeared with the invention of soda fountain drinks. Coca-Cola was first served in drug stores in 1886 and in bottles by the end of the century, and in the 1890s the mixing of sugar with bitter chocolate led to candy bars, such as Hershey’s in 1900. Packaged pleasures of this sort—offered in conveniently portable portions with carefully calibrated constituents—allowed manufacturers to claim to have surpassed the sensuous joys of paradise. Chemists also began to be hired to see what new kinds of foods and drugs could be synthesized to surpass the taste, smell, and look of anything nature had created. A new discipline of “marketing” came of age about this time—the word was coined in 1884—with the task of creating demand for this riot of new products, decked out increasingly in colorful and striking labels with eye- and ear-catching slogans.

New technologies also sped up our consumption of visual, auditory, and motion sensoria. In 1839 the Daguerreotype revolutionized the familiar curiosity of the camera obscura—a dark room featuring a pinhole that would project an image of the outside world onto an interior wall—by chemically capturing that image on a metal plate in a miniaturized “camera” (meaning literally “room”). While these early photographs required long periods of exposure to fix an image, that time dramatically declined over the course of the century, allowing by 1888 the amateur snapshot camera and only three years later the motion picture camera. The effect, as we shall see, was a sea change in how we view and recollect the world. Sound was also captured (and preserved and sold) about this same time. The phonograph, invented in 1877 by Thomas Edison, became a new way of experiencing sound when improved and domesticated. And Emile Berliner’s “record” of 1887 made possible the mass production of sound on stamped-out discs, capturing a concert or a speech in a two- or three-minute record available to anyone, anywhere, with the appropriate gear.

Access and speed took another sensual twist when a Midwesterner by the name of La Marcus Thompson introduced the first mechanized roller coaster, in 1884. Bodily sensations that might have signaled danger or even death on a real train were packed into a two- or three-minute adventure trip on a rail “gravity ride.” Adding another dimension to the thrill was Thompson’s scenic railroad (in 1886) with its artificial tunnels and painted images of exotic natural or fantasy scenes. This was a new form of concentrated pleasure, distilling sights and sounds that formerly would have required days of “regular travel.” Rides, in combination with an array of novel multisensory spectacles, were concentrated into dedicated “amusement parks,” offering a kind of packaged recreational experience, accessible (very often) via the new trolley cars of the 1890s. Some of the earliest and most famous were those built at Coney Island on the southernmost tip of Brooklyn, New York.

Innovations of this sort led us into new worlds of sensory access, speed, and intensity. Distance and season were no longer restraints, as canned and bottled goods moved by rail, ship, and eventually truck across vast stretches of space and climate—with mixed outcomes for human health and well-being.

Some of these new technologies nourished and improved our bodies with cheaper, more hygienic, and varied food and drink; others offered more convenient and effective medicines and toiletries. Still others provided unprecedented opportunities to enjoy the beauty of nature (or at least its image), along with music and new kinds of “visual arts.” Amusement rides gave us (relatively) harm-free ways of experiencing the ecstatic and the exhilaration of danger—plus a kind of simulated or virtual travel; photography froze the evanescent sight, preserving images on a scale never previously possible, and with near-perfect fidelity. Yet packaged pleasures also led to new health and moral threats.

In the most extreme form, concentrating intoxicants led to addictions—physical dependencies that often required ever-increasing dosages to maintain a constant effect, and substantial physical discomfort accompanying withdrawal. Here of course the syringe injection of distilled opiates is the paradigmatic example, and addiction to tobacco and alcoholic drinks must also be included. But the impact of concentrated high-energy foods is not entirely different. Fat- or sugar-rich foods produce not just energy but very often endorphins, morphine-like painkillers that offer comfort and calm. That is one reason they are called “comfort” foods. These rich foods cause neurotransmitters in the brain to go out of balance, resulting in cravings. By contrast, the natural physical pleasures of exercise are much less addicting because we get tired; and some “excess”—here pain is gain—can actually make us healthier.

Not all packaged pleasure dependencies were so obviously chemical. Engineered pleasures often create astonishment and delight when first introduced, for example, but can also raise expectations and dull sensibilities for “unpackaged” stimuli, be they nature’s wonders or unaided convivial and social delights. The pleasures of recorded sound, the captured image, and even the amusement park ride and electronic game often satisfy with a kind of ratcheting effect, rendering the visual, auditory, and motion pleasures in uncommodified nature and society boring. In this sense, the packaging of pleasure can turn the once rare into an everyday, even numbing, occurrence. The world beyond the package becomes less thrilling, less desirable. In the wake of the telephoto lens and artful editing of film—with all the “boring bits” taken out—nature itself can appear dull or impoverished. Why go to the waterfall or forest if you can experience these in compressed form at your local zoo or theme park? Or on IMAX or your widescreen, high-def TV? Packaged pleasures of this sort may not induce physical dependencies, but they can create inflated expectations or even degrade other, less distilled or concentrated, kinds of experiences.

Another point we shall be making is that packaged pleasures have often de-socialized pleasure taking. Many create neurological responses similar to those of religious ecstasies, physical exercise, and social or even sexual intercourse, and can end up substituting for, or displacing, such enjoyments. Weak wine and mild natural hallucinogens have long enhanced spiritual and social experience, but the modern packaged pleasure often has the effect of privatizing satisfaction, isolating it from the crowd. Think of the privatization of public space through portable mp3 players, or the isolating effect of television.

The key point to appreciate is that we today live in a vastly different world from that of peoples living prior to the packaged pleasure revolution, when a broad range of sensual pleasures came to be bottled, canned, condensed, distilled, and otherwise intensified. The impact of this revolution has not been uniform, and we acknowledge and stress these differences, but it does seem to have transformed our sensory universe in ways we are only beginning to understand.

The packaged pleasures we shall be considering in this book include cigarettes, candy and soda pop, phonograph records, photographs, movies, amusement park spectacles, and a few other odds and ends.

But of course not all commodities that are tubed, packed, portable, or preserved can be considered packaged pleasures. For our purposes, we can identify several key and interrelated elements:

  1. The packaged pleasure is an engineered commodity that contains, concentrates, preserves, and very often intensifies some form of sensual satisfaction.
  2. It is generally speaking inexpensive, easy to access (readily at hand), and very often portable and storable, often in a domestic setting.
  3. It is typically wrapped and labeled and thus often marketed by branding. Although often portable, in the case of the amusement park, it can also be enclosed and branded in a contained and fixed space.
  4. The packaged pleasure is often produced by companies with broad regional if not national or even global reach, creating a recognizable bond between the individual consumer and the corporate producer.

Of course we are well aware that many other consumer products exhibit one or more of these attributes—clothes, cars, books, packaged cereals, cocaine, pornography, and department stores just to name a few. Our focus will be on those packaged pleasures that signal key features of the early part of this transformation, and notably those that involve the elements of containment, compression, intensification, mobilization, and commodification. And we recognize that we will not offer an encyclopedic survey of pleasures that have been intensified and packaged—we won’t be treating the history of pornography or perfume, for example, and will consider narcotics and alcoholic beverages only briefly.

We should also be clear that the packaged pleasure revolution is on-going and in many ways has strengthened over time, as pleasure engineers find ever-more sophisticated ways of intensifying desire. And we’ll consider this history at least briefly. Since funneled fun has a tendency to bore us over time, pleasure engineers have repeatedly raised the bar on sensory intensity. Nuts and nougat were added to the simple chocolate bar, and cigarette makers added flavorants and chemicals to enhance or optimize nicotine delivery. The visual panel in motion pictures has been made more alluring with increasingly rapid cuts, and recorded sound has seen a dramatic expansion in both fidelity and acoustical range. Roller coasters went ever higher and faster while also becoming ever safer. Pornography is delivered with ever-greater convenience and is now basically free to anyone with an Internet connection. Even opera fans can now hear (and see) their favorite arias with a simple click on YouTube—at no cost and without leaving home (or sitting through those “boring bits”). Entertainment without the “fiber,” one could say.

Another outcome of the packaged pleasure revolution, then, is the progressive refinement—really reengineering—of sensory experience in the century or so since its beginnings. Optimization of satisfactions has become a big part of this, as one might expect from the fact that packaged pleasures are very often commodities produced by corporations with research and marketing departments. Menthol was added to cigarettes in the 1930s, with the idea of turning tobacco back into a kind of medicine. Ammonia and levulinic acid and candied flavors of various sorts were later added to augment the nicotine “kick,” but also to appeal to younger tastes. Flavor chemists meanwhile learned to manipulate the jolt of “soft drinks” by refining dosings of caffeine and sugar, while candy makers developed nuanced “flavor profiles”—surpassing traditional hard candy, for example, with the sensory complex of a Snickers.

Optimization and calibration we also find in other parts of this revolution. The intense thrill of a loop-de-loop ride, debuted first at Coney Island in the 1890s, gave way to the more varied sensuality of “themed” rides. Roller coasters have been designed to go to the edge of exhilaration, stopping just short of the point of nausea or injury. The same principle works with gambling, where even losers keep playing because of the carefully calibrated conditioning that comes with the periodic (and precisely calculated) win built into the game. Pleasure engineers have learned how to create video games that are easy enough to engage newcomers, but complex enough to sustain the interest of experienced players. Gaming engineers even seek to encourage (or require) physical movement and social interactions—think Wii games—to counter critics cautioning against the bodily and social negatives of overly virtualized lives.

Our focus is on the origins of the technologies involved in such transformations, though we also are aware that such novelties have always encountered critics, those who worry that an oversated consuming public would lose control and abandon work and family responsibilities. But the reality in terms of social impact often has been quite different. Few of these optimized pleasures have ever undermined the willingness of consumers to work and obey—and they have done little to undermine nerves and sensibilities (as some have feared). Indeed they have often contributed to a new work ethic driven by new needs and imperatives to earn and toil ever more in order to be able to afford the delights of movies, candy, soda, cigarettes, and the rest of the show. Over time, and often a surprisingly short time, these commodified delights have become a kind of second sensory nature—customary and accepted ways of eating, inhaling, seeing and hearing, and feeling.

Scholars have long debated the impact of “modern consumer culture,” albeit too often in negative terms without considering the historical origins of the phenomena in question. In the 1890s, the French sociologist Émile Durkheim feared that the “masses” would be enervated, even immobilized, by technical modernity’s overwhelming assault on the senses. And Aldous Huxley in his Brave New World (1932) warned of a coming culture of commoditized hedonism oblivious to tyranny. Jeremiahs of this sort have singled out different culprits, with blame most often placed on the “weaknesses” of the masses or the manipulation of merchandisers, with the hope expressed that the virtuous few in their celebration of nature and simplicity would constitute a bulwark against immediate gratification and degrading consumerism. These critics have been opposed by apologists for “democratic access” to the choice and comforts of modern consumer society—who champion the idea that only killjoy elitists could find fault in the delights of pleasure engineering. This perspective dominates a broad swath of social science—especially from neoclassical economists (think of George Stigler and Gary Becker’s famous dictum on the nondisputability of taste).

We argue instead that we need to abandon the overgeneralization common to both jeremiahs and free-market populists. Of course it is true that the very notion of a “packaged pleasure revolution” suggests certain links between the cigarette, bottled soda, phonograph records, cameras, movies, and even amusement parks. But the impact of these various inventions over the decades has been very different, and cannot be subsumed under some procrustean notion of “modern consumer culture.” Rather, as we shall see, their distinct histories suggest very different effects on our bodies and our cultures that would seem to require very different personal and policy responses. Our view is that the sale of cigarettes (as presently designed) should be heavily regulated and ultimately banned, for example, while soda should probably only be shamed and (heavily) taxed. And we make no policy recommendations for film or sound “packages.” But we certainly need to better understand how these technologies have shaped and refined (distorted?) our sensibilities.

We should also keep in mind that there are global consequences to the packaged pleasure revolution—and that most of these lie in the future. This is unfinished business. Overconsumption is part of the problem, as is the undermining of world health (notably from processed sugar and cigarettes). The revolution is ongoing, as the engineered world of compressed sensibility spreads to ever-different parts of the globe, and ever-different parts of human anatomy and sociability. It may be hard to opt out of or to escape from this brave new world, but the conditions under which it arose are certainly worth understanding and confronting.

This book takes on a lot. Our hope is to move us beyond the classic debate between the jeremiahs against consumerism and the defenders of a democratic access to commercial delights. We root mass consumption in a sensory revolution facilitated by techniques that upset the ancient balance between desire and scarcity. We take a fresh look at how technology has transformed our nature.

To read more about Packaged Pleasures, click here.

Add a Comment
17. The Professional: Donald E. Westlake


 

Deadspin columnist/Yankees fan/out-of-print litterateur Alex Belth recently sat down over email with Levi Stahl, University of Chicago Press promotions director and editor of The Getaway Car: A Donald Westlake Nonfiction Miscellany. Their resulting conversation, published today at Deadspin, along with an excerpt from the book, includes the history of their engagement with the Parker novels, Jimmy the Kid’s amazing cover design, culling through Westlake’s archive, an obscure British comedy show, and the perils of professional envy vs. professional admiration. You can read the interview in full here, and have a look at a clip after the jump below.

***

Q: In a letter, Westlake described the difference between an author and a writer. A writer was a hack, a professional. There’s something appealing and unpretentious about this but does it take on a romance of its own? I’m not saying he was being a phony but do you think that difference between a writer and an author is that great?

LS: I suspect that it’s not, and that to some extent even Westlake himself would have disagreed with his younger self by the end of his life. I think the key distinction for him, before which all others pale, was what your goal was: Were you sitting down every day to make a living with your pen? Or were you, as he put it ironically in a letter to a friend who was creating an MFA program, “enhanc[ing] your leisure hours by refining the uniqueness of your storytelling talents”? If the former, you’re a writer, full stop. If the latter, then you probably have different goals from Westlake and his fellow hacks.

But does a true hack veer off course regularly to try something new? Does a hack limit himself to only writing about his meal ticket (John Dortmunder) every three books, max, in order not to burn him out? Does a hack, as Westlake put it in a late letter to his friend and former agent Henry Morrison, “follow what interests [him],” to the likely detriment of his career? Westlake was always a commercial writer, but at the same time, he never let commerce define him. Craft defined him, and while craft can be employed in the service of something a writer doesn’t care about at all, it is much easier to call up and deploy effectively if the work it’s being applied to has also engaged something deeper in the writer. You don’t write a hundred books with almost no lousy sentences if you’re truly a hack.

Read more about The Getaway Car here.

 

 

Add a Comment
18. Literature in translation

UCP_translations_2014_cover

In the wake of the controversy (or welcomed interest, depending on your position) surrounding Patrick Modiano’s recent Nobel Prize in Literature, the AAUP circulated the hashtag #litintranslation in order to promote those books published by university presses that attempt to overcome the dearth of literature in translation that has long prevailed in American letters. In fact, Yale University Press already had plans to publish Modiano’s Suspended Sentences: Three Novellas this fall, as part of their Margellos World Republic of Letters series. A quick review of the tweets circulating under #litintranslation reveals an equally robust list of works brought into the English language by the university press community, including several by the University of Chicago Press. With that in mind, and on the heels of the Frankfurt Book Fair, we’re debuting our sales catalog Translations from Chicago, where, among hundreds of storied works spanning the disciplines, you can find:

The Selected Letters of Charles Baudelaire: The Conquest of Solitude, ed. and trans. by Rosemary Lloyd

Vegetables: A Biography by Évelyne Bloch-Dano, trans. by Teresa Lavender Fagan

One Must Also Be Hungarian by Adam Biro, trans. by Catherine Tihanyi

Sketch for a Self-Analysis by Pierre Bourdieu, trans. by Richard Nice

The Beast and the Sovereign, Vols. I and II by Jacques Derrida, trans. by Geoffrey Bennington

The Voice Imitator by Thomas Bernhard, trans. by Kenneth J. Northcott

Youth without Youth by Mircea Eliade, trans. by Mac Linscott Ricketts, with a Foreword by Francis Ford Coppola

To see the complete catalog in PDF form, click here.

 

Add a Comment
19. Rachel Sussman and The Oldest Living Things in the World

9780226057507

 

This past week, Rachel Sussman’s colossal photography project—and its associated book—The Oldest Living Things in the World, which documents her attempts to photograph continuously living organisms that are 2,000 years old and older, was profiled by the New Yorker:

To find the oldest living thing in New York City, set out from Staten Island’s West Shore Plaza mall (Chuck E. Cheese’s, Burlington Coat Factory, D.M.V.). Take a right, pass Industry Road, go left. The urban bleakness will fade into a litter-strewn route that bisects a nature preserve called Saw Mill Creek Marsh. Check the tides, and wear rubber boots; trudging through the muddy wetlands is necessary.

The other day, directions in hand, Rachel Sussman, a photographer from Greenpoint, Brooklyn, went looking for the city’s most antiquated resident: a colony of Spartina alterniflora or Spartina patens cordgrass which, she suspects, has been cloning and re-cloning itself for millennia.

Not simply the story of a cordgrass selfie, Sussman’s pursuit becomes contextualized by the lives—and deaths—of our fragile ecological forebears, and her desire to document their existence while they are still of the earth. In support of the project, Sussman has a series of upcoming events surrounding The Oldest Living Things in the World. You can read more at her website, or see a listing of public events below:

EXHIBITIONS:

Imagining Deep Time (a cultural program of the National Academy of Sciences in Washington, DC), on view from August 28, 2014 to January 15, 2015

Another Green World, an eco-themed group exhibition at NYU’s Gallatin Galleries, featuring Nina Katchadourian, Mitchell Joaquim, William Lamson, Mary Mattingly, Melanie Baker and Joseph Heidecker, on view from September 12, 2014 to October 15, 2014

The Oldest Living Things in the World, a solo exhibition at Pioneer Works in Brooklyn, NY, from September 15, 2014 to November 2, 2014, including a closing program

TALKS:

Sept 18th: a discussion in conjunction with the National Academy of Sciences exhibition Imagining Deep Time for DASER (DC Art Science Evening Rendezvous), Washington, DC (free and open to the public)

Nov 20th: an artist’s talk at the Museum of Contemporary Photography, Chicago

To read more about The Oldest Living Things in the World, click here.

 

 

Add a Comment
20. Alison Bechdel, MacArthur Fellow, 2014


Image via Out Magazine

Congratulations to cartoonist and graphic memoirist Alison Bechdel, one of the 2014 MacArthur Foundation Fellows, or “genius grant” honorees, whose work in comics and narrative has helped to transform and elevate our understanding of women—“Dykes to Watch Out For” in all their expressions, mothers and daughters, and the implications of social and political changes on those who dwell every day in a broad variety of female-identified bodies. Additionally, Bechdel is well known in film studies circles for her deceptively simple three-question test for gender parity, which has drawn broad attention since it was first delivered in her 1985 strip “The Rule.”

From the Washington Post:

1) Does it have two female characters?

2) Who talk to each other?

3) About something other than a man?

If the answer to all three questions is yes, the film passes the Bechdel test.

Bechdel is also the subject of two feature-length interviews in Hillary L. Chute’s Outside the Box: Interviews with Contemporary Cartoonists, and a contributor to Critical Inquiry’s special issue Comics & Media, both of which were released this year. Below, see video footage of a Bechdel/Chute interview from 2011, when Chute visited Bechdel at her home in Jericho, Vermont:

To read more about Outside the Box or the Comics & Media issue of CI, click here.

21. House of Debt on FT’s shortlist for Business Book of the Year


Congrats (!) to House of Debt authors Atif Mian and Amir Sufi for making the shortlist for the Financial Times and McKinsey Business Book of the Year. Now in competition with five other titles from an initial field of 300 nominations, House of Debt—and its story of the predatory lending practices behind the Great American Recession, the burden of consumer debt on fragile markets, and the need for government-bailed banks to share risk-taking rather than skirt blame—will find out its fate at the November 11th award ceremony.

From the official announcement:

“The provocative questions raised by this year’s titles have been addressed with originality, depth of research and lively writing.”

 The award, now in its 10th edition, aims to find the book that provides “the most compelling and enjoyable insight into modern business issues, including management, finance and economics.” The judges—who include former winners Mohamed El-Erian and Steve Coll—also gave preference this year to books “whose influence is most likely to stand the test of time.”

To read more about House of Debt, including a list of reviews and a link to the authors’ blog, click here.

22. For Mark Rothko on his birthday


James E. B. Breslin’s book on the life of painter Mark Rothko helped redefine the field of the artist’s biography and, in its day, was praised by outlets such as the New York Times Book Review (on the front cover, no less), where critic Hilton Kramer described it as “the best life of an American painter that has yet been written.” On what would have been the artist’s 111th birthday, Biographile revisited Breslin’s work:

In Breslin’s book, we follow Rothko’s search for the approach that would become such a significant contribution to art and painting in the twentieth century. He was in his forties before he started making his “multiforms,” and even after he started painting them in his studio, he didn’t show them right away. Breslin dissects and details the techniques Rothko developed upon creating his greatest works. He rotated the canvas as he worked, so that the painting wouldn’t be weighted in any one direction. He spent much more time in the studio figuring out a painting than actually painting it, and he filled a canvas as many as twenty times before feeling it was done. Maybe most important, he worked tirelessly to eliminate any recognizable shapes from the multiforms. They needed to come into the world fully formed, not as interpretations of any real-life objects, but meaningful visions in and of themselves.

Nathan Gelgud, the author behind the Biographile piece, accompanied his writing with a couple of illustrated riffs on the artist, one of which we feature below; the other you can seek out (and read the review in full) at Biographile.


Mark Rothko by Nathan Gelgud, 2014. Image via Gelgud’s Biographile review.

To read more about Mark Rothko: A Biography, click here.

23. Our free e-book for October: In Defense of Negativity


Americans tend to see negative campaign ads as just that: negative. Pundits, journalists, voters, and scholars frequently complain that such ads undermine elections and even democratic government itself. But John G. Geer here takes the opposite stance, arguing that when political candidates attack each other, raising doubts about each other’s views and qualifications, voters—and the democratic process—benefit.

In Defense of Negativity, Geer’s study of negative advertising in presidential campaigns from 1960 to 2004, asserts that the proliferating attack ads are far more likely than positive ads to focus on salient political issues, rather than politicians’ personal characteristics. Accordingly, the ads enrich the democratic process, providing voters with relevant and substantial information before they head to the polls.

An important and timely contribution to American political discourse, In Defense of Negativity concludes that if we want campaigns to grapple with relevant issues and address real problems, negative ads just might be the solution.

“Geer has set out to challenge the widely held belief that attack ads and negative campaigns are destroying democracy. Quite the opposite, he argues in his provocative new book: Negativity is good for you and for the political system. . . . In Defense of Negativity adds a new argument to the debate about America’s polarized politics, and in doing so it asserts that voters are less bothered by today’s partisan climate than many believe. If there are problems—and there are—Geer says it’s time to stop blaming it all on 30-second spots.”—Washington Post

Download your free copy of In Defense of Negativity here.

Watch “The Bear,” one of those 30-second spots (less an attack ad, and more a foray into American surrealism) produced for Ronald Reagan’s 1984 presidential campaign, below:

24. Excerpt: Roger Grenier’s Palace of Books


 

“Private Life”

The expansion of the media has put the writer in the spotlight, even if, nowadays, people who write have lost much of their prestige and their importance in society. Some of them find themselves afflicted with a lack of privacy once reserved for movie stars. Sometimes they ask for it. Michel Contat writes about “this form of media totalitarianism that gives the right to know everything about someone based on the simple fact that he or she has created a public image.” This phenomenon is not so new, if you think about Sartre and Beauvoir, not to mention Musset and George Sand, Dante and Beatrice, Petrarch and Laura, or even the self-dramatizing Byron or Chateaubriand. Nowadays we have scribblers who manage to pass themselves off as writers because they’ve already made a name for themselves as celebrities.

Gérard de Nerval was a victim of the public’s need to know, due to conditions that would be unimaginable today. Jules Janin, in the Journal des débats of March 1, 1841; Alexandre Dumas, in Le Mousquetaire of December 10, 1853; Eugène de Mirecourt in a little monograph in his series Les Contemporains in 1854, wrote openly about their friend’s mental illness. Poor Gérard wrote to his father on June 12, 1854, in response to Mirecourt’s pamphlet on “necrological biography,” and said he was being made into “the hero of a novel.” He dedicated Daughters of Fire to Alexandre Dumas: “I dedicate this book to you, my dear master, as I dedicated Lorely to Jules Janin. You have the same claim on my gratitude. A few years ago, I was thought dead, and he wrote my biography. A few days ago, I was thought mad, and you devoted some of your most charming lines to an epitaph for my spirit. That’s a good deal of glory to advance on my due inheritance.”

Is knowing the private life of an author important for understanding his or her work?

The debate was renewed with great panache by Marcel Proust in By Way of Sainte-Beuve. Proust noticed that Sainte-Beuve, a subtle and cultured man, made nothing but bad judgment calls as to the worth of his contemporaries. Why? Jealousy doesn’t explain it. He couldn’t have been jealous of writers like Stendhal or Baudelaire, who were practically unknown. The fault was with his method. Sainte-Beuve wanted to adopt a scientific attitude. “For me,” he wrote, “literature is indistinguishable from the rest of man. As long as you have not asked yourself a certain number of questions about an author and answered them satisfactorily, if only for your private benefit and sotto voce, you cannot be sure of possessing him entirely. And this is true, though these questions may seem to be altogether foreign to the nature of his writings. For example, what were his religious views? How did the sight of nature affect him? What was he like in his dealings with women, and in his feelings about money? Was he rich? Was he poor? What was his regimen? His daily habits? Finally, what was his persistent vice or weakness, for every man has one. Each of these questions is valuable in judging an author or his book.”

Sainte-Beuve decides that he is engaging in literary botany.

Proust finds all this knowledge useless and likely to mislead the reader: “A book is the product of a different self than the self we manifest in our habits, in our social life, in our vices. If we would try to understand that particular self, it is by searching deep within us and trying to reconstruct it there, that we may arrive at it. Nothing can exempt us from this effort of the heart.”

Proust also writes: “How does having been a friend of Stendhal’s make you better suited to judge him? It would be more likely to get in the way.” Sainte-Beuve, who knew Stendhal and Stendhal’s friends, found his novels “frankly detestable.”

What Proust holds against Sainte-Beuve is that he made no distinction between conversation and the occupation of writing, “in which, in solitude, quieting the speech which belongs as much to others as to ourselves, we come face to face once more with ourselves, and seek to hear and to render the true sound of our hearts.”

Proust admires Balzac, all while thinking that from what he knew of Balzac’s personal life, his letters to his family and to Madame Hanska, he was a vulgar human being. Stefan Zweig raises the same issue. He admires Balzac the writer and seeks reasons to admire the man. He is infuriated because he can’t find any. He has discovered that genius is incomprehensible.

Gaëtan Picon thinks that if Proust attacks Sainte-Beuve so violently it’s because he needs to believe that genius is based on a secret distinct from intelligence. That a man whose life is frivolous and empty, a failure, can nonetheless create a great work. The question is inevitable, beginning with the case of Proust himself. How did this intolerable social climber, whom Lucien Daudet called “an atrocious insect,” become the author of In Search of Lost Time? Paul Valéry concludes his famous study of Leonardo da Vinci with a line that shows in a striking way how much distance he puts between an artist and his work: “As for the true Leonardo, he was what he was.”

Flaubert would have sided with Proust against his friend Sainte-Beuve. He writes to Ernest Feydeau on August 21, 1859, with his customary truculence, “Life is impossible now! The minute you’re an artist, the gentlemen grocers, the auditors of record, the customs agents, the cobblers and all the rest enjoy themselves at your expense! People inform them as to whether you’re a brunette or a blond, facetious or melancholy, how many moons since your birth, whether you’re given to drink or play the harmonica. I believe that on the contrary, the writer must leave behind nothing but his work. His life doesn’t matter. Wipe it away!”

He doesn’t stop there, but insists: “The artist must arrange things so as to make us believe in a posterity he hasn’t experienced.”

You’d have to put Chekhov in Proust’s camp. From his Notebook: “How pleasant it is to respect people! When I see books, I am not concerned with how the authors loved or played cards; I only see their marvelous works.”

The same is true for Henry James, who writes in his short story “The Real Right Thing”: “[. . .] his friend would at moments have shown himself as holding that the ‘literary’ career might—save in the case of a Johnson or a Scott, with a Boswell and a Lockhart to help—best content itself to be represented. The artist was what he did—he was nothing else.” In this fantasy tale, the ghost of a dead writer appears to prevent his biography from being written.

Proust seems rigid. He is right to say that there is a truth for the writer, especially if he’s a genius, that remains a mystery and cannot be explained by social appearance or private life. But he also presents a counter-argument to his own theory when he writes in Jean Santeuil: “[. . .] our lives are not wholly separated from our works. All the scenes that I have narrated here, I have lived through.”

Most of the time, the characters in Jean Santeuil and the Search are indiscreet, eager to know everything about the artists they encounter. Freud, whose theory is close to Proust’s, doesn’t hold back from delving into the private life of Leonardo da Vinci and a few others. J.-B. Pontalis suggests with a touch of malice that Proust and Freud take the opposite tack to Sainte-Beuve’s because they don’t want their own private lives examined: if Proust’s perversion of torturing rats was discovered. . . . The private lives of others are another story!

Nietzsche also pondered the question, but from a different point of view. He thinks that knowing an author distorts our opinion of his work and his person. “We read the writings of our acquaintances (friends and foes) in a twofold sense, inasmuch as our knowledge continually whispers to us: ‘this is by him, a sign of his inner nature, his experiences, his talent,’ while another kind of knowledge simultaneously seeks to determine what his work is worth in and of itself, what evaluation it deserves apart from its author, what enrichment of knowledge it brings with it. As goes without saying, these two kinds of reading and evaluating confound one another.”

But what to do in cases where the work can only be explained by the life? Why deprive ourselves of this source of knowledge?

In the case of Albert Camus, once you know about his impoverished childhood in an illiterate milieu (he described this in The Wrong Side and the Right Side, his first book, and in The First Man, his last), you understand his attitude of respect and rigor towards literature, and the tenor of his style. In the same way, his youth near the sea and the sun, and the illness that continually threatened him, explain to a large extent the spirit of his work, his thought.

Finally—and Proust is right about this—if the author is not a simple manufacturer, if he puts his interior self in his books, the reader will be attracted by this self. The reader will seek out this personal, private self beneath the sentences.

In 1922, the young Aragon wrote, “My instinct, whenever I read, is to look constantly for the author, and to find him, to imagine him writing, to listen to what he says, not what he tells; so in the end, the usual distinctions among the literary genres—poetry, novel, philosophy, maxims—all strike me as insignificant.”

Freud showed that every child constructs a “family romance” that he will later repress. Whereas the writer continues to manufacture a novel which, if not a family romance, is at least a personal one. Marthe Robert has noted that all novelists relate to some extent their sentimental education, their apprentice years, and their search for lost time. The paradox is that they confess their secrets to a piece of paper. Yet they’re careful to disguise them as fiction.

Revealing a lot about oneself is not the purview only of novelists. It is also what poets do, and not just the elegiac poets. For centuries, and in a variety of civilizations, well before there were novels, the great majority of poems came from the poet’s effusion in speaking about his life, his loves, his torments, his anger, his religious feeling, his exile. Gérard de Nerval asks, “Which is more modest: to portray oneself in a novel disguised as Lélio or Octavio or Arthur, or to betray one’s most intimate emotions in a volume of poetry?” That his life and his illness were made public by his friends gave him an argument: “Forgive us our flights of personality, we who are constantly in the limelight, and who, whether we live in glory or in failure, can no longer hope to obtain the benefits of obscurity.”

You might think that contemporary poetry, tending towards abstraction and situated in a world where the air is rarified, has little to do with private life. This is not always true. Even an erudite poet like Jacques Roubaud, who delves into mathematics, writes about a deeply personal unhappiness in Something Black.

The same is true for the playwright, the filmmaker, even the nonfiction writer. You can sense this clearly in the philosophers Jean-Paul Sartre, Michel Foucault, Roland Barthes. Descartes was already inserting elements of autobiography in Discourse on Method. In this essential essay, he portrays himself in Holland, seated next to his stove throughout the winter, reflecting. Thus there is a back-and-forth movement, a dialectic, practically a contradiction. One retreats into oneself in order to communicate better with others.

Authors, whenever they delve into their own private lives, even if they embellish or transpose, find themselves confronted with the issue of personal discretion. They go well beyond simple indiscretion when they attempt to bring to light what is hidden in the deepest part of themselves.

With his taste for nonsense, Julio Cortázar describes an “enlarged self-portrait from which the artist has had the elegance to withdraw.” This little joke reveals the aspirations of so many writers: to be at once invisible and present, to say everything about oneself without seeming to.

Offering your essence to nourish what you write is what Scott Fitzgerald called “the price to pay”: “I have asked a lot of my emotions—one hundred and twenty stories. The price was high, right up with Kipling, because there was one little drop of something, not blood, not a tear, not my seed, but me more intimately than these, in every story: it was the extra I had.”

Scott Fitzgerald couldn’t write without including his entire history. And even when he lost his creative vein, he dug to the depths of his anguish to write The Crack-Up.

John Dos Passos, another American who is now neglected after having been overrated, made a distinction between a literature of confession and a literature of spectacle. Of course he categorized his own books Manhattan Transfer and the U.S.A. trilogy as literature of spectacle. But I’m pretty sure you can find confession beneath the spectacle.

The young novelist’s first book is often autobiographical. Yet this is the phase when one has lived the least. Other, perhaps better, writers save the most personal, the most intimate in their lives or in the history of their families for much later.

On the other hand, some seem to write primarily to cover up a secret. Paul-Jean Toulet never shows his wounds—neither in his novels, frankly mediocre and marred by the most odious clichés of his era: anti-Semitism, etc.—nor in his poetry, far more charming; nor even in the letters he addressed to himself. His friends knew he had a broken heart. Why broken? And by whom? One of the qualities of his poetry is precisely that you can perceive, beyond the light-hearted fantasy, a floating veil of sadness or perhaps despair. We’ll never know the whole story. That is the claim in the last quatrain of his Contrerimes—a kind of challenge:

If living is a duty, when I will have ruined it,

May I use my shroud as a mystery

You must know how to die, Faustine, how to grow silent,

Die like Gilbert by swallowing the key.

(The allusion is to the strange death at age thirty of the poet Nicolas Gilbert, author of Le poète malheureux [the unhappy poet], who apparently swallowed his key in a fit of delirium.)

In the life of a man or a woman there are always one or two things that he or she will never consent to speak about, not for anything. Secret gardens. But if that man or woman is a writer, we might find those things hidden deep within a novel.

We know that Dickens lived through some very unhappy times in his childhood. The casual egotism of his parents was to blame.

His father, a loudmouth who was often imprisoned for debt, is in part the model for Mr. Micawber. In chapter eleven of David Copperfield, we find, barely altered, what Dickens experienced at age twelve. For six or seven shillings a week, he packaged shoe polish in a putrid factory, working under unspeakably miserable, humiliating conditions.

While he didn’t hesitate to use this experience for David Copperfield, in life he hid the memory as his most closely guarded secret. He refused to talk about it. He even took detours in London to avoid the place where he had been so unhappy. A fragment of his autobiography was found in which he confirmed:

No word of that part of my childhood which I have now gladly brought to a close, has passed my lips to any human being . . . I have never, until I now impart it to this paper, in any burst of confidence with anyone, my own wife not excepted, raised the curtain I then dropped, thank God.

Until old Hungerford Market was pulled down, until old Hungerford Stairs were destroyed, and the very nature of the ground changed, I never had the courage to go back to the place where my servitude began. I never saw it. I could not endure to go near it. For many years, when I came near to Robert Warren’s in the Strand, I crossed over to the opposite side of the way, to avoid a certain smell of the cement they put upon the blacking-corks, which reminded me of what I was once. It was a very long time before I liked to go up Chandos Street. My old way home by the Borough made me cry, after my eldest child could speak.

Thus Charles Dickens and David Copperfield, C. D. and D. C., meet in the person of a humiliated child. Humiliation is a feeling that very few people can tolerate. But it has inspired many books.

Léon Aréga, a forgotten writer who endured endless ridicule, once said to me about one of my novels in which I put much of myself: “It’s a treatise on humiliation.” Which, coming from him, was a great compliment. It is easy to find the humiliated child in many of Chekhov’s short stories. His remark has been quoted a hundred times: “In my childhood, there was no childhood.”

Confessions are made on purpose in David Copperfield. But in most novels they aren’t. They surface in the form of fantasies, obsessions. With Dostoyevsky it’s impossible not to find an allusion to the rape of a little girl in The Possessed, Crime and Punishment, The Eternal Husband.

One rather strange point of view comes from Joseph Conrad. He thought you needed to be a genius to dare unveil your intimate self and thus move the public. If the effect was ruined you would sink into ridicule:

If it be true that every novel contains an element of autobiography—and this can hardly be denied since the creator can only express himself in his creation—then there are some of us to whom an open display of sentiment is repugnant. I would not unduly praise the virtue of restraint. It is often merely temperamental. But it is not always a sign of coldness. It may be pride. There can be nothing more humiliating than to see the shaft of one’s emotions miss the mark of either laughter or tears. Nothing more humiliating! And this for the reason that should the mark be missed, should the open display of emotion fail to move, then it must perish unavoidably in disgust or contempt.

This is what the authors of a fashionable genre, baptized “autofiction” in 1970 by Serge Doubrovsky, seem not to fear, and their works collect like dregs on booksellers’ shelves.

Sometimes the most impersonal work can signify something deeply intimate to the author. This is the case of the great allegorical novel by Melville, Moby Dick. He achieves a fusion of a great myth with his own torment. The dire questioning, the violence of Ahab, are his. The Plague, another book that generates a myth, is also a novel about separation, since Camus wrote part of it isolated by the war, cut off from Algeria, from his wife, from his close friends. Virginia Woolf’s Orlando seems like a fantastical novel of imagination, when it is really the portrait of Vita Sackville-West, who was so dear to the author. In a fairy tale like Alice in Wonderland, Reverend Dodgson confides his passion for Alice Liddell.

The sole fact of starting to write is motivated by a cause that belongs to what is most intimate for the author. I quoted Flaubert, who talks about the sorrow that launched him into the enterprise of Salammbô.

The critics always remind us that Proust and John Cowper Powys wrote their great novels only after the death of their mothers. You could say they waited for their mothers’ deaths to write.

We mustn’t forget the role of the unconscious. Benjamin Crémieux noticed that “the writer who rereads one of his books discovers, after the fact, secret traits he never suspected having put there, traits he may not even have known he possessed—and whose existence is suddenly revealed to him. In all that we write in our own style, the truest aspect of ourselves is inscribed in filigree.”

How, without blushing, can we agree to deliver to the public so many confessions and intimate motivations, even those that are disguised or dissimulated? This is the mystery of the quasi-religious value we assign to literature.

To read more about Palace of Books, click here.

25. An excerpt from Lee Siegel’s Trance Migrations


From Trance Migrations: Stories of India, Tales of Hypnosis by Lee Siegel

The Child’s Story
And now, if you dare, LOOK into the hypnotic eye! You cannot look away! You cannot look away! You cannot look away!

THE GREAT DESMOND IN THE HYPNOTIC EYE (1960)

I was eight years old when my mother was hypnotized by a sinister Hindu yogi. Yes, she was entranced by him, entirely under his control, and made to do things she would never have done in her normal waking state. My father wasn’t there to protect her and there was nothing I, a mere child, could do about it. I vividly remember his turban and flowing robes, his strange voice, gliding gait, and those eerie eyes that widened to capture her mind. I heard his suggestive whispers—“Sleep Memsaab, sleep”—and saw his hand moving over her face in circular hypnotic passes. “Sleep, Memsaab.”

It’s true. I heard it with my own ears and saw it with my own eyes as I watched “The Unknown Terror,” an episode of the series Ramar of the Jungle, on television one evening in 1953. Playing the part of a teak plantation owner in India, my mother, the actress Noreen Nash, was vulnerable to the suggestions of the Hindu hypnotist they called Catrack. “When the dawn comes,” he instructed her, “you will take the rifle and go to the camp of the white Ramar. You will aim at his heart and fire.”

I watched as my mother, wearing a pith helmet, bush jacket, and jodhpur pants, rose from her cot, loaded her rifle, and then trudged in a somnambulistic trance, wooden and emotionless, through the jungle to Ramar’s tent. Since my mother, as far as I knew her at home, had no experience with firearms, I was not surprised that she missed her target. She dropped the rifle and disappeared back into the jungle.

Later on in the show, once again hypnotically entranced, she was led by Catrack to the edge of a cliff where the yogi declared, “We are in great danger, Memsaab. The only way to escape is to jump off this cliff.” Just as my mother was about to leap to her death, Ramar arrived on the scene and fired his rifle into the air. The loud bang of the gunshot awakened her in the nick of time and caused Catrack to flee. Thanks to Ramar, my mother survived her adventures in India.

The seeds of my curiosity about hypnotism and an indelible association of it with an exotic, at once alluring and foreboding, India were sown in front of a television. At about the same time I saw my mother hypnotized and made to do terrible things by a yogi, I watched another nefarious Hindu hypnotist, Swami Talpar, played by Boris Karloff in Abbott and Costello Meet the Killer, try to take control of the feeble mind of Lou Costello. Both India and hypnosis were dangerous.

But then another old movie, Chandu the Magician, assured me that just as Indian hypnotism could be used for evil, so too it was a power that could be employed to overcome wickedness and serve the good of mankind. The film opened somewhere in India at night with a full moon casting eerie shadows on an ancient heathen temple as the American adventurer Frank Chandler bowed down before a dark-skinned, long-bearded Hindu priest in a white dhoti and matching turban. The Hindu swami addressed his acolyte in a deep echoic voice:

“In the years that thou hast dwelt among us, thou hast conquered the Atma of the spirit and, as one of the sacred company of the Yogi, thou hast been given the name Chandu. Thou hast attained thy reward by being endowed with the ancient Oriental magical power that the doctors of thy race call hypnotism. Thou shalt look into the eyes of men and they shall be as straw in thy hand. Thou shalt cause them to see what is not there even unto a gathering of twelve by twelve. To few, indeed, of thy race have the secrets of the Yogi been revealed. The world needs thee now. Go forth in strength and conquer the evil that threatens mankind.”

That India was the home of hypnotism was further confirmed by listening to my mother read Kipling to me at bedtime. We had moved on from The Jungle Book, read to me when I was about the same age as Mowgli, to Kim. And I imagined the hero of that story and I were the same age, as well. “Kim flung himself wholeheartedly upon the next turn of the wheel,” my mother began. “He would be a Sahib again for a while. . . .” and soon I’d yawn, blink, blink, and yawn again, feel the heaviness of my eyelids, heavier and heavier, more and more relaxed. I’d roll over, eyes closing, and soon be able to imagine that her voice might be Kim’s: “I think that Lurgan Sahib wishes to make me afraid,” she’d say he said. “And I am sure that that devil’s brat below the table wishes to see me afraid. This place is like a Wonder House.”

I’d picture the interior of Lurgan’s shop as vividly as if I were there and could see what Kim saw, focusing my attention on each of the objects, suggested one by one: “Turquoise and raw amber necklaces. Curiously packed incense-sticks in jars crusted over with raw garnets, devil-masks and a wall full of peacock-blue draperies . . . gilt figures of Buddha . . . tarnished silver belts . . . arms of all sorts and kinds . . . and a thousand other oddments.”

When, as commanded, Kim pitched the porous clay water jug that was on the table there to Lurgan, I saw it “falling short and crashing into bits and pieces.”

My mother reached over and lightly placed her hand on the back of my neck as Lurgan, in his attempt to hypnotize Kim, “laid one hand gently on the nape of his neck, stroked it twice or thrice, and whispered: ‘Look! It shall come to life again, piece by piece. First the big piece shall join itself to two others on the right and the left. Look!’ To save his life, Kim could not have turned his head. The light touch held him as in a vice, and his blood tingled pleasantly through him. There was one large piece of the jar where there had been three, and above them the shadowy outline of the entire vessel.”

“Look! It is coming into shape,” my mother whispered and “Look! It is coming into shape,” echoed Lurgan Sahib. Yes, it was coming into shape, all the shards of clay magically reforming the previously unbroken jug. I could see it. The words my mother read aloud to me were as hypnotic as the words uttered by Lurgan.

My childhood fascination with hypnosis was sustained by a school assignment to read Edgar Allan Poe’s stories, several of them—“The Facts in the Case of Mr. Valdemar,” “Mesmeric Revelation,” and “A Tale of the Ragged Mountains”—being about mesmerism, and the final story reaffirming an association of hypnosis with India. The main character goes into a trance in Virginia in which he has a vivid vision of Benares, a city to which he has never been, indicating that he had lived in India in a previous lifetime.

“Not only are Poe’s stories about hypnosis,” I grandly proclaimed in a book report I wrote in the seventh grade, “They are also written in a language that is very hypnotic, especially if they are read out loud.” Little did I suspect that that homework assignment would be prolusory to a book written more than half a century later.

When subsequently in the eighth grade I was required to prepare a project for the school science fair, I was determined to do mine on hypnosis as the only science, other than reproductive biology, in which I had much interest. The science teacher warned that it was a dangerous subject: “Hypnotism is widely used in schools in the Soviet Union to brainwash children so that they believe that Communism is good and that they must do whatever their dictator, Nikita Khrushchev, commands.”

Despite its abuse behind the Iron Curtain, I was determined to learn as much as I could about hypnosis. And so I ordered a book, Home Study Way to Hypnotic Practice, that I had seen advertised in a copy of Twitter magazine, a naughty-for-the-times pulp publication that I had discovered hidden in my uncle’s garage.

The ad promised that a mastery of hypnotism would enable me to control the minds of others, particularly the minds, and indeed the hearts, if not some other parts, of girls: “‘Look here’—Snap! Instantly her eyes close. She seems to be asleep but she isn’t. She’s in a hypnotic trance. A trance you put her into by saying secret words and snapping your fingers. Now she’s ready—ready and waiting to do as you command. She’ll follow your orders without question or hesitation. You’ll have her believing anything you suggest and doing whatever you want her to do. You’ll be in control of her emotions: love, hate, laughter, tears, happy, sad. She’ll be as putty in your hands.”

The winsome smiling girl with closed eyes in the advertisement reminded me of a classmate named Vickie Goldman, whose burgeoning breasts were often on my mind. I was naturally intrigued by the idea that by means of hypnotism those breasts might become as putty in my hands.

It was disappointing to discover in reading that book that a mastery of hypnotic techniques was much more complicated and tedious to learn than the ad for it had promised, and even more disheartening to learn that, in order to be hypnotized, Vickie would have to trust me and want to be hypnotized by me.

Another ad, in another copy of Twitter snatched from my uncle’s collection of girlie magazines, however, suggested that, by means of various apparatuses, I would be able to take control of her mind without her consent. All I’d have to do is say, “Look at this,” or “Listen to this.”

So, for the sake of having both a science project and as much control over Vickie Goldman’s emotions and behavior as Catrack had had over my mother’s, even as much power over her as Khrushchev had over children in the Soviet Union, I ordered the products advertised by the Hypnotic Aids and Supply Company: the Electronic Hypnotism Machine, the Electronic Metronome, the folding, pocket-sized Mechanical Hypnotist, and the 78-rpm Hypnotic Record. Because I was spending more than ten dollars on these devices, I also received the Amazing Hypno-Coin at no extra charge. My mother was willing to pay for these devices since I needed them for my science project.

I also purchased the book Oriental Hypnotism, “written in Calcutta India with the cooperation of Sadhu Satish Kumar,” because the yogi pictured in the ad reminded me of the one who had hypnotized my mother in Ramar of the Jungle. The text revealed that, by means of hypnosis, “the power of Maya,” Hindu yogis are able to “charm serpents, control women, and win the favor of men. Self-hypnosis gives the Hindus their amazing ability to lie down on beds of nails. And it is by means of mass hypnosis that their magicians have for thousands of years performed the legendary Indian Rope Trick.” I was familiar with the rope trick from seeing Chandu use his hypnotic power to cause “a gathering of twelve by twelve” to imagine they were seeing it performed.

My science project exhibit, HYPNOTISM EAST AND WEST IN THE PAST, PRESENT AND FUTURE BY LEE SIEGEL, GRADE 8, featured a poster board mounted over a table upon which waved my Hypnotic Metronome and spun both the Hypnotic Spiral Disc of my Electronic Hypnotism Machine and side one of my Hypnotic Record. Over the eerie drone of Oriental music there was a monotonously rhythmic deep voice: “As you listen to these words your muscles will begin to relax, to become more and more relaxed, yes, very relaxed, and your eyelids will become heavy, yes, heavier and heavier, very, very heavy, very relaxed. Deeper and deeper, relaxed.” The words “relaxed,” “heavy,” and “deeper” were repeated over and over and then there was counting backward, then imagining going down, “deeper and deeper,” in an elevator, more counting backward, and finally, at the end of the record, right after “three, two, one,” came the crucial hypnotic suggestion: “The next voice you hear will have complete control over your mind.”

That’s when I would take over. That’s when, if the principal of our school, the judge of the projects in the fair, listened to the record, I’d command: “You will award Lee Siegel the first-place blue ribbon for his science project.” And if Vickie would look and listen, that’s when my interest in hypnosis would really pay off: “You will go behind the handball courts with Lee Siegel and there you will ask him to fondle your breasts.”

To intensify the hypnotic mystique of my project, I placed a warning sign by the Electronic Hypnotism Machine: Stare at the Spinning Disc at Your Own Risk. Lee Siegel will not be held responsible for any actions resulting from a loss of mental control.

Along with all of my purchases from the Hypnotic Aids Supply Company, I placed the Westclox pocket watch on a chain that my uncle had given me for my bar mitzvah.

I livened up the poster board with a photo labeled EAST: Sadhu Satish Kumar, Hindu Yogi Hypnotist, cut from Oriental Hypnotism side by side with a picture labeled WEST: Dr. Franz Mesmer, Father of Animal Magnetism, that I had clipped from the World Book Encyclopedia.

There was also a timeline beginning in 3000 bc (as estimated by Sadhu Satish Kumar) with “Indian Fakirs and Yogis” and ending “Sometime in the Future” with “Lee Siegel who has learned so much for this science fair project that he plans to become a professional hypnotist. After graduating from high school and college he will go both to India to study hypnotism with yogis and to Oxford University to study it with science professors.”

In between the ancient Hindu hypnotists and my future self were luminaries in the history of hypnosis as enumerated in the World Book Encyclopedia: Franz Mesmer (1734–1815), the Marquis de Puységur (1751–1825), Abbé Faria (1756–1819), John Elliotson (1791–1868), James Braid (1795–1860), James Esdaile (1808–1859), Ivan Pavlov (1849–1936), and Sigmund Freud (1856–1939). To make the list better acknowledge India’s contributions to hypnosis, I added Swami Catrack (1919–1953), Frank Chandler, a.k.a. Chandu (1932–), and Sadhu Satish Kumar (1928–). I also included The Amazing Kreskin (1935–) and William Kroger (1906–), because, other than Catrack, Swami Talpar, Chandu, Lurgan, Satish Kumar, Nikita Khrushchev, and Sigmund Freud, they were the only hypnotists I had ever heard of. I knew that Sigmund Freud was a psychiatrist who thought that little boys were in love with their mother and that little girls wished they had a penis. I included Kroger, a gynecologist, an avid proponent of medical hypnotherapeutics, and a friend of my parents who occasionally visited our home, in the hope that he might, once I had shown him my science project, write a note on the official stationery of the International Society for Clinical and Experimental Hypnosis, of which he was president, something to be framed and included in my display, something like “Lee Siegel’s science project deserves a blue ribbon and should be sent on to the national competition, which it will certainly win.”

All he wrote, however, was: “Young Siegel has done a good job in presenting a subject that deserves wider recognition and acceptance.”

Not having been awarded the first-place blue ribbon—or a ribbon of any other color, for that matter—for my science project, nor having been able to successfully use my hypnotic aids to turn Vickie—or any other girl—into putty in my hands, ready to follow my orders without question, my interest in hypnotism waned.

I don’t think I thought about hypnosis very much until a couple of years later when, in 1960, I happened to see a horror film, The Hypnotic Eye, the movie, according to publicity posters, “that introduces HypnoMagic, the thrill you SEE and FEEL! It’s the amazing new audience sensation that makes YOU part of the show!” There were warnings that HypnoMagic could cause viewers of the film to actually become hypnotized: “Watch at your own risk!”

The movie was about a mysterious series of gruesome acts of self-mutilation by beautiful women, none of whom were able to remember why or how they had disfigured themselves, and all of whom, the film’s detective hero discovered, just happened to have gone to a theater to see the stage hypnosis show of The Great Desmond. That each of them had been hypnotized during one of his performances caused the detective to suspect that the hypnotist might have been involved in the crimes. Consulting a criminal psychologist, he learned that, “Yes, posthypnotic suggestion could indeed cause a woman to do things she would not otherwise consider doing.”

At one point in the film, during a performance of his stage show, the despotic Desmond held up something meant to resemble an eyeball flashing with light—the titular Hypnotic Eye! After daring his audience to stare into it, he turned to the camera and dared us, the audience in the movie theater, to do the same. The camera moved in closer and closer on the pulsating orb as “deeper and deeper” was repeated again and again until soon, as commanded by the diabolical hypnotist, the members of his audience were lifting their arms and then lowering them. And then Desmond stared straight at us again and commanded us to do the same, and soon, together with the audience in the movie, we, the audience of the movie, were lifting our arms, then lowering them, again and again, until Desmond finally ordered us to stop and then, after counting from one to three, he snapped, “Wake up!”

Although I don’t think I was actually hypnotized by the Great Desmond and don’t know how many members of the movie audience were, I felt compelled to go along with the show, to act as if I was in a trance, and do as I was told. That, I would suggest, is in and of itself a kind of hypnosis. Hypnosis, like listening intently to a story, is playing along with words.

At the very end of the movie, after the crimes had been solved and the evil hypnotist apprehended, the criminal psychologist addressed the viewers of the movie: “Hypnotism can be a valuable tool, helping humanity in many ways. But, just as it can be used to do good, so too, in the hands of unscrupulous practitioners, it can be used to perpetrate evil. We must be wary to maintain our safety because they can catch us anywhere, and at any time.” He paused as the camera moved in for a close-up: “Yes, even during a motion picture in a movie theater.” He winked, then smiled, and the screen faded to black.

I didn’t think much about the film until recently, when I began writing about hypnosis. I confess, although I should probably be ashamed to admit it, that this text has been stylistically inspired by the B movie gimmick. In the spirit of The Hypnotic Eye, the tales in this book that are meant to be read aloud to a cooperative listener are written with HypnoMagic, the thrill you SEE and FEEL! It’s the amazing literary sensation that makes the listener part of the story! But beware! HypnoMagic could cause listeners to actually become hypnotized and actually imagine that they are participants in the tales they hear.

Read more about Trance Migrations here.

