The Chicago Blog

Publicity news from the University of Chicago Press, including news tips, press releases, reviews, and intelligent commentary.

1. Excerpt: Top 40 Democracy

To follow up on yesterday’s post, here’s an excerpt from Eric Weisbard’s Top 40 Democracy: The Rival Mainstreams of American Music.

***

“The Logic of Formats”

Nearly every history of Top 40 launches from an anecdote about how radio station manager Todd Storz came up with the idea sometime between World War II and the early 1950s, watching with friends in a bar in Omaha as customers repeatedly punched up the same few songs on the jukebox. A waitress, after hearing the tunes for hours, paid for more listens, though she was unable to explain herself. “When they asked why, she replied, simply: ‘I like ’em.’ ” As Storz said on another occasion, “Why this should be, I don’t know. But I saw waitresses do this time after time.” He resolved to program a radio station following the same principles: the hits and nothing but the hits.

Storz’s aha moment has much to tell about Top 40’s complicated relationship to musical diversity. He might be seen as an entrepreneur with his ear to the ground, like the 1920s furniture salesman who insisted hillbilly music be recorded or the 1970s Fire Island dancer who created remixes to extend the beat. Or he could be viewed as a schlockmeister lowering standards for an inarticulate public, especially women —so often conceived as mass-cultural dupes. Though sponsored broadcasting had been part of radio in America, unlike much of the rest of the world, since its beginnings, Top 40 raised hackles in a postwar era concerned about the numbing effects of mass culture. “We become a jukebox without lights,” the Radio Advertising Bureau’s Kevin Sweeney complained. Time called Storz the “King of the Giveaway” and complained of broadcasting “well larded with commercials.”

Storz and those who followed answered demands that licensed stations serve a communal good by calling playlist catholicity a democracy of sound: “If the public suddenly showed a preference for Chinese music, we would play it . . . I do not believe there is any such thing as better or inferior music.” Top 40 programmer Chuck Blore, responding to charges that formats stifled creative DJs, wrote, “He may not be as free to inflict his musical taste on the public, but now, and rightfully, I think, the public dictates the popular music of the day.” Mike Joseph boasted, “When I first go into a market, I go into every record store personally. I’ll spend up to three weeks doing interviews, with an average of forty-five minutes each. And I get every single thing I can get: the sales on every configuration, every demo for every single, the gender of every buyer, the race of every buyer. . . . I follow the audience flow of the market around the clock.” Ascertaining public taste became a matter of extravagant claim for these professional intermediaries: broadcasting divided into “dayparts” to impact commuters, housewives, or students.

Complicating the tension between seeing formats as pandering or as deferring to popular taste was a formal quality that Top 40 also shared with the jukebox: it could encompass many varieties of hits or group a subset for a defined public. This duality blurred categories we often keep separate. American show business grew from blackface minstrelsy and its performative rather than innate notion of identity —pop as striking a pose, animating a mask, putting on style or a musical. More folk and genre-derived notions of group identity, by contrast, led to the authenticity-based categories of rock, soul, hip-hop, and country. Top 40 formats drew on both modes, in constantly recalibrated proportions. And in doing so, the logic of formats, especially the 1970s format system that assimilated genres, unsettled notions of real and fake music.

Go back to Storz’s jukebox. In the late 1930s, jukeboxes revived a record business collapsed by free music on radio and the Great Depression. Jack Kapp in particular, working for the US branch of British-owned Decca, tailored the records he handled to boom from the pack: swing jazz dance beats, slangy vernacular from black urban culture, and significant sexual frankness. This capitalized on qualities inherent in recordings, which separated sound from its sources in place, time, and community, allowing both new artifice — one did not know where the music came from, exactly — and new realism: one might value, permanently, the warble of a certain voice, suggesting a certain origin. Ella Fitzgerald, eroticizing the nursery rhyme “A-Tisket, A-Tasket” in 1938 on Decca, with Chick Webb’s band behind her, could bring more than a hint of Harlem’s Savoy Ballroom to a place like Omaha, as jukeboxes helped instill a national youth culture. Other jukeboxes highlighted the cheating songs of honky-tonk country or partying R&B: urban electrifications of once-rural sounds. By World War II, pop was as much these brash cross-genre jukebox blends as it was the Broadway-Hollywood-network radio axis promoting Irving Berlin’s genteel “White Christmas.”

Todd Storz’s notion of Top 40 put the jukebox on the radio. Records had not always been a radio staple. Syndicated network stations avoided “canned music”; record labels feared the loss of sales and often stamped “Not Licensed for Radio Broadcast” on releases. So the shift that followed television’s taking original network programming was twofold: local radio broadcasting that relied on a premade consumer product. Since there were many more records to choose from than network shows, localized Top 40 fed a broader trend that allowed an entrepreneurial capitalism — independent record-label owners such as Sam Phillips of Sun Records, synergists such as American Bandstand host Dick Clark, or station managers such as Storz—to compete with corporations like William Paley’s Columbia Broadcasting System, the so-called Tiffany Network, which included Columbia Records. The result, in part, was rock and roll, which had emerged sonically by the late 1940s but needed the Top 40 system to become dominant with young 45 RPM singles buyers by the end of the 1950s.

An objection immediately presents itself, one that will recur throughout this study: Was Top 40 rock and roll at all, or a betrayal of the rockabilly wildness that Sam Phillips’s roster embodied for the fashioning of safe teen idols by Dick Clark? Did the format destroy the genre? The best answer interrogates the question: Didn’t the commerce-first pragmatism of formatting, with its weak boundaries, free performers and fans inhibited by tighter genre codes? For Susan Douglas, the girl group records of the early 1960s made possible by Top 40 defy critics who claim that rock died between Elvis Presley’s army induction and the arrival of the Beatles. Yes, hits like “Leader of the Pack” were created by others, often men, and were thoroughly commercial. Yes, they pulled punches on gender roles even as they encouraged girls to identify with young male rebels. But they “gave voice to all the warring selves inside us struggling.” White girls admired black girls, just as falsetto harmonizers like the Beach Boys allowed girls singing along to assume male roles in “nothing less than musical cross-dressing.” Top 40’s “euphoria of commercialism,” Douglas argues, did more than push product; “tens of millions of young girls started feeling, at the same time, that they, as a generation, would not be trapped.” Top 40, like the jukebox before it and MTV afterward, channeled cultural democracy: spread it but contained it within a regulated, commercialized path.

We can go back further than jukebox juries becoming American Bandstands. Ambiguities between democratic culture and commodification are familiar within cultural history. As Jean-Christophe Agnew points out in his study Worlds Apart, the theater and the marketplace have been inextricable for centuries, caught up as capitalism developed in “the fundamental problematic of a placeless market: the problems of identity, intentionality, accountability, transparency, and reciprocity that the pursuit of commensurability invariably introduces into that universe of particulate human meanings we call culture.” Agnew’s history ranges from Shakespeare to Melville’s Confidence Man, published in 1857. At that point in American popular culture, white entertainers often performed in blackface, jumping Jim Crow and then singing a plaintive “Ethiopian” melody by Stephen Foster. Eric Lott’s book on minstrelsy gives this racial mimicry a handy catchphrase: Love and Theft. Tarred-up actors, giddy with the new freedoms of a white man’s democracy but threatened by industrial “wage slavery,” embodied cartoonish blacks for social comment and anti-bourgeois rudeness. Amid vicious racial stereotyping could be found performances that respectable theater disavowed. Referring to a popular song of the era, typically performed in drag, the New York Tribune wrote in 1853, “ ‘Lucy Long’ was sung by a white negro as a male female danced.” And because of minstrelsy’s fixation on blackness, African Americans after the Civil War found an entry of sorts into entertainment: as songwriter W. C. Handy unceremoniously put it, “The best talent of that generation came down the same drain. The composers, the singers, the musicians, the speakers, the stage performers —the minstrel shows got them all.” If girl groups showcase liberating possibility in commercial constraints, minstrelsy challenges unreflective celebration.

Entertainment, as it grew into the brashest industry of modernizing America, fused selling and singing as a matter of orthodoxy. The three-act minstrel show stamped formats on show business early on, with its song-and-dance opening, variety-act olio, and dramatic afterpiece, its interlocutors and end men. Such structures later migrated to variety, vaudeville, and Broadway. After the 1890s, tunes were supplied by Tin Pan Alley sheet-music publishers, who professionalized formula songwriting and invented “payola”— ethically dubious song plugging. These were song factories, unsentimental about creativity, yet the evocation of cheap tinniness in the name was deliberately outrageous, announcing the arrival of new populations —Siberian-born Irving Berlin, for example, the Jew who wrote “White Christmas.” Tin Pan Alley’s strictures of form but multiplicity of identity paved the way for the Brill Building teams who wrote the girl group songs, the Motown Records approach to mainstreaming African American hits, and even millennial hitmakers from Korean “K-Pop” to Sweden’s Cheiron Studios. Advertisers, Timothy Taylor’s history demonstrates, used popular music attitude as early as they could —sheet-music parodies, jingles, and the showmanship of radio hosts like crooner Rudy Vallee designed to give products “ginger, pep, sparkle, and snap.”

The Lucky Strike Hit Parade, a Top 40 forerunner with in-house vocalists performing the leading tunes, was “music for advertising’s sake,” its conductor said in 1941.

Radio, which arrived in the 1920s, was pushed away from a BBC model and toward what Thomas Streeter calls “corporate liberalism” by leaders like Herbert Hoover, who declared as commerce secretary, “We should not imitate some of our foreign colleagues with governmentally controlled broadcasting supported by a tax upon the listener.” In the years after the 1927 Radio Act, the medium consolidated around sponsor-supported syndicated network shows, successfully making radio present by 1940 in 86 percent of American homes and some 6.5 million cars, with average listening of four hours a day. The programming, initially local, now fused the topsy-turvy theatrics of vaudeville and minstrelsy —Amos ’n’ Andy ranked for years with the most popular programs —with love songs and soap operas aimed at the feminized intimacy of the bourgeois parlor. Radio’s mass orientation meant immigrants used it to embrace a mainstream American identity; women confessed sexual feelings for the likes of Vallee as part of the bushels of letters sent to favored broadcasters; and Vox Pop invented the “man on the street” interview, connecting radio’s commercialized public with more traditional political discourse and the Depression era’s documentary impulse. While radio scholars have rejected the view of an authoritarian, manipulative “culture industry,” classically associated with writers such as the Frankfurt School’s Theodor Adorno, historian Elena Razlogova offers an important qualification: “by the 1940s both commercial broadcasters and empirical social scientists . . . shared Adorno’s belief in expert authority and passive emotional listening.” Those most skeptical of mass culture often worked inside the beast.

Each network radio program had a format. So, for example, Kate Smith, returning for a thirteenth radio season in 1942, offered a three-act structure within each broadcast: a song and comedy slot, ad, drama, ad, and finally a segment devoted to patriotism —fitting for the singer of “God Bless America.” She was said by Billboard, writing with the slangy prose that characterized knowing and not fully genteel entertainment professionals, to have a show that “retains the format which, tho often heavy handed and obvious, is glovefit to keep the tremendous number of listeners it has acquired and do a terrific selling job for the sponsor”— General Foods. The trade journal insisted, “Next to a vocal personality, a band on the air needs a format —an idea, a framework of showmanship.”

Top 40 formats addressed the same need to fit broadcast, advertiser, and public, but through a different paradigm: what one branded with an on-air jukebox approach was now the radio station itself, to multiple sponsors. Early on, Top 40s competed with nonformat stations, the “full service” AMs that relied on avuncular announcers with years of experience, in-house news, community bulletins, and songs used as filler. As formats came to dominate, with even news and talk stations formatted for consistent sound, competing sonic configurations hailed different demographics. But no format was pure: to secure audience share in a crowded market, a programmer might emphasize a portion of a format (Quiet Storm R&B) or blur formats (country crossed with easy listening). Subcategories proliferated, creating what a 1978 how-to book called “the radio format conundrum.” The authors, listing biz slang along the lines of MOR, Good Music, and Chicken Rock, explained, “Words are coined, distorted and mutilated, as the programmer looks for ways to label or tag a format, a piece of music, a frame of mind.”

A framework of showmanship in 1944 had become a frame of mind in 1978. Formats began as theatrical structures but evolved into marketing devices — efforts to convince sponsors of the link between a mediated product and its never fully quantifiable audience. Formats did not idealize culture; they sold it. They structured eclecticism rather than imposing aesthetic values. It was the customer’s money —a democracy of whatever moved people.

The Counterlogic of Genres

At about the same time Todd Storz watched the action at a jukebox in Omaha, sociologist David Riesman was conducting in-depth interviews with young music listeners. Most, he found, were fans of what was popular— uncritical. But a minority of interviewees disliked “name bands, most vocalists (except Negro blues singers), and radio commercials.” They felt “a profound resentment of the commercialization of radio and musicians.” They were also, Riesman reported, overwhelmingly male.

American music in the twentieth century was vital to the creation of what Grace Hale’s account calls “a nation of outsiders.” “Hot jazz” adherents raved about Louis Armstrong’s solos in the 1920s, while everybody else thought it impressive enough that Paul Whiteman’s orchestra could syncopate the Charleston and introduce “Rhapsody in Blue.” By the 1930s, the in-crowd were Popular Front aligned, riveted at the pointedly misnamed cabaret Café Society, where doormen had holes in their gloves and Billie Holiday made the anti-lynching, anti-minstrelsy “Strange Fruit” stop all breathing. Circa Riesman’s study, the hipsters Norman Mailer and Jack Kerouac would celebrate redefined hot as cool, seeding a 1960s San Francisco scene that turned hipsters into hippie counterculture.

But the urge to value music as an authentic expression of identity appealed well beyond outsider scenes and subcultures. Hank Williams testified, “When a hillbilly sings a crazy song, he feels crazy. When he sings, ‘I Laid My Mother Away,’ he sees her a-laying right there in the coffin. He sings more sincere than most entertainers because the hillbilly was raised rougher than most entertainers. You got to know a lot about hard work. You got to have smelt a lot of mule manure before you can sing like a hillbilly. The people who has been raised something like the way the hillbilly has knows what he is singing about and appreciates it.” Loretta Lynn reduced this to a chorus: “If you’re looking at me, you’re looking at country.” Soul, rock, and hip-hop offered similar sentiments. An inherently folkloric valuation of popular music, Karl Miller has written, “so thoroughly trounced minstrelsy that historians rarely discuss the process of its ascendance. The folkloric paradigm is the air that we breathe.”

For this study, I want to combine subcultural outsiders and identity-group notions of folkloric authenticity into a single opposition to formats: genres. If entertainment formats are an undertheorized category of analysis, though a widely used term, genres have been highly theorized. By sticking with popular music, however, we can identify a few accepted notions. Music genres have rules: socially constructed and accepted codes of form, meaning, and behavior. Those who recognize and are shaped by these rules belong to what pioneering pop scholar Simon Frith calls “genre worlds”: configurations of musicians, listeners, and figures mediating between them who collectively create a sense of inclusivity and exclusivity. Genres range from highly specific avant-gardes to scenes, industry categories, and revivals, with large genre “streams” to feed subgenres. If music genres cannot be viewed —as their adherents might prefer —as existing outside of commerce and media, they do share a common aversion: to pop shapelessness.

Deconstructing genre ideology within music can be as touchy as insisting on minstrelsy’s centrality: from validating Theft to spitting in the face of Love. Producer and critic John Hammond, progressive in music and politics, gets rewritten as the man who told Duke Ellington that one of his most ambitious compositions featured “slick, un-negroid musicians,” guilty of “aping Tin Pan Alley composers for commercial reasons.” A Hammond obsession, 1930s Mississippi blues guitarist Robert Johnson has his credentials to be called “King of the Delta Blues” and revered by the likes of Bob Dylan, Eric Clapton, and the Rolling Stones questioned by those who want to know why Delta blues, as a category, was invented and sanctified after the fact and how that undercut more urban and vaudeville-inflected, not to mention female, “classic” blues singers such as Ma Rainey, Mamie Smith, and Bessie Smith.

The tug-of-war between format and genre, performative theatrics and folkloric authenticity, came to a head with rock, the commercially and critically dominant form of American music from the late 1960s to the early 1990s. Fifties rock and roll had been the music of black as much as white Americans, southern as much as northern, working class far more than middle class. Rock was both less inclusive and more ideological: what Robert Christgau, aware of the politics of the shift from his first writing as a founding rock critic, called “all music deriving primarily from the energy and influence of the Beatles—and maybe Bob Dylan, and maybe you should stick pretensions in there someplace.” Ellen Willis, another pivotal early critic, centered her analysis of the change on the rock audience’s artistic affiliations: “I loved rock and roll, but I felt no emotional identification with the performers. Elvis Presley was my favorite singer, and I bought all his records; just the same, he was a stupid, slicked-up hillbilly, a bit too fat and soft to be really good-looking, and I was a middle-class adolescent snob.” Listening to Mick Jagger of the Rolling Stones was a far different process: “I couldn’t condescend to him — his ‘vulgarity’ represented a set of social and aesthetic attitudes as sophisticated as mine.”

The hippies gathered at Woodstock were Riesman’s minority segment turned majority, but with a difference. They no longer esteemed contemporary versions of “Negro blues singers”: only three black artists played Woodstock. Motown-style format pop was dismissed as fluff in contrast to English blues-rock and other music with an overt genre lineage. Top 40 met disdain, as new underground radio centered on “freeform”— meaning free of format. Music critics like Christgau, Willis, and Frith challenged these assumptions at the time, with Frith’s Sound Effects the strongest account of rock’s hypocritical “intimations of sincerity, authenticity, art — noncommercial concerns,” even as “rock became the record industry.” In a nation of outsiders, rock ruled, or as a leftist history, Rock ’n’ Roll Is Here to Pay, snarked, “Music for Music’s Sake Means More Money.” Keir Keightley elaborates, “One of the great ironies of the second half of the twentieth century is that while rock has involved millions of people buying a mass-marketed, standardized commodity (CD, cassette, LP) that is available virtually everywhere, these purchases have produced intense feelings of freedom, rebellion, marginality, oppositionality, uniqueness and authenticity.” In 1979, rock fans led by a rock radio DJ blew up disco records; as late as 2004, Kelefa Sanneh felt the need to deconstruct rock-ism in the New York Times.

Yet it would be simplistic to reduce rockism to its disproportions of race, gender, class, and sexuality. What fueled and fuels such attitudes toward popular music, ones hardly limited to rock alone, is the dream of music as democratic in a way opposite to how champions of radio formats justified their playlists. Michael Kramer, in an account of rock far more sympathetic than most others of late, argues that the countercultural era refashioned the bourgeois public sphere for a mass bohemia: writers and fans debated in music publications, gathered with civic commitment at music festivals, and shaped freeform radio into a community instrument. From the beginning, “hip capitalism” battled movement concerns, but the notion of music embodying anti-commercial beliefs, of rock as revolutionary or at least progressive, was genuine. The unity of the rock audience gave it more commercial clout: not just record sales, but arena-sized concerts, the most enduring music publication in Rolling Stone, and ultimately a Rock and Roll Hall of Fame to debate rock against rock and roll or pop forever. Discursively, if not always in commercial reality, this truly was the Rock Era.

The mostly female listeners of the Top 40 pop formats bequeathed by Storz’s jukebox thus confronted, on multiple levels, the mostly male listeners of a rock genre that traced back to the anti-commercial contingent of Riesman’s interviewees. A democracy of hit songs, limited by its capitalist nature, was challenged by a democracy of genre identity, limited by its demographic narrowness. The multi-category Top 40 strands I will be examining were shaped by this enduring tension.

Pop Music in the Rock Era

Jim Ladd, a DJ at the Los Angeles freeform station KASH-FM, received a rude awakening in 1969 when a new program director laid down some rules. “We would not be playing any Top 40 bullshit, but real rock ’n’ roll; and there was no dress code. There would, however, be something known as ‘the format.’ ” Ladd was now told what to play. He writes bitterly about those advising stations. “The radio consultant imposed a statistical grid over the psychedelic counterculture, and reduced it to demographic research. Do you want men 18–24, adults 18–49, women 35–49, or is your target audience teens? Whatever it may be, the radio consultant had a formula.” Nonetheless, the staff was elated when, in 1975, KASH beat Top 40 KHJ, “because to us, it represented everything that we were trying to change in radio. Top 40 was slick, mindless pop pap, without one second of social involvement in its format.” Soon however, KAOS topped KASH with a still tighter format: “balls-out rock ’n’ roll.”

Ladd’s memoir, for all its biases, demonstrates despite itself why it would be misleading to view rock/pop or genre/format dichotomies as absolute divisions. By the mid-1970s, album-oriented rock (AOR) stations, like soul and country channels, pursued a format strategy as much as Top 40 or AC, guided by consultants and quarterly ratings. Rock programmers who used genre rhetoric of masculine rebellion (“balls-out rock ’n’ roll”) still honored Storz’s precept that most fans wanted the same songs repeated. Stations divided listeners explicitly by age and gender and tacitly by race and class. The division might be more inclusive: adults, 18–49; or less so: men, 18–34. The “psychedelic counterculture” ideal of dropping out from the mass had faded, but so had some of the mass: crossover appeal was one, not always desirable, demographic. And genre longings remained, with Ladd’s rockist disparagement of Top 40 symptomatic: many, including those in the business, quested for “social involvement” and disdained format tyranny. If AOR was formatted à la pop, pop became more like rock and soul, as seen in the power ballad, which merged rock’s amplification of sound and self with churchy and therapeutic exhortation.

Pop music in the rock era encompassed two strongly appealing, sometimes connected, but more often opposed impulses. The logic of formats celebrated the skillful matching of a set of songs with a set of people: its proponents idealized generating audiences, particularly new audiences, and prided themselves on figuring out what people wanted to hear. To believe in formats could mean playing it safe, with the reliance on experts and contempt for audiences that Razlogova describes in an earlier radio era: one cliché in radio was that stations were never switched off for the songs they didn’t play, only the ones they did. But there were strong business reasons to experiment with untapped consumer segments, to accentuate the “maturation” of a buying group with “contemporary”— a buzzword of the times —music to match. To successfully develop a new format, like the urban contemporary approach to black middle-class listeners, marked a great program director or consultant, and market-to-market experimentation in playlist emphasis was constant. Record companies, too, argued that a song like “Help Me Make It through the Night,” Kris Kristofferson’s explicit 1971 hit for Sammi Smith, could attract classier listeners for the country stations that played it.

By contrast, the logic of genres —accentuated by an era of counterculture, black power, feminism, and even conservative backlash — celebrated the creative matching of a set of songs and a set of ideals: music as artistic expression, communal statement, and coherent heritage. These were not necessarily anti-commercial impulses. Songwriters had long since learned the financial reasons to craft a lasting Broadway standard, rather than cash in overnight with a disposable Tin Pan Alley ditty. As Keightley shows, the career artist, steering his or her own path, was adult pop’s gift to the rock superstars. Frank Sinatra, Chairman of the Board, did not only symbolically transform into Neil Young, driving into the ditch if he chose. Young actually recorded for Reprise Records, the label that Sinatra had founded in 1960, whose president, Mo Ostin, went on to merge it with, and run, the artist-friendly and rock-dominated major label Warner Bros. Records.

Contrast Ladd’s or Young’s sour view of formatting with Clive Davis, who took over as president of Columbia Records during the rise of the counterculture. Writing just after the regularizing of multiple Top 40 strands, Davis found the mixture of old-school entertainment and new-school pop categories he confronted, the tensions between format and genre, endlessly fascinating. He was happy to discourse on the reasons why an MOR release by Ray Conniff might outsell an attention-hogging album by Bob Dylan, then turn around and explain why playing Las Vegas had tainted the rock group Blood, Sweat & Tears by rebranding them as MOR. Targeting black albums, rather than singles, to music buyers intrigued him, and here he itemized how he accepted racial divisions as market realities, positioning funk’s Earth, Wind & Fire as “progressive” to white rockers while courting soul nationalists too. “Black radio was also becoming increasingly militant; black program directors were refusing to see white promotion men. . . . If a record is ripe to be added to the black station’s play list, but is not quite a sure thing, it is ridiculous to have a white man trying to convince the program director to put it on.”

The incorporation of genre by formats proved hugely successful from the 1970s to the 1990s. Categories of mainstream music multiplied, major record labels learned boutique approaches to rival indies in what Timothy Dowd calls “decentralized” music selling, and the global sounds that Israeli sociologist Motti Regev sums up as “pop-rock” fused national genres with a common international structure of hitmaking, fueled by the widespread licensing in the 1980s of commercial radio channels in countries formerly limited to government broadcasting. In 2000, I was given the opportunity, for a New York Times feature, to survey a list of the top 1,000 selling albums and top 200 artists by total US sales, as registered by SoundScan’s barcode-scanning process since the service’s introduction in 1991. The range was startling: twelve albums on the list by Nashville’s Garth Brooks, but also twelve by the Beatles and more than twenty linked to the gangsta rappers in N.W.A. Female rocker Alanis Morissette topped the album list, with country and AC singer Shania Twain not far behind. Reggae’s Bob Marley had the most popular back-catalogue album, with mammoth total sales for pre-rock vocalist Barbra Streisand and jazz’s Miles Davis. Even “A Horse with No Name” still had fans: America’s Greatest Hits made a top 1,000 list that was 30 percent artists over forty years old in 2000 and one-quarter 1990s teen pop like Backstreet Boys. Pop meant power ballads (Mariah Carey, Celine Dion), rock (Pink Floyd, Metallica, Pearl Jam), and Latin voices (Selena, Marc Anthony), five mellow new age Enya albums, and four noisy Jock Jams compilations.

Yet nearly all this spectrum of sound was owned by a shrinking number of multinationals, joined as the 1990s ended by a new set of vast radio chains like Clear Channel, allowed by a 1996 Telecommunications Act in the corporate liberal spirit of the 1927 policies. The role of music in sparking countercultural liberation movements had matured into a well-understood range of scenes feeding into mainstreams, or train-wreck moments by tabloid pop stars appreciated with camp irony by omnivorous tastemakers. The tightly formatted world that Jim Ladd feared and Clive Davis coveted had come to pass. Was this true diversity, or a simulation? As Keith Negus found when he spoke with those participating in the global pop order, genre convictions still pressed against format pragmatism. Rock was overrepresented at record labels. Genre codes shaped the corporate cultures that framed the selling of country music, gangsta rap, and Latin pop. “The struggle is not between commerce and creativity,” Negus concluded, “but about what is to be commercial and creative.” The friction between competing notions of how to make and sell music had resulted in a staggering range of product, but also intractable disagreements over that product’s value within cultural hierarchies.

To read more about Top 40 Democracy, click here.

2. Top 40 Democracy

Eric Weisbard’s Top 40 Democracy: The Rival Mainstreams of American Music considers the shifting terrain of the pop music landscape, in which FM radio (once an indisputably dominant medium) constructed multiple mainstreams, tailoring each to target communities built on race, gender, class, and social identity. Charting (no pun intended) how categories rivaled and pushed against each other in their rise to reach American audiences, the book posits a counterintuitive notion: when even the blandest incarnation of a particular sub-group (the Isley Brothers version of R & B, for instance) rose to the top of the charts, so too did the visibility of that group’s culture and perspective, making musical formatting one of the master narratives of late-twentieth-century identity.

In a recent piece for the Sound Studies blog, Weisbard wrote about the rise of both Taylor Swift and, via mid-term elections, the Republican Party:

The genius, and curse, of the commercial-cultural system that produced Taylor Swift’s Top 40 democracy win in the week of the 2014 elections, is that its disposition is inherently centrist. Our dominant music formats, rival mainstreams engaged in friendly combat rather than culture war, locked into place by the early 1970s. That it happened right then was a response to, and recuperation from, the splintering effects of the 1960s. But also, a moment of maximum wealth equality in the U.S. was perfect to persuade sponsors that differing Americans all deserved cultural representation.

And, as Weisbard concludes:

Pop music democracy too often gives us the formatted figures of diverse individuals triumphing, rather than collective empowerment. It’s impressive what Swift has accomplished; we once felt that about President Obama, too. But she’s rather alone at the top.

To read more about Top 40 Democracy, click here.

3. #UPWeek: FF is really TBT

Today is the last day of #UPWeek—so goes with it another successful tour of university press blogs. On that note, Friday’s theme is one of following: What are your must-reads on the internet? Whom do you follow on social media? Which venues and scholars are doing it right? University of Illinois Press tracks the geopolitics of imagination, University of Minnesota Press (hi, Maggie!) author John Hartigan explains the foibles of scholars on social media, University of Nebraska Press delivers another social media primer, NYU Press teaches us Key Words in Cultural Studies, Island Press tracks the interests of its editors, and Columbia University Press talks their University Press Round-Up.

Us? We’re running with the idea that history and progress aren’t synonymously bound. The way forward with media is often the way back or through, or at least a trip to the past demonstrates that the seeds for new forms of mediation are (apologies for this) always already planted. I realize this makes Follow Friday a bit of Throwback Thursday, but here’s a great photo from UCP author Alan Thomas that has been making the rounds on Twitter of the very first e-book we published. Richard A. Lanham’s The Electronic Word required 2 MB of RAM and a floppy disk reader, yet in its “out-of-timeness,” we can already see the othering of the book-as-object and our desire to store information in as portable (and small) a capacity as possible. Kindle Fire quivers. We keep moving.

[Photo: Richard A. Lanham’s The Electronic Word, the Press’s first e-book, shared on Twitter by Alan Thomas]

For more on #UPWeek, follow the hash-tag on Twitter.

4. UPWeek Day 2: Irina Baronova launch in pictures

Today is day two of #UPWeek, which considers the past, present, and future of scholarly publishing through pictures. Among posts dotting the web, you’ll find: a photographic history of Indiana University Press, documentation of 1950s and ’60s print publishing at Stanford University Press, a photo collage from Fordham University Press, a Q & A with art director Martha Sewell and short film of author and illustrator Val Kells at Johns Hopkins University Press, and images of the University Press of Florida through the years. With these surveys in mind, we’re happy to share a few snapshots from our own recent launch of Victoria Tennant’s Irina Baronova and the Ballets Russes de Monte Carlo at Peter Fetterman’s Gallery in Santa Monica, California (including a cameo by Norman Lear). Don’t forget to follow #UPWeek on Twitter to keep up with the AAUP’s celebration of university presses’ blogging culture.

***

[Photographs from the Irina Baronova launch at the Peter Fetterman Gallery in Santa Monica]

To read more about Irina Baronova and the Ballets Russes de Monte Carlo, click here.

5. #UPWeek: Turabian Teacher Collaborative

Welcome to the third annual #UPWeek blog tour—we’re excited to contribute under Monday’s umbrella theme, “Collaboration,” with a post on the Turabian Teacher Collaborative. To get the ball rolling and further the mission, here’s where you can find other university presses, big and small, far and wide, posting on similarly synergetic projects today: the University Press of Colorado on veterinary immunology, the University of Georgia Press on the New Georgia Encyclopedia Project, Duke University Press on Eben Kirksey’s The Multispecies Salon, the University of California Press on Dr. Paul Farmer and Dr. Jim Yong Kim’s work on the Ebola epidemic in West Africa, the University of Virginia Press on their project Chasing Shadows (a special e-book and website devoted to Watergate-era Oval Office conversations), McGill-Queen’s University Press on the online gallery Landscape Architecture in Canada, Texas A & M University Press on a new consumer health advocacy series, Project MUSE on their history of collaboration, and Yale University Press on their Museum Quality Books series. Remember to follow #UPWeek on Twitter, and read on after the jump for the story of the Turabian Teacher Collaborative’s first two years.

***

One of the foundational principles of Kate Turabian’s classic writing guides is that research creates a community between writers and readers. Professors Joseph Williams and Gregory Colomb put the principle of a community into action when they collaborated several years ago to adapt Turabian’s guides for a new generation of student researchers. During their writing process, they circulated and reworked each other’s contributions so much that, “by the end of the process, no one could quite remember who had drafted what.”

Channeling the spirit of this “rotational” writing process, the Turabian Teacher Collaborative adds high school teachers and a university press into the mix of colleagues working to bring Turabian’s principles to a new audience. The University of Chicago Press developed this project with University of Iowa English education professors Bonnie Sunstein and Amy Shoultz, after determining that much in Turabian’s Student’s Guide to Writing College Papers aligns with the Common Core State Standards for English Language Arts. Sunstein and Shoultz suggested that the Press begin by inviting high school teachers to test the effectiveness of Turabian’s book, both at helping high schools meet the Common Core standards and at helping students become college ready.

To strategize for the project’s pilot year, participating teachers—from urban, rural, and suburban high schools in California, Illinois, Massachusetts, and Iowa—convened for a workshop at the Press in the summer of 2013. They all left equipped with a set of books and free classroom resources drawn from the book, including topic sheets and ELA Common Core–aligned lesson plans. Following the workshop, this team of teachers integrated these materials into their curricula and exchanged resources and insights on their experiences throughout the year. Later this month, several members of the Turabian Teacher Collaborative will share what they have learned with teachers from across the country at a workshop following the NCTE annual convention in Washington, DC.

And, of course, high school students are now part of the collaboration and its community of researchers, as they envision the needs of readers by engaging in peer review at every step of the writing process. As participating teacher Deb Aldrich of Kennedy High School in Cedar Rapids, Iowa, said of her students’ response to the book: “[They] acted as sounding boards, polite disagree-ers, questioners, cheerleaders, and empathizers. They would come to class and ask if we were meeting in our research groups today, which showed how much they valued participating in a real shared research conversation, not just an imaginary one in their heads. They acted and felt like academic researchers!”

The Press plans to use feedback like this to develop a teachers’ resource guide this year, as well as additional resources for research writing in future high school classrooms. As the collaborative moves into its second year, it is expanding to include high school teachers from across the disciplines who teach research and academic writing skills. Are you one of them? For more information, e-mail turabianteacher@press.uchicago.edu.

(in the spirit of #UPWeek, this post was collaboratively generated by University of Chicago Press staff members working with the TTC)

To learn more about the TTC project, click here.

Stay tuned for more from #UPWeek’s blog tour!

6. Free e-book for November: Mr. Jefferson and the Giant Moose

Lee Alan Dugatkin’s Mr. Jefferson and the Giant Moose, our free e-book for November, reconsiders the crucial supporting role played by a moose carcass in Jeffersonian democracy.

***

Thomas Jefferson—author of the Declaration of Independence, US president, and ardent naturalist—spent years countering the French naturalist Buffon’s conception of American degeneracy. His Notes on Virginia systematically and scientifically dismantled Buffon’s case through a series of tables and equally compelling writing on the nature of his home state. But the book did little to counter the arrogance of the French and hardly satisfied Jefferson’s quest to demonstrate that his young nation was every bit the equal of a well-established Europe. Enter the giant moose.

The American moose, which Jefferson claimed was so enormous a European reindeer could walk under it, became the cornerstone of his defense. Convinced that the sight of such a magnificent beast would cause Buffon to revise his claims, Jefferson had the remains of a seven-foot ungulate shipped first class from New Hampshire to Paris. Unfortunately, Buffon died before he could make any revisions to his Histoire Naturelle, but the legend of the moose makes for a fascinating tale about Jefferson’s passion to prove that American nature deserved prestige.

In Mr. Jefferson and the Giant Moose, Lee Alan Dugatkin vividly recreates the origin and evolution of the debates about natural history in America and, in so doing, returns the prize moose to its rightful place in American history.

To download your free copy, click here.

7. On the Run: Best Nonfiction of 2014

On the Run: Fugitive Life in an American City chronicles the effects the War on Drugs levied on one inner-city Philadelphia neighborhood and its largely African American population. Based on sociologist Alice Goffman’s six years of ethnographic fieldwork as a participant-observer in the community, the book considers how a cycle of presumed criminality, engendered by pervasive policing, turns the friendships, associations, and everyday lives of residents and small-time drug dealers into nodes in a network of surveillance operating 24 hours a day—and it reckons with the very human costs involved. The book was recently named to Publishers Weekly’s Best Nonfiction of 2014 list, after garnering praise from both the New Yorker and the New York Times Book Review.

You can read an excerpt from the book, “The Art of Running,” here.

To read more, click here.

8. Excerpt: Serving the Reich

“Physics Must Be Rebuilt”

from Serving the Reich: The Struggle for the Soul of Physics under Hitler by Philip Ball

***

Quantum theory, with its paradoxes and uncertainties, its mysteries and challenges to intuition, is something of a refuge for scoundrels and charlatans, as well as a fount of more serious but nonetheless fantastic speculation. Could it explain Consciousness? Does it undermine causality? Everything from homeopathy to mind control and manifestations of the paranormal has been laid at its seemingly tolerant door.

Mostly that represents a blend of wishful thinking, misconception and pseudoscience. Because quantum theory defies common sense and ‘rational’ expectation, it can easily be hijacked to justify almost any wild idea. The extracurricular uses to which quantum theory has been put tend inevitably to reflect the preoccupations of the times: in the 1970s parallels were drawn with Zen Buddhism, today alternative medicine and theories of mind are in vogue.

Nevertheless, the fact remains that fundamental aspects of quantum physics are still not fully understood, and it has genuinely profound philosophical implications. Many of these aspects were evident to the early pioneers of the field – indeed, in the transformation of scientific thought that quantum theory compelled, they were impossible to ignore. Yet while several of the theory’s persistent conundrums were identified in its early days, one can’t say that the physicists greatly distinguished themselves in their response. This is hardly surprising: neither scientists nor philosophers in the early twentieth century had any preparation for thinking in the way quantum physics demands, and if the physicists tended to retreat into vagueness, near-tautology and mysticism, the philosophers and other intellectuals often just misunderstood the science.

This penchant for pondering the deeper meanings of quantum theory was particularly evident in Germany, proud of its long tradition of philosophical enquiry into nature and reality. The British, American and Italian physicists, in contrast, tended to conform to their stereotypical national pragmatism in dealing with quantum matters. But even if they were rather more content to apply the mathematics and not wonder too hard about the ontology, these other scientists relied strongly on the Germanic nations for those theoretical formulations in the first place. Germany, more than any other country, showed how to turn the microscopic fragmentation of nature into a useful, predictive, quantitative and explanatory science. If you were a theoretical physicist in Germany, it was hard to resist the gravitational pull of quantum theory: where Planck and Einstein led, Arnold Sommerfeld, Peter Debye, Werner Heisenberg, Max Born, Erwin Schrödinger, Wolfgang Pauli and others followed.

This being so, it was inevitable that the philosophical aspects of quantum physics should have been coloured by the political and social preoccupations of Germany. As we shall see, it was not the only part of physics to become politicized. These tendencies rocked the ivory tower: the kind of science you pursued became a statement about the sort of person you were, and the sympathies you harboured.

Unpeeling the atom

The realization that light and energy were granular had profound implications for the emerging understanding of how atoms are constituted. In 1907 New Zealander Ernest Rutherford, working at Manchester University in England, found that most of the mass of an atom is concentrated in a small, dense nucleus with a positive electrical charge. He concluded that this kernel was surrounded by a cloud of electrons, the particles found in 1897 to be the constituents of cathode rays by J. J. Thomson at Cambridge. Electrons possess a negative electrical charge that collectively balances the positive charge of the nucleus. In 1911 Rutherford proposed that the atom is like a solar system in miniature, a nuclear sun orbited by planetary electrons, held there not by gravity but by electrical attraction.

But there was a problem with that picture. According to classical physics, the orbiting electrons should radiate energy as electromagnetic rays, and so would gradually relinquish their orbits and spiral into the nucleus: the atom should rapidly decay. In 1913 the 28-year-old Danish physicist Niels Bohr showed that the notion of quantization – discreteness of energy – could solve this problem of atomic stability, and at the same time account for the way atoms absorb and emit radiation. The quantum hypothesis gave Bohr permission to prohibit instability by fiat: if the electron energies can only take discrete, quantized values, he said, then this gradual leakage of energy is prevented: the particles remain orbiting indefinitely. Electrons can lose energy, but only by making a hop (‘quantum jump’) to an orbit of lower energy, shedding the difference in the form of a photon of a specific wavelength. By the same token, an electron can gain energy and jump to a higher orbit by absorbing a photon of the right wavelength. Bohr went on to postulate that each orbit can accommodate only a fixed number of electrons, so that downward jumps are impossible unless a vacancy arises.
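
In standard notation (a gloss added here, not part of Ball’s excerpt), Bohr’s rule for those quantum jumps reads:

```latex
% Bohr's frequency condition: a jump between two allowed orbits
% emits or absorbs a photon carrying exactly the energy difference.
E_\text{photon} = E_{n_i} - E_{n_f} = h\nu = \frac{hc}{\lambda}
```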

It was well established experimentally that atoms do absorb and emit radiation at particular, well-defined wavelengths. Light passing through a gas has ‘missing wavelengths’ – a series of dark, narrow bands in the spectrum. The emission spectrum of the same vapour is made up of corresponding bright bands, accounting for example for the characteristic red glow of neon and the yellow glare of sodium vapour when they are stimulated by an electrical discharge. These photons absorbed or emitted, said Bohr, have energies precisely equal to the energy difference between two electron orbits.

By assuming that the orbits are each characterized by an integer ‘quantum number’ related to their energy, Bohr could rationalize the wavelengths of the emission lines of hydrogen. This idea was developed by Arnold Sommerfeld, professor of theoretical physics at the University of Munich. He and his student Peter Debye worked out why the spectral emission lines are split by a magnetic field – an effect discovered by the Dutch physicist Pieter Zeeman in work that won him the 1902 Nobel Prize. (This Zeeman effect is the magnetic equivalent of the line-splitting by an electric field discovered by the German physicist Johannes Stark – see page 88.)
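
Worked out for hydrogen (again a standard textbook gloss rather than the excerpt’s own wording), Bohr’s integer quantum numbers give both the energy levels and the observed line wavelengths:

```latex
% Bohr energy levels of hydrogen and the resulting emission lines
% (Rydberg formula); emission requires n_i > n_f.
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad
\frac{1}{\lambda} = R_\infty\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right)
```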

But this was still a rather ad hoc picture, justified only because it seemed to work. What are the rules that govern the energy levels of electrons in atoms, and the jumps between them? In the early 1920s Max Born at the University of Göttingen set out to address those questions, assisted by his brilliant students Wolfgang Pauli, Pascual Jordan and Werner Heisenberg.

Heisenberg, another of Sommerfeld’s protégés, arrived from Munich in October 1922 to become Born’s private assistant, looking as Born put it ‘like a simple farm boy, with short fair hair, clear bright eyes, and a charming expression’. He and Born sought to apply Bohr’s empirical description of atoms in terms of quantum numbers to the case of helium, the second element in the periodic table after hydrogen. Given Bohr’s prescription for how quantum numbers dictate electron energies, one could in principle work out what the energies of the various electron orbits are, assuming that the electrons are held in place by their electrostatic attraction to the nucleus. But that works only for hydrogen, which has a single electron. With more than one electron in the frame, the mathematical elegance is destroyed by the repulsive electrostatic influence that electrons exert on each other. This is not a minor correction: the force between electrons is about as strong as that between electron and nucleus. So for any element aside from hydrogen, Bohr’s appealing model becomes too complicated to work out exactly.
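
A rough sketch of why helium resists the same treatment (standard textbook form, not from the book): with nuclear charge 2, the potential energy picks up an electron-electron repulsion term comparable in size to the electron-nucleus attractions, so the problem no longer separates into independent one-electron orbits.

```latex
% Potential energy for helium's two electrons: two attractive
% electron-nucleus terms plus one repulsive electron-electron term
% of comparable magnitude, which spoils any exact orbit-by-orbit solution.
V = -\frac{2e^2}{4\pi\varepsilon_0 r_1}
    -\frac{2e^2}{4\pi\varepsilon_0 r_2}
    +\frac{e^2}{4\pi\varepsilon_0 r_{12}}
```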

In trying to go beyond these limitations, however, Born was not content to fit experimental observations to improvised quantum hypotheses as Bohr had done. Rather, he wanted to calculate the disposition of the electrons using principles akin to those that Isaac Newton used to explain the gravitationally bound solar system. In other words, he sought the rules that governed the quantum states that Bohr had adduced.

It became clear to Born that what he began to call a ‘quantum mechanics’ could not be constructed by a minor amendment of classical, Newtonian mechanics. ‘One must probably introduce entirely new hypotheses’, Heisenberg wrote to Pauli – another former pupil of Sommerfeld in Munich, where the two had become friends – in early 1923. Born agreed, writing that summer that ‘not only new assumptions in the usual sense of physical hypotheses will be necessary, but the entire system of concepts of physics must be rebuilt from the ground up’.

That was a call for revolution, and the ‘new concepts’ that emerged over the next four years amounted to nothing less. Heisenberg began formulating quantum mechanics by writing the energies of the quantum states of an atom as a matrix, a kind of mathematical grid. One could specify, for example, a matrix for the positions of the electrons, and another for their momenta (mass times velocity). Heisenberg’s version of quantum theory, devised with Born and Jordan in 1925, became known as matrix mechanics.

It wasn’t the only way to set out the problem. From early 1926 the Austrian physicist Erwin Schrödinger, working at the University of Zurich, began to explicate a different form of quantum mechanics based not on matrices but on waves. Schrödinger postulated that all the fundamental properties of a quantum particle such as an electron, or a collection of such particles, can be expressed as an equation describing a wave, called a wavefunction. The obvious question was: a wave of what? The wave itself is a purely mathematical entity, incorporating ‘imaginary numbers’ derived from the square root of -1 (denoted i), which, as the name implies, cannot correspond to any observable quantity. But if one calculates the square of a wavefunction – that is, if one multiplies this mathematical entity by itself (more strictly, by its complex conjugate, a wavefunction identical except that the imaginary parts have opposite signs: +i and -i) – then the imaginary numbers go away and only real ones remain, which means that the result may correspond to something concrete that can be measured in the real world. At first Schrödinger thought that the square of the wavefunction produces a mathematical expression describing how the density of the corresponding particle varies from one place to another, rather as the density of air varies through space in a sound wave. That was already weird enough: it meant that quantum particles could be regarded as smeared-out waves, filling space like a gas. But Born – who, to Heisenberg’s dismay, was enthusiastic about Schrödinger’s rival ‘wave mechanics’ – argued that the squared wavefunction denoted something even odder: the probability of finding the particle at each location in space.
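
Born’s reading has a compact standard statement (added here for reference, not Ball’s wording): squaring the wavefunction means multiplying it by its complex conjugate, and the result is a probability density.

```latex
% Born rule: the squared magnitude of the wavefunction gives the
% probability density for finding the particle at position x.
|\psi(x)|^2 = \psi^*(x)\,\psi(x)
```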

Think about that for a moment. Schrödinger was asserting that the wavefunction says all that can be said about a quantum system. And apparently, all that can be said is not where the particle is, but what the chance is of finding it here or there. This is not a question of incomplete knowledge – of knowing that a friend might be at the cinema or the restaurant, but not knowing which. In that case she is one place or another, and you are forced to talk of probabilities just because you lack sufficient information. Schrödinger’s wave-based quantum mechanics is different: it insists that there is no answer to the question beyond the probabilities. To ask where the particle really is has no physical meaning. At least, it doesn’t until you look – but that act of looking doesn’t then disclose what was previously hidden, it determines what was previously undecided.

Whereas Heisenberg’s matrix mechanics was a way of formalizing the quantum jumps that Bohr had introduced, Schrödinger’s wave mechanics seemed to do away with them entirely. The wavefunction made everything smooth and continuous again. At least, it seemed to. But wasn’t that just a piece of legerdemain? When an electron jumps from one atomic orbit to another, the initial and the final states are both described by wavefunctions. But how did one wavefunction change into the other? The theory didn’t specify that – you had to put it in by hand. And you still do: there remains no consensus about how to build quantum jumps into quantum theory. All the same, Schrödinger’s description has prevailed over Heisenberg’s – not because it is more correct, but because it is more convenient and useful. What’s more, Heisenberg’s quantum matrices were abstract, giving scant purchase to an intuitive understanding, while Schrödinger’s wave mechanics offered more sustenance to the imagination.

The probabilistic view of quantum mechanics is famously what disconcerted Einstein. His scepticism eventually isolated him from the evolution of quantum theory and left him unable to contribute further to it. He remained convinced that there was some deeper reality below the probabilities that would rescue the precise certainties of classical physics, restoring a time and a place for everything. This is how it has always been for quantum theory: those who make great, audacious advances prove unable to reconcile them to the still more audacious notions of the next generation. It seems that one’s ability to ‘suppose’ – ‘understanding’ quantum theory is largely a matter of reconciling ourselves to its counter-intuitive claims – is all too easily exhausted by the demands that the theory makes.

Schrödinger wasn’t alone in accepting and even advocating indeterminacy in the quantum realm. Heisenberg’s matrix mechanics seemed to insist on a very strange thing. If you multiply together the matrices describing the position and the momentum of a particle, you get a different result depending on which matrix you put first in the arithmetic. In the classical world the order of multiplication of two quantities is irrelevant: two times three is the same as three times two, and an object’s momentum is the same expressed as mass times velocity or velocity times mass. For some pairs of quantum properties, such as position and momentum, that was evidently no longer the case.
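
As an illustration of the order-dependence described above (a toy example of mine, not taken from the excerpt), even small matrices can give different answers depending on which is written first; in the full theory the mismatch between the position and momentum matrices is fixed by Planck’s constant, through the standard relation \(\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar\).

\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA .
\]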

This might seem an inconsequential quirk. But Heisenberg discovered that it had the most bizarre corollary, as foreshadowed in the portentous title of the paper he published in March 1927: ‘On the perceptual content of quantum-theoretical kinematics and mechanics’. Here he showed that the theory insisted on the impossibility of knowing at any instant the precise position and momentum of a quantum particle. As he put it, ‘The more precisely we determine the position, the more imprecise is the determination of momentum in this instant, and vice versa.’

This is Heisenberg’s uncertainty principle. He sought to offer an intuitive rationalization of it, explaining that one cannot make a measurement on a tiny particle such as an electron without disturbing it in some way. If it were possible to see the particle in a microscope (in fact it is far too small), that would involve bouncing light off it. The more accurately you wish to locate its position, the shorter the wavelength of light you need (crudely speaking, the finer the divisions of the ‘ruler’ need to be). But as the wavelength of photons gets shorter, their energy increases – that’s what Planck had said. And as the energy goes up, the more the particle recoils from the impact of a photon, and so the more you disturb its momentum.
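
A rough version of that trade-off can be written out using two standard relations the excerpt alludes to but does not state (this is a sketch, not Heisenberg’s own derivation): a photon of wavelength \(\lambda\) carries momentum \(h/\lambda\), so pinning a position down to within about \(\lambda\) means accepting a momentum kick of roughly \(h/\lambda\), and the product of the two blurs cannot fall much below Planck’s constant.

\[
\Delta x \sim \lambda, \qquad \Delta p \sim \frac{h}{\lambda}
\quad\Longrightarrow\quad
\Delta x\,\Delta p \sim h
\qquad \left(\text{the modern statement is } \Delta x\,\Delta p \ge \tfrac{\hbar}{2}\right).
\]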

This thought experiment is of some value for grasping the spirit of the uncertainty principle. But it has fostered the misconception that the uncertainty is a result of the limitations of experimentation: you can’t look without disturbing. The uncertainty is, however, more fundamental than that: again, it’s not that we can’t get at the information, but that this information does not exist. Heisenberg’s uncertainty principle has also become popularly interpreted as imputing fuzziness and imprecision to quantum mechanics. But that’s not quite right either. Rather, it places very precise limits on what we can know. Those limits, it transpires, are determined by Planck’s constant, which is so small that the uncertainty becomes significant only at the tiny scale of subatomic particles.

Political science

Both Schrödinger’s wavefunction and Heisenberg’s uncertainty principle seemed to be insisting on aspects of quantum theory that verged on the metaphysical. For one thing, they placed bounds on what is knowable. This appeared to throw causality itself – the bedrock of science – into question. Within the blurred margins of quantum phenomena, how can we know what is cause and what is effect? An electron could turn up here, or it could instead be there, with no apparent causal principle motivating those alternatives.

Moreover, the observer now intrudes ineluctably into the previously objective, mechanistic realm of physics. Science purports to pronounce on how the world works. But if the very act of observing it changes the outcome – for example, because it transforms the wavefunction from a probability distribution of situations into one particular situation, commonly called ‘collapsing’ the wavefunction – then how can one claim to speak about an objective world that exists before we look?

Today it is generally thought that quantum theory offers no obvious reason to doubt causality, at least at the level at which we can study the world, although the precise role of the observer is still being debated. But for the pioneers of quantum theory these questions were profoundly disturbing. Quantum theory worked as a mathematical description, but without any consensus about its interpretation, which seemed to be merely a matter of taste. Many physicists were content with the prescription devised between 1925 and 1927 by Bohr and Heisenberg, who visited the Dane in Copenhagen. Known now as the Copenhagen interpretation, this view of quantum physics demanded that centuries of classical preconceptions be abandoned in favour of a capitulation to the maths. At its most fundamental level, the physical world was unknowable and in some sense indeterminate. The only reality worthy of the description is what we can access experimentally – and that is all that quantum theory prescribes. To look for any deeper description of the world is meaningless. To Einstein and some others, this seemed to be surrendering to ignorance. Beneath the formal and united appearance of the Solvay group in 1927 lies a morass of contradictory and seemingly irreconcilable views.

These debates were not limited to the physicists. If even they did not fully understand quantum theory, how much scope there was then for confusion, distortion and misappropriation as they disseminated these ideas to the wider world. Much of the blame for this must be laid at the door of the scientists themselves, including Bohr and Heisenberg, who threw caution to the wind when generalizing the narrow meaning of the Copenhagen interpretation in their public pronouncements. For Bohr, a crucial part of this picture was the notion of complementarity, which holds that two apparently contradictory descriptions of a quantum system can both be valid under different observational circumstances. Thus a quantum entity, be it an insubstantial photon or an electron graced with mass, can behave at one time as a particle, at another as a wave. Bohr’s notion of complementarity is scarcely a scientific theory at all, but rather, another characteristic expression of the Copenhagen affirmation that ‘this is just how things are’: it is not that there is some deeper behaviour that sometimes looks ‘wave-like’ and sometimes ‘particle-like’, but rather, this duality is an intrinsic aspect of nature. However one feels about Bohr’s postulate, there was little justification for his enthusiastic extension of the complementarity principle to biology, law, ethics and religion. Such claims made quantum physics a political matter.

The same is true for Heisenberg’s insistence that, via the uncertainty principle, ‘the meaninglessness of the causal law is definitely proved’. He tried to persuade philosophers to come to terms with this abolition of determinism and causality, as though this had moreover been established not as an (apparent) corollary of quantum theory but as a general law of nature.

This quasi-mystical perspective on quantum theory that the physicists appeared to encourage was attuned to a growing rejection, during the Weimar era, of what were viewed as the maladies of materialism: commercialism, avarice and the encroachment of technology. Science in general, and physics in particular, were apt to suffer from association with these supposedly degenerate values, making them inferior in the eyes of many intellectuals to the noble aspirations of art and ‘higher culture’. While it would be too much to say that an emphasis on the metaphysical aspects of quantum mechanics was cultivated in order to rescue physics from such accusations, that desideratum was not overlooked. Historian Paul Forman has argued that the quantum physicists explicitly accommodated their interpretations to the prevailing social ethos of the age, in which ‘the concept – or the mere word – “causality” symbolized all that was odious in the scientific enterprise’. In his 1918 book Der Untergang des Abendlandes (The Decline of the West), the German philosopher and historian Oswald Spengler more or less equated causality with physics, while making it a concept deserving of scorn and standing in opposition to life itself. Spengler saw in modern physicists’ doubts about causality a symptom of what he regarded as the moribund nature of science itself. Here he was thinking not of quantum theory, which was barely beginning to reach the public consciousness at the end of the First World War, but of the probabilistic microscopic theory of matter developed by the Scottish physicist James Clerk Maxwell and the Austrian Ludwig Boltzmann, which had already renounced claims to a precise, deterministic picture of atomic motions.

Spengler’s book was read and discussed throughout the German academic world. Einstein and Born knew it, as did many of the other leading physicists, and Forman believes that it fed the impulse to realign modern physics with the spirit of the age, leading theoretical physicists and applied mathematicians to ‘denigrat[e] the capacity of their discipline to attain true, or even valuable, knowledge’. They began to speak of science as an essentially spiritual enterprise, unconnected to the demands and depredations of technology but, as Wilhelm Wien put it, arising ‘solely from an inner need of the human spirit’. Even Einstein, who deplored the rejection of causality that he saw in many of his colleagues, emphasized the roles of feeling and intuition in science.

In this way the physicists were attempting to reclaim some of the prestige that science had lost to the neo-Romantic spirit of the times. Causality was a casualty. Only once we have ‘liberation from the rooted prejudice of absolute causality’, said Schrödinger in 1922, would the puzzles of atomic physics be conquered. Bohr even spoke of quantum theory having an ‘inherent irrationality’. And as Forman points out, many physicists seemed to accept these notions not with reluctance or pain but with relief and with the expectation that they would be welcomed by the public. He does not see in all this simply an attempt to ingratiate physics with a potentially hostile audience, but rather, an unconscious adaptation to the prevailing culture, made in good faith. When Einstein expressed his reservations about the trend in a 1932 interview with the Irish writer James Gardner Murphy, Murphy responded that even scientists surely ‘cannot escape the influence of the milieu in which they live’. And that milieu was anti-causal.

Equally, the fact that both quantum theory and relativity were seen to be provoking crises in physics was consistent with the widespread sense that crises pervaded Weimar culture – economically, politically, intellectually and spiritually. ‘The idea of such a crisis of culture’, said the French politician Pierre Viénot in 1931, ‘belongs today to the solid stock of the common habit of thought in Germany. It is a part of the German mentality.’ The applied mathematician Richard von Mises spoke of ‘the present crisis in mechanics’ in 1921; another mathematician, Hermann Weyl (one of the first scientists openly to question causality) claimed there was a ‘crisis in the foundations of mathematics’, and even Einstein wrote for a popular audience on ‘the present crisis in theoretical physics’ in 1922. (Experimental physicist Johannes Stark’s 1921 book The Present Crisis in German Physics used the same trope but spoke to a very different perception: that his kind of physics was being eclipsed by an abstract, degenerate form of theoretical physics – see page 91.) One has the impression that these crises were not causing much dismay, but rather, reassured physicists that they were in the same tumultuous flow as the rest of society.

This was, however, a dangerous game. Some outsiders drew the conclusion that quantum mechanics pronounced on free will, and it was only a matter of time before the new physics was being enlisted for political ends. Some even managed to claim that it vindicated the policies of the National Socialists.

Moreover, if physics was being in some sense shaped to propitiate Spenglerism, it risked seeming to endorse also Spengler’s central thesis of relativism: that not only art and literature but also science and mathematics are shaped by the culture in which they arise and are invalid and indeed all but incomprehensible outside that culture. It is tempting to find here a presentiment of the ‘Aryan physics’ propagated by Nazi sympathizers in the 1930s (see Chapter 6), which contrasted healthy Germanic science with decadent, self-serving Jewish science. And given Spengler’s nationalism, rejection of Weimar liberalism, support for authoritarianism and belief in historical destiny, it is no surprise that he was initially lauded by the Nazis, especially Joseph Goebbels, nor that he voted for Hitler in 1932. (Spengler was too much of an intellectual for his advocacy to survive close contact. After meeting Hitler in 1933, he distanced himself from the Nazis’ vulgar posturing and anti-Semitism, and was no favourite of the Reich by the time he died in Munich in 1936.)

One way or another, then, by the 1920s physics was becoming freighted with political implications. Without intending it, the physicists themselves had encouraged this. But they hadn’t grasped – were perhaps unable to grasp – what it would soon imply.

To read more about Serving the Reich, click here.

9. Excerpt: Versions of Academic Freedom

9780226064314

***

“Academic Freedom Studies: The Five Schools”

In 2009 Terrence Karran published an essay with the title “Academic Freedom: In Justification of a Universal Ideal.” Although it may not seem so at first glance, the title is tendentious, for it answers in advance the question most often posed in the literature: How does one justify academic freedom? One justifies academic freedom, we are told before Karran’s analysis even begins, by claiming for it the status of a universal ideal.

The advantage of this claim is that it disposes of one of the most frequently voiced objections to academic freedom: Why should members of a particular profession be granted latitudes and exemptions not enjoyed by other citizens? Why, for example, should college and university professors be free to criticize their superiors when employees in other workplaces might face discipline or dismissal? Why should college and university professors be free to determine and design the condition of their workplace (the classroom) while others must adhere to a blueprint laid down by a supervisor? Why should college and university professors be free to choose the direction of their research while researchers who work for industry and government must go down the paths mandated by their employers? We must ask, says Frederick Schauer (2006), “whether academics should, by virtue of their academic employment and/or profession, have rights (or privileges, to be more accurate) not possessed by others” (913).

The architects of the doctrine of academic freedom were not unaware of these questions, and, in anticipation of others raising them, raised them themselves. Academic freedom, wrote Arthur O. Lovejoy (1930), might seem “peculiar chiefly in that the teacher is . . . a salaried employee and that the freedom claimed for him implies a denial of the right of those who provide or administer the funds from which he is paid to control the content of his teaching” ( 384). But this denial of the employer’s control of the employee’s behavior is peculiar only if one assumes, first, that college and university teaching is a job like any other and, second, that the college or university teacher works for a dean or a provost or a board of trustees. Those assumptions are directly challenged and rejected by the American Association of University Professors’ 1915 Declaration of Principles on Academic Freedom and Academic Tenure, a founding document (of which Lovejoy was a principal author) and one that is, in many respects, still authoritative. Here is a key sentence:

The responsibility of the university teacher is primarily to the public itself, and to the judgment of his own profession; and while, with respect to certain external conditions of his vocation, he accepts a responsibility to the authorities of the institution in which he serves, in the essentials of his professional activity his duty is to the wider public to which the institution itself is morally amenable.

There are four actors and four centers of interest in this sentence: the public, the institution of the academy, the individual faculty member, and the individual college or university. The faculty member’s allegiance is first to the public, an abstract entity that is not limited to a particular location. The faculty member’s secondary allegiance is to the judgment of his own profession, but since, as the text observes, the profession’s responsibility is to the public, it amounts to the same thing. Last in line is the actual college or university to which the faculty member is tied by the slightest of ligatures. He must honor the “external conditions of his vocation”—conditions like showing up in class and assigning grades, and holding office hours and teaching to the syllabus and course catalog (although, as we shall see, those conditions are not always considered binding)— but since it is a “vocation” to which the faculty member is responsible, he will always have his eye on what is really essential, the “universal ideal” that underwrites and justifies his labors.

Here in 1915 are the seeds of everything that will flower in the twenty-first century. The key is the distinction between a job and a vocation. A job is defined by an agreement (often contractual) between a worker and a boss: you will do X and I will pay you Y; and if you fail to perform as stipulated, I will discipline or even dismiss you. Those called to a vocation are not merely workers; they are professionals; that is, they profess something larger than the task immediately at hand— a religious faith, a commitment to the rule of law, a dedication to healing, a zeal for truth— and in order to become credentialed professors, as opposed to being amateurs, they must undergo a rigorous and lengthy period of training. Being a professional is less a matter of specific performance (although specific performances are required) than of a continual, indeed lifelong, responsiveness to an ideal or a spirit. And given that a spirit, by definition, cannot be circumscribed, it will always be possible (and even thought mandatory and laudable) to expand the area over which it is said to preside.

The history of academic freedom is in part the history of that expansion as academic freedom is declared to be indistinguishable from, and necessary for, the flourishing of every positive value known to humankind. Here are just a few quotations from Karran’s essay:

Academic freedom is important to everyone’s well-being, as well as being particularly pertinent to academics and their students. (The Robbins Committee on Higher Education in the UK, 1963)

Academic freedom is but a facet of freedom in the larger society. (R. M. O. Pritchard, “Academic Freedom and Autonomy in the United Kingdom and Germany,” 1998)

A democratic society is hardly conceivable . . . without academic freedom. (S. Bergan, “Institutional Autonomy: Between Myth and Responsibility,” 2002)

In a society that has a high regard for knowledge and universal values, the scope of academic freedom is wide. (Wan Manan, “Academic Freedom: Ethical Implications and Civic Responsibilities,” 2000)

The sacred trust of the universities is to carry the torch of freedom. (J. W. Boyer, “Academic Freedom and the Modern University: The Experience of the University of Chicago,” 2002)

Notice that in this last statement, freedom is not qualified by the adjective academic. Indeed, you can take it as a rule that the larger the claims for academic freedom, the less the limiting force of the adjective academic will be felt. In the taxonomy I offer in this book, the movement from the most conservative to the most radical view of academic freedom will be marked by the transfer of emphasis from academic, which names a local and specific habitation of the asserted freedom, to freedom, which does not limit the scope or location of what is being asserted at all.

Of course, freedom is itself a contested concept and has many possible meanings. Graeme C. Moodie sorts some of them out and defines the freedom academics might reasonably enjoy in terms more modest than those suggested by the authors cited in Karran’s essay. Moodie (1996) notes that freedom is often understood as the “absence of constraint,” but that, he argues, would be too broad an understanding if it were applied to the activities of academics. Instead he would limit academic freedom to faculty members who are “exercising academic functions in a truly academic matter” (134). Academic freedom, in his account, follows from the nature of academic work; it is not a personal right of those who choose to do that work. That freedom— he calls it an “activity freedom” because it flows from the nature of the job and not from some moral abstraction— “can of course only be exercised by persons, but its justification, and thus its extent, must clearly and explicitly be rooted in its relationship to academic activities rather than (or only consequentially) to the persons who perform them” (133). In short, he concludes, “the special freedom(s) of academics is/are conditional on the fulfillment of their academic obligations” (134).

Unlike those who speak of a universal ideal and of the torch of freedom being carried everywhere, Moodie is focused on the adjective academic. He begins with it and reasons from it to the boundaries of the freedom academics can legitimately be granted. To be sure, the matter is not so cut and dried, for academic must itself be defined so that those boundaries can come clearly into view and that is no easy matter. No one doubts that classroom teaching and research and scholarly publishing are activities where the freedom in question is to be accorded, at least to some extent. But what about the freedom to criticize one’s superiors; or the freedom to configure a course in ways not standard in the department; or the freedom to have a voice in the building of parking garages, or in the funding of athletic programs, or in the decision to erect a student center, or in the selection of a president, or in the awarding of honorary degrees, or in the inviting of outside speakers? Is academic freedom violated when faculty members have minimal input into, or are shut out entirely from, the consideration of these and other matters?

To that question, Mark Yudof, who has been a law school dean and a university president, answers a firm “no.” Yudof (1988) acknowledges that “there are many elements necessary to sustain the university,” including “salaries,” “library collections,” a “comfortable workplace,” and even “a parking space” (1356), but do academics have a right to these things or a right to participate in discussions about them (a question apart from the question of whether it is wise for an administration to bring them in)? Only, says Yudof, if you believe “that any restrictions, however indirectly linked to teaching and scholarship, will destroy the quest for knowledge” (1355). And that, he observes, would amount to “a kind of unbridled libertarianism for academicians,” who could say anything they liked in a university setting without fear of reprisal or discipline (1356).

Better, Yudof concludes, to define academic freedom narrowly, if only so those who are called upon to defend it can offer a targeted, and not wholly diffuse, rationale. Academic freedom, he declares, “is what it is” (of course that’s the question; what is it?), and it is “not general liberty, pleasant working conditions, equality, self-realization, or happiness,” for “if academic freedom is thought to include all that is desirable for academicians, it may come to mean quite little to policy makers and courts” (1356). Moodie (1996) gives an even more pointed warning: “Scholars only invite ridicule, or being ignored, when they seem to suggest that every issue that directly affects them is a proper sphere for academic rule” (146). (We shall revisit this issue when we consider the relationship between academic freedom, shared governance, and public employee law.)

So we now have as a working hypothesis an opposition between two views of academic freedom. In one, freedom is a general, overriding, and ever-expanding value, and the academy is just one of the places that house it. In the other, the freedom in question is peculiar to the academic profession and limited to the performance of its core duties. When performing those duties, the instructor is, at least relatively, free. When engaged in other activities, even those that take place within university precincts, no such freedom or special latitude obtains. This modest notion of academic freedom is strongly articulated by J. Peter Byrne (1989): “The term ‘academic freedom’ should be reserved for those rights necessary for the preservation of the unique functions of the university” (262).

These opposed accounts of academic freedom do not exhaust the possibilities; there are extremes to either side of them, and in the pages that follow I shall present the full range of the positions currently available. In effect I am announcing the inauguration of a new field— Academic Freedom Studies. The field is still in a fluid state; new variants and new theories continue to appear. But for the time being we can identify five schools of academic freedom, plotted on a continuum that goes from right to left. The continuum is obviously a political one, but the politics are the politics of the academy. Any correlation of the points on the continuum with real world politics is imperfect, but, as we shall see, there is some. I should acknowledge at the outset that I shall present these schools as more distinct than they are in practice; individual academics can be members of more than one of them. The taxonomy I shall offer is intended as a device of clarification. The inevitable blurring of the lines comes later.

As an aid to the project of sorting out the five schools, here is a list of questions that would receive different answers depending on which version of academic freedom is in place:

Is academic freedom a constitutional right?
What is the relationship between academic freedom and the First Amendment?
What is the relationship between academic freedom and democracy?
Does academic freedom, whatever its scope, attach to the individual faculty member or to the institution?
Do students have academic freedom rights?
What is the relationship between academic freedom and the form of governance at a college or university?
In what sense, if any, are academics special?
Does academic freedom include the right of a professor to criticize his or her organizational superiors with impunity?
Does academic freedom allow a professor to rehearse his or her political views in the classroom?
What is the relationship between academic freedom and political freedom?
What views of education underlie the various positions on academic freedom?

As a further aid, it would be good to have in mind some examples of incidents or controversies in which academic freedom has been thought to be at stake.

In 2011, the faculty of John Jay College nominated playwright Tony Kushner to be the recipient of an honorary degree from the City University of New York. Normally approval of the nomination would have been pro forma, but this time the CUNY Board of Trustees tabled, and thus effectively killed, the motion supporting Kushner’s candidacy because a single trustee objected to his views on Israel. After a few days of outrage and bad publicity the board met again and changed its mind. Was the board’s initial action a violation of academic freedom, and if so, whose freedom was being violated? Or was the incident just one more instance of garden-variety political jockeying, a tempest in a teapot devoid of larger implications?

In the same year Professor John Michael Bailey of Northwestern University permitted a couple to perform a live sex act at an optional session of his course on human sexuality. The male of the couple brought his naked female partner to orgasm with the help of a device known as a “fucksaw.” Should Bailey have been reprimanded and perhaps disciplined for allowing lewd behavior in his classroom or should the display be regarded as a legitimate pedagogical choice and therefore protected by the doctrine of academic freedom?

In 2009 sociology professor William Robinson of the University of California at Santa Barbara, after listening to a tape of a Martin Luther King speech protesting the Vietnam War, sent an e-mail to the students in his sociology of globalization course that began:

If Martin Luther King were alive on this day of January 19th, there is no doubt that he would be condemning the Israeli aggression against Gaza along with U.S. military and political support for Israeli war crimes, or that he would be standing shoulder to shoulder with the Palestinians.

The e-mail went on to compare the Israeli actions against Gaza to the Nazi actions against the Warsaw ghetto, and to characterize Israel as “a state founded on the negation of a people.” Was Robinson’s e-mail an intrusion of his political views into the classroom or was it a contribution to the subject matter of his course and therefore protected under the doctrine of academic freedom?

As the 2008 election approached, an official communication from the administration of the University of Illinois listed as prohibited political activities the wearing of T-shirts or buttons supporting candidates or parties. Were faculty members being denied their First Amendment and academic freedom rights?

BB&T, a bank holding company, funds instruction in ethics on the condition that the courses it supports include as a required reading Ayn Rand’s Atlas Shrugged (certainly a book concerned with issues of ethics). If a university accepts this arrangement (as Florida State University did), has it traded its academic freedom for cash or is it (as the dean at Florida State insisted) merely accepting help in a time of financial exigency?

In 1996, the state of Virginia passed a law forbidding state employees from accessing pornographic materials on state-owned computers. The statute included a waiver for those who could convince a supervisor that the viewing of pornographic material was part of a bona fide research project. Was the academic freedom of faculty members in the state university system violated because they were prevented from determining for themselves and without government monitoring the course of their research?

Just as my questions would be answered differently by proponents of different accounts of academic freedom, so would these cases be assessed differently depending on which school of academic freedom a commentator belongs to.

Of course I have yet to name the schools, and I will do that now.

(1)— The “It’s just a job” school. This school (which may have only one member and you’re reading him now) rests on a deflationary view of higher education. Rather than being a vocation or holy calling, higher education is a service that offers knowledge and skills to students who wish to receive them. Those who work in higher education are trained to impart that knowledge, demonstrate those skills and engage in research that adds to the body of what is known. They are not exercising First Amendment rights or forming citizens or inculcating moral values or training soldiers to fight for social justice. Their obligations and aspirations are defined by the distinctive task— the advancement of knowledge— they are trained and paid to perform, defined, that is, by contract and by the course catalog rather than by a vision of democracy or world peace. College and university teachers are professionals, and as such the activities they legitimately perform are professional activities, activities in which they have a professional competence. When engaged in those activities, they should be accorded the latitude— call it freedom if you like— necessary to their proper performance. That latitude does not include the performance of other tasks, no matter how worthy they might be. According to this school, academics are not free in any special sense to do anything but their jobs.

(2)— The “For the common good” school. This school has its origin in the AAUP Declaration of Principles (1915), and it shares some arguments with the “It’s just a job” school, especially the argument that the academic task is distinctive. Other tasks may be responsible to market or political forces or to public opinion, but the task of advancing knowledge involves following the evidence wherever it leads, and therefore “the first condition of progress is complete and unlimited freedom to pursue inquiry and publish its results.” The standards an academic must honor are the standards of the academic profession; the freedom he enjoys depends on adherence to those standards: “The liberty of the scholar . . . to set forth his conclusions . . . is conditioned by their being conclusions gained by a scholar’s method and held in a scholar’s spirit.” That liberty cannot be “used as a shelter . . . for uncritical and intemperate partisanship,” and a teacher should not inundate students with his “own opinions.”

With respect to pronouncements like these, the “For the common good” school and the “It’s just a job” school seem perfectly aligned. Both paint a picture of a self-enclosed professional activity, a transaction between teachers, students, and a set of intellectual questions with no reference to larger moral, political, or societal considerations. But the opening to larger considerations is provided, at least potentially, by a claimed connection between academic freedom and democracy. Democracy, say the authors of the 1915 Declaration, requires “experts . . . to advise both legislators and administrators,” and it is the universities that will supply them and thus render a “service to the right solution of . . . social problems.” Democracy’s virtues, the authors of the Declaration explain, are also the source of its dangers, for by repudiating despotism and political tyranny, democracy risks legitimizing “the tyranny of public opinion.” The academy rides to the rescue by working “to help make public opinion more self-critical and more circumspect, to check the more hasty and unconsidered impulses of popular feeling, to train the democracy.” By thus offering an external justification for an independent academy— it protects us from our worst instincts and furthers the realization of democratic principles— the “For the common good” school moves away from the severe professionalism of the “It’s just a job” school and toward an argument in which professional values are subordinated to the higher values of democracy or justice or freedom; that is, to the common good.

(3)— The “Academic exceptionalism or uncommon beings” school. This school is a logical extension of the “For the common good” school. If academics are charged not merely with the task of adding to our knowledge of natural and cultural phenomena, but with the task of providing a counterweight to the force of common popular opinion, they must themselves be uncommon, not only intellectually but morally; they must be, in the words of the 1915 Declaration, “men of high gift and character.” Such men (and now women) not only correct the errors of popular opinion, they escape popular judgment and are not to be held accountable to the same laws and restrictions that constrain ordinary citizens.

The essence of this position is displayed by the plaintiff’s argument in Urofsky v. Gilmore (2000), a Fourth Circuit case revolving around Virginia’s law forbidding state employees from accessing explicitly sexual material on state-owned computers without the permission of a supervisor. The phrase that drives the legal reasoning in the case is “matter of public concern.” In a series of decisions the Supreme Court had ruled that if public employees speak out on a matter of public concern, their First Amendment rights come into play and might outweigh the government’s interest in efficiency and organizational discipline. (A balancing test is triggered.) If, however, the speech is internal to the operations of the administrative unit, no such protection is available. The Urofsky court determined that the ability of employees to access pornography was not a matter of public concern. The plaintiffs, professors in the state university system, then detached themselves from the umbrella category of “public employees” and claimed a special status. They argued that “even if the Act is valid as to the majority of state employees, it violates the . . . academic freedom rights of professors . . . and thus is invalid as to them.” In short, we’re exceptional.

(4)— The “Academic freedom as critique” school. If academics have the special capacity to see through the conventional public wisdom and expose its contradictions, exercising that capacity is, when it comes down to it, the academic’s real job; critique— of everything— is the continuing obligation. While the “It’s just a job” school and the “For the common good” school insist that the freedom academics enjoy is exercised within the norms of the profession, those who identify academic freedom with critique (because they identify education with critique) object that this view reifies and naturalizes professional norms which are themselves the products of history, and as such are, or should be, challengeable and revisable. One should not rest complacently in the norms and standards presupposed by the current academy ’s practices; one should instead interrogate those norms and make them the objects of critical scrutiny rather than the baseline parameters within which critical scrutiny is performed.

Academic freedom is understood by this school as a protection for dissent and the scope of dissent must extend to the very distinctions and boundaries the academy presently enforces. As Judith Butler (2006a) puts it, “as long as voices of dissent are only admissible if they conform to accepted professional norms, then dissent itself is limited so that it cannot take aim at those norms that are already accepted” (114). One of those norms enforces a separation between academic and political urgencies, but, Butler contends, they are not so easily distinguishable and the boundaries between them blur and change. Fixing boundaries that are permeable, she complains, has the effect of freezing the status quo and of allowing distinctions originally rooted in politics to present themselves as apolitical and natural. The result can be “a form of political liberalism that is coupled with a profoundly conservative intellectual resistance to . . . innovation” (127). From the perspective of critique, established norms are always conservative and suspect and academic freedom exists so that they can be exposed for what they are. Academic freedom, in short, is an engine of social progress and is thought to be the particular property of the left on the reasoning (which I do not affirm but report) that conservative thought is anti-progressive and protective of the status quo. It’s only a small step, really no step at all, from academic freedom as critique to the fifth school of thought.

(5)— The “Academic freedom as revolution” school. With the emergence of this school the shift from academic as a limiting adjective to freedom as an overriding concern is complete and the political agenda implicit in the “For the common good” school and the “Academic freedom as critique” schools is made explicit. If Butler wants us to ask where the norms governing academic practices come from, the members of this school know: they come from the corrupt motives of agents who are embedded in the corrupt institutions that serve and reflect the corrupt values of a corrupt neoliberal society. (Got that?) The view of education that lies behind and informs this most expansive version of academic freedom is articulated by Henry Giroux (2008). The “responsibilities that come along with teaching,” he says, include fighting for

an inclusive and radical democracy by recognizing that education in the broadest sense is not just about understanding, . . . but also about providing the conditions for assuming the responsibilities we have as citizens to expose human misery and to eliminate the conditions that produce it. (128)

In this statement the line between the teacher as a professional and the teacher as a citizen disappears. Education “in the broadest sense” demands positive political action on the part of those engaged in it. Adhering to a narrow view of one’s responsibilities in the classroom amounts to a betrayal both of one’s political being and one’s pedagogical being. Academic freedom, declares Grant Farred (2008–2009), “has to be conceived as a form of political solidarity”; and he doesn’t mean solidarity with banks, corporations, pharmaceutical firms, oil companies or, for that matter, universities (355). When university obligations clash with the imperative of doing social justice, social justice always trumps. The standard views of academic freedom, members of this school complain, sequester academics in an intellectual ghetto where, like trained monkeys, they perform obedient and sterile routines. It follows, then, that one can only be true to the academy by breaking free of its constraints.

The poster boy for the “Academic freedom as revolution” school is Denis Rancourt, a physics professor at the University of Ottawa (now removed from his position) who practices what he calls “academic squatting”— turning a course with an advertised subject matter and syllabus into a workshop for revolutionary activity. Rancourt (2007) explains that one cannot adhere to the customary practices of the academy without becoming complicit with the ideology that informs them: “Academic squatting is needed because universities are dictatorships, devoid of real democracy, run by self-appointed executives who serve private capital interests.”

To read more about Versions of Academic Freedom, click here.

10. Excerpt: Packaged Pleasures

9780226121277
by Gary S. Cross and Robert N. Proctor

***

“The Carrot and the Candy Bar”

Our topic is a revolution—as significant as anything that has tossed the world over the past two hundred years. Toward the end of the nineteenth century, a host of often ignored technologies transformed human sensual experience, changing how we eat, drink, see, hear, and feel in ways we still benefit (and suffer) from today. Modern people learned how to capture and intensify sensuality, to preserve it, and to make it portable, durable, and accessible across great reaches of social class and physical space. Our vulnerability to such a transformation traces back hundreds of thousands of years, but the revolution itself did not take place until the end of the nineteenth century, following a series of technological changes altering our ability to compress, distribute, and commercialize a vast range of pleasures.

Strangely, historians have neglected this transformation. Indeed, behind this astonishing lapse lies a common myth—that there was an age of production that somehow gave rise to an age of consumption, with historians of the former exploring industrial technology, while historians of the latter stress the social and symbolic meaning of goods. This artificial division obscures how technologies of production have transformed what and how we actually consume. Technology does far more than just increase productivity or transform work, as historians of the Industrial Revolution so often emphasize. Industrial technology has also shaped how and how much we eat, what we wear and why, and how and what (and how much!) we hear and see. And myriad other aspects of how we experience daily life—or even how we long for escape from it.

Bound to such transformations is a profound disruption in modern life, a breakdown of the age-old tension between our bodily desires and the scarcity of opportunities for fulfillment. New technologies— from the rolling of cigarettes to the recording of sound—have intensified the gratification of desires but also rendered them far more easily satisfied, often to the point of grotesque excess. An obvious example is the mechanized packaging of highly sugared foods, which began over a century ago and has led to a health and moral crisis today. Lots of media attention has focused on the irresponsibility of the food industry and the rise of recreational and workplace sedentism—but there are other ways to look at this.

It should be obvious that technology has transformed how people eat, especially with regard to the ease and speed with which it is now possible to ingest calories. Roots of such transformations go very deep: the Neolithic revolution ten-plus thousand years ago brought with it new methods of regularizing the growing of food and the world’s first possibility of elite obesity. The packaged pleasure revolution in the nineteenth century, however, made such excess possible for much larger numbers of “consumers”—a word only rarely used prior to that time. Industrial food processors learned how to pack fat, sugar, and salt into concentrated and attractive portions, and to manufacture these cheaply and in packages that could be widely distributed. Foods that were once luxuries thus became seductively commonplace. This is the first thing we need to understand.

We also need to appreciate that responsibility for the excesses of today’s consumers cannot be laid entirely at the doors of modern technology and the corporations that benefit from it. We cannot blame the food industry alone. No one is forced to eat at McDonald’s; people choose Big Macs with fries because they satisfy with convenience and affordability, just as people decide to turn on their iPods rather than listen to nature or go to a concert. But why would we make such a choice—and is it entirely a “free choice”? This brings us to a second crucial point: humans have evolved to seek high-energy foods because in prehistoric conditions of scarcity, eating such foods greatly improved their ancestors’ chances of survival. This has limited, but not entirely eliminated, our capacity to resist these foods when they no longer are scarce. And if we today crave sugar and fat and salt, that is partly because these longings must have once promoted survival, deep in the pre-Paleolithic and Paleolithic. Our taste buds respond gleefully to sugars because we are descended from herbivores and especially frugivores for whom sweet-tasting plants and fruits were neuro-marked as edible and nutritious. Poisonous plants were more often bitter-tasting. Pleasure at least in this sensory sense was often a clue to what might help one survive.

But here again is the rub. Thanks to modern industrialism, high-calorie foods once rare are now cheap and plentiful. Industrial technology has overwhelmed and undercut whatever balance may have existed between the biological needs of humans and natural scarcity. We tend to crave those foods that before modern times were rare; cravings for fat and sugar were no threat to health; indeed, they improved our chances of survival. Now, however, sugar, especially in its refined forms, is plentiful, and as a result makes us fat and otherwise unhealthy. And what is true for sugar is also true for animal fat. In our prehistoric past fat was scarce and valuable, accounting for only 2 to 4 percent of the flesh of deer, rabbits, and birds, and early humans correctly gorged whenever it was available. Today, though, factory-farmed beef can consist of 36 percent fat, and most of us expend practically no energy obtaining it. And still we gorge.

And so the candy bar, a perfect example of the engineered pleasure, wins out over the carrot and even the apple. More sugar and seemingly more varied flavors are packed into the confection than the unprocessed fruit or vegetable. In this sense our craving for a Snickers bar is partly an expression of the chimp in us, insofar as we desire energy-packed foods with maximal sugars and fat. The concentration, the packaging, and the ease of access (including affordability) all make it possible—indeed enticingly easy—to ingest far more than we know is good for us. Our biological desires have become imperfect guides for good behavior: drives born in a world of scarcity do not necessarily lead to health and happiness in a world of plenty.

But food is not the only domain where such tensions operate. Indeed, a broader historical optic reveals tensions in our response to the packaged provisioning of other sensations, and this broader perspective invites us to go beyond our current focus on food, as important as that may be.

As biological creatures we are naturally attracted to certain sights and sounds, even smells and motion, insofar as we have evolved in environments where such sensitivities helped our ancestors prevail over myriad threats to human existence. The body’s perceptual organs are, in a sense, some of our oldest tools, and much of the pleasure we take in bright colors, combinations of particular shapes, and certain kinds of movement must be rooted in prehistoric needs to identify food, threats, or mates from a distance. Today we embrace the recreational counterparts, filling our domestic spaces with visual ornaments, fixed or in motion, reminding ourselves of landscapes, colors, or shapes that provoke recall or simulate absent or even impossible worlds.

What has changed, in other words, is our access to once-rare sensations, including sounds but especially imagery. The decorated caves of southern France, once rare and ritualized space, are now tourist attractions, accessible to all through electronic media. Changes in visual technology have made possible a virtual orgy of visual culture; a 2012 count estimated over 348,000,000,000 images on the Internet, with a growth rate of about 10,000 per second. The mix and matrix of information transfer has changed accordingly: orality (and aurality) has been demoted to a certain extent, first with the rise of typography (printing) and then the published picture, and now the ubiquitous electronic image on screens of different sorts. “Seeing is believing” is an expression dating only from about 1800, signaling the surging primacy of the visual. Civilization itself celebrates the light, the visual sense, as the darkness of the night and the narrow street gradually give way to illuminated interiors, light after dark, and ever broader visual surveillance.

Humans also have preferences for certain smells, of course, even if we are (far) less discriminating than most other mammals. Technologies of odor have never been developed as intensively as those of other senses, though we should not forget that for tens of thousands of years hunters have employed dogs—one of the oldest human “tools”—to do their smelling. Smell has also sometimes marked differences between tribes and classes, rationalizing the isolation of slaves or some other subject group. The wealthy are known to have defined themselves by their scents (the ancient Greeks used mint and thyme oils for this purpose), and fragrances have been used to ward off contagions. Some philosophers believed that the scent of incense could reach and please the gods; and of course the devil smelled foul—as did sin.

Still, the olfactory sense lost much of its acuity in upright primates, and it is the rare philosopher who would base an epistemology on odor. Philosophers have always privileged sight over all other senses—which makes sense given how much of our brain is devoted to processing visual images (canine epistemology and agnotology would surely be quite different). Optico-centricity was further accentuated with the rise of novel ways of extending vision in the seventeenth century (microscopes, telescopes) and still more with the rise of photography and moving pictures. Industrial societies have continued to devalue scent, with some even trying to make the world smell-free. Pasteur’s discovery of germs meant that foul air (think miasma) lost its role in carrying disease, but efforts to remove the germs that caused such odors (especially the sewage systems installed in cities in the nineteenth century) ended up mollifying much of the stink of large urban centers. Bodily perfuming has probably been around for as long as humans have been human, but much of recent history has involved a process of deodorizing, further reducing the value of the sensitive nose.

Modern people may well gorge on sight, but we certainly remain sound-sensitive and long for music, “the perfume of hearing” in the apt metaphor of Diane Ackerman. Music has always aroused a certain spiritual consciousness and may even have facilitated social bonding among early humans. Stringed and drum instruments date back only to about 5,500 years ago (in Mesopotamia), but unambiguous flutes date back to at least 40,000 years ago; the oldest known so far is made from vulture and swan bones found in southern Germany. Singing, though, must be far older than whatever physical evidence we have for prehistoric music.

There is arguably a certain industrial utility to music, insofar as “moving and singing together made collective tasks far more efficient” (so claims historian William McNeill). As a mnemonic aid, a song “hooks onto your subconscious and won’t let go.” Music carries emotion and preserves and transports feelings when passed from one person or generation to another—think of the “Star Spangled Banner” or “La Marseillaise.” And music also marks social differences in stratified societies. In Europe by the eighteenth century, for example, people of rank had abandoned participation in the sounds and music of traditional communal festivals and spectacles. To distinguish themselves from the masses, the rich and powerful came to favor the orderly stylized sounds of chamber music—and even demanded that audiences keep silent during performances. One of the signal trends of this particular modernity is the withdrawal of elites from public festivals, creating space instead for their own exclusive music and dance to eliminate the unruly/unmanaged sounds of the street and work. Music helps forge social bonds, but it can also work to separate and to isolate, facilitating escape from community (think earbuds).

We humans also of course crave motion and bodily contact, flexing our muscles in the manner of our ancestors exulting in the chase. And even if we no longer chase mammoth herds with spears, we recreate elements of this excitement in our many sports, testing strength against strength or speed against speed, forcing projectiles of one sort or another into some kind of target. Dance is an equally ancient expression of this thrill of movement, with records of ritual motion appearing already on cave and rock walls of early humans. The emotion-charged dance may be diminished in elite civilized life, but it clearly reappears in the physicality of amusement park throngs at the end of the nineteenth century, and more recently in the rhythmic motions of crowds at sporting events and rock concert moshing where strangers slam and grind into each other.

Sensual pleasure is thus central to the “thick tapestry of rewards” of human evolutionary adaptation, rewards wired into the complex circuitry of the brain’s pleasure centers. Pursuit of pleasure (and avoidance of pain) was certainly not an evil in our distant past; indeed, it must have had obvious advantages in promoting evolutionary fitness. Along with other adaptive emotions (fear, surprise, and disgust, for example), pleasure and its pursuit must also have helped create capacities to bond socially—and perhaps even to use and to understand language. The joy that motivates babies to delight in rhythmic and consonant sounds, bright colors, friendly faces, and bouncing motion helps build brain connections essential for motor and cognitive maturity.

Of course the biological propensity to gorge cannot be new; that much we know from the relative constancy of the human genetic constitution over many millennia. We also know that efforts to augment or intensify sensual pleasure long predate industrial civilization. This should come as no surprise, given that, as already noted, our longings for rare delights of taste, sight, smell, sound, and motion are rooted in our prehistoric past. Humans—like wolves—have been bred to binge. But in the past, at least, nature’s parsimony meant that gorging was generally rare and its impact on our bodies, psyches, and sociability limited.

This leads us again to a critical point: pleasure is born in its paucity and scarcity sustains it. And scarcity has been a fact of life for most of human history; in fact, it is very often a precondition for pleasure. Too much of any good can lead to boredom—that is as true for music or arcade games as for ice cream or opera. Most pleasures seem to require a context of relative scarcity. Amongst our prehistoric ancestors this was naturally enforced through the rarity of honey and the all-too-infrequent opportunity for the chase. Humans eventually developed the ability, however, to create and store surpluses of pleasure-giving goods, first by cooking and preserving foods and drinks and eventually by transforming even fleeting sensory experiences into reproducible and transmissible packets of pleasure. Think about candy bars, soda pop, and cigarettes, but also photography, phonography, and motion pictures—all of which emerged during the packaged pleasure revolution.

Of course, in certain respects the defeat of scarcity has a much older history, having to do with techniques of containerization. Prior to the Neolithic, circa ten thousand years ago, humans had little in the way of either technical means or social organization to store any kind of sensual surplus (though meats may have been stashed the way some nonhuman predators do). Farming and its associated technics changed this. After hundreds of thousands of years of scavenging and predation, people in this new era began to grow their own food—and then to save and preserve it in containers, especially in pots made from clay but also in bags made from skins or fibers from plants. Agriculture seems to have led to the world’s first conspicuous inequalities in wealth, but also the first routine encounters with obesity and other sins of the flesh (drunkenness, for example). Of course the rich—the rulers and priests of ancient city-states and empires or the lords and abbots of religious centers in the Middle Ages—were able to satisfy sensual longings more often, and in some cases continually.

While Christianity was in part a reaction to this sensual indulgence, being originally a religion of the excluded slave and the appalled rich, medieval aristocrats returned to the ancient love of sweet and sour dishes, favoring roasted game (a throwback to the preagricultural era) and the absurd notion that torturing animals before killing them made for the tastiest meats. Medieval European nobility mixed sex, smell, and taste in their large midday meals and frequent evening banquets. Christian church fathers banned perfumes and roses as Roman decadence, but treatments of this sort—along with passions for pungent flavors and scents—were revived with the Crusades and intimate contact with the Orient.

Until recently, pursuit of pleasure on such an opulent scale was confined to those tiny minorities with regular access to the resources to contain and intensify nature. Since antiquity, in fact, the powerful have often been snobbish killjoys, trying to restrict what the poor were allowed to eat, wear, and enjoy. Sometimes this made economic (if invidious) sense—as when England’s Edward III rationed the diet of servants during shortages that followed the Black Death. In the sixteenth century, French law prohibited the eating of fish and meat at the same meal in hopes of preserving scarce supplies. And given the low output of agriculture, there was a certain logic underlying the rationing of access to “luxuries.” But the powerful sometimes seem to have relished denying pleasure to others. How else do we explain sumptuary laws that prohibited the commoner from wearing colorful and costly clothing reserved for aristocrats?

Access to pleasure has long been an expression of privilege and power, but much can be made with little, and rarely has pleasurable display been totally suppressed in any culture. Think of the ceremonies surrounding seasonal festivals, especially the gathering of harvest surplus, when humans drenched themselves in the senses that seemed almost to ache for expression. Think of the Bacchanalia of the Greeks, the Saturnalia of the Romans, the Mardi Gras of medieval Europeans, or the orgies of feasting, dancing, music, and colorful costumes of any society whose everyday world of scarcity is forgotten in bingeing after harvest. Agriculture produced cycles of carnival and Lent, “a self-adjusting gastric equilibrium,” in the words of one historian.

Of course there are many examples of ancient philosophers and sages seeking to limit the hedonism of the privileged (and the festival culture of the poor). Certainly there are ancients who embraced the virtues of moderation, as in Aristotle’s “golden mean” or Confucian ideals of restrained desire. Hebrew prophets, Puritans, Jesuits, and countless Asian ascetics likewise attempted to rein in the fêtes of the senses. Medieval authorities in Europe forbade the eating of meat on Wednesdays, Fridays, and numerous fast days that added up to more than 150 days a year. The classical ideal of moderation was revived, and the moral superiority of grain-based foods was defended. Gluttony was condemned along with lust. Pleasure was to be regulated even in the afterlife, insofar as the Christian heaven was not for pleasure but for self-improvement. These and other ascetic moralities arguably helped people cope with uncertain supplies, putting a brake also on the rapacious greed of the rich and powerful. Curbing of excess extended to all manner of “pleasures of the flesh,” including those that, like sex, were not necessarily even scarce.

Dance came under suspicion in this regard, especially in its ecstatic form. European explorers frowned on the gesticulations of “possessed natives” whom they encountered in Africa and the Americas in the sixteenth and seventeenth centuries. At the same time, European elites smothered social dancing in the towns and villages of their own societies. The reasons were many. Clergy demanded that their holy days and rituals be protected from defilement by the boisterous and even sacrilegious customs of the frolicking crowd; the rich also chose to withdraw from—and then suppressed—the emotional intensity of common people’s celebrations, retiring instead to the confines of their private gatherings and sedate dances. The military also needed a new type of soldier and new ways of preparing men for war: the demand was no longer to fire up the emotions of soldiers to prepare them for hand-to-hand combat; the new need was to drill and discipline troops to march unflinching into musket and cannon fire, with individual fighters acting as precision components in a machine. The regular rhythms of the military march served this purpose better than the ecstatic dance.

Even when people found ways of intensifying sensation (as in the distillation of alcoholic spirits), state and church authorities were often able to enforce limits, sometimes by harsh means. In London in the 1720s, authorities repressed the widespread and addictive use of gin (a juniper-flavored liquor). At the beginnings of the Industrial Revolution, just as unleashing desire was becoming respectable, philosophers such as Adam Smith and David Hume still mused about the need for personal restraint and moral sympathies.

By this time, and increasingly over the course of the nineteenth century, especially between about 1880 and 1910, these traditional calls for moderation and self-control were starting to face a new kind of challenge, thanks to new techniques of containerization and intensification that would culminate in the packaged pleasure revolution. New kinds of machines brought new sensations to ordinary people, producing goods that for the first time could be made quite cheap and easily storable and portable. Canned food defeated the seasons, extending the availability of fruits and vegetables to the entirety of the year. Candy bars purchased at any newsstand or convenience store replaced the rare encounter with the honeycomb or wild strawberry. And while our more immediate predecessors may have enjoyed a pipe of tobacco or a draft of warm beer, the deadly convenience of the cigarette and the refreshing coolness of the chilled beverage came within the grasp of the masses only toward the end of the nineteenth century. And this revolution in the range and intensity of sensation radically upset the traditional relationship between desire and scarcity.

A similar process occurred with other sensory delights. While early nineteenth-century Americans and Europeans thrilled at the sight of painted dioramas and magic lantern shows, nothing compared to the spectacle of fast-paced police chases in the one-reel movies viewable after 1900. Opera was a privileged treat of the few in lavish public places, but imagine the revolution wrought by the 1904 hard wax cylinder phonograph, when Caruso could be called upon to sing in the family parlor whenever (and however often) one wanted. Daredevils in Vanuatu dove from high places holding vines long before bungee jumping became a fad; even so, there was nothing like the mass-market calibrated delivery of physical thrills before the roller coaster, popularized in the 1890s. We find something similar even with binge partying: while peoples had long celebrated surpluses in festivals, they typically did so only on those rare days designated by the authorities. By the end of the nineteenth century, however, festive pleasures of a more programmed sort had become widely available on demand in the modern commercial amusement park.

Especially important is how the packaged pleasure intensified (certain aspects of) human sensory experience. An extreme example is when opium, formerly chewed, smoked, or drunk as tea, was transformed through distillation into morphine and eventually heroin—and then injected directly into the bloodstream with the newly invented syringe in the 1850s. The creation of a wide variety of “tubes” like the syringe for delivering chemically purified, intense sensation was characteristic of much of this new technology—which we shall describe in terms of “tubularization.” The cigarette is another fateful example: tobacco smoking was made cheap, convenient, and “mild” (i.e., deadly) with the advent of James Bonsack’s automated cigarette rolling machine (in the 1880s) and new methods of curing tobacco. Bonsack’s machine lowered the cost of manufacturing by an order of magnitude, and new methods of chemical processing (such as flue curing) allowed a milder, less alkaline smoke to be drawn deep into the lungs. A new mass-market consumer “good” was born, accompanied by mass addiction and mass death from maladies of the heart and lungs.

The “tubing” of tobacco into cigarettes was closely related to techniques used in packing and packaging many other commercial products. Think of mechanized canning—culminating in the double-seamed cylinder of the “sanitary” can-making machinery of 1904—and mechanized bottle and cap making from the late 1890s. New forms of sugar consumption appeared with the invention of soda fountain drinks. Coca-Cola was first served in drug stores in 1886 and in bottles by the end of the century, and in the 1890s the mixing of sugar with bitter chocolate led to candy bars, such as Hershey’s in 1900. Packaged pleasures of this sort—offered in conveniently portable portions with carefully calibrated constituents—allowed manufacturers to claim to have surpassed the sensuous joys of paradise. Chemists also began to be hired to see what new kinds of foods and drugs could be synthesized to surpass the taste, smell, and look of anything nature had created. A new discipline of “marketing” came of age about this time—the word was coined in 1884—with the task of creating demand for this riot of new products, decked out increasingly in colorful and striking labels with eye- and ear-catching slogans.

New technologies also sped up our consumption of visual, auditory, and motion sensoria. In 1839 the Daguerreotype revolutionized the familiar curiosity of the camera obscura—a dark room featuring a pinhole that would project an image of the outside world onto an interior wall—by chemically capturing that image on a metal plate in a miniaturized “camera” (meaning literally “room”). While these early photographs required long periods of exposure to fix an image, that time dramatically declined over the course of the century, allowing by 1888 the amateur snapshot camera and only three years later the motion picture camera. The effect, as we shall see, was a sea change in how we view and recollect the world. Sound was also captured (and preserved and sold) about this same time. The phonograph, invented in 1877 by Thomas Edison, became a new way of experiencing sound when improved and domesticated. And Emile Berliner’s “record” of 1887 made possible the mass production of sound on stamped-out discs, capturing a concert or a speech in a two- or three-minute record available to anyone, anywhere, with the appropriate gear.

Access and speed took another sensual twist when a Midwesterner by the name of La Marcus Thompson introduced the first mechanized roller coaster, in 1884. Bodily sensations that might have signaled danger or even death on a real train were packed into a two- or three-minute adventure trip on a rail “gravity ride.” Adding another dimension to the thrill was Thompson’s scenic railroad (in 1886) with its artificial tunnels and painted images of exotic natural or fantasy scenes. This was a new form of concentrated pleasure, distilling sights and sounds that formerly would have required days of “regular travel.” Rides, in combination with an array of novel multisensory spectacles, were concentrated into dedicated “amusement parks,” offering a kind of packaged recreational experience, accessible (very often) via the new trolley cars of the 1890s. Some of the earliest and most famous were those built at Coney Island on the southernmost tip of Brooklyn, New York.

Innovations of this sort led us into new worlds of sensory access, speed, and intensity. Distance and season were no longer restraints, as canned and bottled goods moved by rail, ship, and eventually truck across vast stretches of space and climate—with mixed outcomes for human health and well-being.

Some of these new technologies nourished and improved our bodies with cheaper, more hygienic, and varied food and drink; others offered more convenient and effective medicines and toiletries. Still others provided unprecedented opportunities to enjoy the beauty of nature (or at least its image), along with music and new kinds of “visual arts.” Amusement rides gave us (relatively) harm-free ways of experiencing the ecstatic and the exhilaration of danger—plus a kind of simulated or virtual travel; photography froze the evanescent sight, preserving images on a scale never previously possible, and with near-perfect fidelity. Yet packaged pleasures also led to new health and moral threats.

In the most extreme form, concentrating intoxicants led to addictions—physical dependencies that often required ever-increasing dosages to maintain a constant effect, and substantial physical discomfort accompanying withdrawal. Here of course the syringe injection of distilled opiates is the paradigmatic example, and addiction to tobacco and alcoholic drinks must also be included. But the impact of concentrated high-energy foods is not entirely different. Fat- or sugar-rich foods produce not just energy but very often endorphins, morphine-like painkillers that offer comfort and calm. That is one reason they are called “comfort” foods. These rich foods cause neurotransmitters in the brain to go out of balance, resulting in cravings. By contrast, the natural physical pleasures of exercise are much less addicting because we get tired; and some “excess”—here pain is gain—can actually make us healthier.

Not all packaged pleasure dependencies were so obviously chemical. Engineered pleasures often create astonishment and delight when first introduced, for example, but can also raise expectations and dull sensibilities for “unpackaged” stimuli, be they nature’s wonders or unaided convivial and social delights. The pleasures of recorded sound, the captured image, and even the amusement park ride and electronic game often satisfy with a kind of ratcheting effect, rendering the visual, auditory, and motion pleasures in uncommodified nature and society boring. In this sense, the packaging of pleasure can turn the once rare into an everyday, even numbing, occurrence. The world beyond the package becomes less thrilling, less desirable. In the wake of the telephoto lens and artful editing of film—with all the “boring bits” taken out—nature itself can appear dull or impoverished. Why go to the waterfall or forest if you can experience these in compressed form at your local zoo or theme park? Or on IMAX or your widescreen, high-def TV? Packaged pleasures of this sort may not induce physical dependencies, but they can create inflated expectations or even degrade other, less distilled or concentrated, kinds of experiences.

Another point we shall be making is that packaged pleasures have often de-socialized pleasure taking. Many create neurological responses similar to those of religious ecstasies, physical exercise, and social or even sexual intercourse, and can end up substituting for, or displacing, such enjoyments. Weak wine and mild natural hallucinogens have long enhanced spiritual and social experience, but the modern packaged pleasure often has the effect of privatizing satisfaction, isolating it from the crowd. Think of the privatization of public space through portable mp3 players, or the isolating effect of television.

The key point to appreciate is that we today live in a vastly different world from that of peoples living prior to the packaged pleasure revolution, when a broad range of sensual pleasures came to be bottled, canned, condensed, distilled, and otherwise intensified. The impact of this revolution has not been uniform, and we acknowledge and stress these differences, but it does seem to have transformed our sensory universe in ways we are only beginning to understand.

The packaged pleasures we shall be considering in this book include cigarettes, candy and soda pop, phonograph records, photographs, movies, amusement park spectacles, and a few other odds and ends.

But of course not all commodities that are tubed, packed, portable, or preserved can be considered packaged pleasures. For our purposes, we can identify several key and interrelated elements:

  1. The packaged pleasure is an engineered commodity that contains, concentrates, preserves, and very often intensifies some form of sensual satisfaction.
  2. It is generally speaking inexpensive, easy to access (readily at hand), and very often portable and storable, often in a domestic setting.
  3. It is typically wrapped and labeled and thus often marketed by branding. Though usually portable, it can also, as in the case of the amusement park, be enclosed and branded in a contained and fixed space.
  4. The packaged pleasure is often produced by companies with broad regional if not national or even global reach, creating a recognizable bond between the individual consumer and the corporate producer.

Of course we are well aware that many other consumer products exhibit one or more of these attributes—clothes, cars, books, packaged cereals, cocaine, pornography, and department stores just to name a few. Our focus will be on those packaged pleasures that signal key features of the early part of this transformation, and notably those that involve the elements of containment, compression, intensification, mobilization, and commodification. And we recognize that we will not offer an encyclopedic survey of pleasures that have been intensified and packaged—we won’t be treating the history of pornography or perfume, for example, and will consider narcotics and alcoholic beverages only briefly.

We should also be clear that the packaged pleasure revolution is ongoing and in many ways has strengthened over time, as pleasure engineers find ever more sophisticated ways of intensifying desire. And we’ll consider this history at least briefly. Since funneled fun has a tendency to bore us over time, pleasure engineers have repeatedly raised the bar on sensory intensity. Nuts and nougat were added to the simple chocolate bar, and cigarette makers added flavorants and chemicals to enhance or optimize nicotine delivery. The visual panel in motion pictures has been made more alluring with increasingly rapid cuts, and recorded sound has seen a dramatic expansion in both fidelity and acoustical range. Roller coasters went ever higher and faster while also becoming ever safer. Pornography is delivered with ever-greater convenience and is now basically free to anyone with an Internet connection. Even opera fans can now hear (and see) their favorite arias with a simple click on YouTube—at no cost and without leaving home (or sitting through those “boring bits”). Entertainment without the “fiber,” one could say.

Another outcome of the packaged pleasure revolution, then, is the progressive refinement—really reengineering—of sensory experience in the century or so since its beginnings. Optimization of satisfactions has become a big part of this, as one might expect from the fact that packaged pleasures are very often commodities produced by corporations with research and marketing departments. Menthol was added to cigarettes in the 1930s, with the idea of turning tobacco back into a kind of medicine. Ammonia and levulinic acid and candied flavors of various sorts were later added to augment the nicotine “kick,” but also to appeal to younger tastes. Flavor chemists meanwhile learned to manipulate the jolt of “soft drinks” by refining dosings of caffeine and sugar, while candy makers developed nuanced “flavor profiles”—surpassing traditional hard candy, for example, with the sensory complex of a Snickers.

Optimization and calibration we also find in other parts of this revolution. The intense thrill of a loop-de-loop ride, debuted first at Coney Island in the 1890s, gave way to the more varied sensuality of “themed” rides. Roller coasters have been designed to go to the edge of exhilaration, stopping just short of the point of nausea or injury. The same principle works with gambling, where even losers keep playing because of the carefully calibrated conditioning that comes with the periodic (and precisely calculated) win built into the game. Pleasure engineers have learned how to create video games that are easy enough to engage newcomers, but complex enough to sustain the interest of experienced players. Gaming engineers even seek to encourage (or require) physical movement and social interactions—think Wii games—to counter critics cautioning against the bodily and social negatives of overly virtualized lives.

Our focus is on the origins of the technologies involved in such transformations, though we are also aware that such novelties have always encountered critics, those who worry that an oversated consuming public would lose control and abandon work and family responsibilities. But the reality in terms of social impact often has been quite different. Few of these optimized pleasures have ever undermined the willingness of consumers to work and obey, and they have done little to undermine nerves and sensibilities (as some have feared). Indeed they have often contributed to a new work ethic driven by new needs and imperatives to earn and toil ever more in order to be able to afford the delights of movies, candy, soda, cigarettes, and the rest of the show. Over time, and often a surprisingly short time, these commodified delights have become a kind of second sensory nature—customary and accepted ways of eating, inhaling, seeing and hearing, and feeling.

Scholars have long debated the impact of “modern consumer culture,” albeit too often in negative terms without considering the historical origins of the phenomena in question. In the 1890s, the French sociologist Émile Durkheim feared that the “masses” would be enervated, even immobilized, by technical modernity’s overwhelming assault on the senses. And Aldous Huxley in his Brave New World (1932) warned of a coming culture of commoditized hedonism oblivious to tyranny. Jeremiahs of this sort have singled out different culprits, with blame most often placed on the “weaknesses” of the masses or the manipulation of merchandisers, with the hope expressed that the virtuous few in their celebration of nature and simplicity would constitute a bulwark against immediate gratification and degrading consumerism. These critics have been opposed by apologists for “democratic access” to the choice and comforts of modern consumer society—who champion the idea that only killjoy elitists could find fault in the delights of pleasure engineering. This perspective dominates a broad swath of social science—especially from neoclassical economists (think of George Stigler and Gary Becker’s famous dictum on the nondisputability of taste).

We argue instead that we need to abandon the overgeneralization common to both jeremiahs and free-market populists. Of course it is true that the very notion of a “packaged pleasure revolution” suggests certain links between the cigarette, bottled soda, phonograph records, cameras, movies, and even amusement parks. But the impact of these various inventions over the decades has been very different, and cannot be subsumed under some procrustean notion of “modern consumer culture.” Rather, as we shall see, their distinct histories suggest very different effects on our bodies and our cultures that would seem to require very different personal and policy responses. Our view is that the sale of cigarettes (as presently designed) should be heavily regulated and ultimately banned, for example, while soda should probably only be shamed and (heavily) taxed. And we make no policy recommendations for film or sound “packages.” But we certainly need to better understand how these technologies have shaped and refined (distorted?) our sensibilities.

We should also keep in mind that there are global consequences to the packaged pleasure revolution—and that most of these lie in the future. This is unfinished business. Overconsumption is part of the problem, as is the undermining of world health (notably from processed sugar and cigarettes). The revolution is ongoing, as the engineered world of compressed sensibility spreads to ever-different parts of the globe, and ever-different parts of human anatomy and sociability. It may be hard to opt out of or to escape from this brave new world, but the conditions under which it arose are certainly worth understanding and confronting.

This book takes on a lot. Our hope is to move us beyond the classic debate between the jeremiahs against consumerism and the defenders of a democratic access to commercial delights. We root mass consumption in a sensory revolution facilitated by techniques that upset the ancient balance between desire and scarcity. We take a fresh look at how technology has transformed our nature.

To read more about Packaged Pleasures, click here.

11. The Professional: Donald E. Westlake


 

Deadspin columnist/Yankees fan/out-of-print litterateur Alex Belth recently sat down over email with Levi Stahl, University of Chicago Press promotions director and editor of The Getaway Car: A Donald Westlake Nonfiction Miscellany. Their resulting conversation, published today at Deadspin, along with an excerpt from the book, includes the history of their engagement with the Parker novels, Jimmy the Kid’s amazing cover design, culling through Westlake’s archive, an obscure British comedy show, and the perils of professional envy vs. professional admiration. You can read the interview in full here, and have a look at a clip after the jump below.

***

Q: In a letter, Westlake described the difference between an author and a writer. A writer was a hack, a professional. There’s something appealing and unpretentious about this but does it take on a romance of its own? I’m not saying he was being a phony but do you think that difference between a writer and an author is that great?

LS: I suspect that it’s not, and that to some extent even Westlake himself would have disagreed with his younger self by the end of his life. I think the key distinction for him, before which all others pale, was what your goal was: Were you sitting down every day to make a living with your pen? Or were you, as he put it ironically in a letter to a friend who was creating an MFA program, “enhanc[ing] your leisure hours by refining the uniqueness of your storytelling talents”? If the former, you’re a writer, full stop. If the latter, then you probably have different goals from Westlake and his fellow hacks.

But does a true hack veer off course regularly to try something new? Does a hack limit himself to only writing about his meal ticket (John Dortmunder) every three books, max, in order not to burn him out? Does a hack, as Westlake put it in a late letter to his friend and former agent Henry Morrison, “follow what interests [him],” to the likely detriment of his career? Westlake was always a commercial writer, but at the same time, he never let commerce define him. Craft defined him, and while craft can be employed in the service of something a writer doesn’t care about at all, it is much easier to call up and deploy effectively if the work it’s being applied to has also engaged something deeper in the writer. You don’t write a hundred books with almost no lousy sentences if you’re truly a hack.

Read more about The Getaway Car here.

 

 

12. Literature in translation


In the wake of the controversy (or welcome interest, depending on your position) surrounding Patrick Modiano’s recent Nobel Prize in Literature, the AAUP circulated the hashtag #litintranslation to promote books published by university presses that work to overcome the dearth of literature in translation to which American letters has long acquiesced. In fact, Yale University Press already had plans to publish Modiano’s Suspended Sentences: Three Novellas this fall, as part of their Margellos World Republic of Letters series. A quick review of the tweets circulating under #litintranslation reveals an equally robust list of works brought into the English language by the university press community, including several by the University of Chicago Press. With that in mind, and on the heels of the Frankfurt Book Fair, we’re debuting our sales catalog Translations from Chicago, where among hundreds of storied works spanning the disciplines, you can find:

The Selected Letters of Charles Baudelaire: The Conquest of Solitude, ed. and trans. by Rosemary Lloyd

Vegetables: A Biography by Évelyne Bloch-Dano, trans. by Teresa Lavender Fagan

One Must Also Be Hungarian by Adam Biro, trans. by Catherine Tihanyi

Sketch for a Self-Analysis by Pierre Bourdieu, trans. by Richard Nice

The Beast and the Sovereign, Vols. I and II by Jacques Derrida, trans. by Geoffrey Bennington

The Voice Imitator by Thomas Bernhard, trans. by Kenneth J. Northcott

Youth without Youth by Mircea Eliade, trans. by Mac Linscott Ricketts, with a Foreword by Francis Ford Coppola

To see the complete catalog in PDF form, click here.

 

13. An excerpt from Lee Siegel’s Trance Migrations


From Trance Migrations: Stories of India, Tales of Hypnosis by Lee Siegel

The Child’s Story
And now, if you dare, LOOK into the hypnotic eye! You cannot look away! You cannot look away! You cannot look away!

THE GREAT DESMOND IN THE HYPNOTIC EYE (1960)

I was eight years old when my mother was hypnotized by a sinister Hindu yogi. Yes, she was entranced by him, entirely under his control, and made to do things she would never have done in her normal waking state. My father wasn’t there to protect her and there was nothing I, a mere child, could do about it. I vividly remember his turban and flowing robes, his strange voice, gliding gait, and those eerie eyes that widened to capture her mind. I heard his suggestive whispers—“Sleep Memsaab, sleep”—and saw his hand moving over her face in circular hypnotic passes. “Sleep, Memsaab.”

It’s true. I heard it with my own ears and saw it with my own eyes as I watched “The Unknown Terror,” an episode of the series Ramar of the Jungle, on television one evening in 1953. Playing the part of a teak plantation owner in India, my mother, the actress Noreen Nash, was vulnerable to the suggestions of the Hindu hypnotist they called Catrack. “When the dawn comes,” he instructed her, “You will take the rifle and go to the camp of the white Ramar. You will aim at his heart and fire.”

I watched as my mother, wearing a pith helmet, bush jacket, and jodhpur pants, rose from her cot, loaded her rifle, and then trudged in a somnambulistic trance, wooden and emotionless, through the jungle to Ramar’s tent. Since my mother, as far as I knew her at home, had no experience with firearms, I was not surprised that she missed her target. She dropped the rifle and disappeared back into the jungle.

Later on in the show, once again hypnotically entranced, she was led by Catrack to the edge of a cliff where the yogi declared, “We are in great danger, Memsaab. The only way to escape is to jump off this cliff.” Just as my mother was about to leap to her death, Ramar arrived on the scene and fired his rifle into the air. The loud bang of the gunshot awakened her in the nick of time and caused Catrack to flee. Thanks to Ramar, my mother survived her adventures in India.

The seeds of my curiosity about hypnotism and an indelible association of it with an exotic, at once alluring and foreboding, India were sown in front of a television. At about the same time I saw my mother hypnotized and made to do terrible things by a yogi, I watched another nefarious Hindu hypnotist, Swami Talpar, played by Boris Karloff in Abbott and Costello Meet the Killer, try to take control of the feeble mind of Lou Costello. Both India and hypnosis were dangerous.

But then another old movie, Chandu the Magician, assured me that just as Indian hypnotism could be used for evil, so too it was a power that could be employed to overcome wickedness and serve the good of mankind. The film opened somewhere in India at night with a full moon casting eerie shadows on an ancient heathen temple as the American adventurer Frank Chandler bowed down before a dark-skinned, long-bearded Hindu priest in a white dhoti and matching turban. The Hindu swami addressed his acolyte in a deep echoic voice:

“In the years that thou hast dwelt among us, thou hast conquered the Atma of the spirit and, as one of the sacred company of the Yogi, thou hast been given the name Chandu. Thou hast attained thy reward by being endowed with the ancient Oriental magical power that the doctors of thy race call hypnotism. Thou shalt look into the eyes of men and they shall be as straw in thy hand. Thou shalt cause them to see what is not there even unto a gathering of twelve by twelve. To few, indeed, of thy race have the secrets of the Yogi been revealed. The world needs thee now. Go forth in strength and conquer the evil that threatens mankind.”

That India was the home of hypnotism was further confirmed by listening to my mother read Kipling to me at bedtime. We had moved on from The Jungle Book, read to me when I was about the same age as Mowgli, to Kim. And I imagined the hero of that story and I were the same age, as well. “Kim flung himself wholeheartedly upon the next turn of the wheel,” my mother began. “He would be a Sahib again for a while. . . .” and soon I’d yawn, blink, blink, and yawn again, feel the heaviness of my eyelids, heavier and heavier, more and more relaxed. I’d roll over, eyes closing, and soon be able to imagine that her voice might be Kim’s: “I think that Lurgan Sahib wishes to make me afraid,” she’d say he said. “And I am sure that that devil’s brat below the table wishes to see me afraid. This place is like a Wonder House.”

I’d picture the interior of Lurgan’s shop as vividly as if I were there and could see what Kim saw, focusing my attention on each of the objects, suggested one by one: “Turquoise and raw amber necklaces. Curiously packed incense-sticks in jars crusted over with raw garnets, devil-masks and a wall full of peacock-blue draperies . . . gilt figures of Buddha . . . tarnished silver belts . . . arms of all sorts and kinds . . . and a thousand other oddments.”

When, as commanded, Kim pitched the porous clay water jug that was on the table there to Lurgan, I saw it “falling short and crashing into bits and pieces.”

My mother reached over and lightly placed her hand on the back of my neck as Lurgan, in his attempt to hypnotize Kim, “laid one hand gently on the nape of his neck, stroked it twice or thrice, and whispered: ‘Look! It shall come to life again, piece by piece. First the big piece shall join itself to two others on the right and the left. Look!’ To save his life, Kim could not have turned his head. The light touch held him as in a vice, and his blood tingled pleasantly through him. There was one large piece of the jar where there had been three, and above them the shadowy outline of the entire vessel.”

“Look! It is coming into shape,” my mother whispered and “Look! It is coming into shape,” echoed Lurgan Sahib. Yes, it was coming into shape, all the shards of clay magically reforming the previously unbroken jug. I could see it. The words my mother read aloud to me were as hypnotic as the words uttered by Lurgan.

My childhood fascination with hypnosis was sustained by a school assignment to read Edgar Allan Poe’s stories, several of them—“The Facts in the Case of Mr. Valdemar,” “Mesmeric Revelation,” and “A Tale of the Ragged Mountains”—being about mesmerism, and the final story reaffirming an association of hypnosis with India. The main character goes into a trance in Virginia in which he has a vivid vision of Benares, a city to which he has never been, indicating that he had lived in India in a previous lifetime.

“Not only are Poe’s stories about hypnosis,” I grandly proclaimed in a book report I wrote in the seventh grade, “They are also written in a language that is very hypnotic, especially if they are read out loud.” Little did I suspect that that homework assignment would be prolusory to a book written more than half a century later.

When subsequently in the eighth grade I was required to prepare a project for the school science fair, I was determined to do mine on hypnosis as the only science, other than reproductive biology, in which I had much interest. The science teacher warned that it was a dangerous subject: “Hypnotism is widely used in schools in the Soviet Union to brainwash children so that they believe that Communism is good and that they must do whatever their dictator, Nikita Khrushchev, commands.”

Despite its abuse behind the Iron Curtain, I was determined to learn as much as I could about hypnosis. And so I ordered a book, Home Study Way to Hypnotic Practice, that I had seen advertised in a copy of Twitter magazine, a naughty-for-the-times pulp publication that I had discovered hidden in my uncle’s garage.

The ad promised that a mastery of hypnotism would enable me to control the minds of others, particularly the minds, and indeed the hearts, if not some other parts, of girls: “‘Look here’—Snap! Instantly her eyes close. She seems to be asleep but she isn’t. She’s in a hypnotic trance. A trance you put her into by saying secret words and snapping your fingers. Now she’s ready—ready and waiting to do as you command. She’ll follow your orders without question or hesitation. You’ll have her believing anything you suggest and doing whatever you want her to do. You’ll be in control of her emotions: love, hate, laughter, tears, happy, sad. She’ll be as putty in your hands.”

The winsome smiling girl with closed eyes in the advertisement reminded me of a classmate named Vickie Goldman, whose burgeoning breasts were often on my mind. I was naturally intrigued by the idea that by means of hypnotism those breasts might become as putty in my hands.

It was disappointing to discover in reading that book that a mastery of hypnotic techniques was much more complicated and tedious to learn than the ad for it had promised, and even more disheartening to learn that, in order to be hypnotized, Vickie would have to trust me and want to be hypnotized by me.

Another ad, in another copy of Twitter snatched from my uncle’s collection of girlie magazines, however, suggested that, by means of various apparatuses, I would be able to take control of her mind without her consent. All I’d have to do is say, “Look at this,” or “Listen to this.”

So, for the sake of having both a science project and as much control over Vickie Goldman’s emotions and behavior as Catrack had had over my mother’s, even as much power over her as Khrushchev had over children in the Soviet Union, I ordered the products advertised by the Hypnotic Aids and Supply Company: the Electronic Hypnotism Machine, the Electronic Metronome, the folding, pocket-sized Mechanical Hypnotist, and the 78-rpm Hypnotic Record. Because I was spending more than ten dollars on these devices, I also received the Amazing Hypno-Coin at no extra charge. My mother was willing to pay for these devices since I needed them for my science project.

I also purchased the book Oriental Hypnotism, “written in Calcutta India with the cooperation of Sadhu Satish Kumar,” because the yogi pictured in the ad reminded me of the one who had hypnotized my mother in Ramar of the Jungle. The text revealed that, by means of hypnosis, “the power of Maya,” Hindu yogis are able to “charm serpents, control women, and win the favor of men. Self-hypnosis gives the Hindus their amazing ability to lie down on beds of nails. And it is by means of mass hypnosis that their magicians have for thousands of years performed the legendary Indian Rope Trick.” I was familiar with the rope trick from seeing Chandu use his hypnotic power to cause “a gathering of twelve by twelve” to imagine they were seeing it performed.

My science project exhibit, HYPNOTISM EAST AND WEST IN THE PAST, PRESENT AND FUTURE BY LEE SIEGEL, GRADE 8, featured a poster board mounted over a table upon which waved my Hypnotic Metronome and spun both the Hypnotic Spiral Disc of my Electronic Hypnotism Machine and side one of my Hypnotic Record. Over the eerie drone of Oriental music there was a monotonously rhythmic deep voice: “As you listen to these words your muscles will begin to relax, to become more and more relaxed, yes, very relaxed, and your eyelids will become heavy, yes, heavier and heavier, very, very heavy, very relaxed. Deeper and deeper, relaxed.” The words “relaxed,” “heavy,” and “deeper” were repeated over and over and then there was counting backward, then imagining going down, “deeper and deeper,” in an elevator, more counting backward, and finally, at the end of the record, right after “three, two, one,” came the crucial hypnotic suggestion: “The next voice you hear will have complete control over your mind.”

That’s when I would take over. That’s when, if the principal of our school, the judge of the projects in the fair, listened to the record, I’d command: “You will award Lee Siegel the first-place blue ribbon for his science project.” And if Vickie would look and listen, that’s when my interest in hypnosis would really pay off: “You will go behind the handball courts with Lee Siegel and there you will ask him to fondle your breasts.”

To intensify the hypnotic mystique of my project, I placed a warning sign by the Electronic Hypnotism Machine: Stare at the Spinning Disc at Your Own Risk. Lee Siegel will not be held responsible for any actions resulting from a loss of mental control.

Along with all of my purchases from the Hypnotic Aids Supply Company, I placed the Westclox pocket watch on a chain that my uncle had given me for my bar mitzvah.

I livened up the poster board with a photo labeled EAST: Sadhu Satish Kumar, Hindu Yogi Hypnotist, cut from Oriental Hypnotism side by side with a picture labeled WEST: Dr. Franz Mesmer, Father of Animal Magnetism, that I had clipped from the World Book Encyclopedia.

There was also a timeline beginning in 3000 bc (as estimated by Sadhu Satish Kumar) with “Indian Fakirs and Yogis” and ending “Sometime in the Future” with “Lee Siegel who has learned so much for this science fair project that he plans to become a professional hypnotist. After graduating from high school and college he will go both to India to study hypnotism with yogis and to Oxford University to study it with science professors.”

In between the ancient Hindu hypnotists and my future self were luminaries in the history of hypnosis as enumerated in the World Book Encyclopedia: Franz Mesmer (1734–1815), the Marquis de Puységur (1751–1825), Abbé Faria (1756–1819), John Elliotson (1791–1868), James Braid (1795–1860), James Esdaile (1808–1859), Ivan Pavlov (1849–1936), and Sigmund Freud (1856–1939). In order to make the list more acknowledging of India’s contributions to hypnosis, I added Swami Catrack (1919–1953), Frank Chandler, a.k.a. Chandu (1932–), and Sadhu Satish Kumar (1928–). I also included The Amazing Kreskin (1935–) and William Kroger (1906–), because, other than Catrack, Swami Talpar, Chandu, Lurgan, Satish Kumar, Nikita Khrushchev, and Sigmund Freud, they were the only hypnotists I had ever heard of. I knew that Sigmund Freud was a psychiatrist who thought that little boys were in love with their mother and that little girls wished they had a penis. I included Kroger, a gynecologist, avid proponent of medical hypnotherapeutics, and a friend of my parents who occasionally visited our home, in the hope that he might, once I had shown him my science project, write a note on the official stationery of the International Society for Clinical and Experimental Hypnosis of which he was president, something to be framed and included in my display, something like “Lee Siegel’s science project deserves a blue ribbon and should be sent on to the national competition, which it will certainly win.”

All he wrote, however, was: “Young Siegel has done a good job in presenting a subject that deserves wider recognition and acceptance.”

Not having been awarded the first-place blue ribbon—or a ribbon of any other color, for that matter—for my science project, nor having been able to successfully use my hypnotic aids to turn Vickie—or any other girl—into putty in my hands, ready to follow my orders without question, my interest in hypnotism waned.

I don’t think I thought about hypnosis very much until a couple of years later when, in 1960, I happened to see a horror film, The Hypnotic Eye, the movie, according to publicity posters, “that introduces HypnoMagic, the thrill you SEE and FEEL! It’s the amazing new audience sensation that makes YOU part of the show!” There were warnings that HypnoMagic could cause viewers of the film to actually become hypnotized: “Watch at your own risk!”

The movie was about a mysterious series of gruesome acts of self-mutilation by beautiful women, none of whom were able to remember why or how they had disfigured themselves, and all of whom, a detective, the hero of the film, discovered, just happened to have gone to a theater to see the stage hypnosis show of The Great Desmond. That each of them had been hypnotized during one of his performances caused the detective to suspect that the hypnotist might have been involved in the crimes. Consulting a criminal psychologist, he learned that, “Yes, posthypnotic suggestion could indeed cause a woman to do things she would not otherwise consider doing.”

At one point in the film, during a performance of his stage show, the despotic Desmond held up something meant to resemble an eyeball flashing with light—the titular Hypnotic Eye! After daring his audience to stare into it, he turned to the camera and dared us, the audience in the movie theater, to do the same. The camera moved in closer and closer on the pulsating orb as “deeper and deeper” was repeated again and again until soon, as commanded by the diabolical hypnotist, the members of his audience were lifting their arms and then lowering them. And then Desmond stared straight at us again and commanded us to do the same, and soon, together with the audience in the movie, we, the audience of the movie, were lifting our arms, then lowering them, again and again, until Desmond finally ordered us to stop and then, after counting from one to three, he snapped, “Wake up!”

Although I don’t think I was actually hypnotized by the Great Desmond and don’t know how many members of the movie audience were, I felt compelled to go along with the show, to act as if I was in a trance, and do as I was told. That, I would suggest, is in and of itself a kind of hypnosis. Hypnosis, like listening intently to a story, is playing along with words.

At the very end of the movie, after the crimes had been solved and the evil hypnotist apprehended, the criminal psychologist addressed the viewers of the movie: “Hypnotism can be a valuable tool, helping humanity in many ways. But, just as it can be used to do good, so too, in the hands of unscrupulous practitioners, it can be used to perpetrate evil. We must be wary to maintain our safety because they can catch us anywhere, and at any time.” He paused as the camera moved in for a close-up: “Yes, even during a motion picture in a movie theater.” He winked, then smiled, and the screen faded to black.

I didn’t think much about the film until recently, when I began writing about hypnosis. I confess, although I should probably be ashamed to admit it, that this text has been stylistically inspired by the B movie gimmick. In the spirit of The Hypnotic Eye, the tales in this book that are meant to be read aloud to a cooperative listener are written with HypnoMagic, the thrill you SEE and FEEL! It’s the amazing literary sensation that makes the listener part of the story! But beware! HypnoMagic could cause listeners to actually become hypnotized and actually imagine that they are participants in the tales they hear.

Read more about Trance Migrations here.

14. Excerpt: Roger Grenier’s Palace of Books


 

“Private Life”

The expansion of the media has put the writer in the spotlight, even if, nowadays, people who write have lost much of their prestige and their importance in society. Some of them find themselves afflicted with a lack of privacy once reserved for movie stars. Sometimes they ask for it. Michel Contat writes about “this form of media totalitarianism that gives the right to know everything about someone based on the simple fact that he or she has created a public image.” This phenomenon is not so new, if you think about Sartre and Beauvoir, not to mention Musset and George Sand, Dante and Beatrice, Petrarch and Laura, or even the self-dramatizing Byron or Chateaubriand. Nowadays we have scribblers who manage to pass themselves off as writers because they’ve already made a name for themselves as celebrities.

Gérard de Nerval was a victim of the public’s need to know, due to conditions that would be unimaginable today. Jules Janin, in the Journal des débats of March 1, 1841; Alexandre Dumas, in Le Mousquetaire of December 10, 1853; Eugène de Mirecourt in a little monograph in his series Les Contemporains in 1854, wrote openly about their friend’s mental illness. Poor Gérard wrote to his father on June 12, 1854, in response to Mirecourt’s pamphlet on “necrological biography,” and said he was being made into “the hero of a novel.” He dedicated Daughters of Fire to Alexandre Dumas: “I dedicate this book to you, my dear master, as I dedicated Lorely to Jules Janin. You have the same claim on my gratitude. A few years ago, I was thought dead, and he wrote my biography. A few days ago, I was thought mad, and you devoted some of your most charming lines to an epitaph for my spirit. That’s a good deal of glory to advance on my due inheritance.”

Is knowing the private life of an author important for understanding his or her work?

The debate was renewed with great panache by Marcel Proust in By Way of Sainte-Beuve. Proust noticed that Sainte-Beuve, a subtle and cultured man, made nothing but bad judgment calls as to the worth of his contemporaries. Why? Jealousy doesn’t explain it. He couldn’t have been jealous of writers like Stendhal or Baudelaire, who were practically unknown. The fault was with his method. Sainte-Beuve wanted to adopt a scientific attitude. “For me,” he wrote, “literature is indistinguishable from the rest of man. As long as you have not asked yourself a certain number of questions about an author and answered them satisfactorily, if only for your private benefit and sotto voce, you cannot be sure of possessing him entirely. And this is true, though these questions may seem to be altogether foreign to the nature of his writings. For example, what were his religious views? How did the sight of nature affect him? What was he like in his dealings with women, and in his feelings about money? Was he rich? Was he poor? What was his regimen? His daily habits? Finally, what was his persistent vice or weakness, for every man has one. Each of these questions is valuable in judging an author or his book.”

Sainte-Beuve decides that he is engaging in literary botany.

Proust finds all this knowledge useless and likely to mislead the reader: “A book is the product of a different self than the self we manifest in our habits, in our social life, in our vices. If we would try to understand that particular self, it is by searching deep within us and trying to reconstruct it there, that we may arrive at it. Nothing can exempt us from this effort of the heart.”

Proust also writes: “How does having been a friend of Stendhal’s make you better suited to judge him? It would be more likely to get in the way.” Sainte-Beuve, who knew Stendhal and Stendhal’s friends, found his novels “frankly detestable.”

What Proust holds against Sainte-Beuve is that he made no distinction between conversation and the occupation of writing, “in which, in solitude, quieting the speech which belongs as much to others as to ourselves, we come face to face once more with ourselves, and seek to hear and to render the true sound of our hearts.”

Proust admires Balzac, all while thinking that from what he knew of Balzac’s personal life, his letters to his family and to Madame Hanska, he was a vulgar human being. Stefan Zweig raises the same issue. He admires Balzac the writer and seeks reasons to admire the man. He is infuriated because he can’t find any. He has discovered that genius is incomprehensible.

Gaëtan Picon thinks that if Proust attacks Sainte-Beuve so violently it’s because he needs to believe that genius is based on a secret distinct from intelligence. That a man whose life is frivolous and empty, a failure, can nonetheless create a great work. The question is inevitable, beginning with the case of Proust himself. How did this intolerable social climber, whom Lucien Daudet called “an atrocious insect,” become the author of In Search of Lost Time? Paul Valéry concludes his famous study of Leonardo da Vinci with a line that shows in a striking way how much distance he puts between an artist and his work: “As for the true Leonardo, he was what he was.”

Flaubert would have sided with Proust against his friend Sainte-Beuve. He writes to Ernest Feydeau on August 21, 1859, with his customary truculence, “Life is impossible now! The minute you’re an artist, the gentlemen grocers, the auditors of record, the customs agents, the cobblers and all the rest enjoy themselves at your expense! People inform them as to whether you’re a brunette or a blond, facetious or melancholy, how many moons since your birth, whether you’re given to drink or play the harmonica. I believe that on the contrary, the writer must leave behind nothing but his work. His life doesn’t matter. Wipe it away!”

He doesn’t stop there, but insists: “The artist must arrange things so as to make us believe in a posterity he hasn’t experienced.”

You’d have to put Chekhov in Proust’s camp. From his Notebook: “How pleasant it is to respect people! When I see books, I am not concerned with how the authors loved or played cards; I only see their marvelous works.”

The same is true for Henry James, who writes in his short story “The Real Right Thing”: “[. . .] his friend would at moments have shown himself as holding that the ‘literary’ career might—save in the case of a Johnson or a Scott, with a Boswell and a Lockhart to help—best content itself to be represented. The artist was what he did—he was nothing else.” In this fantasy tale, the ghost of a dead writer appears to prevent his biography from being written.

Proust seems rigid. He is right to say that there is a truth for the writer, especially if he’s a genius, that remains a mystery and cannot be explained by social appearance or private life. But he also presents a counter-argument to his own theory when he writes in Jean Santeuil: “[. . .] our lives are not wholly separated from our works. All the scenes that I have narrated here, I have lived through.”

Most of the time, the characters in Jean Santeuil and the Search are indiscreet, eager to know everything about the artists they encounter. Freud, whose theory is close to Proust’s, doesn’t hold back from delving into the private life of Leonardo da Vinci and a few others. J.-B. Pontalis suggests with a touch of malice that Proust and Freud take the opposite tack to Sainte-Beuve’s because they don’t want their own private lives examined: if Proust’s perversion of torturing rats was discovered. . . . The private lives of others are another story!

Nietzsche also pondered the question, but from a different point of view. He thinks that knowing an author distorts our opinion of his work and his person. “We read the writings of our acquaintances (friends and foes) in a twofold sense, inasmuch as our knowledge continually whispers to us: ‘this is by him, a sign of his inner nature, his experiences, his talent,’ while another kind of knowledge simultaneously seeks to determine what his work is worth in and of itself, what evaluation it deserves apart from its author, what enrichment of knowledge it brings with it. As goes without saying, these two kinds of reading and evaluating confound one another.”

But what to do in cases where the work can only be explained by the life? Why deprive ourselves of this source of knowledge?

In the case of Albert Camus, once you know about his impoverished childhood in an illiterate milieu (he described this in The Wrong Side and the Right Side, his first book, and in The First Man, his last), you understand his attitude of respect and rigor towards literature, and the tenor of his style. In the same way, his youth near the sea and the sun, and the illness that continually threatened him, explain to a large extent the spirit of his work, his thought.

Finally—and Proust is right about this—if the author is not a simple manufacturer, if he puts his interior self in his books, the reader will be attracted by this self. The reader will seek out this personal, private self beneath the sentences.

In 1922, the young Aragon wrote, “My instinct, whenever I read, is to look constantly for the author, and to find him, to imagine him writing, to listen to what he says, not what he tells; so in the end, the usual distinctions among the literary genres—poetry, novel, philosophy, maxims—all strike me as insignificant.”

Freud showed that every child constructs a “family romance” that he will later repress. Whereas the writer continues to manufacture a novel which, if not a family romance, is at least a personal one. Marthe Robert has noted that all novelists relate to some extent their sentimental education, their apprentice years, and their search for lost time. The paradox is that they confess their secrets to a piece of paper. Yet they’re careful to disguise them as fiction.

Revealing a lot about oneself is not the purview only of novelists. It is also what poets do, and not just the elegiac poets. For centuries, and in a variety of civilizations, well before there were novels, the great majority of poems came from the poet’s effusion in speaking about his life, his loves, his torments, his anger, his religious feeling, his exile. Gérard de Nerval asks, “Which is more modest: to portray oneself in a novel disguised as Lélio or Octavio or Arthur, or to betray one’s most intimate emotions in a volume of poetry?” That his life and his illness were made public by his friends gave him an argument: “Forgive us our flights of personality, we who are constantly in the limelight, and who, whether we live in glory or in failure, can no longer hope to obtain the benefits of obscurity.”

You might think that contemporary poetry, tending towards abstraction and situated in a world where the air is rarified, has little to do with private life. This is not always true. Even an erudite poet like Jacques Roubaud, who delves into mathematics, writes about a deeply personal unhappiness in Something Black.

The same is true for the playwright, the filmmaker, even the nonfiction writer. You can sense this clearly in the philosophers Jean-Paul Sartre, Michel Foucault, Roland Barthes. Descartes was already inserting elements of autobiography in Discourse on Method. In this essential essay, he portrays himself in Holland, seated next to his stove throughout the winter, reflecting. Thus there is a back-and-forth movement, a dialectic, practically a contradiction. One retreats into oneself in order to communicate better with others.

Authors, whenever they delve into their own private lives, even if they embellish or transpose, find themselves confronted with the issue of personal discretion. They go well beyond simple indiscretion when they attempt to bring to light what is hidden in the deepest part of themselves.

With his taste for nonsense, Julio Cortázar describes an “enlarged self-portrait from which the artist has had the elegance to withdraw.” This little joke reveals the aspirations of so many writers: to be at once invisible and present, to say everything about oneself without seeming to.

Offering your essence to nourish what you write is what Scott Fitzgerald called “the price to pay”: “I have asked a lot of my emotions—one hundred and twenty stories. The price was high, right up with Kipling, because there was one little drop of something, not blood, not a tear, not my seed, but me more intimately than these, in every story: it was the extra I had.”

Scott Fitzgerald couldn’t write without including his entire history. And even when he lost his creative vein, he dug to the depths of his anguish to write The Crack-Up.

John Dos Passos, another American who is now neglected after having been overrated, made a distinction between a literature of confession and a literature of spectacle. Of course he categorized his own books Manhattan Transfer and the U.S.A. trilogy as literature of spectacle. But I’m pretty sure you can find confession beneath the spectacle.

The young novelist’s first book is often autobiographical. Yet this is the phase when one has lived the least. Other, perhaps better, writers save the most personal, the most intimate in their lives or in the history of their families for much later.

On the other hand, some seem to write primarily to cover up a secret. Paul-Jean Toulet never shows his wounds—neither in his novels, frankly mediocre and marred by the most odious clichés of his era: anti-Semitism, etc.—nor in his poetry, far more charming; nor even in the letters he addressed to himself. His friends knew he had a broken heart. Why broken? And by whom? One of the qualities of his poetry is precisely that you can perceive, beyond the light-hearted fantasy, a floating veil of sadness or perhaps despair. We’ll never know the whole story. That is the claim in the last quatrain of his Contrerimes—a kind of challenge:

If living is a duty, when I will have ruined it,
May I use my shroud as a mystery
You must know how to die, Faustine, how to grow silent,
Die like Gilbert by swallowing the key.

(The allusion is to the strange death at age thirty of the poet Nicolas Gilbert, author of Le poète malheureux [The Unhappy Poet], who apparently swallowed his key in a fit of delirium.)

In the life of a man or a woman there are always one or two things that he or she will never consent to speak about, not for anything. Secret gardens. But if that man or woman is a writer, we might find those things hidden deep within a novel.

We know that Dickens lived through some very unhappy times in his childhood. The casual egotism of his parents was to blame.

His father, a loudmouth who was often imprisoned for debt, is in part the model for Mr. Micawber. In chapter eleven of David Copperfield, we find, barely altered, what Dickens experienced at age twelve. For six or seven shillings a week, he packaged shoe polish in a putrid factory, working under unspeakably miserable, humiliating conditions.

While he didn’t hesitate to use this experience for David Copperfield, in life he hid the memory as his most closely guarded secret. He refused to talk about it. He even took detours in London to avoid the place where he had been so unhappy. A fragment of his autobiography was found in which he confirmed:

No word of that part of my childhood which I have now gladly brought to a close, has passed my lips to any human being . . . I have never, until I now impart it to this paper, in any burst of confidence with anyone, my own wife not excepted, raised the curtain I then dropped, thank God.

Until old Hungerford Market was pulled down, until old Hungerford Stairs were destroyed, and the very nature of the ground changed, I never had the courage to go back to the place where my servitude began. I never saw it. I could not endure to go near it. For many years, when I came near to Robert Warren’s in the Strand, I crossed over to the opposite side of the way, to avoid a certain smell of the cement they put upon the blacking-corks, which reminded me of what I was once. It was a very long time before I liked to go up Chandos Street. My old way home by the Borough made me cry, after my eldest child could speak.

Thus Charles Dickens and David Copperfield, C. D. and D. C., meet in the person of a humiliated child. Humiliation is a feeling that very few people can tolerate. But it has inspired many books.

Léon Aréga, a forgotten writer who endured endless ridicule, once said to me about one of my novels in which I put much of myself: “It’s a treatise on humiliation.” Which, coming from him, was a great compliment. It is easy to find the humiliated child in many of Chekhov’s short stories. His remark has been quoted a hundred times: “In my childhood, there was no childhood.”

Confessions are made on purpose in David Copperfield. But in most novels they aren’t. They surface in the form of fantasies, obsessions. With Dostoyevsky it’s impossible not to find an allusion to the rape of a little girl in The Possessed, Crime and Punishment, The Eternal Husband.

One rather strange point of view comes from Joseph Conrad. He thought you needed to be a genius to dare unveil your intimate self and thus move the public. If the effect was ruined you would sink into ridicule:

If it be true that every novel contains an element of autobiography—and this can hardly be denied since the creator can only express himself in his creation—then there are some of us to whom an open display of sentiment is repugnant. I would not unduly praise the virtue of restraint. It is often merely temperamental. But it is not always a sign of coldness. It may be pride. There can be nothing more humiliating than to see the shaft of one’s emotions miss the mark of either laughter or tears. Nothing more humiliating! And this for the reason that should the mark be missed, should the open display of emotion fail to move, then it must perish unavoidably in disgust or contempt.

This is what the authors of a fashionable genre, baptized “autofiction” in 1977 by Serge Doubrovsky, seem not to fear, and their works collect like dregs on booksellers’ shelves.

Sometimes the most impersonal work can signify something deeply intimate to the author. This is the case of the great allegorical novel by Melville, Moby Dick. He achieves a fusion of a great myth with his own torment. The dire questioning, the violence of Ahab, are his. The Plague, another book that generates a myth, is also a novel about separation, since Camus wrote part of it isolated by the war, cut off from Algeria, from his wife, from his close friends. Virginia Woolf’s Orlando seems like a fantastical novel of imagination, when it is really the portrait of Vita Sackville-West, who was so dear to the author. In a fairy tale like Alice in Wonderland, Reverend Dodgson confides his passion for Alice Liddell.

The sole fact of starting to write is motivated by a cause that belongs to what is most intimate for the author. I quoted Flaubert, who talks about the sorrow that launched him into the enterprise of Salammbô.

The critics always remind us that Proust and John Cowper Powys wrote their great novels only after the death of their mothers. You could say they waited for their mothers’ deaths to write.

We mustn’t forget the role of the unconscious. Benjamin Crémieux noticed that “the writer who rereads one of his books discovers, after the fact, secret traits he never suspected having put there, traits he may not even have known he possessed—and whose existence is suddenly revealed to him. In all that we write in our own style, the truest aspect of ourselves is inscribed in filigree.”

How, without blushing, can we agree to deliver to the public so many confessions and intimate motivations, even those that are disguised or dissimulated? This is the mystery of the quasi-religious value we assign to literature.

To read more about Palace of Books, click here.

Add a Comment
15. Our free e-book for October: In Defense of Negativity

0226284980

Americans tend to see negative campaign ads as just that: negative. Pundits, journalists, voters, and scholars frequently complain that such ads undermine elections and even democratic government itself. But John G. Geer here takes the opposite stance, arguing that when political candidates attack each other, raising doubts about each other’s views and qualifications, voters—and the democratic process—benefit.

In Defense of Negativity, Geer’s study of negative advertising in presidential campaigns from 1960 to 2004, asserts that the proliferating attack ads are far more likely than positive ads to focus on salient political issues, rather than politicians’ personal characteristics. Accordingly, the ads enrich the democratic process, providing voters with relevant and substantial information before they head to the polls.

An important and timely contribution to American political discourse, In Defense of Negativity concludes that if we want campaigns to grapple with relevant issues and address real problems, negative ads just might be the solution.

“Geer has set out to challenge the widely held belief that attack ads and negative campaigns are destroying democracy. Quite the opposite, he argues in his provocative new book: Negativity is good for you and for the political system. . . . In Defense of Negativity adds a new argument to the debate about America’s polarized politics, and in doing so it asserts that voters are less bothered by today’s partisan climate than many believe. If there are problems—and there are—Geer says it’s time to stop blaming it all on 30-second spots.”—Washington Post

Download your free copy of In Defense of Negativity here.

Watch “The Bear,” one of those 30-second spots (less an attack ad and more a foray into American surrealism) produced for Ronald Reagan’s 1984 presidential campaign, below:

Add a Comment
16. For Mark Rothko on his birthday

9780226074061

James E. B. Breslin’s book on the life of painter Mark Rothko helped redefine the field of the artist’s biography and, in its day, was praised by outlets such as the New York Times Book Review (on the front cover, no less), where critic Hilton Kramer described it as “the best life of an American painter that has yet been written.” On what would have been the artist’s 111th birthday, Biographile revisited Breslin’s work:

In Breslin’s book, we follow Rothko’s search for the approach that would become such a significant contribution to art and painting in the twentieth century. He was in his forties before he started making his “multiforms,” and even after he started painting them in his studio, he didn’t show them right away. Breslin dissects and details the techniques Rothko developed upon creating his greatest works. He rotated the canvas as he worked, so that the painting wouldn’t be weighted in any one direction. He spent much more time in the studio figuring out a painting than actually painting it, and he filled a canvas as many as twenty times before feeling it was done. Maybe most important, he worked tirelessly to eliminate any recognizable shapes from the multiforms. They needed to come into the world fully formed, not as interpretations of any real-life objects, but meaningful visions in and of themselves.

Nathan Gelgud, the author behind the Biographile piece, accompanied his writing with a couple of illustrated riffs on the artist, one of which we feature below, and the other you can seek out (and read the review in full) at Biographile.


Mark Rothko by Nathan Gelgud, 2014. Image via Gelgud’s Biographile review.

To read more about Mark Rothko: A Biography, click here.

Add a Comment
17. House of Debt on FT’s shortlist for Business Book of the Year

9780226081946

Congrats (!) to House of Debt authors Atif Mian and Amir Sufi for making the shortlist for the Financial Times and McKinsey Business Book of the Year. Now in competition with five other titles from an initial offering of 300 nominations, House of Debt—and its story of the predatory lending practices behind the Great American Recession, the burden of consumer debt on fragile markets, and the need for government-bailed banks to share risk-taking rather than skirt blame—will find out its fate at the November 11th award ceremony.

From the official announcement:

“The provocative questions raised by this year’s titles have been addressed with originality, depth of research and lively writing.”

 The award, now in its 10th edition, aims to find the book that provides “the most compelling and enjoyable insight into modern business issues, including management, finance and economics.” The judges—who include former winners Mohamed El-Erian and Steve Coll—also gave preference this year to books “whose influence is most likely to stand the test of time.”

To read more about House of Debt, including a list of reviews and a link to the authors’ blog, click here.

Add a Comment
18. Alison Bechdel, MacArthur Fellow, 2014


Image via Out Magazine

Congratulations to cartoonist and graphic memoirist Alison Bechdel, one of the 2014 MacArthur Foundation Fellows, or “genius grant” honorees, whose work in comics and narrative has helped to transform and elevate our understanding of women—“Dykes to Watch Out For” in all their expressions, mothers and daughters, and the implications of social and political changes on those who dwell every day in a broad variety of female-identified bodies. Additionally, Bechdel is well-known in film studies circles for her deceptively simple three-question test for gender parity, which has drawn broad attention since first delivered via her 1985 strip “The Rule.”

From the Washington Post:

1) Does it have two female characters?

2) Who talk to each other?

3) About something other than a man?

If the answer to all three questions is yes, the film passes the Bechdel test.

Bechdel is also the subject of two feature-length interviews in Hillary L. Chute’s Outside the Box: Interviews with Contemporary Cartoonists, and a contributor to Critical Inquiry’s special issue Comics & Media, both of which were released this year. Below, see video footage of a Bechdel/Chute interview from 2011, when Chute visited Bechdel at her home in Jericho, Vermont:

To read more about Outside the Box or the Comics & Media issue of CI, click here.

Add a Comment
19. The State of the University Press


Recently, a spate of articles appeared on the future of the university press. Many of these, of course, focused on the roles institutional library sales, e-books, and shifting concerns around tenure play in determining the strictures and limitations to be overcome as scholarly publishing moves forward in an increasingly digital age. Last week, Book Business published a profile of what goes on behind the scenes as discussions about these issues shape, abet, and occasionally undermine the relationships between the university press, its supporting institution, its constituents, and the consumers and scholars to whom it markets its books. In addition to commentary from directors at the University of North Carolina Press, the University of California Press, and Johns Hopkins University Press, the piece included a conversation with our own director, Garrett Kiely:

From Dan Eldridge’s “The State of the University Presses” at Book Business:

Talk to University of Chicago Press director Garrett Kiely, who also sits on the board of the Association of American University Presses (AAUP), and he’ll tell you that many of the presses that are struggling today — financially or otherwise — are dealing with the same sort of headaches being suffered by their colleagues in the commercial world. And yet there is one major difference: “The commercial imperative,” says Kiely, “has never been a requirement for many of these [university] presses.”

Historically, Kiely explains, an understanding has existed between university presses and their affiliated schools that the presses are publishing primarily to disseminate scholarly information. That’s a valuable service, you might say, that feeds the public good, regardless of profit. “But at the same time,” he adds, “as everything gets tight [regarding] the universities and the amount of money they spend on supporting their presses, those things get looked at very carefully.”

As a result, Kiely says, there’s an increasingly strong push today to align the interests of a press with its university. At the University of Chicago, for instance, both the institution and its press are well known for their strong sociology offerings. But because more and more library budgets today are going toward the scientific fields, a catalog filled with even the strongest of humanities titles isn’t necessarily the best thing for a press’ bottom line.

The shift to digital, in particular, was a pivot point for much of Kiely’s discussion, which went on to consider some of the more successful—as well as awkward—endeavors embraced by the press as part of a publishing culture squarely faced with the need to experiment via new modalities in order to meet the interlinked demands of expanding scholarship and changing technology. Today, the formerly comfortable terrain once tackled by academic publishing is ever-changing, and with increasing rapidity, which, as the article asserts, may leave “more questions than answers.” As Kiely put it:

“I think the speed with which new ideas can be tested, and either pursued or abandoned is very different than it was five years ago. . . . We’ve found you can very quickly go down the rabbit hole. And then you start wondering, ‘Is there a market for this? Is this really the way we should be going?’”

To read more from “The State of the University Press,” click here.


Add a Comment
20. Peter Bacon Hales (1950–2014)


University of Chicago Press author, professor emeritus at the University of Illinois at Chicago, dedicated Americanist, photographer, writer, cyclist, and musician Peter Bacon Hales (1950–2014) died earlier this week, near his home in upstate New York. Once a student of the photographers Garry Winogrand and Russell Lee, Hales obtained his MA and PhD from the University of Texas at Austin, and launched an academic career around American art and culture that saw him take on personal and collaborative topics as diverse as the history of urban photography, the Westward Expansion of the United States, the Manhattan Project, Levittown, contemporary art, and the geographical landscapes of our virtual and built worlds. He began teaching at UIC in 1980, and went on to become director of their American Studies Institute. His most recent book, Outside the Gates of Eden: The Dream of America from Hiroshima to Now, was published by the University of Chicago Press earlier this year.

***

From Outside the Gates of Eden:

 

“We live, then, second lives, and third, and fourth—protean lives, threatened by the lingering traces of our mistakes, but also amenable to self-invention and renewal. . . . The cultural landscape [of the future] is hazy:  it could be a desert or a garden, or something in between. It is and will be populated by Americans, or by those infected by the American imagination: a little cynical, skeptical, self-righteous, self-deprecating, impatient, but interested, engaged, argumentative, observant of the perilous beauty of a landscape we can never possess but yearn to be a part of, even as we are restive, impatient to go on. It’s worth waiting around to see how it turns out.”

9780226313153

Add a Comment
21. Chicago 1968, the militarization of police, and Ferguson

9780226740782

John Schultz, author of The Chicago Conspiracy Trial and No One Was Killed: The Democratic National Convention, August 1968, recently spoke with WMNF about the history of police militarization, in light of both recent events in Ferguson, Missouri, and the forty-sixth anniversary (this week) of the 1968 Democratic National Convention in Chicago. Providing historical and social context to the ongoing “debate over whether the nation’s police have become so militarized that they are no longer there to preserve and protect but have adopted an attitude of ‘us’ and ‘them,’” Schultz related his eyewitness account of that collision of 22,000 police and members of the National Guard with demonstrators in Chicago to the armed forces that swarmed around mostly peaceful protesters in Ferguson these past few weeks.

The selection below, drawn in part from a larger excerpt from No One Was Killed, relays some of that primary account of what happened in Lincoln Park nearly half a century ago. The full excerpt can be accessed here.

***

The cop bullhorn bellowed that anyone in the Park, including newsmen, were in violation of the law. Nobody moved. The newsmen did not believe that they were marked men; they thought it was just a way for the Cops to emphasize their point. The media lights were turned on for the confrontation. Near the Stockton Drive embankment, the line of police came up to the Yippies and the two lines stood there, a few steps apart, in a moment of meeting that was almost formal, as if everybody recognized the stupendous seriousness of the game that was about to begin. The kids were yelling: “Parks belong to the people! Pig! Pig! Oink, oink!” In The Walker Report, the police say that they were pelted with rocks the moment the media lights “blinded” them. I was at the point where the final, triggering violence began, and friends of mine were nearby up and down the line, and at this point none of us saw anything thrown. Cops in white shirts, meaning lieutenants or captains, were present. It was the formality of the moment between the two groups, the theatrical and game nature showing itself on a definitive level, that was awesome and terrifying in its implications.

It is legend by now that the final insult that caused the first wedge of cops to break loose upon the Yippies, was “Your mother sucks dirty cock!” Now that’s desperate provocation. The authors of The Walker Report purport to believe that the massive use of obscenities during Convention Week was a major form of provocation, as if it helped to explain “irrational” acts. In the very first sentence of the summary at the beginning of the Report, they say “… the Chicago Police were the targets of mounting provocation by both word and act. Obscene epithets …” etcetera. One wonders where the writers of The Walker Report went to school, were they ever in the Army, what streets do they live on, where do they work? They would also benefit by a trip to a police station at night, even up to the bull-pen, where the naked toilet bowl sits in the center of the room, and they could listen and find out whether the cops heard anything during Convention Week that was unfamiliar to their ears or tongue. It matters more who cusses you, and does he know you well enough to hit home to galvanize you into destructive action. It also matters whether you regard a club on the head as an equivalent response to being called a “mother fucking Fascist pig.”

The kids wouldn’t go away and then the cops began shoving them hard up the Stockton Drive embankment and then hitting with their clubs. “Pigs! Pigs! Pigs! Fascist pig bastards!” A cop behind me—I was immediately behind the cop line facing the Yippies—said to me and a few others, in a sick voice, “Move along, sir,” as if he foresaw everything that would happen in the week to come. I have thought again and again about him and the tone of his voice. “Oink, oink,” came the taunts from the kids. The cops charged. A boy trapped against the trunk of a car by a cop on Stockton Drive had the temerity to hit back with his bare fists and the cop tried to break every bone in his body. “If you’re newsmen,” one kid screamed, “get that man’s number!” I tried but all I saw was his blue shirt—no badge or name tag—and he, hearing the cries, stepped backward up onto the curb as a half-dozen cops crammed around him and carried him off into the melée, and I was carried in another direction. A cop swung and smashed the lens of a media camera. “He got my lens!” The cameraman was amazed and offended. The rest of the week the cops would cram around a fellow cop who was in danger of being identified and carry him away, and they would smash any camera that they saw get an incriminating picture. The cops slowed, crossing the grass toward Clark Street, and the more daring kids sensed the loss of contact, loss of energy, and went back to meet the skirmish line of cops. The cops charged again up to the sidewalk on the edge of the Park.

It was thought that the cops would stop along Clark Street on the edge of the Park. For several minutes, there was a huge, loud jam of traffic and people in Clark Street, horns and voices. “Red Rover, Red Rover, send Daley right over!” Then the cops crossed the street and lined up on the curb on the west side, outside curfew territory. Now they started to make utterly new law as they went along—at the behest of those orders they kept talking about. The crowd on the sidewalk, excited but generally peaceable, included a great many bystanders and Lincoln Park citizens. Now came mass cop violence of unmitigated fury, descriptions of which become redundant. No status or manner of appearance or attitude made one less likely to be clubbed. The Cops did us a great favor by putting us all in the same boat. A few upper middleclass white men said they now had some idea of what it meant to be on the other end of the law in the ghetto.

At the corner of Menomenee and Clark, several straight, young people were sitting on their doorsteps to jeer at the Yippies. The cops beat them, too, and took them by the backs of the necks and jerked them onto the sidewalk. A photographer got a picture of a terrible beating here and a cop smashed his camera and beat the photographer unconscious. I saw a stocky cop spring out of the pavement swinging his club, smashing a media man’s movie camera into two pieces, and the media man walked around in the street holding up the pieces for everybody to see, including other cameras, some of which were also smashed. Cops methodically beat one man, summoned an ambulance that was whirling its light out in the traffic jam, shoved the man into it, and rapped their clubs on the bumper to send it on its way. There were people caught in this charge, who had been in civil rights demonstrations in the South in the early Sixties, who said this was the time that they had feared for their lives.

The first missiles thrown Sunday night at cops were beer-cans, then a few rocks, more rocks, a bottle or two, more bottles. Yippies and New Left kids rolled cars into the side streets to block access for the cop attack patrols. The traffic-jam reached wildly north and south, and everywhere Yippies, working out in the traffic, were getting shocked drivers to honk in sympathy. One kid lofted a beer-can at a patrol car that was moving slowly; he led the car perfectly and the beer-can hit on the trunk and stayed there. The cops stopped the car and looked through their rear window at the beer-can on their trunk. They started to back up toward the corner at Wisconsin from which the can was thrown, but they were only two and the Yippies were many, so they thought better of it and drove away. There were kids picking up rocks and other kids telling them to put the rocks down.

At Clark and Wisconsin, a few of the “leaders”—those who trained parade marshalls and also some of the conventionally known and sought leaders—who had expected a confrontation of sorts in Chicago, were standing on a doorstep with their hands clipped together in front of their crotches as they stared balefully out at the streets, trying to look as uninvolved as possible. “Beautiful, beautiful,” one was saying, but they didn’t know how the thing had been delivered or what was happening. They had even directly advised against violent action, and had been denounced for it. Their leadership was that, in all the play and put-on of publicity before the Convention, they had contributed to the development of a consciousness of a politics of confrontation and social disruption. An anarchist saw his dream come true though he was only a spectator of the dream; the middle-class man saw his nightmare. A radioman, moving up and down the street, apparently a friend of Tom Hayden, stuck his mike up the stairs and asked Hayden to make some comments. Hayden, not at all interested in making a statement, leaned down urgently, chopping with his hand, and said, “Hey, man, turn the mike off, turn the mike off.” Hayden, along with Rubin, was a man the Chicago cops deemed a crucial leader and they would have sent them both to the bottom of the Chicago River, if they had thought they could get away with it. The radioman turned the mike off. Hayden said, “Is it off?” The radioman said yes. Hayden said, “Man, what’s going on down there?” The radioman could only say that what was going on was going on everywhere.

Read more about No One Was Killed: The Democratic National Convention, August 1968 here.

Add a Comment
22. Terror and Wonder: our free ebook for September

9780226423128

For nearly twenty years now, Blair Kamin of the Chicago Tribune has explored how architecture captures our imagination and engages our deepest emotions. A winner of the Pulitzer Prize for criticism and writer of the widely read Cityscapes blog, Kamin treats his subjects not only as works of art but also as symbols of the cultural and political forces that inspire them. Terror and Wonder gathers the best of Kamin’s writings from the past decade along with new reflections on an era framed by the destruction of the World Trade Center and the opening of the world’s tallest skyscraper.

Assessing ordinary commercial structures as well as head-turning designs by some of the world’s leading architects, Kamin paints a sweeping but finely textured portrait of a tumultuous age torn between the conflicting mandates of architectural spectacle and sustainability. For Kamin, the story of our built environment over the past ten years is, in tangible ways, the story of the decade itself. Terror and Wonder considers how architecture has been central to the main events and crosscurrents in American life since 2001: the devastating and debilitating consequences of 9/11 and Hurricane Katrina; the real estate boom and bust; the use of over-the-top cultural designs as engines of civic renewal; new challenges in saving old buildings; the unlikely rise of energy-saving, green architecture; and growing concern over our nation’s crumbling infrastructure.

A prominent cast of players—including Santiago Calatrava, Frank Gehry, Helmut Jahn, Daniel Libeskind, Barack Obama, Renzo Piano, and Donald Trump—fills the pages of this eye-opening look at the astounding and extraordinary ways that architecture mirrors our values—and shapes our everyday lives.

***

“Blair Kamin, Pulitzer Prize-winning architecture critic for the Chicago Tribune, thoughtfully and provocatively defines the emotional and cultural dimensions of architecture. He is one of the nation’s leading voices for design that uplifts and enhances life as well as the environment. His new book, Terror and Wonder: Architecture in a Tumultuous Age, assembles some of his best writing from the past ten years.”—Huffington Post
Download your free copy of Terror and Wonder here.

Add a Comment
23. Ashley Gilbertson’s Bedrooms of the Fallen


Army Spc. Ryan Yurchison, 27, overdosed on drugs after struggling with PTSD, on May 22, 2010, in Youngstown, Ohio. He was from New Middletown, Ohio. His bedroom was photographed in September 2011.

(caption via Slate)

From Philip Gourevitch’s Introduction to Bedrooms of the Fallen by Ashley Gilbertson:

These wars really are ours—they implicate us—and when our military men and women die in far off lands, they do so in our name. [Gilbertson] wanted to depict what it means that they are gone. Photographs of the fallen, or of their coffins or their graves, don’t tell us that. But the places they came from and were supposed to go back to—the places they left empty—do tell us.

See more images from the book via an image gallery at Hyperallergic.

Add a Comment
24. Aspiring Adults Adrift

9780226197289

In 2011, Richard Arum and Josipa Roksa’s Academically Adrift inscribed itself in post-secondary education wonking with all the subtlety of a wax crayon; the book made a splash in major newspapers, on television, via Twitter, on the pages of popular magazines, and of course, inside policy debates. The authors’ argument—drawn from complex data analysis, personal surveys, and a widespread standardized testing of more than 2300 undergraduates from 24 institutions—was simple: 45 percent of these students demonstrated no significant improvement in a range of skills (critical thinking, complex reasoning, and writing) during their first two years of study. Were the undergraduates learning once they hit college? The book’s answer was, at best, a shaky “maybe.”

Now, the authors are back with a sequel of sorts: Aspiring Adults Adrift, which follows these students through the rest of their undergraduate careers and out into the world. The findings this time around? Recent graduates struggle to obtain decent jobs, develop stable romantic relationships, and assume civic and financial responsibilities. Their transitions, like their educational experiences, are mired in much deeper and more systemic obstacles than a simple “failure to launch.”

The book debuted last week with four-part coverage at Inside Higher Ed. Since then, pundits and reviewers have started to weigh in; below are just a few of their profiles and accounts, which for an interested audience, help to situate the book’s findings.

***

Vox asked, “Why hasn’t the class of 2009 grown up?”:

The people Arum and Roksa interviewed sounded like my high school and college classmates. A business major who partied his way to a 3.9 GPA, then ended up working a delivery job he found on Craigslist, sounded familiar; so did a public health major who was living at home two years after graduation, planning to go to nursing school. Everyone in the class of 2009 knows someone with a story like that.

These graduates flailed after college because they didn’t learn much while they were in it, the authors argue. About a third of students in their study made virtually no improvement on a test of critical thinking and reasoning over four years of college. Aspiring Adults Adrift argues that this hurt them in the job market. Students with higher critical thinking scores were less likely to be unemployed, less likely to end up in unskilled jobs, and less likely to lose their jobs once they had them.

. . . . . Roksa and Arum aren’t really arguing for a more academically rigorous college education. They did that in their last book. They’re fighting the broader idea of emerging adulthood—that the first half of your 20s is a time to prolong adolescence and delay adult responsibilities.

A Time piece chimed in:

Parents, colleges, and the students themselves share the blame for this “failure to launch,” Arum says, but, he adds, “We think it is very important not to disparage a generation. These students have been taught and internalized misconceptions about what it takes to be successful.”

Frank Bruni cited and interviewed the authors for his piece, “Demanding More from College,” in the New York Times:

Arum and Roksa, in “Aspiring Adults Adrift,” do take note of upsetting patterns outside the classroom and independent of career preparation; they cite survey data that showed that more than 30 percent of college graduates read online or print newspapers only “monthly or never” and nearly 40 percent discuss public affairs only “monthly or never.”

Arum said that that’s “a much greater challenge to our society” than college graduates’ problems in the labor market. “If college graduates are no longer reading the newspaper, keeping up with the news, talking about politics and public affairs — how do you have a democratic society moving forward?” he asked me.

And finally, Richard Arum explained the book’s findings in an online interview with the WSJ.

To read more about Aspiring Adults Adrift, click here.

Add a Comment
25. Rachel Sussman and The Oldest Living Things in the World

9780226057507

 

This past week, Rachel Sussman’s colossal photography project—and its associated book—The Oldest Living Things in the World, which documents her attempts to photograph continuously living organisms that are 2,000 years old and older, was profiled by the New Yorker:

To find the oldest living thing in New York City, set out from Staten Island’s West Shore Plaza mall (Chuck E. Cheese’s, Burlington Coat Factory, D.M.V.). Take a right, pass Industry Road, go left. The urban bleakness will fade into a litter-strewn route that bisects a nature preserve called Saw Mill Creek Marsh. Check the tides, and wear rubber boots; trudging through the muddy wetlands is necessary.

The other day, directions in hand, Rachel Sussman, a photographer from Greenpoint, Brooklyn, went looking for the city’s most antiquated resident: a colony of Spartina alterniflora or Spartina patens cordgrass which, she suspects, has been cloning and re-cloning itself for millennia.

Not simply the story of a cordgrass selfie, Sussman’s pursuit becomes contextualized by the lives—and deaths—of our fragile ecological forebears, and her desire to document their existence while they are still of the earth. In support of the project, Sussman has a series of upcoming events surrounding The Oldest Living Things in the World. You can read more at her website, or see a listing of public events below:

EXHIBITIONS:

Imagining Deep Time (a cultural program of the National Academy of Sciences in Washington, DC), on view from August 28, 2014 to January 15, 2015

Another Green World, an eco-themed group exhibition at NYU’s Gallatin Galleries, featuring Nina Katchadourian, Mitchell Joaquim, William Lamson, Mary Mattingly, Melanie Baker, and Joseph Heidecker, on view from September 12, 2014 to October 15, 2014

The Oldest Living Things in the World, a solo exhibition at Pioneer Works in Brooklyn, NY, from September 15, 2014 to November 2, 2014, including a closing program

TALKS:

Sept 18th: a discussion in conjunction with the National Academy of Sciences exhibition Imagining Deep Time for DASER (DC Art Science Evening Rendezvous), Washington, DC (free and open to the public)

Nov 20th: an artist’s talk at the Museum of Contemporary Photography, Chicago

To read more about The Oldest Living Things in the World, click here.


Add a Comment

View Next 25 Posts