The Chicago Blog
Publicity news from the University of Chicago Press including news tips, press releases, reviews, and intelligent commentary.

1. Excerpt: Versions of Academic Freedom

9780226064314

***

“Academic Freedom Studies: The Five Schools”

In 2009 Terrence Karran published an essay with the title “Academic Freedom: In Justification of a Universal Ideal.” Although it may not seem so at first glance, the title is tendentious, for it answers in advance the question most often posed in the literature: How does one justify academic freedom? One justifies academic freedom, we are told before Karran’s analysis even begins, by claiming for it the status of a universal ideal.

The advantage of this claim is that it disposes of one of the most frequently voiced objections to academic freedom: Why should members of a particular profession be granted latitudes and exemptions not enjoyed by other citizens? Why, for example, should college and university professors be free to criticize their superiors when employees in other workplaces might face discipline or dismissal? Why should college and university professors be free to determine and design the condition of their workplace (the classroom) while others must adhere to a blueprint laid down by a supervisor? Why should college and university professors be free to choose the direction of their research while researchers who work for industry and government must go down the paths mandated by their employers? We must ask, says Frederick Schauer (2006), “whether academics should, by virtue of their academic employment and/or profession, have rights (or privileges, to be more accurate) not possessed by others” (913).

The architects of the doctrine of academic freedom were not unaware of these questions, and, in anticipation of others raising them, raised them themselves. Academic freedom, wrote Arthur O. Lovejoy (1930), might seem “peculiar chiefly in that the teacher is . . . a salaried employee and that the freedom claimed for him implies a denial of the right of those who provide or administer the funds from which he is paid to control the content of his teaching” (384). But this denial of the employer’s control of the employee’s behavior is peculiar only if one assumes, first, that college and university teaching is a job like any other and, second, that the college or university teacher works for a dean or a provost or a board of trustees. Those assumptions are directly challenged and rejected by the American Association of University Professors’ 1915 Declaration of Principles on Academic Freedom and Academic Tenure, a founding document (of which Lovejoy was a principal author) and one that is, in many respects, still authoritative. Here is a key sentence:

The responsibility of the university teacher is primarily to the public itself, and to the judgment of his own profession; and while, with respect to certain external conditions of his vocation, he accepts a responsibility to the authorities of the institution in which he serves, in the essentials of his professional activity his duty is to the wider public to which the institution itself is morally amenable.

There are four actors and four centers of interest in this sentence: the public, the institution of the academy, the individual faculty member, and the individual college or university. The faculty member’s allegiance is first to the public, an abstract entity that is not limited to a particular location. The faculty member’s secondary allegiance is to the judgment of his own profession, but since, as the text observes, the profession’s responsibility is to the public, it amounts to the same thing. Last in line is the actual college or university to which the faculty member is tied by the slightest of ligatures. He must honor the “external conditions of his vocation”—conditions like showing up in class and assigning grades, and holding office hours and teaching to the syllabus and course catalog (although, as we shall see, those conditions are not always considered binding)— but since it is a “vocation” to which the faculty member is responsible, he will always have his eye on what is really essential, the “universal ideal” that underwrites and justifies his labors.

Here in 1915 are the seeds of everything that will flower in the twenty-first century. The key is the distinction between a job and a vocation. A job is defined by an agreement (often contractual) between a worker and a boss: you will do X and I will pay you Y; and if you fail to perform as stipulated, I will discipline or even dismiss you. Those called to a vocation are not merely workers; they are professionals; that is, they profess something larger than the task immediately at hand— a religious faith, a commitment to the rule of law, a dedication to healing, a zeal for truth— and in order to become credentialed professors, as opposed to being amateurs, they must undergo a rigorous and lengthy period of training. Being a professional is less a matter of specific performance (although specific performances are required) than of a continual, indeed lifelong, responsiveness to an ideal or a spirit. And given that a spirit, by definition, cannot be circumscribed, it will always be possible (and even thought mandatory and laudable) to expand the area over which it is said to preside.

The history of academic freedom is in part the history of that expansion as academic freedom is declared to be indistinguishable from, and necessary for, the flourishing of every positive value known to humankind. Here are just a few quotations from Karran’s essay:

Academic freedom is important to everyone’s well-being, as well as being particularly pertinent to academics and their students. (The Robbins Committee on Higher Education in the UK, 1963)

Academic freedom is but a facet of freedom in the larger society. (R. M. O. Pritchard, “Academic Freedom and Autonomy in the United Kingdom and Germany,” 1998)

A democratic society is hardly conceivable . . . without academic freedom. (S. Bergan, “Institutional Autonomy: Between Myth and Responsibility,” 2002)

In a society that has a high regard for knowledge and universal values, the scope of academic freedom is wide. (Wan Manan, “Academic Freedom: Ethical Implications and Civic Responsibilities,” 2000)

The sacred trust of the universities is to carry the torch of freedom. (J. W. Boyer, “Academic Freedom and the Modern University: The Experience of the University of Chicago,” 2002)

Notice that in this last statement, freedom is not qualified by the adjective academic. Indeed, you can take it as a rule that the larger the claims for academic freedom, the less the limiting force of the adjective academic will be felt. In the taxonomy I offer in this book, the movement from the most conservative to the most radical view of academic freedom will be marked by the transfer of emphasis from academic, which names a local and specific habitation of the asserted freedom, to freedom, which does not limit the scope or location of what is being asserted at all.

Of course, freedom is itself a contested concept and has many possible meanings. Graeme C. Moodie sorts some of them out and defines the freedom academics might reasonably enjoy in terms more modest than those suggested by the authors cited in Karran’s essay. Moodie (1996) notes that freedom is often understood as the “absence of constraint,” but that, he argues, would be too broad an understanding if it were applied to the activities of academics. Instead he would limit academic freedom to faculty members who are “exercising academic functions in a truly academic matter” (134). Academic freedom, in his account, follows from the nature of academic work; it is not a personal right of those who choose to do that work. That freedom— he calls it an “activity freedom” because it flows from the nature of the job and not from some moral abstraction— “can of course only be exercised by persons, but its justification, and thus its extent, must clearly and explicitly be rooted in its relationship to academic activities rather than (or only consequentially) to the persons who perform them” (133). In short, he concludes, “the special freedom(s) of academics is/are conditional on the fulfillment of their academic obligations” (134).

Unlike those who speak of a universal ideal and of the torch of freedom being carried everywhere, Moodie is focused on the adjective academic. He begins with it and reasons from it to the boundaries of the freedom academics can legitimately be granted. To be sure, the matter is not so cut and dried, for academic must itself be defined so that those boundaries can come clearly into view and that is no easy matter. No one doubts that classroom teaching and research and scholarly publishing are activities where the freedom in question is to be accorded, at least to some extent. But what about the freedom to criticize one’s superiors; or the freedom to configure a course in ways not standard in the department; or the freedom to have a voice in the building of parking garages, or in the funding of athletic programs, or in the decision to erect a student center, or in the selection of a president, or in the awarding of honorary degrees, or in the inviting of outside speakers? Is academic freedom violated when faculty members have minimal input into, or are shut out entirely from, the consideration of these and other matters?

To that question, Mark Yudof, who has been a law school dean and a university president, answers a firm “no.” Yudof (1988) acknowledges that “there are many elements necessary to sustain the university,” including “salaries,” “library collections,” a “comfortable workplace,” and even “a parking space” (1356), but do academics have a right to these things or a right to participate in discussions about them (a question apart from the question of whether it is wise for an administration to bring them in)? Only, says Yudof, if you believe “that any restrictions, however indirectly linked to teaching and scholarship, will destroy the quest for knowledge” (1355). And that, he observes, would amount to “a kind of unbridled libertarianism for academicians,” who could say anything they liked in a university setting without fear of reprisal or discipline (1356).

Better, Yudof concludes, to define academic freedom narrowly, if only so those who are called upon to defend it can offer a targeted, and not wholly diffuse, rationale. Academic freedom, he declares, “is what it is” (of course that’s the question; what is it?), and it is “not general liberty, pleasant working conditions, equality, self-realization, or happiness,” for “if academic freedom is thought to include all that is desirable for academicians, it may come to mean quite little to policy makers and courts” (1356). Moodie (1996) gives an even more pointed warning: “Scholars only invite ridicule, or being ignored, when they seem to suggest that every issue that directly affects them is a proper sphere for academic rule” (146). (We shall revisit this issue when we consider the relationship between academic freedom, shared governance, and public employee law.)

So we now have as a working hypothesis an opposition between two views of academic freedom. In one, freedom is a general, overriding, and ever-expanding value, and the academy is just one of the places that house it. In the other, the freedom in question is peculiar to the academic profession and limited to the performance of its core duties. When performing those duties, the instructor is, at least relatively, free. When engaged in other activities, even those that take place within university precincts, no such freedom or special latitude obtains. This modest notion of academic freedom is strongly articulated by J. Peter Byrne (1989): “The term ‘academic freedom’ should be reserved for those rights necessary for the preservation of the unique functions of the university” (262).

These opposed accounts of academic freedom do not exhaust the possibilities; there are extremes to either side of them, and in the pages that follow I shall present the full range of the positions currently available. In effect I am announcing the inauguration of a new field— Academic Freedom Studies. The field is still in a fluid state; new variants and new theories continue to appear. But for the time being we can identify five schools of academic freedom, plotted on a continuum that goes from right to left. The continuum is obviously a political one, but the politics are the politics of the academy. Any correlation of the points on the continuum with real world politics is imperfect, but, as we shall see, there is some. I should acknowledge at the outset that I shall present these schools as more distinct than they are in practice; individual academics can be members of more than one of them. The taxonomy I shall offer is intended as a device of clarification. The inevitable blurring of the lines comes later.

As an aid to the project of sorting out the five schools, here is a list of questions that would receive different answers depending on which version of academic freedom is in place:

Is academic freedom a constitutional right?
What is the relationship between academic freedom and the First Amendment?
What is the relationship between academic freedom and democracy?
Does academic freedom, whatever its scope, attach to the individual faculty member or to the institution?
Do students have academic freedom rights?
What is the relationship between academic freedom and the form of governance at a college or university?
In what sense, if any, are academics special?
Does academic freedom include the right of a professor to criticize his or her organizational superiors with impunity?
Does academic freedom allow a professor to rehearse his or her political views in the classroom?
What is the relationship between academic freedom and political freedom?
What views of education underlie the various positions on academic freedom?

As a further aid, it would be good to have in mind some examples of incidents or controversies in which academic freedom has been thought to be at stake.

In 2011, the faculty of John Jay College nominated playwright Tony Kushner to be the recipient of an honorary degree from the City University of New York. Normally approval of the nomination would have been pro forma, but this time the CUNY Board of Trustees tabled, and thus effectively killed, the motion supporting Kushner’s candidacy because a single trustee objected to his views on Israel. After a few days of outrage and bad publicity the board met again and changed its mind. Was the board’s initial action a violation of academic freedom, and if so, whose freedom was being violated? Or was the incident just one more instance of garden-variety political jockeying, a tempest in a teapot devoid of larger implications?

In the same year Professor John Michael Bailey of Northwestern University permitted a couple to perform a live sex act at an optional session of his course on human sexuality. The male of the couple brought his naked female partner to orgasm with the help of a device known as a “fucksaw.” Should Bailey have been reprimanded and perhaps disciplined for allowing lewd behavior in his classroom or should the display be regarded as a legitimate pedagogical choice and therefore protected by the doctrine of academic freedom?

In 2009 sociology professor William Robinson of the University of California at Santa Barbara, after listening to a tape of a Martin Luther King speech protesting the Vietnam War, sent an e-mail to the students in his sociology of globalization course that began:

If Martin Luther King were alive on this day of January 19th, there is no doubt that he would be condemning the Israeli aggression against Gaza along with U.S. military and political support for Israeli war crimes, or that he would be standing shoulder to shoulder with the Palestinians.

The e-mail went on to compare the Israeli actions against Gaza to the Nazi actions against the Warsaw ghetto, and to characterize Israel as “a state founded on the negation of a people.” Was Robinson’s e-mail an intrusion of his political views into the classroom or was it a contribution to the subject matter of his course and therefore protected under the doctrine of academic freedom?

As the 2008 election approached, an official communication from the administration of the University of Illinois listed as prohibited political activities the wearing of T-shirts or buttons supporting candidates or parties. Were faculty members being denied their First Amendment and academic freedom rights?

BB&T, a bank holding company, funds instruction in ethics on the condition that the courses it supports include as a required reading Ayn Rand’s Atlas Shrugged (certainly a book concerned with issues of ethics). If a university accepts this arrangement (as Florida State University did), has it traded its academic freedom for cash or is it (as the dean at Florida State insisted) merely accepting help in a time of financial exigency?

In 1996, the state of Virginia passed a law forbidding state employees from accessing pornographic materials on state-owned computers. The statute included a waiver for those who could convince a supervisor that the viewing of pornographic material was part of a bona fide research project. Was the academic freedom of faculty members in the state university system violated because they were prevented from determining for themselves and without government monitoring the course of their research?

Just as my questions would be answered differently by proponents of different accounts of academic freedom, so would these cases be assessed differently depending on which school of academic freedom a commentator belongs to.

Of course I have yet to name the schools, and I will do that now.

(1)— The “It’s just a job” school. This school (which may have only one member and you’re reading him now) rests on a deflationary view of higher education. Rather than being a vocation or holy calling, higher education is a service that offers knowledge and skills to students who wish to receive them. Those who work in higher education are trained to impart that knowledge, demonstrate those skills and engage in research that adds to the body of what is known. They are not exercising First Amendment rights or forming citizens or inculcating moral values or training soldiers to fight for social justice. Their obligations and aspirations are defined by the distinctive task— the advancement of knowledge— they are trained and paid to perform, defined, that is, by contract and by the course catalog rather than by a vision of democracy or world peace. College and university teachers are professionals, and as such the activities they legitimately perform are professional activities, activities in which they have a professional competence. When engaged in those activities, they should be accorded the latitude— call it freedom if you like— necessary to their proper performance. That latitude does not include the performance of other tasks, no matter how worthy they might be. According to this school, academics are not free in any special sense to do anything but their jobs.

(2)— The “For the common good” school. This school has its origin in the AAUP Declaration of Principles (1915), and it shares some arguments with the “It’s just a job” school, especially the argument that the academic task is distinctive. Other tasks may be responsible to market or political forces or to public opinion, but the task of advancing knowledge involves following the evidence wherever it leads, and therefore “the first condition of progress is complete and unlimited freedom to pursue inquiry and publish its results.” The standards an academic must honor are the standards of the academic profession; the freedom he enjoys depends on adherence to those standards: “The liberty of the scholar . . . to set forth his conclusions . . . is conditioned by their being conclusions gained by a scholar’s method and held in a scholar’s spirit.” That liberty cannot be “used as a shelter . . . for uncritical and intemperate partisanship,” and a teacher should not inundate students with his “own opinions.”

With respect to pronouncements like these, the “For the common good” school and the “It’s just a job” school seem perfectly aligned. Both paint a picture of a self-enclosed professional activity, a transaction between teachers, students, and a set of intellectual questions with no reference to larger moral, political, or societal considerations. But the opening to larger considerations is provided, at least potentially, by a claimed connection between academic freedom and democracy. Democracy, say the authors of the 1915 Declaration, requires “experts . . . to advise both legislators and administrators,” and it is the universities that will supply them and thus render a “service to the right solution of . . . social problems.” Democracy’s virtues, the authors of the Declaration explain, are also the source of its dangers, for by repudiating despotism and political tyranny, democracy risks legitimizing “the tyranny of public opinion.” The academy rides to the rescue by working “to help make public opinion more self-critical and more circumspect, to check the more hasty and unconsidered impulses of popular feeling, to train the democracy.” By thus offering an external justification for an independent academy— it protects us from our worst instincts and furthers the realization of democratic principles— the “For the common good” school moves away from the severe professionalism of the “It’s just a job” school and toward an argument in which professional values are subordinated to the higher values of democracy or justice or freedom; that is, to the common good.

(3)— The “Academic exceptionalism or uncommon beings” school. This school is a logical extension of the “For the common good” school. If academics are charged not merely with the task of adding to our knowledge of natural and cultural phenomena, but with the task of providing a counterweight to the force of common popular opinion, they must themselves be uncommon, not only intellectually but morally; they must be, in the words of the 1915 Declaration, “men of high gift and character.” Such men (and now women) not only correct the errors of popular opinion, they escape popular judgment and are not to be held accountable to the same laws and restrictions that constrain ordinary citizens.

The essence of this position is displayed by the plaintiff’s argument in Urofsky v. Gilmore (2000), a Fourth Circuit case revolving around Virginia’s law forbidding state employees from accessing explicitly sexual material on state-owned computers without the permission of a supervisor. The phrase that drives the legal reasoning in the case is “matter of public concern.” In a series of decisions the Supreme Court had ruled that if public employees speak out on a matter of public concern, their First Amendment rights come into play and might outweigh the government’s interest in efficiency and organizational discipline. (A balancing test is triggered.) If, however, the speech is internal to the operations of the administrative unit, no such protection is available. The Urofsky court determined that the ability of employees to access pornography was not a matter of public concern. The plaintiffs, professors in the state university system, then detached themselves from the umbrella category of “public employees” and claimed a special status. They argued that “even if the Act is valid as to the majority of state employees, it violates the . . . academic freedom rights of professors . . . and thus is invalid as to them.” In short, we’re exceptional.

(4)— The “Academic freedom as critique” school. If academics have the special capacity to see through the conventional public wisdom and expose its contradictions, exercising that capacity is, when it comes down to it, the academic’s real job; critique— of everything— is the continuing obligation. While the “It’s just a job” school and the “For the common good” school insist that the freedom academics enjoy is exercised within the norms of the profession, those who identify academic freedom with critique (because they identify education with critique) object that this view reifies and naturalizes professional norms which are themselves the products of history, and as such are, or should be, challengeable and revisable. One should not rest complacently in the norms and standards presupposed by the current academy’s practices; one should instead interrogate those norms and make them the objects of critical scrutiny rather than the baseline parameters within which critical scrutiny is performed.

Academic freedom is understood by this school as a protection for dissent and the scope of dissent must extend to the very distinctions and boundaries the academy presently enforces. As Judith Butler (2006a) puts it, “as long as voices of dissent are only admissible if they conform to accepted professional norms, then dissent itself is limited so that it cannot take aim at those norms that are already accepted” (114). One of those norms enforces a separation between academic and political urgencies, but, Butler contends, they are not so easily distinguishable and the boundaries between them blur and change. Fixing boundaries that are permeable, she complains, has the effect of freezing the status quo and of allowing distinctions originally rooted in politics to present themselves as apolitical and natural. The result can be “a form of political liberalism that is coupled with a profoundly conservative intellectual resistance to . . . innovation” (127). From the perspective of critique, established norms are always conservative and suspect and academic freedom exists so that they can be exposed for what they are. Academic freedom, in short, is an engine of social progress and is thought to be the particular property of the left on the reasoning (which I do not affirm but report) that conservative thought is anti-progressive and protective of the status quo. It’s only a small step, really no step at all, from academic freedom as critique to the fifth school of thought.

(5)— The “Academic freedom as revolution” school. With the emergence of this school the shift from academic as a limiting adjective to freedom as an overriding concern is complete and the political agenda implicit in the “For the common good” and “Academic freedom as critique” schools is made explicit. If Butler wants us to ask where the norms governing academic practices come from, the members of this school know: they come from the corrupt motives of agents who are embedded in the corrupt institutions that serve and reflect the corrupt values of a corrupt neoliberal society. (Got that?) The view of education that lies behind and informs this most expansive version of academic freedom is articulated by Henry Giroux (2008). The “responsibilities that come along with teaching,” he says, include fighting for

an inclusive and radical democracy by recognizing that education in the broadest sense is not just about understanding, . . . but also about providing the conditions for assuming the responsibilities we have as citizens to expose human misery and to eliminate the conditions that produce it. (128)

In this statement the line between the teacher as a professional and the teacher as a citizen disappears. Education “in the broadest sense” demands positive political action on the part of those engaged in it. Adhering to a narrow view of one’s responsibilities in the classroom amounts to a betrayal both of one’s political being and one’s pedagogical being. Academic freedom, declares Grant Farred (2008–2009), “has to be conceived as a form of political solidarity”; and he doesn’t mean solidarity with banks, corporations, pharmaceutical firms, oil companies or, for that matter, universities (355). When university obligations clash with the imperative of doing social justice, social justice always trumps. The standard views of academic freedom, members of this school complain, sequester academics in an intellectual ghetto where, like trained monkeys, they perform obedient and sterile routines. It follows, then, that one can only be true to the academy by breaking free of its constraints.

The poster boy for the “Academic freedom as revolution” school is Denis Rancourt, a physics professor at the University of Ottawa (now removed from his position) who practices what he calls “academic squatting”— turning a course with an advertised subject matter and syllabus into a workshop for revolutionary activity. Rancourt (2007) explains that one cannot adhere to the customary practices of the academy without becoming complicit with the ideology that informs them: “Academic squatting is needed because universities are dictatorships, devoid of real democracy, run by self-appointed executives who serve private capital interests.”

To read more about Versions of Academic Freedom, click here.

2. Excerpt: Packaged Pleasures

9780226121277
by Gary S. Cross and Robert N. Proctor

***

“The Carrot and the Candy Bar”

Our topic is a revolution—as significant as anything that has tossed the world over the past two hundred years. Toward the end of the nineteenth century, a host of often ignored technologies transformed human sensual experience, changing how we eat, drink, see, hear, and feel in ways we still benefit (and suffer) from today. Modern people learned how to capture and intensify sensuality, to preserve it, and to make it portable, durable, and accessible across great reaches of social class and physical space. Our vulnerability to such a transformation traces back hundreds of thousands of years, but the revolution itself did not take place until the end of the nineteenth century, following a series of technological changes altering our ability to compress, distribute, and commercialize a vast range of pleasures.

Strangely, historians have neglected this transformation. Indeed, behind this astonishing lapse lies a common myth—that there was an age of production that somehow gave rise to an age of consumption, with historians of the former exploring industrial technology, while historians of the latter stress the social and symbolic meaning of goods. This artificial division obscures how technologies of production have transformed what and how we actually consume. Technology does far more than just increase productivity or transform work, as historians of the Industrial Revolution so often emphasize. Industrial technology has also shaped how and how much we eat, what we wear and why, and how and what (and how much!) we hear and see. And myriad other aspects of how we experience daily life—or even how we long for escape from it.

Bound to such transformations is a profound disruption in modern life, a breakdown of the age-old tension between our bodily desires and the scarcity of opportunities for fulfillment. New technologies— from the rolling of cigarettes to the recording of sound—have intensified the gratification of desires but also rendered them far more easily satisfied, often to the point of grotesque excess. An obvious example is the mechanized packaging of highly sugared foods, which began over a century ago and has led to a health and moral crisis today. Lots of media attention has focused on the irresponsibility of the food industry and the rise of recreational and workplace sedentism—but there are other ways to look at this.

It should be obvious that technology has transformed how people eat, especially with regard to the ease and speed with which it is now possible to ingest calories. Roots of such transformations go very deep: the Neolithic revolution ten-plus thousand years ago brought with it new methods of regularizing the growing of food and the world’s first possibility of elite obesity. The packaged pleasure revolution in the nineteenth century, however, made such excess possible for much larger numbers of “consumers”—a word only rarely used prior to that time. Industrial food processors learned how to pack fat, sugar, and salt into concentrated and attractive portions, and to manufacture these cheaply and in packages that could be widely distributed. Foods that were once luxuries thus became seductively commonplace. This is the first thing we need to understand.

We also need to appreciate that responsibility for the excesses of today’s consumers cannot be laid entirely at the doors of modern technology and the corporations that benefit from it. We cannot blame the food industry alone. No one is forced to eat at McDonald’s; people choose Big Macs with fries because they satisfy with convenience and affordability, just as people decide to turn on their iPods rather than listen to nature or go to a concert. But why would we make such a choice—and is it entirely a “free choice”? This brings us to a second crucial point: humans have evolved to seek high-energy foods because in prehistoric conditions of scarcity, eating such foods greatly improved their ancestors’ chances of survival. This has limited, but not entirely eliminated, our capacity to resist these foods when they no longer are scarce. And if we today crave sugar and fat and salt, that is partly because these longings must have once promoted survival, deep in the pre-Paleolithic and Paleolithic. Our taste buds respond gleefully to sugars because we are descended from herbivores and especially frugivores for whom sweet-tasting plants and fruits were neuro-marked as edible and nutritious. Poisonous plants were more often bitter-tasting. Pleasure at least in this sensory sense was often a clue to what might help one survive.

But here again is the rub. Thanks to modern industrialism, high-calorie foods once rare are now cheap and plentiful. Industrial technology has overwhelmed and undercut whatever balance may have existed between the biological needs of humans and natural scarcity. We tend to crave those foods that before modern times were rare; cravings for fat and sugar were no threat to health; indeed, they improved our chances of survival. Now, however, sugar, especially in its refined forms, is plentiful, and as a result makes us fat and otherwise unhealthy. And what is true for sugar is also true for animal fat. In our prehistoric past fat was scarce and valuable, accounting for only 2 to 4 percent of the flesh of deer, rabbits, and birds, and early humans correctly gorged whenever it was available. Today, though, factory-farmed beef can consist of 36 percent fat, and most of us expend practically no energy obtaining it. And still we gorge.

And so the candy bar, a perfect example of the engineered pleasure, wins out over the carrot and even the apple. More sugar and seemingly more varied flavors are packed into the confection than the unprocessed fruit or vegetable. In this sense our craving for a Snickers bar is partly an expression of the chimp in us, insofar as we desire energy-packed foods with maximal sugars and fat. The concentration, the packaging, and the ease of access (including affordability) all make it possible—indeed enticingly easy—to ingest far more than we know is good for us. Our biological desires have become imperfect guides for good behavior: drives born in a world of scarcity do not necessarily lead to health and happiness in a world of plenty.

But food is not the only domain where such tensions operate. Indeed, a broader historical optic reveals tensions in our response to the packaged provisioning of other sensations, and this broader perspective invites us to go beyond our current focus on food, as important as that may be.

As biological creatures we are naturally attracted to certain sights and sounds, even smells and motion, insofar as we have evolved in environments where such sensitivities helped our ancestors prevail over myriad threats to human existence. The body’s perceptual organs are, in a sense, some of our oldest tools, and much of the pleasure we take in bright colors, combinations of particular shapes, and certain kinds of movement must be rooted in prehistoric needs to identify food, threats, or mates from a distance. Today we embrace the recreational counterparts, filling our domestic spaces with visual ornaments, fixed or in motion, reminding ourselves of landscapes, colors, or shapes that provoke recall or simulate absent or even impossible worlds.

What has changed, in other words, is our access to once-rare sensations, including sounds but especially imagery. The decorated caves of southern France, once rare and ritualized space, are now tourist attractions, accessible to all through electronic media. Changes in visual technology have made possible a virtual orgy of visual culture; a 2012 count estimated over 348,000,000,000 images on the Internet, with a growth rate of about 10,000 per second. The mix and matrix of information transfer has changed accordingly: orality (and aurality) has been demoted to a certain extent, first with the rise of typography (printing) and then the published picture, and now the ubiquitous electronic image on screens of different sorts. “Seeing is believing” is an expression dating only from about 1800, signaling the surging primacy of the visual. Civilization itself celebrates the light, the visual sense, as the darkness of the night and the narrow street gradually give way to illuminated interiors, light after dark, and ever broader visual surveillance.

Humans also have preferences for certain smells, of course, even if we are (far) less discriminating than most other mammals. Technologies of odor have never been developed as intensively as those of other senses, though we should not forget that for tens of thousands of years hunters have employed dogs—one of the oldest human “tools”—to do their smelling. Smell has also sometimes marked differences between tribes and classes, rationalizing the isolation of slaves or some other subject group. The wealthy are known to have defined themselves by their scents (the ancient Greeks used mint and thyme oils for this purpose), and fragrances have been used to ward off contagions. Some philosophers believed that the scent of incense could reach and please the gods; and of course the devil smelled foul—as did sin.

Still, the olfactory sense lost much of its acuity in upright primates, and it is the rare philosopher who would base an epistemology on odor. Philosophers have always privileged sight over all other senses—which makes sense given how much of our brain is devoted to processing visual images (canine epistemology and agnotology would surely be quite different). Optico-centricity was further accentuated with the rise of novel ways of extending vision in the seventeenth century (microscopes, telescopes) and still more with the rise of photography and moving pictures. Industrial societies have continued to devalue scent, with some even trying to make the world smell-free. Pasteur’s discovery of germs meant that foul air (think miasma) lost its role in carrying disease, but efforts to remove the germs that caused such odors (especially the sewage systems installed in cities in the nineteenth century) ended up mollifying much of the stink of large urban centers. Bodily perfuming has probably been around for as long as humans have been human, but much of recent history has involved a process of deodorizing, further reducing the value of the sensitive nose.

Modern people may well gorge on sight, but we certainly remain sound-sensitive and long for music, “the perfume of hearing” in the apt metaphor of Diane Ackerman. Music has always aroused a certain spiritual consciousness and may even have facilitated social bonding among early humans. Stringed and drum instruments date back only to about 5,500 years ago (in Mesopotamia), but unambiguous flutes date back to at least 40,000 years ago; the oldest known so far is made from vulture and swan bones found in southern Germany. Singing, though, must be far older than whatever physical evidence we have for prehistoric music.

There is arguably a certain industrial utility to music, insofar as “moving and singing together made collective tasks far more efficient” (so claims historian William McNeill). As a mnemonic aid, a song “hooks onto your subconscious and won’t let go.” Music carries emotion and preserves and transports feelings when passed from one person or generation to another—think of the “Star Spangled Banner” or “La Marseillaise.” And music also marks social differences in stratified societies. In Europe by the eighteenth century, for example, people of rank had abandoned participation in the sounds and music of traditional communal festivals and spectacles. To distinguish themselves from the masses, the rich and powerful came to favor the orderly stylized sounds of chamber music—and even demanded that audiences keep silent during performances. One of the signal trends of this particular modernity is the withdrawal of elites from public festivals, creating space instead for their own exclusive music and dance to eliminate the unruly/unmanaged sounds of the street and work. Music helps forge social bonds, but it can also work to separate and to isolate, facilitating escape from community (think earbuds).

We humans also of course crave motion and bodily contact, flexing our muscles in the manner of our ancestors exhilarating in the chase. And even if we no longer chase mammoth herds with spears, we recreate elements of this excitement in our many sports, testing strength against strength or speed against speed, forcing projectiles of one sort or another into some kind of target. Dance is an equally ancient expression of this thrill of movement, with records of ritual motion appearing already on cave and rock walls of early humans. The emotion-charged dance may be diminished in elite civilized life, but it clearly reappears in the physicality of amusement park throngs at the end of the nineteenth century, and more recently in the rhythmic motions of crowds at sporting events and rock concert moshing where strangers slam and grind into each other.

Sensual pleasure is thus central to the “thick tapestry of rewards” of human evolutionary adaptation, rewards wired into the complex circuitry of the brain’s pleasure centers. Pursuit of pleasure (and avoidance of pain) was certainly not an evil in our distant past; indeed, it must have had obvious advantages in promoting evolutionary fitness. Along with other adaptive emotions (fear, surprise, and disgust, for example), pleasure and its pursuit must also have helped create capacities to bond socially—and perhaps even to use and to understand language. The joy that motivates babies to delight in rhythmic and consonant sounds, bright colors, friendly faces, and bouncing motion helps build brain connections essential for motor and cognitive maturity.

Of course the biological propensity to gorge cannot be new; that much we know from the relative constancy of the human genetic constitution over many millennia. We also know that efforts to augment or intensify sensual pleasure long predate industrial civilization. This should come as no surprise, given that, as already noted, our longings for rare delights of taste, sight, smell, sound, and motion are rooted in our prehistoric past. Humans—like wolves—have been bred to binge. But in the past, at least, nature’s parsimony meant that gorging was generally rare and its impact on our bodies, psyches, and sociability limited.

This leads us again to a critical point: pleasure is born in its paucity and scarcity sustains it. And scarcity has been a fact of life for most of human history; in fact, it is very often a precondition for pleasure. Too much of any good can lead to boredom—that is as true for music or arcade games as for ice cream or opera. Most pleasures seem to require a context of relative scarcity. Amongst our prehistoric ancestors this was naturally enforced through the rarity of honey and the all-too-infrequent opportunity for the chase. Humans eventually developed the ability, however, to create and store surpluses of pleasure-giving goods, first by cooking and preserving foods and drinks and eventually by transforming even fleeting sensory experiences into reproducible and transmissible packets of pleasure. Think about candy bars, soda pop, and cigarettes, but also photography, phonography, and motion pictures—all of which emerged during the packaged pleasure revolution.

Of course, in certain respects the defeat of scarcity has a much older history, having to do with techniques of containerization. Prior to the Neolithic, circa ten thousand years ago, humans had little in the way of either technical means or social organization to store any kind of sensual surplus (though meats may have been stashed the way some nonhuman predators do). Farming and its associated technics changed this. After hundreds of thousands of years of scavenging and predation, people in this new era began to grow their own food—and then to save and preserve it in containers, especially in pots made from clay but also in bags made from skins or fibers from plants. Agriculture seems to have led to the world’s first conspicuous inequalities in wealth, but also the first routine encounters with obesity and other sins of the flesh (drunkenness, for example). Of course the rich—the rulers and priests of ancient city-states and empires or the lords and abbots of religious centers in the Middle Ages—were able to satisfy sensual longings more often, and in some cases continually.

While Christianity was in part a reaction to this sensual indulgence, being originally a religion of the excluded slave and the appalled rich, medieval aristocrats returned to the ancient love of sweet and sour dishes, favoring roasted game (a throwback to the preagricultural era) and the absurd notion that torturing animals before killing them made for the tastiest meats. Medieval European nobility mixed sex, smell, and taste in their large midday meals and frequent evening banquets. Christian church fathers banned perfumes and roses as Roman decadence, but treatments of this sort—along with passions for pungent flavors and scents—were revived with the Crusades and intimate contact with the Orient.

Until recently, pursuit of pleasure on such an opulent scale was confined to those tiny minorities with regular access to the resources to contain and intensify nature. Since antiquity, in fact, the powerful have often been snobbish killjoys, trying to restrict what the poor were allowed to eat, wear, and enjoy. Sometimes this made economic (if invidious) sense—as when England’s Edward III rationed the diet of servants during shortages that followed the Black Death. In the sixteenth century, French law prohibited the eating of fish and meat at the same meal in hopes of preserving scarce supplies. And given the low output of agriculture, there was a certain logic underlying the rationing of access to “luxuries.” But the powerful sometimes seem to have relished denying pleasure to others. How else do we explain sumptuary laws that prohibited the commoner from wearing colorful and costly clothing reserved for aristocrats?

Access to pleasure has long been an expression of privilege and power, but much can be made with little, and rarely has pleasurable display been totally suppressed in any culture. Think of the ceremonies surrounding seasonal festivals, especially the gathering of harvest surplus, when humans drenched themselves in the senses that seemed almost to ache for expression. Think of the Bacchanalia of the Greeks, the Saturnalia of the Romans, the Mardi Gras of medieval Europeans, or the orgies of feasting, dancing, music, and colorful costumes of any society whose everyday world of scarcity is forgotten in bingeing after harvest. Agriculture produced cycles of carnival and Lent, “a self-adjusting gastric equilibrium,” in the words of one historian.

Of course there are many examples of ancient philosophers and sages seeking to limit the hedonism of the privileged (and the festival culture of the poor). Certainly there are ancients who embraced the virtues of moderation, as in Aristotle’s “golden mean” or Confucian ideals of restrained desire. Hebrew prophets, Puritans, Jesuits, and countless Asian ascetics likewise attempted to rein in the fêtes of the senses. Medieval authorities in Europe forbade the eating of meat on Wednesdays, Fridays, and numerous fast days that added up to more than 150 days a year. The classical ideal of moderation was revived, and the moral superiority of grain-based foods was defended. Gluttony was condemned along with lust. Pleasure was to be regulated even in the afterlife, insofar as the Christian heaven was not for pleasure but for self-improvement. These and other ascetic moralities arguably helped people cope with uncertain supplies, putting a brake also on the rapacious greed of the rich and powerful. Curbing of excess extended to all manner of “pleasures of the flesh,” including those that, like sex, were not necessarily even scarce.

Dance came under suspicion in this regard, especially in its ecstatic form. European explorers frowned on the gesticulations of “possessed natives” whom they encountered in Africa and the Americas in the sixteenth and seventeenth centuries. At the same time, European elites smothered social dancing in the towns and villages of their own societies. The reasons were many. Clergy demanded that their holy days and rituals be protected from defilement by the boisterous and even sacrilegious customs of the frolicking crowd; the rich also chose to withdraw from—and then suppressed—the emotional intensity of common people’s celebrations, retiring instead to the confines of their private gatherings and sedate dances. The military also needed a new type of soldier and new ways of preparing men for war: the demand was no longer to fire up the emotions of soldiers to prepare them for hand-to-hand combat; the new need was to drill and discipline troops to march unflinching into musket and cannon fire, with individual fighters acting as precision components in a machine. The regular rhythms of the military march served this purpose better than the ecstatic dance.

Even when people found ways of intensifying sensation (as in the distillation of alcoholic spirits), state and church authorities were often able to enforce limits, sometimes by harsh means. In London in the 1720s, authorities repressed the widespread and addictive use of gin (a juniper-flavored liquor). At the beginnings of the Industrial Revolution, just as unleashing desire was becoming respectable, philosophers such as Adam Smith and David Hume still mused about the need for personal restraint and moral sympathies.

By this time, and increasingly over the course of the nineteenth century, especially between about 1880 and 1910, these traditional calls for moderation and self-control were starting to face a new kind of challenge, thanks to new techniques of containerization and intensification that would culminate in the packaged pleasure revolution. New kinds of machines brought new sensations to ordinary people, producing goods that for the first time could be made quite cheap and easily storable and portable. Canned food defeated the seasons, extending the availability of fruits and vegetables to the entirety of the year. Candy bars purchased at any newsstand or convenience store replaced the rare encounter with the honeycomb or wild strawberry. And while our more immediate predecessors may have enjoyed a pipe of tobacco or a draft of warm beer, the deadly convenience of the cigarette and the refreshing coolness of the chilled beverage came within the grasp of the masses only toward the end of the nineteenth century. And this revolution in the range and intensity of sensation radically upset the traditional relationship between desire and scarcity.

A similar process occurred with other sensory delights. While early-nineteenth-century Americans and Europeans thrilled at the sight of painted dioramas and magic lantern shows, nothing compared to the spectacle of fast-paced police chases in the one-reel movies viewable after 1900. Opera was a privileged treat of the few in lavish public places, but imagine the revolution wrought by the 1904 hard wax cylinder phonograph, when Caruso could be called upon to sing in the family parlor whenever (and however often) one wanted. Daredevils in Vanuatu dove from high places holding vines long before bungee jumping became a fad; even so, there was nothing like the mass-market calibrated delivery of physical thrills before the roller coaster, popularized in the 1890s. We find something similar even with binge partying: while peoples had long celebrated surpluses in festivals, they typically did so only on those rare days designated by the authorities. By the end of the nineteenth century, however, festive pleasures of a more programmed sort had become widely available on demand in the modern commercial amusement park.

Especially important is how the packaged pleasure intensified (certain aspects of) human sensory experience. An extreme example is when opium, formerly chewed, smoked, or drunk as tea, was transformed through distillation into morphine and eventually heroin—and then injected directly into the bloodstream with the newly invented syringe in the 1850s. The creation of a wide variety of “tubes” like the syringe for delivering chemically purified, intense sensation was characteristic of much of this new technology—which we shall describe in terms of “tubularization.” The cigarette is another fateful example: tobacco smoking was made cheap, convenient, and “mild” (i.e., deadly) with the advent of James Bonsack’s automated cigarette rolling machine (in the 1880s) and new methods of curing tobacco. Bonsack’s machine lowered the cost of manufacturing by an order of magnitude, and new methods of chemical processing (such as flue curing) allowed a milder, less alkaline smoke to be drawn deep into the lungs. A new mass-market consumer “good” was born, accompanied by mass addiction and mass death from maladies of the heart and lungs.

The “tubing” of tobacco into cigarettes was closely related to techniques used in packing and packaging many other commercial products. Think of mechanized canning—culminating in the double-seamed cylinder of the “sanitary” can-making machinery of 1904—and mechanized bottle and cap making from the late 1890s. New forms of sugar consumption appeared with the invention of soda fountain drinks. Coca-Cola was first served in drug stores in 1886 and in bottles by the end of the century, and in the 1890s the mixing of sugar with bitter chocolate led to candy bars, such as Hershey’s in 1900. Packaged pleasures of this sort—offered in conveniently portable portions with carefully calibrated constituents—allowed manufacturers to claim to have surpassed the sensuous joys of paradise. Chemists also began to be hired to see what new kinds of foods and drugs could be synthesized to surpass the taste, smell, and look of anything nature had created. A new discipline of “marketing” came of age about this time—the word was coined in 1884—with the task of creating demand for this riot of new products, decked out increasingly in colorful and striking labels with eye- and ear-catching slogans.

New technologies also sped up our consumption of visual, auditory, and motion sensoria. In 1839 the Daguerreotype revolutionized the familiar curiosity of the camera obscura—a dark room featuring a pinhole that would project an image of the outside world onto an interior wall—by chemically capturing that image on a metal plate in a miniaturized “camera” (meaning literally “room”). While these early photographs required long periods of exposure to fix an image, that time dramatically declined over the course of the century, allowing by 1888 the amateur snapshot camera and only three years later the motion picture camera. The effect, as we shall see, was a sea change in how we view and recollect the world. Sound was also captured (and preserved and sold) about this same time. The phonograph, invented in 1877 by Thomas Edison, became a new way of experiencing sound when improved and domesticated. And Emile Berliner’s “record” of 1887 made possible the mass production of sound on stamped-out discs, capturing a concert or a speech in a two- or three-minute record available to anyone, anywhere, with the appropriate gear.

Access and speed took another sensual twist when a Midwesterner by the name of La Marcus Thompson introduced the first mechanized roller coaster, in 1884. Bodily sensations that might have signaled danger or even death on a real train were packed into a two- or three-minute adventure trip on a rail “gravity ride.” Adding another dimension to the thrill was Thompson’s scenic railroad (in 1886) with its artificial tunnels and painted images of exotic natural or fantasy scenes. This was a new form of concentrated pleasure, distilling sights and sounds that formerly would have required days of “regular travel.” Rides, in combination with an array of novel multisensory spectacles, were concentrated into dedicated “amusement parks,” offering a kind of packaged recreational experience, accessible (very often) via the new trolley cars of the 1890s. Some of the earliest and most famous were those built at Coney Island on the southernmost tip of Brooklyn, New York.

Innovations of this sort led us into new worlds of sensory access, speed, and intensity. Distance and season were no longer restraints, as canned and bottled goods moved by rail, ship, and eventually truck across vast stretches of space and climate—with mixed outcomes for human health and well-being.

Some of these new technologies nourished and improved our bodies with cheaper, more hygienic, and varied food and drink; others offered more convenient and effective medicines and toiletries. Still others provided unprecedented opportunities to enjoy the beauty of nature (or at least its image), along with music and new kinds of “visual arts.” Amusement rides gave us (relatively) harm-free ways of experiencing the ecstatic and the exhilaration of danger—plus a kind of simulated or virtual travel; photography froze the evanescent sight, preserving images on a scale never previously possible, and with near-perfect fidelity. Yet packaged pleasures also led to new health and moral threats.

In the most extreme form, concentrating intoxicants led to addictions—physical dependencies that often required ever-increasing dosages to maintain a constant effect, and substantial physical discomfort accompanying withdrawal. Here of course the syringe injection of distilled opiates is the paradigmatic example, and addiction to tobacco and alcoholic drinks must also be included. But the impact of concentrated high-energy foods is not entirely different. Fat- or sugar-rich foods produce not just energy but very often endorphins, morphine-like painkillers that offer comfort and calm. That is one reason they are called “comfort” foods. These rich foods cause neurotransmitters in the brain to go out of balance, resulting in cravings. By contrast, the natural physical pleasures of exercise are much less addicting because we get tired; and some “excess”—here pain is gain—can actually make us healthier.

Not all packaged pleasure dependencies were so obviously chemical. Engineered pleasures often create astonishment and delight when first introduced, for example, but can also raise expectations and dull sensibilities for “unpackaged” stimuli, be they nature’s wonders or unaided convivial and social delights. The pleasures of recorded sound, the captured image, and even the amusement park ride and electronic game often satisfy with a kind of ratcheting effect, rendering the visual, auditory, and motion pleasures in uncommodified nature and society boring. In this sense, the packaging of pleasure can turn the once rare into an everyday, even numbing, occurrence. The world beyond the package becomes less thrilling, less desirable. In the wake of the telephoto lens and artful editing of film—with all the “boring bits” taken out—nature itself can appear dull or impoverished. Why go to the waterfall or forest if you can experience these in compressed form at your local zoo or theme park? Or on IMAX or your widescreen, high-def TV? Packaged pleasures of this sort may not induce physical dependencies, but they can create inflated expectations or even degrade other, less distilled or concentrated, kinds of experiences.

Another point we shall be making is that packaged pleasures have often de-socialized pleasure taking. Many create neurological responses similar to those of religious ecstasies, physical exercise, and social or even sexual intercourse, and can end up substituting for, or displacing, such enjoyments. Weak wine and mild natural hallucinogens have long enhanced spiritual and social experience, but the modern packaged pleasure often has the effect of privatizing satisfaction, isolating it from the crowd. Think of the privatization of public space through portable mp3 players, or the isolating effect of television.

The key point to appreciate is that we today live in a vastly different world from that of peoples living prior to the packaged pleasure revolution, when a broad range of sensual pleasures came to be bottled, canned, condensed, distilled, and otherwise intensified. The impact of this revolution has not been uniform, and we acknowledge and stress these differences, but it does seem to have transformed our sensory universe in ways we are only beginning to understand.

The packaged pleasures we shall be considering in this book include cigarettes, candy and soda pop, phonograph records, photographs, movies, amusement park spectacles, and a few other odds and ends.

But of course not all commodities that are tubed, packed, portable, or preserved can be considered packaged pleasures. For our purposes, we can identify several key and interrelated elements:

  1. The packaged pleasure is an engineered commodity that contains, concentrates, preserves, and very often intensifies some form of sensual satisfaction.
  2. It is generally speaking inexpensive, easy to access (readily at hand), and very often portable and storable, often in a domestic setting.
  3. It is typically wrapped and labeled and thus often marketed by branding. Although the package is usually portable, in the case of the amusement park it can also be enclosed and branded within a contained and fixed space.
  4. The packaged pleasure is often produced by companies with broad regional if not national or even global reach, creating a recognizable bond between the individual consumer and the corporate producer.

Of course we are well aware that many other consumer products exhibit one or more of these attributes—clothes, cars, books, packaged cereals, cocaine, pornography, and department stores just to name a few. Our focus will be on those packaged pleasures that signal key features of the early part of this transformation, and notably those that involve the elements of containment, compression, intensification, mobilization, and commodification. And we recognize that we will not offer an encyclopedic survey of pleasures that have been intensified and packaged—we won’t be treating the history of pornography or perfume, for example, and will consider narcotics and alcoholic beverages only briefly.

We should also be clear that the packaged pleasure revolution is on-going and in many ways has strengthened over time, as pleasure engineers find ever-more sophisticated ways of intensifying desire. And we’ll consider this history at least briefly. Since funneled fun has a tendency to bore us over time, pleasure engineers have repeatedly raised the bar on sensory intensity. Nuts and nougat were added to the simple chocolate bar, and cigarette makers added flavorants and chemicals to enhance or optimize nicotine delivery. The visual panel in motion pictures has been made more alluring with increasingly rapid cuts, and recorded sound has seen a dramatic expansion in both fidelity and acoustical range. Roller coasters went ever higher and faster while also becoming ever safer. Pornography is delivered with ever-greater convenience and is now basically free to anyone with an Internet connection. Even opera fans can now hear (and see) their favorite arias with a simple click on YouTube—at no cost and without leaving home (or sitting through those “boring bits”). Entertainment without the “fiber,” one could say.

Another outcome of the packaged pleasure revolution, then, is the progressive refinement—really reengineering—of sensory experience in the century or so since its beginnings. Optimization of satisfactions has become a big part of this, as one might expect from the fact that packaged pleasures are very often commodities produced by corporations with research and marketing departments. Menthol was added to cigarettes in the 1930s, with the idea of turning tobacco back into a kind of medicine. Ammonia and levulinic acid and candied flavors of various sorts were later added to augment the nicotine “kick,” but also to appeal to younger tastes. Flavor chemists meanwhile learned to manipulate the jolt of “soft drinks” by refining dosings of caffeine and sugar, while candy makers developed nuanced “flavor profiles”—surpassing traditional hard candy, for example, with the sensory complex of a Snickers.

Optimization and calibration we also find in other parts of this revolution. The intense thrill of a loop-de-loop ride, debuted first at Coney Island in the 1890s, gave way to the more varied sensuality of “themed” rides. Roller coasters have been designed to go to the edge of exhilaration, stopping just short of the point of nausea or injury. The same principle works with gambling, where even losers keep playing because of the carefully calibrated conditioning that comes with the periodic (and precisely calculated) win built into the game. Pleasure engineers have learned how to create video games that are easy enough to engage newcomers, but complex enough to sustain the interest of experienced players. Gaming engineers even seek to encourage (or require) physical movement and social interactions—think Wii games—to counter critics cautioning against the bodily and social negatives of overly virtualized lives.

Our focus is on the origins of the technologies involved in such transformations, though we also are aware that such novelties have always encountered critics, those who worry that an oversated consuming public would lose control and abandon work and family responsibilities. But the reality in terms of social impact often has been quite different. Few of these optimized pleasures have ever undermined the willingness of consumers to work and obey, and they have done little to undermine nerves and sensibilities (as some have feared). Indeed they have often contributed to a new work ethic driven by new needs and imperatives to earn and toil ever more in order to be able to afford the delights of movies, candy, soda, cigarettes, and the rest of the show. Over time, and often a surprisingly short time, these commodified delights have become a kind of second sensory nature—customary and accepted ways of eating, inhaling, seeing and hearing, and feeling.

Scholars have long debated the impact of “modern consumer culture,” albeit too often in negative terms without considering the historical origins of the phenomena in question. In the 1890s, the French sociologist Émile Durkheim feared that the “masses” would be enervated, even immobilized, by technical modernity’s overwhelming assault on the senses. And Aldous Huxley in his Brave New World (1932) warned of a coming culture of commoditized hedonism oblivious to tyranny. Jeremiahs of this sort have singled out different culprits, with blame most often placed on the “weaknesses” of the masses or the manipulation of merchandisers, with the hope expressed that the virtuous few in their celebration of nature and simplicity would constitute a bulwark against immediate gratification and degrading consumerism. These critics have been opposed by apologists for “democratic access” to the choice and comforts of modern consumer society—who champion the idea that only killjoy elitists could find fault in the delights of pleasure engineering. This perspective dominates a broad swath of social science—especially from neoclassical economists (think of George Stigler and Gary Becker’s famous dictum on the nondisputability of taste).

We argue instead that we need to abandon the overgeneralization common to both jeremiahs and free-market populists. Of course it is true that the very notion of a “packaged pleasure revolution” suggests certain links between the cigarette, bottled soda, phonograph records, cameras, movies, and even amusement parks. But the impact of these various inventions over the decades has been very different, and cannot be subsumed under some procrustean notion of “modern consumer culture.” Rather, as we shall see, their distinct histories suggest very different effects on our bodies and our cultures that would seem to require very different personal and policy responses. Our view is that the sale of cigarettes (as presently designed) should be heavily regulated and ultimately banned, for example, while soda should probably only be shamed and (heavily) taxed. And we make no policy recommendations for film or sound “packages.” But we certainly need to better understand how these technologies have shaped and refined (distorted?) our sensibilities.

We should also keep in mind that there are global consequences to the packaged pleasure revolution—and that most of these lie in the future. This is unfinished business. Overconsumption is part of the problem, as is the undermining of world health (notably from processed sugar and cigarettes). The revolution is ongoing, as the engineered world of compressed sensibility spreads to ever-different parts of the globe, and ever-different parts of human anatomy and sociability. It may be hard to opt out of or to escape from this brave new world, but the conditions under which it arose are certainly worth understanding and confronting.

This book takes on a lot. Our hope is to move us beyond the classic debate between the jeremiahs against consumerism and the defenders of a democratic access to commercial delights. We root mass consumption in a sensory revolution facilitated by techniques that upset the ancient balance between desire and scarcity. We take a fresh look at how technology has transformed our nature.

To read more about Packaged Pleasures, click here.

3. The Professional: Donald E. Westlake


 

Deadspin columnist/Yankees fan/out-of-print litterateur Alex Belth recently sat down over email with Levi Stahl, University of Chicago Press promotions director and editor of The Getaway Car: A Donald Westlake Nonfiction Miscellany. Their resulting conversation, published today at Deadspin, along with an excerpt from the book, includes the history of their engagement with the Parker novels, Jimmy the Kid’s amazing cover design, culling through Westlake’s archive, an obscure British comedy show, and the perils of professional envy vs. professional admiration. You can read the interview in full here, and have a look at a clip after the jump below.

***

Q: In a letter, Westlake described the difference between an author and a writer. A writer was a hack, a professional. There’s something appealing and unpretentious about this but does it take on a romance of its own? I’m not saying he was being a phony but do you think that difference between a writer and an author is that great?

LS: I suspect that it’s not, and that to some extent even Westlake himself would have disagreed with his younger self by the end of his life. I think the key distinction for him, before which all others pale, was what your goal was: Were you sitting down every day to make a living with your pen? Or were you, as he put it ironically in a letter to a friend who was creating an MFA program, “enhanc[ing] your leisure hours by refining the uniqueness of your storytelling talents”? If the former, you’re a writer, full stop. If the latter, then you probably have different goals from Westlake and his fellow hacks.

But does a true hack veer off course regularly to try something new? Does a hack limit himself to only writing about his meal ticket (John Dortmunder) every three books, max, in order not to burn him out? Does a hack, as Westlake put it in a late letter to his friend and former agent Henry Morrison, “follow what interests [him],” to the likely detriment of his career? Westlake was always a commercial writer, but at the same time, he never let commerce define him. Craft defined him, and while craft can be employed in the service of something a writer doesn’t care about at all, it is much easier to call up and deploy effectively if the work it’s being applied to has also engaged something deeper in the writer. You don’t write a hundred books with almost no lousy sentences if you’re truly a hack.

Read more about The Getaway Car here.

 

 

4. Literature in translation


In the wake of the controversy (or welcomed interest, depending on your position) surrounding Patrick Modiano’s recent Nobel Prize in Literature, the AAUP circulated the hashtag #litintranslation, in order to promote those books published by university presses that attempt to overcome the dearth of literature in translation that has long acquiesced to a peculiar hegemony in American letters. In fact, Yale University Press already had plans to publish Modiano’s Suspended Sentences: Three Novellas this fall, as part of their Margellos World Republic of Letters series. A quick review of the tweets circulating under #litintranslation reveals an equally robust list of works brought into the English language by the university press community, including several by the University of Chicago Press. With that in mind, and on the heels of the Frankfurt Book Fair, we’re debuting our sales catalog Translations from Chicago, where among hundreds of storied works spanning the disciplines, you can find:

The Selected Letters of Charles Baudelaire: The Conquest of Solitude, ed. and trans. by Rosemary Lloyd

Vegetables: A Biography by Evelyne Bloch-Dano, trans. by Teresa Lavender Fagan

One Must Also Be Hungarian by Adam Biro, trans. by Catherine Tihanyi

Sketch for a Self-Analysis by Pierre Bourdieu, trans. by Richard Nice

The Beast and the Sovereign, Vols. I and II by Jacques Derrida, trans. by Geoffrey Bennington

The Voice Imitator by Thomas Bernhard, trans. by Kenneth J. Northcott

Youth without Youth by Mircea Eliade, trans. by Mac Linscott Ricketts, with a Foreword by Francis Ford Coppola

To see the complete catalog in PDF form, click here.

 

5. An excerpt from Lee Siegel’s Trance Migrations


From Trance Migrations: Stories of India, Tales of Hypnosis by Lee Siegel

The Child’s Story
And now, if you dare, LOOK into the hypnotic eye! You cannot look away! You cannot look away! You cannot look away!

THE GREAT DESMOND IN THE HYPNOTIC EYE (1960)

I was eight years old when my mother was hypnotized by a sinister Hindu yogi. Yes, she was entranced by him, entirely under his control, and made to do things she would never have done in her normal waking state. My father wasn’t there to protect her and there was nothing I, a mere child, could do about it. I vividly remember his turban and flowing robes, his strange voice, gliding gait, and those eerie eyes that widened to capture her mind. I heard his suggestive whispers—“Sleep Memsaab, sleep”—and saw his hand moving over her face in circular hypnotic passes. “Sleep, Memsaab.”

It’s true. I heard it with my own ears and saw it with my own eyes as I watched “The Unknown Terror,” an episode of the series Ramar of the Jungle, on television one evening in 1953. Playing the part of a teak plantation owner in India, my mother, the actress Noreen Nash, was vulnerable to the suggestions of the Hindu hypnotist they called Catrack. “When the dawn comes,” he instructed her, “You will take the rifle and go to the camp of the white Ramar. You will aim at his heart and fire.”

I watched as my mother, wearing a pith helmet, bush jacket, and jodhpur pants, rose from her cot, loaded her rifle, and then trudged in a somnambulistic trance, wooden and emotionless, through the jungle to Ramar’s tent. Since my mother, as far as I knew her at home, had no experience with firearms, I was not surprised that she missed her target. She dropped the rifle and disappeared back into the jungle.

Later on in the show, once again hypnotically entranced, she was led by Catrack to the edge of a cliff where the yogi declared, “We are in great danger, Memsaab. The only way to escape is to jump off this cliff.” Just as my mother was about to leap to her death, Ramar arrived on the scene and fired his rifle into the air. The loud bang of the gunshot awakened her in the nick of time and caused Catrack to flee. Thanks to Ramar, my mother survived her adventures in India.

The seeds of my curiosity about hypnotism and an indelible association of it with an exotic, at once alluring and foreboding, India were sown in front of a television. At about the same time I saw my mother hypnotized and made to do terrible things by a yogi, I watched another nefarious Hindu hypnotist, Swami Talpar, played by Boris Karloff in Abbott and Costello Meet the Killer, try to take control of the feeble mind of Lou Costello. Both India and hypnosis were dangerous.

But then another old movie, Chandu the Magician, assured me that just as Indian hypnotism could be used for evil, so too it was a power that could be employed to overcome wickedness and serve the good of mankind. The film opened somewhere in India at night with a full moon casting eerie shadows on an ancient heathen temple as the American adventurer Frank Chandler bowed down before a dark-skinned, long-bearded Hindu priest in a white dhoti and matching turban. The Hindu swami addressed his acolyte in a deep echoic voice:

“In the years that thou hast dwelt among us, thou hast conquered the Atma of the spirit and, as one of the sacred company of the Yogi, thou hast been given the name Chandu. Thou hast attained thy reward by being endowed with the ancient Oriental magical power that the doctors of thy race call hypnotism. Thou shalt look into the eyes of men and they shall be as straw in thy hand. Thou shalt cause them to see what is not there even unto a gathering of twelve by twelve. To few, indeed, of thy race have the secrets of the Yogi been revealed. The world needs thee now. Go forth in strength and conquer the evil that threatens mankind.”

That India was the home of hypnotism was further confirmed by listening to my mother read Kipling to me at bedtime. We had moved on from The Jungle Book, read to me when I was about the same age as Mowgli, to Kim. And I imagined the hero of that story and I were the same age, as well. “Kim flung himself wholeheartedly upon the next turn of the wheel,” my mother began. “He would be a Sahib again for a while. . . .” and soon I’d yawn, blink, blink, and yawn again, feel the heaviness of my eyelids, heavier and heavier, more and more relaxed. I’d roll over, eyes closing, and soon be able to imagine that her voice might be Kim’s: “I think that Lurgan Sahib wishes to make me afraid,” she’d say he said. “And I am sure that that devil’s brat below the table wishes to see me afraid. This place is like a Wonder House.”

I’d picture the interior of Lurgan’s shop as vividly as if I were there and could see what Kim saw, focusing my attention on each of the objects, suggested one by one: “Turquoise and raw amber necklaces. Curiously packed incense-sticks in jars crusted over with raw garnets, devil-masks and a wall full of peacock-blue draperies . . . gilt figures of Buddha . . . tarnished silver belts . . . arms of all sorts and kinds . . . and a thousand other oddments.”

When, as commanded, Kim pitched the porous clay water jug that was on the table there to Lurgan, I saw it “falling short and crashing into bits and pieces.”

My mother reached over and lightly placed her hand on the back of my neck as Lurgan, in his attempt to hypnotize Kim, “laid one hand gently on the nape of his neck, stroked it twice or thrice, and whispered: ‘Look! It shall come to life again, piece by piece. First the big piece shall join itself to two others on the right and the left. Look!’ To save his life, Kim could not have turned his head. The light touch held him as in a vice, and his blood tingled pleasantly through him. There was one large piece of the jar where there had been three, and above them the shadowy outline of the entire vessel.”

“Look! It is coming into shape,” my mother whispered and “Look! It is coming into shape,” echoed Lurgan Sahib. Yes, it was coming into shape, all the shards of clay magically reforming the previously unbroken jug. I could see it. The words my mother read aloud to me were as hypnotic as the words uttered by Lurgan.

My childhood fascination with hypnosis was sustained by a school assignment to read Edgar Allan Poe’s stories, several of them—“The Facts in the Case of Mr. Valdemar,” “Mesmeric Revelation,” and “A Tale of the Ragged Mountains”—being about mesmerism, and the final story reaffirming an association of hypnosis with India. The main character goes into a trance in Virginia in which he has a vivid vision of Benares, a city to which he has never been, indicating that he had lived in India in a previous lifetime.

“Not only are Poe’s stories about hypnosis,” I grandly proclaimed in a book report I wrote in the seventh grade, “They are also written in a language that is very hypnotic, especially if they are read out loud.” Little did I suspect that that homework assignment would be prolusory to a book written more than half a century later.

When subsequently in the eighth grade I was required to prepare a project for the school science fair, I was determined to do mine on hypnosis as the only science, other than reproductive biology, in which I had much interest. The science teacher warned that it was a dangerous subject: “Hypnotism is widely used in schools in the Soviet Union to brainwash children so that they believe that Communism is good and that they must do whatever their dictator, Nikita Khrushchev, commands.”

Despite its abuse behind the Iron Curtain, I was determined to learn as much as I could about hypnosis. And so I ordered a book, Home Study Way to Hypnotic Practice, that I had seen advertised in a copy of Twitter magazine, a naughty-for-the-times pulp publication that I had discovered hidden in my uncle’s garage.

The ad promised that a mastery of hypnotism would enable me to control the minds of others, particularly the minds, and indeed the hearts, if not some other parts, of girls: “‘Look here’—Snap! Instantly her eyes close. She seems to be asleep but she isn’t. She’s in a hypnotic trance. A trance you put her into by saying secret words and snapping your fingers. Now she’s ready—ready and waiting to do as you command. She’ll follow your orders without question or hesitation. You’ll have her believing anything you suggest and doing whatever you want her to do. You’ll be in control of her emotions: love, hate, laughter, tears, happy, sad. She’ll be as putty in your hands.”

The winsome smiling girl with closed eyes in the advertisement reminded me of a classmate named Vickie Goldman, whose burgeoning breasts were often on my mind. I was naturally intrigued by the idea that by means of hypnotism those breasts might become as putty in my hands.

It was disappointing to discover in reading that book that a mastery of hypnotic techniques was much more complicated and tedious to learn than the ad for it had promised, and even more disheartening to learn that, in order to be hypnotized, Vickie would have to trust me and want to be hypnotized by me.

Another ad, in another copy of Twitter snatched from my uncle’s collection of girlie magazines, however, suggested that, by means of various apparatuses, I would be able to take control of her mind without her consent. All I’d have to do is say, “Look at this,” or “Listen to this.”

So, for the sake of having both a science project and as much control over Vickie Goldman’s emotions and behavior as Catrack had had over my mother’s, even as much power over her as Khrushchev had over children in the Soviet Union, I ordered the products advertised by the Hypnotic Aids and Supply Company: the Electronic Hypnotism Machine, the Electronic Metronome, the folding, pocket-sized Mechanical Hypnotist, and the 78-rpm Hypnotic Record. Because I was spending more than ten dollars on these devices, I also received the Amazing Hypno-Coin at no extra charge. My mother was willing to pay for these devices since I needed them for my science project.

I also purchased the book Oriental Hypnotism, “written in Calcutta India with the cooperation of Sadhu Satish Kumar,” because the yogi pictured in the ad reminded me of the one who had hypnotized my mother in Ramar of the Jungle. The text revealed that, by means of hypnosis, “the power of Maya,” Hindu yogis are able to “charm serpents, control women, and win the favor of men. Self-hypnosis gives the Hindus their amazing ability to lie down on beds of nails. And it is by means of mass hypnosis that their magicians have for thousands of years performed the legendary Indian Rope Trick.” I was familiar with the rope trick from seeing Chandu use his hypnotic power to cause “a gathering of twelve by twelve” to imagine they were seeing it performed.

My science project exhibit, HYPNOTISM EAST AND WEST IN THE PAST, PRESENT AND FUTURE BY LEE SIEGEL, GRADE 8, featured a poster board mounted over a table upon which waved my Hypnotic Metronome and spun both the Hypnotic Spiral Disc of my Electronic Hypnotism Machine and side one of my Hypnotic Record. Over the eerie drone of Oriental music there was a monotonously rhythmic deep voice: “As you listen to these words your muscles will begin to relax, to become more and more relaxed, yes, very relaxed, and your eyelids will become heavy, yes, heavier and heavier, very, very heavy, very relaxed. Deeper and deeper, relaxed.” The words “relaxed,” “heavy,” and “deeper” were repeated over and over and then there was counting backward, then imagining going down, “deeper and deeper,” in an elevator, more counting backward, and finally, at the end of the record, right after “three, two, one,” came the crucial hypnotic suggestion: “The next voice you hear will have complete control over your mind.”

That’s when I would take over. That’s when, if the principal of our school, the judge of the projects in the fair, listened to the record, I’d command: “You will award Lee Siegel the first-place blue ribbon for his science project.” And if Vickie would look and listen, that’s when my interest in hypnosis would really pay off: “You will go behind the handball courts with Lee Siegel and there you will ask him to fondle your breasts.”

To intensify the hypnotic mystique of my project, I placed a warning sign by the Electronic Hypnotism Machine: Stare at the Spinning Disc at Your Own Risk. Lee Siegel will not be held responsible for any actions resulting from a loss of mental control.

Along with all of my purchases from the Hypnotic Aids Supply Company, I placed the Westclox pocket watch on a chain that my uncle had given me for my bar mitzvah.

I livened up the poster board with a photo labeled EAST: Sadhu Satish Kumar, Hindu Yogi Hypnotist, cut from Oriental Hypnotism side by side with a picture labeled WEST: Dr. Franz Mesmer, Father of Animal Magnetism, that I had clipped from the World Book Encyclopedia.

There was also a timeline beginning in 3000 BC (as estimated by Sadhu Satish Kumar) with “Indian Fakirs and Yogis” and ending “Sometime in the Future” with “Lee Siegel who has learned so much for this science fair project that he plans to become a professional hypnotist. After graduating from high school and college he will go both to India to study hypnotism with yogis and to Oxford University to study it with science professors.”

In between the ancient Hindu hypnotists and my future self were luminaries in the history of hypnosis as enumerated in the World Book Encyclopedia: Franz Mesmer (1734–1815), the Marquis de Puységur (1751–1825), Abbé Faria (1756–1819), John Elliotson (1791–1868), James Braid (1795–1860), James Esdaile (1808–1859), Ivan Pavlov (1849–1936), and Sigmund Freud (1856–1939). In order to make the list more acknowledging of India’s contributions to hypnosis I added Swami Catrack (1919–1953), Frank Chandler, a.k.a. Chandu (1932–), and Sadhu Satish Kumar (1928–). I also included The Amazing Kreskin (1935–) and William Kroger (1906–), because, other than Catrack, Swami Talpar, Chandu, Lurgan, Satish Kumar, Nikita Khrushchev, and Sigmund Freud, they were the only hypnotists I had ever heard of. I knew that Sigmund Freud was a psychiatrist who thought that little boys were in love with their mother and that little girls wished they had a penis. I included Kroger, a gynecologist, avid proponent of medical hypnotherapeutics, and a friend of my parents who occasionally visited our home, in the hope that he might, once I had shown him my science project, write a note on the official stationery of the International Society for Clinical and Experimental Hypnosis of which he was president, something to be framed and included in my display, something like “Lee Siegel’s science project deserves a blue ribbon and should be sent on to the national competition, which it will certainly win.”

All he wrote, however, was: “Young Siegel has done a good job in presenting a subject that deserves wider recognition and acceptance.”

Not having been awarded the first-place blue ribbon—or a ribbon of any other color, for that matter—for my science project, nor having been able to successfully use my hypnotic aids to turn Vickie—or any other girl—into putty in my hands, ready to follow my orders without question, my interest in hypnotism waned.

I don’t think I thought about hypnosis very much until a couple of years later when, in 1960, I happened to see a horror film, The Hypnotic Eye, the movie, according to publicity posters, “that introduces HypnoMagic, the thrill you SEE and FEEL! It’s the amazing new audience sensation that makes YOU part of the show!” There were warnings that HypnoMagic could cause viewers of the film to actually become hypnotized: “Watch at your own risk!”

The movie was about a mysterious series of gruesome acts of self-mutilation by beautiful women, none of whom were able to remember why or how they had disfigured themselves, and all of whom, a detective, the hero of the film, discovered, just happened to have gone to a theater to see the stage hypnosis show of The Great Desmond. That each of them had been hypnotized during one of his performances caused the detective to suspect that the hypnotist might have been involved in the crimes. Consulting a criminal psychologist, he learned that, “Yes, posthypnotic suggestion could indeed cause a woman to do things she would not otherwise consider doing.”

At one point in the film, during a performance of his stage show, the despotic Desmond held up something meant to resemble an eyeball flashing with light—the titular Hypnotic Eye! After daring his audience to stare into it, he turned to the camera and dared us, the audience in the movie theater, to do the same. The camera moved in closer and closer on the pulsating orb as “deeper and deeper” was repeated again and again until soon, as commanded by the diabolical hypnotist, the members of his audience were lifting their arms and then lowering them. And then Desmond stared straight at us again and commanded us to do the same, and soon, together with the audience in the movie, we, the audience of the movie, were lifting our arms, then lowering them, again and again, until Desmond finally ordered us to stop and then, after counting from one to three, he snapped, “Wake up!”

Although I don’t think I was actually hypnotized by the Great Desmond and don’t know how many members of the movie audience were, I felt compelled to go along with the show, to act as if I was in a trance, and do as I was told. That, I would suggest, is in and of itself a kind of hypnosis. Hypnosis, like listening intently to a story, is playing along with words.

At the very end of the movie, after the crimes had been solved and the evil hypnotist apprehended, the criminal psychologist addressed the viewers of the movie: “Hypnotism can be a valuable tool, helping humanity in many ways. But, just as it can be used to do good, so too, in the hands of unscrupulous practitioners, it can be used to perpetrate evil. We must be wary to maintain our safety because they can catch us anywhere, and at any time.” He paused as the camera moved in for a close-up: “Yes, even during a motion picture in a movie theater.” He winked, then smiled, and the screen faded to black.

I didn’t think much about the film until recently, when I began writing about hypnosis. I confess, although I should probably be ashamed to admit it, that this text has been stylistically inspired by the B movie gimmick. In the spirit of The Hypnotic Eye, the tales in this book that are meant to be read aloud to a cooperative listener are written with HypnoMagic, the thrill you SEE and FEEL! It’s the amazing literary sensation that makes the listener part of the story! But beware! HypnoMagic could cause listeners to actually become hypnotized and actually imagine that they are participants in the tales they hear.

Read more about Trance Migrations here.

6. Excerpt: Roger Grenier’s Palace of Books


 

“Private Life”

The expansion of the media has put the writer in the spotlight, even if, nowadays, people who write have lost much of their prestige and their importance in society. Some of them find themselves afflicted with a lack of privacy once reserved for movie stars. Sometimes they ask for it. Michel Contat writes about “this form of media totalitarianism that gives the right to know everything about someone based on the simple fact that he or she has created a public image.” This phenomenon is not so new, if you think about Sartre and Beauvoir, not to mention Musset and George Sand, Dante and Beatrice, Petrarch and Laura, or even the self-dramatizing Byron or Chateaubriand. Nowadays we have scribblers who manage to pass themselves off as writers because they’ve already made a name for themselves as celebrities.

Gérard de Nerval was a victim of the public’s need to know, due to conditions that would be unimaginable today. Jules Janin, in the Journal des débats of March 1, 1841; Alexandre Dumas, in Le Mousquetaire of December 10, 1853; Eugène de Mirecourt in a little monograph in his series Les Contemporains in 1854, wrote openly about their friend’s mental illness. Poor Gérard wrote to his father on June 12, 1854, in response to Mirecourt’s pamphlet on “necrological biography,” and said he was being made into “the hero of a novel.” He dedicated Daughters of Fire to Alexandre Dumas: “I dedicate this book to you, my dear master, as I dedicated Lorely to Jules Janin. You have the same claim on my gratitude. A few years ago, I was thought dead, and he wrote my biography. A few days ago, I was thought mad, and you devoted some of your most charming lines to an epitaph for my spirit. That’s a good deal of glory to advance on my due inheritance.”

Is knowing the private life of an author important for understanding his or her work?

The debate was renewed with great panache by Marcel Proust in By Way of Sainte-Beuve. Proust noticed that Sainte-Beuve, a subtle and cultured man, made nothing but bad judgment calls as to the worth of his contemporaries. Why? Jealousy doesn’t explain it. He couldn’t have been jealous of writers like Stendhal or Baudelaire, who were practically unknown. The fault was with his method. Sainte-Beuve wanted to adopt a scientific attitude. “For me,” he wrote, “literature is indistinguishable from the rest of man. As long as you have not asked yourself a certain number of questions about an author and answered them satisfactorily, if only for your private benefit and sotto voce, you cannot be sure of possessing him entirely. And this is true, though these questions may seem to be altogether foreign to the nature of his writings. For example, what were his religious views? How did the sight of nature affect him? What was he like in his dealings with women, and in his feelings about money? Was he rich? Was he poor? What was his regimen? His daily habits? Finally, what was his persistent vice or weakness, for every man has one. Each of these questions is valuable in judging an author or his book.”

Sainte-Beuve decides that he is engaging in literary botany.

Proust finds all this knowledge useless and likely to mislead the reader: “A book is the product of a different self than the self we manifest in our habits, in our social life, in our vices. If we would try to understand that particular self, it is by searching deep within us and trying to reconstruct it there, that we may arrive at it. Nothing can exempt us from this effort of the heart.”

Proust also writes: “How does having been a friend of Stendhal’s make you better suited to judge him? It would be more likely to get in the way.” Sainte-Beuve, who knew Stendhal and Stendhal’s friends, found his novels “frankly detestable.”

What Proust holds against Sainte-Beuve is that he made no distinction between conversation and the occupation of writing, “in which, in solitude, quieting the speech which belongs as much to others as to ourselves, we come face to face once more with ourselves, and seek to hear and to render the true sound of our hearts.”

Proust admires Balzac, all while thinking that from what he knew of Balzac’s personal life, his letters to his family and to Madame Hanska, he was a vulgar human being. Stefan Zweig raises the same issue. He admires Balzac the writer and seeks reasons to admire the man. He is infuriated because he can’t find any. He has discovered that genius is incomprehensible.

Gaëtan Picon thinks that if Proust attacks Sainte-Beuve so violently it’s because he needs to believe that genius is based on a secret distinct from intelligence. That a man whose life is frivolous and empty, a failure, can nonetheless create a great work. The question is inevitable, beginning with the case of Proust himself. How did this intolerable social climber, whom Lucien Daudet called “an atrocious insect,” become the author of In Search of Lost Time? Paul Valéry concludes his famous study of Leonardo da Vinci with a line that shows in a striking way how much distance he puts between an artist and his work: “As for the true Leonardo, he was what he was.”

Flaubert would have sided with Proust against his friend Sainte-Beuve. He writes to Ernest Feydeau on August 21, 1859, with his customary truculence, “Life is impossible now! The minute you’re an artist, the gentlemen grocers, the auditors of record, the customs agents, the cobblers and all the rest enjoy themselves at your expense! People inform them as to whether you’re a brunette or a blond, facetious or melancholy, how many moons since your birth, whether you’re given to drink or play the harmonica. I believe that on the contrary, the writer must leave behind nothing but his work. His life doesn’t matter. Wipe it away!”

He doesn’t stop there, but insists: “The artist must arrange things so as to make us believe in a posterity he hasn’t experienced.”

You’d have to put Chekhov in Proust’s camp. From his Notebook: “How pleasant it is to respect people! When I see books, I am not concerned with how the authors loved or played cards; I only see their marvelous works.”

The same is true for Henry James, who writes in his short story “The Real Right Thing”: “[. . .] his friend would at moments have shown himself as holding that the ‘literary’ career might—save in the case of a Johnson or a Scott, with a Boswell and a Lockhart to help—best content itself to be represented. The artist was what he did—he was nothing else.” In this fantasy tale, the ghost of a dead writer appears to prevent his biography from being written.

Proust seems rigid. He is right to say that there is a truth for the writer, especially if he’s a genius, that remains a mystery and cannot be explained by social appearance or private life. But he also presents a counter-argument to his own theory when he writes in Jean Santeuil: “[. . .] our lives are not wholly separated from our works. All the scenes that I have narrated here, I have lived through.”

Most of the time, the characters in Jean Santeuil and the Search are indiscreet, eager to know everything about the artists they encounter. Freud, whose theory is close to Proust’s, doesn’t hold back from delving into the private life of Leonardo da Vinci and a few others. J.-B. Pontalis suggests with a touch of malice that Proust and Freud take the opposite tack to Sainte-Beuve’s because they don’t want their own private lives examined: if Proust’s perversion of torturing rats was discovered. . . . The private lives of others are another story!

Nietzsche also pondered the question, but from a different point of view. He thinks that knowing an author distorts our opinion of his work and his person. “We read the writings of our acquaintances (friends and foes) in a twofold sense, inasmuch as our knowledge continually whispers to us: ‘this is by him, a sign of his inner nature, his experiences, his talent,’ while another kind of knowledge simultaneously seeks to determine what his work is worth in and of itself, what evaluation it deserves apart from its author, what enrichment of knowledge it brings with it. As goes without saying, these two kinds of reading and evaluating confound one another.”

But what to do in cases where the work can only be explained by the life? Why deprive ourselves of this source of knowledge?

In the case of Albert Camus, once you know about his impoverished childhood in an illiterate milieu (he described this in The Wrong Side and the Right Side, his first book, and in The First Man, his last), you understand his attitude of respect and rigor towards literature, and the tenor of his style. In the same way, his youth near the sea and the sun, and the illness that continually threatened him, explain to a large extent the spirit of his work, his thought.

Finally—and Proust is right about this—if the author is not a simple manufacturer, if he puts his interior self in his books, the reader will be attracted by this self. The reader will seek out this personal, private self beneath the sentences.

In 1922, the young Aragon wrote, “My instinct, whenever I read, is to look constantly for the author, and to find him, to imagine him writing, to listen to what he says, not what he tells; so in the end, the usual distinctions among the literary genres— poetry, novel, philosophy, maxims—all strike me as insignificant.”

Freud showed that every child constructs a “family romance” that he will later repress. Whereas the writer continues to manufacture a novel which, if not a family romance, is at least a personal one. Marthe Robert has noted that all novelists relate to some extent their sentimental education, their apprentice years, and their search for lost time. The paradox is that they confess their secrets to a piece of paper. Yet they’re careful to disguise them as fiction.

Revealing a lot about oneself is not the purview only of novelists. It is also what poets do, and not just the elegiac poets. For centuries, and in a variety of civilizations, well before there were novels, the great majority of poems came from the poet’s effusion in speaking about his life, his loves, his torments, his anger, his religious feeling, his exile. Gérard de Nerval asks, “Which is more modest: to portray oneself in a novel disguised as Lélio or Octavio or Arthur, or to betray one’s most intimate emotions in a volume of poetry?” That his life and his illness were made public by his friends gave him an argument: “Forgive us our flights of personality, we who are constantly in the limelight, and who, whether we live in glory or in failure, can no longer hope to obtain the benefits of obscurity.”

You might think that contemporary poetry, tending towards abstraction and situated in a world where the air is rarified, has little to do with private life. This is not always true. Even an erudite poet like Jacques Roubaud, who delves into mathematics, writes about a deeply personal unhappiness in Something Black.

The same is true for the playwright, the filmmaker, even the nonfiction writer. You can sense this clearly in the philosophers Jean-Paul Sartre, Michel Foucault, Roland Barthes. Descartes was already inserting elements of autobiography in Discourse on Method. In this essential essay, he portrays himself in Holland, seated next to his stove throughout the winter, reflecting. Thus there is a back-and-forth movement, a dialectic, practically a contradiction. One retreats into oneself in order to communicate better with others.

Authors, whenever they delve into their own private lives, even if they embellish or transpose, find themselves confronted with the issue of personal discretion. They go well beyond simple indiscretion when they attempt to bring to light what is hidden in the deepest part of themselves.

With his taste for nonsense, Julio Cortázar describes an “enlarged self-portrait from which the artist has had the elegance to withdraw.” This little joke reveals the aspirations of so many writers: to be at once invisible and present, to say everything about oneself without seeming to.

Offering your essence to nourish what you write is what Scott Fitzgerald called “the price to pay”: “I have asked a lot of my emotions—one hundred and twenty stories. The price was high, right up with Kipling, because there was one little drop of something, not blood, not a tear, not my seed, but me more intimately than these in every story: it was the extra I had.”

Scott Fitzgerald couldn’t write without including his entire history. And even when he lost his creative vein, he dug to the depths of his anguish to write The Crack-Up.

John Dos Passos, another American who is now neglected after having been overrated, made a distinction between a literature of confession and a literature of spectacle. Of course he categorized his own books Manhattan Transfer and the U.S.A. trilogy as literature of spectacle. But I’m pretty sure you can find confession beneath the spectacle.

The young novelist’s first book is often autobiographical. Yet this is the phase when one has lived the least. Other, perhaps better, writers save the most personal, the most intimate in their lives or in the history of their families for much later.

On the other hand, some seem to write primarily to cover up a secret. Paul-Jean Toulet never shows his wounds—neither in his novels, frankly mediocre and marred by the most odious clichés of his era: anti-Semitism, etc.—nor in his poetry, far more charming; nor even in the letters he addressed to himself. His friends knew he had a broken heart. Why broken? And by whom? One of the qualities of his poetry is precisely that you can perceive, beyond the light-hearted fantasy, a floating veil of sadness or perhaps despair. We’ll never know the whole story. That is the claim in the last quatrain of his Contrerimes—a kind of challenge:

If living is a duty, when I will have ruined it,

May I use my shroud as a mystery

You must know how to die, Faustine, how to grow silent,

Die like Gilbert by swallowing the key.

(The allusion is to the strange death at age thirty of the poet Nicolas Gilbert, author of Le poète malheureux [the unhappy poet], who apparently swallowed his key in a fit of delirium.)

In the life of a man or a woman there are always one or two things that he or she will never consent to speak about, not for anything. Secret gardens. But if that man or woman is a writer, we might find those things hidden deep within a novel.

We know that Dickens lived through some very unhappy times in his childhood. The casual egotism of his parents was to blame.

His father, a loudmouth who was often imprisoned for debt, is in part the model for Mr. Micawber. In chapter eleven of David Copperfield, we find, barely altered, what Dickens experienced at age twelve. For six or seven shillings a week, he packaged shoe polish in a putrid factory, working under unspeakably miserable, humiliating conditions.

While he didn’t hesitate to use this experience for David Copperfield, in life he hid the memory as his most closely guarded secret. He refused to talk about it. He even took detours in London to avoid the place where he had been so unhappy. A fragment of his autobiography was found where he confirmed:

No word of that part of my childhood which I have now gladly brought to a close, has passed my lips to any human being . . . I have never, until I now impart it to this paper, in any burst of confidence with anyone, my own wife not excepted, raised the curtain I then dropped, thank God.

Until old Hungerford Market was pulled down, until old Hungerford Stairs were destroyed, and the very nature of the ground changed, I never had the courage to go back to the place where my servitude began. I never saw it. I could not endure to go near it. For many years, when I came near to Robert Warren’s in the Strand, I crossed over to the opposite side of the way, to avoid a certain smell of the cement they put upon the blacking-corks, which reminded me of what I was once. It was a very long time before I liked to go up Chandos Street. My old way home by the Borough made me cry, after my eldest child could speak.

Thus Charles Dickens and David Copperfield, C. D. and D. C., meet in the person of a humiliated child. Humiliation is a feeling that very few people can tolerate. But it has inspired many books.

Léon Aréga, a forgotten writer who endured endless ridicule, once said to me about one of my novels in which I put much of myself: “It’s a treatise on humiliation.” Which, coming from him, was a great compliment. It is easy to find the humiliated child in many of Chekhov’s short stories. His remark has been quoted a hundred times: “In my childhood, there was no childhood.”

Confessions are made on purpose in David Copperfield. But in most novels they aren’t. They surface in the form of fantasies, obsessions. With Dostoyevsky it’s impossible not to find an allusion to the rape of a little girl in The Possessed, Crime and Punishment, The Eternal Husband.

One rather strange point of view comes from Joseph Conrad. He thought you needed to be a genius to dare unveil your intimate self and thus move the public. If the effect was ruined you would sink into ridicule:

If it be true that every novel contains an element of autobiography—and this can hardly be denied since the creator can only express himself in his creation—then there are some of us to whom an open display of sentiment is repugnant. I would not unduly praise the virtue of restraint. It is often merely temperamental. But it is not always a sign of coldness. It may be pride. There can be nothing more humiliating than to see the shaft of one’s emotions miss the mark of either laughter or tears. Nothing more humiliating! And that for this reason should the mark be missed, should the open display of emotion fail to move, then it must perish unavoidably in disgust or contempt.

This is what the authors of a fashionable genre, baptized “autofiction” in 1977 by Serge Doubrovsky, seem not to fear, and their works collect like dregs on booksellers’ shelves.

Sometimes the most impersonal work can signify something deeply intimate to the author. This is the case of the great allegorical novel by Melville, Moby Dick. He achieves a fusion of a great myth with his own torment. The dire questioning, the violence of Ahab, are his. The Plague, another book that generates a myth, is also a novel about separation, since Camus wrote part of it isolated by the war, cut off from Algeria, from his wife, from his close friends. Virginia Woolf’s Orlando seems like a fantastical novel of imagination, when it is really the portrait of Vita Sackville-West, who was so dear to the author. In a fairy tale like Alice in Wonderland, Reverend Dodgson confides his passion for Alice Liddell.

The sole fact of starting to write is motivated by a cause that belongs to what is most intimate for the author. I quoted Flaubert, who talks about the sorrow that launched him into the enterprise of Salammbô.

The critics always remind us that Proust and John Cowper Powys wrote their great novels only after the death of their mothers. You could say they waited for their mothers’ deaths to write.

We mustn’t forget the role of the unconscious. Benjamin Crémieux noticed that “the writer who rereads one of his books discovers, after the fact, secret traits he never suspected having put there, traits he may not even have known he possessed—and whose existence is suddenly revealed to him. In all that we write in our own style, the truest aspect of ourselves is inscribed in filigrain.”

How, without blushing, can we agree to deliver to the public so many confessions and intimate motivations, even those that are disguised or dissimulated? This is the mystery of the quasi-religious value we assign to literature.

To read more about Palace of Books, click here.

7. Our free e-book for October: In Defense of Negativity

0226284980

Americans tend to see negative campaign ads as just that: negative. Pundits, journalists, voters, and scholars frequently complain that such ads undermine elections and even democratic government itself. But John G. Geer here takes the opposite stance, arguing that when political candidates attack each other, raising doubts about each other’s views and qualifications, voters—and the democratic process—benefit.

In Defense of Negativity, Geer’s study of negative advertising in presidential campaigns from 1960 to 2004, asserts that the proliferating attack ads are far more likely than positive ads to focus on salient political issues, rather than politicians’ personal characteristics. Accordingly, the ads enrich the democratic process, providing voters with relevant and substantial information before they head to the polls.

An important and timely contribution to American political discourse, In Defense of Negativity concludes that if we want campaigns to grapple with relevant issues and address real problems, negative ads just might be the solution.

“Geer has set out to challenge the widely held belief that attack ads and negative campaigns are destroying democracy. Quite the opposite, he argues in his provocative new book: Negativity is good for you and for the political system. . . . In Defense of Negativity adds a new argument to the debate about America’s polarized politics, and in doing so it asserts that voters are less bothered by today’s partisan climate than many believe. If there are problems—and there are—Geer says it’s time to stop blaming it all on 30-second spots.”—Washington Post

Download your free copy of In Defense of Negativity here.

Watch “The Bear,” one of those 30-second spots (less an attack ad, and more a foray into American surrealism) produced for Ronald Reagan’s 1984 presidential campaign, below:

8. For Mark Rothko on his birthday

9780226074061

James E. B. Breslin’s book on the life of painter Mark Rothko helped redefine the field of the artist’s biography and, in its day, was praised by outlets such as the New York Times Book Review (on the front cover, no less), where critic Hilton Kramer described it as “the best life of an American painter that has yet been written.” On what would have been the artist’s 111th birthday, Biographile revisited Breslin’s work:

In Breslin’s book, we follow Rothko’s search for the approach that would become such a significant contribution to art and painting in the twentieth century. He was in his forties before he started making his “multiforms,” and even after he started painting them in his studio, he didn’t show them right away. Breslin dissects and details the techniques Rothko developed upon creating his greatest works. He rotated the canvas as he worked, so that the painting wouldn’t be weighted in any one direction. He spent much more time in the studio figuring out a painting than actually painting it, and he filled a canvas as many as twenty times before feeling it was done. Maybe most important, he worked tirelessly to eliminate any recognizable shapes from the multiforms. They needed to come into the world fully formed, not as interpretations of any real-life objects, but meaningful visions in and of themselves.

Nathan Gelgud, the author of the Biographile piece, accompanied his writing with a couple of illustrated riffs on the artist, one of which we feature below; the other you can seek out (and read the review in full) at Biographile.


Mark Rothko by Nathan Gelgud, 2014. Image via Gelgud’s Biographile review.

To read more about Mark Rothko: A Biography, click here.

9. House of Debt on FT’s shortlist for Business Book of the Year

9780226081946

Congrats (!) to House of Debt authors Atif Mian and Amir Sufi for making the shortlist for the Financial Times and McKinsey Business Book of the Year. Now in competition with five other titles from an initial offering of 300 nominations, House of Debt, with its story of the predatory lending practices behind the Great American Recession, the burden of consumer debt on fragile markets, and the need for government-bailed banks to share risk-taking rather than skirt blame, will find out its fate at the November 11th award ceremony.

From the official announcement:

“The provocative questions raised by this year’s titles have been addressed with originality, depth of research and lively writing.”

 The award, now in its 10th edition, aims to find the book that provides “the most compelling and enjoyable insight into modern business issues, including management, finance and economics.” The judges—who include former winners Mohamed El-Erian and Steve Coll—also gave preference this year to books “whose influence is most likely to stand the test of time.”

To read more about House of Debt, including a list of reviews and a link to the authors’ blog, click here.

10. Alison Bechdel, MacArthur Fellow, 2014


Image via Out Magazine

Congratulations to cartoonist and graphic memoirist Alison Bechdel, one of the 2014 MacArthur Foundation Fellows, or “genius grant” honorees, whose work in comics and narrative has helped to transform and elevate our understanding of women—“Dykes to Watch Out For” in all their expressions, mothers and daughters, and the implications of social and political changes on those who dwell every day in a broad variety of female-identified bodies. Additionally, Bechdel is well known in film studies circles for her deceptively simple three-question test for gender parity, which has drawn broad attention since it was first delivered in her 1985 strip “The Rule.”

From the Washington Post:

1) Does it have two female characters?

2) Who talk to each other?

3) About something other than a man?

If the answer to all three questions is yes, the film passes the Bechdel test.
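For readers who like the rule spelled out as logic, the three questions reduce to a simple pass/fail check. The short Python sketch below is purely illustrative: the function name, the character names, and the way conversations are represented are our own assumptions for the example, not anything specified by Bechdel’s strip or the Post piece.

    def passes_bechdel(female_characters, conversations):
        """Apply the three questions of the Bechdel test to toy film data.

        female_characters: a set of names; conversations: a list of
        (speakers, topic) pairs. Both are hypothetical stand-ins for real
        film annotation.
        """
        # 1) Does it have two female characters?
        if len(female_characters) < 2:
            return False
        for speakers, topic in conversations:
            women = [s for s in speakers if s in female_characters]
            # 2) Who talk to each other?  3) About something other than a man?
            if len(women) >= 2 and topic != "a man":
                return True
        return False

    # A film in which two women discuss work passes; the same pair
    # discussing a man does not.
    print(passes_bechdel({"Mo", "Lois"}, [({"Mo", "Lois"}, "work")]))   # True
    print(passes_bechdel({"Mo", "Lois"}, [({"Mo", "Lois"}, "a man")]))  # False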

Bechdel is also the subject of two feature-length interviews in Hillary L. Chute’s Outside the Box: Interviews with Contemporary Cartoonists, and a contributor to Critical Inquiry’s special issue Comics & Media, both of which were released this year. Below, see video footage of a Bechdel/Chute interview from 2011, when Chute visited Bechdel at her home in Jericho, Vermont:

To read more about Outside the Box or the Comics & Media issue of CI, click here.

11. Rachel Sussman and The Oldest Living Things in the World

9780226057507


This past week, Rachel Sussman’s colossal photography project—and its associated book—The Oldest Living Things in the World, which documents her attempts to photograph continuously living organisms that are 2,000 years old and older, was profiled by the New Yorker:

To find the oldest living thing in New York City, set out from Staten Island’s West Shore Plaza mall (Chuck E. Cheese’s, Burlington Coat Factory, D.M.V.). Take a right, pass Industry Road, go left. The urban bleakness will fade into a litter-strewn route that bisects a nature preserve called Saw Mill Creek Marsh. Check the tides, and wear rubber boots; trudging through the muddy wetlands is necessary.

The other day, directions in hand, Rachel Sussman, a photographer from Greenpoint, Brooklyn, went looking for the city’s most antiquated resident: a colony of Spartina alterniflora or Spartina patens cordgrass which, she suspects, has been cloning and re-cloning itself for millennia.

Not simply the story of a cordgrass selfie, Sussman’s pursuit is contextualized by the lives—and deaths—of our fragile ecological forebears, and by her desire to document their existence while they are still of the earth. In support of the project, Sussman has a series of upcoming events surrounding The Oldest Living Things in the World. You can read more at her website, or see a listing of public events below:

EXHIBITIONS:

Imagining Deep Time (a cultural program of the National Academy of Sciences in Washington, DC), on view from August 28, 2014 to January 15, 2015

Another Green World, an eco-themed group exhibition at NYU’s Gallatin Galleries, featuring Nina Katchadourian, Mitchell Joaquim, William Lamson, Mary Mattingly, Melanie Baker, and Joseph Heidecker, on view from September 12, 2014 to October 15, 2014

The Oldest Living Things in the World, a solo exhibition at Pioneer Works in Brooklyn, NY, from September 15, 2014 to November 2, 2014, including a closing program

TALKS:

Sept 18th: a discussion in conjunction with the National Academy of Sciences exhibition Imagining Deep Time for DASER (DC Art Science Evening Rendezvous), Washington, DC (free and open to the public)

Nov 20th: an artist’s talk at the Museum of Contemporary Photography, Chicago

To read more about The Oldest Living Things in the World, click here.

12. Aspiring Adults Adrift

9780226197289

In 2011, Richard Arum and Josipa Roksa’s Academically Adrift inscribed itself in post-secondary education wonking with all the subtlety of a wax crayon; the book made a splash in major newspapers, on television, via Twitter, on the pages of popular magazines, and of course, inside policy debates. The authors’ argument—drawn from complex data analysis, personal surveys, and a widespread standardized testing of more than 2300 undergraduates from 24 institutions—was simple: 45 percent of these students demonstrated no significant improvement in a range of skills (critical thinking, complex reasoning, and writing) during their first two years of study. Were the undergraduates learning once they hit college? The book’s answer was, at best, a shaky “maybe.”

Now, the authors are back with a sequel of sorts: Aspiring Adults Adrift, which follows these students through the rest of their undergraduate careers and out into the world. The findings this time around? Recent graduates struggle to obtain decent jobs, develop stable romantic relationships, and assume civic and financial responsibilities. Their transitions, like their educational experiences, are mired in much deeper and more systemic obstacles than a simple “failure to launch.”

The book debuted last week with four-part coverage at Inside Higher Ed. Since then, pundits and reviewers have started to weigh in; below are just a few of their profiles and accounts, which, for an interested audience, help to situate the book’s findings.

***

Vox asked, “Why hasn’t the class of 2009 grown up?“:

The people Arum and Roksa interviewed sounded like my high school and college classmates. A business major who partied his way to a 3.9 GPA, then ended up working a delivery job he found on Craigslist, sounded familiar; so did a public health major who was living at home two years after graduation, planning to go to nursing school. Everyone in the class of 2009 knows someone with a story like that.

These graduates flailed after college because they didn’t learn much while they were in it, the authors argue. About a third of students in their study made virtually no improvement on a test of critical thinking and reasoning over four years of college. Aspiring Adults Adrift argues that this hurt them in the job market. Students with higher critical thinking scores were less likely to be unemployed, less likely to end up in unskilled jobs, and less likely to lose their jobs once they had them.

. . . . . Roksa and Arum aren’t really arguing for a more academically rigorous college education. They did that in their last book. They’re fighting the broader idea of emerging adulthood—that the first half of your 20s is a time to prolong adolescence and delay adult responsibilities.

A Time piece chimed in:

Parents, colleges, and the students themselves share the blame for this “failure to launch,” Arum says, but, he adds, “We think it is very important not to disparage a generation. These students have been taught and internalized misconceptions about what it takes to be successful.”

Frank Bruni cited and interviewed the authors for his piece, “Demanding More from College,” in the New York Times:

Arum and Roksa, in “Aspiring Adults Adrift,” do take note of upsetting patterns outside the classroom and independent of career preparation; they cite survey data that showed that more than 30 percent of college graduates read online or print newspapers only “monthly or never” and nearly 40 percent discuss public affairs only “monthly or never.”

Arum said that that’s “a much greater challenge to our society” than college graduates’ problems in the labor market. “If college graduates are no longer reading the newspaper, keeping up with the news, talking about politics and public affairs — how do you have a democratic society moving forward?” he asked me.

And finally, Richard Arum explained the book’s findings in an online interview with the WSJ.

To read more about Aspiring Adults Adrift, click here.

13. Ashley Gilbertson’s Bedrooms of the Fallen


Army Spc. Ryan Yurchison, 27, overdosed on drugs after struggling with PTSD, on May 22, 2010, in Youngstown, Ohio. He was from New Middletown, Ohio. His bedroom was photographed in September 2011.

(caption via Slate)

From Philip Gourevitch’s Introduction to Bedrooms of the Fallen by Ashley Gilbertson:

These wars really are ours—they implicate us—and when our military men and women die in far off lands, they do so in our name. [Gilbertson] wanted to depict what it means that they are gone. Photographs of the fallen, or of their coffins or their graves, don’t tell us that. But the places they came from and were supposed to go back to—the places they left empty—do tell us.

See more images from the book via an image gallery at Hyperallergic.

14. Terror and Wonder: our free ebook for September

9780226423128

For nearly twenty years now, Blair Kamin of the Chicago Tribune has explored how architecture captures our imagination and engages our deepest emotions. A winner of the Pulitzer Prize for criticism and writer of the widely read Cityscapes blog, Kamin treats his subjects not only as works of art but also as symbols of the cultural and political forces that inspire them. Terror and Wonder gathers the best of Kamin’s writings from the past decade along with new reflections on an era framed by the destruction of the World Trade Center and the opening of the world’s tallest skyscraper.

Assessing ordinary commercial structures as well as head-turning designs by some of the world’s leading architects, Kamin paints a sweeping but finely textured portrait of a tumultuous age torn between the conflicting mandates of architectural spectacle and sustainability. For Kamin, the story of our built environment over the past ten years is, in tangible ways, the story of the decade itself. Terror and Wonder considers how architecture has been central to the main events and crosscurrents in American life since 2001: the devastating and debilitating consequences of 9/11 and Hurricane Katrina; the real estate boom and bust; the use of over-the-top cultural designs as engines of civic renewal; new challenges in saving old buildings; the unlikely rise of energy-saving, green architecture; and growing concern over our nation’s crumbling infrastructure.

A prominent cast of players—including Santiago Calatrava, Frank Gehry, Helmut Jahn, Daniel Libeskind, Barack Obama, Renzo Piano, and Donald Trump—fills the pages of this eye-opening look at the astounding and extraordinary ways that architecture mirrors our values—and shapes our everyday lives.

***

“Blair Kamin, Pulitzer Prize-winning architecture critic for the Chicago Tribune, thoughtfully and provocatively defines the emotional and cultural dimensions of architecture. He is one of the nation’s leading voices for design that uplifts and enhances life as well as the environment. His new book, Terror and Wonder: Architecture in a Tumultuous Age, assembles some of his best writing from the past ten years.”—Huffington Post
Download your free copy of Terror and Wonder here.

15. Chicago 1968, the militarization of police, and Ferguson

9780226740782

John Schultz, author of The Chicago Conspiracy Trial and No One Was Killed: The Democratic National Convention, August 1968, recently spoke with WMNF about the history of police militarization, in light of both recent events in Ferguson, Missouri, and the forty-sixth anniversary (this week) of the 1968 Democratic National Convention in Chicago. Providing historical and social context to the ongoing “debate over whether the nation’s police have become so militarized that they are no longer there to preserve and protect but have adopted an attitude of ‘us’ and ‘them,’” Schultz connected his eyewitness account of the collision of 22,000 police and members of the National Guard with demonstrators in Chicago to the armed forces that swarmed around mostly peaceful protesters in Ferguson these past few weeks.

The selection below, drawn in part from a larger excerpt from No One Was Killed, relays some of Schultz’s firsthand account of what happened in Lincoln Park nearly half a century ago. The full excerpt can be accessed here.

***

The cop bullhorn bellowed that anyone in the Park, including newsmen, were in violation of the law. Nobody moved. The newsmen did not believe that they were marked men; they thought it was just a way for the Cops to emphasize their point. The media lights were turned on for the confrontation. Near the Stockton Drive embankment, the line of police came up to the Yippies and the two lines stood there, a few steps apart, in a moment of meeting that was almost formal, as if everybody recognized the stupendous seriousness of the game that was about to begin. The kids were yelling: “Parks belong to the people! Pig! Pig! Oink, oink!” In The Walker Report, the police say that they were pelted with rocks the moment the media lights “blinded” them. I was at the point where the final, triggering violence began, and friends of mine were nearby up and down the line, and at this point none of us saw anything thrown. Cops in white shirts, meaning lieutenants or captains, were present. It was the formality of the moment between the two groups, the theatrical and game nature showing itself on a definitive level, that was awesome and terrifying in its implications.

It is legend by now that the final insult that caused the first wedge of cops to break loose upon the Yippies, was “Your mother sucks dirty cock!” Now that’s desperate provocation. The authors of The Walker Report purport to believe that the massive use of obscenities during Convention Week was a major form of provocation, as if it helped to explain “irrational” acts. In the very first sentence of the summary at the beginning of the Report, they say “… the Chicago Police were the targets of mounting provocation by both word and act. Obscene epithets …” etcetera. One wonders where the writers of The Walker Report went to school, were they ever in the Army, what streets do they live on, where do they work? They would also benefit by a trip to a police station at night, even up to the bull-pen, where the naked toilet bowl sits in the center of the room, and they could listen and find out whether the cops heard anything during Convention Week that was unfamiliar to their ears or tongue. It matters more who cusses you, and does he know you well enough to hit home to galvanize you into destructive action. It also matters whether you regard a club on the head as an equivalent response to being called a “mother fucking Fascist pig.”

The kids wouldn’t go away and then the cops began shoving them hard up the Stockton Drive embankment and then hitting with their clubs. “Pigs! Pigs! Pigs! Fascist pig bastards!” A cop behind me—I was immediately behind the cop line facing the Yippies—said to me and a few others, in a sick voice, “Move along, sir,” as if he foresaw everything that would happen in the week to come. I have thought again and again about him and the tone of his voice. “Oink, oink,” came the taunts from the kids. The cops charged. A boy trapped against the trunk of a car by a cop on Stockton Drive had the temerity to hit back with his bare fists and the cop tried to break every bone in his body. “If you’re newsmen,” one kid screamed, “get that man’s number!” I tried but all I saw was his blue shirt—no badge or name tag—and he, hearing the cries, stepped backward up onto the curb as a half-dozen cops crammed around him and carried him off into the melée, and I was carried in another direction. A cop swung and smashed the lens of a media camera. “He got my lens!” The cameraman was amazed and offended. The rest of the week the cops would cram around a fellow cop who was in danger of being identified and carry him away, and they would smash any camera that they saw get an incriminating picture. The cops slowed, crossing the grass toward Clark Street, and the more daring kids sensed the loss of contact, loss of energy, and went back to meet the skirmish line of cops. The cops charged again up to the sidewalk on the edge of the Park.

It was thought that the cops would stop along Clark Street on the edge of the Park. For several minutes, there was a huge, loud jam of traffic and people in Clark Street, horns and voices. “Red Rover, Red Rover, send Daley right over!” Then the cops crossed the street and lined up on the curb on the west side, outside curfew territory. Now they started to make utterly new law as they went along—at the behest of those orders they kept talking about. The crowd on the sidewalk, excited but generally peaceable, included a great many bystanders and Lincoln Park citizens. Now came mass cop violence of unmitigated fury, descriptions of which become redundant. No status or manner of appearance or attitude made one less likely to be clubbed. The Cops did us a great favor by putting us all in the same boat. A few upper middleclass white men said they now had some idea of what it meant to be on the other end of the law in the ghetto.

At the corner of Menomenee and Clark, several straight, young people were sitting on their doorsteps to jeer at the Yippies. The cops beat them, too, and took them by the backs of the necks and jerked them onto the sidewalk. A photographer got a picture of a terrible beating here and a cop smashed his camera and beat the photographer unconscious. I saw a stocky cop spring out of the pavement swinging his club, smashing a media man’s movie camera into two pieces, and the media man walked around in the street holding up the pieces for everybody to see, including other cameras, some of which were also smashed. Cops methodically beat one man, summoned an ambulance that was whirling its light out in the traffic jam, shoved the man into it, and rapped their clubs on the bumper to send it on its way. There were people caught in this charge, who had been in civil rights demonstrations in the South in the early Sixties, who said this was the time that they had feared for their lives.

The first missiles thrown Sunday night at cops were beer-cans, then a few rocks, more rocks, a bottle or two, more bottles. Yippies and New Left kids rolled cars into the side streets to block access for the cop attack patrols. The traffic-jam reached wildly north and south, and everywhere Yippies, working out in the traffic, were getting shocked drivers to honk in sympathy. One kid lofted a beer-can at a patrol car that was moving slowly; he led the car perfectly and the beer-can hit on the trunk and stayed there. The cops stopped the car and looked through their rear window at the beer-can on their trunk. They started to back up toward the corner at Wisconsin from which the can was thrown, but they were only two and the Yippies were many, so they thought better of it and drove away. There were kids picking up rocks and other kids telling them to put the rocks down.

At Clark and Wisconsin, a few of the “leaders”—those who trained parade marshalls and also some of the conventionally known and sought leaders—who had expected a confrontation of sorts in Chicago, were standing on a doorstep with their hands clipped together in front of their crotches as they stared balefully out at the streets, trying to look as uninvolved as possible. “Beautiful, beautiful,” one was saying, but they didn’t know how the thing had been delivered or what was happening. They had even directly advised against violent action, and had been denounced for it. Their leadership was that, in all the play and put-on of publicity before the Convention, they had contributed to the development of a consciousness of a politics of confrontation and social disruption. An anarchist saw his dream come true though he was only a spectator of the dream; the middle-class man saw his nightmare. A radioman, moving up and down the street, apparently a friend of Tom Hayden, stuck his mike up the stairs and asked Hayden to make some comments. Hayden, not at all interested in making a statement, leaned down urgently, chopping with his hand, and said, “Hey, man, turn the mike off, turn the mike off.” Hayden, along with Rubin, was a man the Chicago cops deemed a crucial leader and they would have sent them both to the bottom of the Chicago River, if they had thought they could get away with it. The radioman turned the mike off. Hayden said, “Is it off?” The radioman said yes. Hayden said, “Man, what’s going on down there?” The radioman could only say that what was going on was going on everywhere.

Read more about No One Was Killed: The Democratic National Convention, August 1968 here.

16. Peter Bacon Hales (1950–2014)


University of Chicago Press author, professor emeritus at the University of Illinois at Chicago, dedicated Americanist, photographer, writer, cyclist, and musician Peter Bacon Hales (1950–2014) died earlier this week, near his home in upstate New York. Once a student of the photographers Garry Winogrand and Russell Lee, Hales obtained his MA and PhD from the University of Texas at Austin, and launched an academic career around American art and culture that saw him take on personal and collaborative topics as diverse as the history of urban photography, the Westward Expansion of the United States, the Manhattan Project, Levittown, contemporary art, and the geographical landscapes of our virtual and built worlds. He began teaching at UIC in 1980, and went on to become director of their American Studies Institute. His most recent book, Outside the Gates of Eden: The Dream of America from Hiroshima to Now, was published by the University of Chicago Press earlier this year.

***

From Outside the Gates of Eden:


“We live, then, second lives, and third, and fourth—protean lives, threatened by the lingering traces of our mistakes, but also amenable to self-invention and renewal. . . . The cultural landscape [of the future] is hazy:  it could be a desert or a garden, or something in between. It is and will be populated by Americans, or by those infected by the American imagination: a little cynical, skeptical, self-righteous, self-deprecating, impatient, but interested, engaged, argumentative, observant of the perilous beauty of a landscape we can never possess but yearn to be a part of, even as we are restive, impatient to go on. It’s worth waiting around to see how it turns out.”

9780226313153

17. The State of the University Press


Recently, a spate of articles appeared surrounding the future of the university press. Many of these, of course, focused on the roles institutional library sales, e-books, and shifting concerns around tenure play in determining the strictures and limitations to be overcome as scholarly publishing moves forward in an increasingly digital age. Last week, Book Business published a profile on what goes on behind the scenes as discussions about these issues shape, abet, and occasionally undermine the relationships between the university press, its supporting institution, its constituents, and the consumers and scholars for whom it markets its books. Along with commentary from directors at the University of North Carolina Press, the University of California Press, and Johns Hopkins University Press, the piece also included a conversation with our own director, Garrett Kiely:

From Dan Eldridge’s “The State of the University Presses” at Book Business:

Talk to University of Chicago Press director Garrett Kiely, who also sits on the board of the Association of American University Presses (AAUP), and he’ll tell you that many of the presses that are struggling today — financially or otherwise — are dealing with the same sort of headaches being suffered by their colleagues in the commercial world. And yet there is one major difference: “The commercial imperative,” says Kiely, “has never been a requirement for many of these [university] presses.”

Historically, Kiely explains, an understanding has existed between university presses and their affiliated schools that the presses are publishing primarily to disseminate scholarly information. That’s a valuable service, you might say, that feeds the public good, regardless of profit. “But at the same time,” he adds, “as everything gets tight [regarding] the universities and the amount of money they spend on supporting their presses, those things get looked at very carefully.”

As a result, Kiely says, there’s an increasingly strong push today to align the interests of a press with its university. At the University of Chicago, for instance, both the institution and its press are well known for their strong sociology offerings. But because more and more library budgets today are going toward the scientific fields, a catalog filled with even the strongest of humanities titles isn’t necessarily the best thing for a press’ bottom line.

The shift to digital, in particular, was a pivot point for much of Kiely’s discussion, which went on to consider some of the more successful—as well as awkward—endeavors embraced by the press as part of a publishing culture squarely faced with the need to experiment with new modalities in order to meet the interlinked demands of expanding scholarship and changing technology. Today, the formerly comfortable terrain of academic publishing is changing with increasing rapidity, which, as the article asserts, may leave “more questions than answers.” As Kiely put it:

“I think the speed with which new ideas can be tested, and either pursued or abandoned is very different than it was five years ago. . . . We’ve found you can very quickly go down the rabbit hole. And then you start wondering, ‘Is there a market for this? Is this really the way we should be going?’”

To read more from “The State of the University Press,” click here.

18. Wikipedia and the Politics of Openness

When you think about Wikipedia, you might not immediately envision it as a locus for a political theory of openness—and that might well be due to a cut-and-paste utopian haze that masks the site’s very real politicking around issues of shared decision-making, administrative organization, and the push for and against transparency. In Wikipedia and the Politics of Openness, forthcoming this December, Nathaniel Tkacz cuts through the glow and establishes how issues integral to the concept of “openness” play themselves out in the day-to-day reality of Wikipedia’s existence. Recently, critic Alan Liu, whose prescient scholarship on the relationship between our literary/historical and technological imaginations has shaped much of the humanities’ turn to new media, endorsed the book on Twitter.


With that endorsement in mind, the book’s jacket copy frames Tkacz’s argument:

Few virtues are as celebrated in contemporary culture as openness. Rooted in software culture and carrying more than a whiff of Silicon Valley technical utopianism, openness—of decision-making, data, and organizational structure—is seen as the cure for many problems in politics and business.

 But what does openness mean, and what would a political theory of openness look like? With Wikipedia and the Politics of Openness, Nathaniel Tkacz uses Wikipedia, the most prominent product of open organization, to analyze the theory and politics of openness in practice—and to break its spell. Through discussions of edit wars, article deletion policies, user access levels, and more, Tkacz enables us to see how the key concepts of openness—including collaboration, ad-hocracy, and the splitting of contested projects through “forking”—play out in reality.

The resulting book is the richest critical analysis of openness to date, one that roots media theory in messy reality and thereby helps us move beyond the vaporware promises of digital utopians and take the first steps toward truly understanding what openness does, and does not, have to offer.

Read more about Wikipedia and the Politics of Openness, available December 2014, here.

19. Philosophy in a Time of Terror

0226066649

Giovanna Borradori conceived Philosophy in a Time of Terror: Dialogues with Jürgen Habermas and Jacques Derrida shortly after the attacks of September 11, 2001; through it, she was able to engage in separate interviews with two of the most profound—and mutually antagonistic—philosophers of the era. The resulting dialogues unravel the social and political rhetoric surrounding the nature of “the event,” examine the contexts of good versus evil, and consider the repercussions such acts of terror levy against our assessment of humanity’s potential for vulnerability and dismissal. All of this remains, of course, prescient and relevant to ongoing matters today.

Below follows an excerpt published on Berfrois. In it, Jacques Derrida responds to one of Borradori’s questions, which asked whether US citizens’ initial impression of 9/11 “as a major event, one of the most important historical events we will witness in our lifetime, especially for those of us who never lived through a world war,” was justified:

Whether this “impression” is justified or not, it is in itself an event, let us never forget it, especially when it is, though in quite different ways, a properly global effect. The “impression” cannot be dissociated from all the affects, interpretations, and rhetoric that have at once reflected, communicated, and “globalized” it from everything that also and first of all formed, produced, and made it possible. The “impression” thus resembles “the very thing” that produced it. Even if the so-called “thing” cannot be reduced to it. Even if, therefore, the event itself cannot be reduced to it. The event is made up of the “thing” itself (that which happens or comes) and the impression (itself at once “spontaneous” and “controlled”) that is given, left, or made by the so-called “thing.” We could say that the impression is “informed,” in both senses of the word: a predominant system gave it form, and this form then gets run through an organized information machine (language, communication, rhetoric, image, media, and so on). This informational apparatus is from the very outset political, technical, economic. But we can and, I believe, must (and this duty is at once philosophical and political) distinguish between the supposedly brute fact, the “impression,” and the interpretation. It is of course just about impossible, I realize, to distinguish the “brute” fact from the system that produces the “information” about it. But it is necessary to push the analysis as far as possible. To produce a “major event,” it is, sad to say, not enough, and this has been true for some time now, to cause the deaths of some four thousand people, and especially “civilians,” in just a few seconds by means of so-called advanced technology. Many examples could be given from the world wars (for you specified that this event appears even more important to those who “have never lived through a world war”) but also from after these wars, examples of quasi-instantaneous mass murders that were not recorded, interpreted, felt, and presented as “major events.” They did not give the “impression,” at least not to everyone, of being unforgettable catastrophes.

We must thus ask why this is the case and distinguish between two “impressions.” On the one hand, compassion for the victims and indignation over the killings; our sadness and condemnation should be without limits, unconditional, unimpeachable; they are responding to an undeniable “event,” beyond all simulacra and all possible virtualization; they respond with what might be called the heart and they go straight to the heart of the event. On the other hand, the interpreted, interpretative, informed impression, the conditional evaluation that makes us believe that this is a “major event.” Belief, the phenomenon of credit and of accreditation,, constitutes an essential dimension of the evaluation, of the dating, indeed, of the compulsive inflation of which we’ve been speaking. By distinguishing impression from belief, I continue to make as if I were privileging this language of English empiricism, which we would be wrong to resist here. All the philosophical questions remain open, unless they are opening up again in a perhaps new and original way: what is an impression? What is a belief? But especially: what is an event worthy of this name? And a “major” event, that is, one that is actually more of an “event,” more actually an “event,” than ever? An event that would bear witness, in an exemplary or hyperbolic fashion, to the very essence of an event or even to an event beyond essence? For could an event that still conforms to an essence, to a law or to a truth, indeed to a concept of the event, ever be a major event? A major event should be so unforeseeable and irruptive that it disturbs even the horizon of the concept or essence on the basis of which we believe we recognize an event as such. That is why all the “philosophical” questions remain open, perhaps even beyond philosophy itself, as soon as it is a matter of thinking the event.

Read more about Philosophy in a Time of Terror here.

20. War’s Waste: Rehabilitation in World War I America

9780226143354

On the one-hundredth anniversary of the outbreak of World War I, it might be especially opportune to consider one of the unspoken inheritances of global warfare: soldiers who return home physically and/or psychologically wounded from battle. With that in mind, this excerpt from War’s Waste: Rehabilitation in World War I America contextualizes the relationship between rehabilitation—as the proper social and cultural response to those injured in battle—and the Progressive reformers who pushed for it as a means to “rebuild” the disabled and regenerate the American medical industry.

***

Rehabilitation was thus a way to restore social order after the chaos of war by (re)making men into producers of capital. Since wage earning often defined manhood, rehabilitation was, in essence, a process of making a man manly. Or, as the World War I “Creed of the Disabled Man” put it, the point of rehabilitation was for each disabled veteran to become “a MAN among MEN in spite of his physical handicap.” Relying on the breadwinner ideal of manhood, those in favor of pension reform began to define disability not by a man’s missing limbs or by any other physical incapacity (as the Civil War pension system had done), but rather by his will (or lack thereof) to work. Seen this way, economic dependency—often linked overtly and metaphorically to womanliness—came to be understood as the real handicap that thwarted the full physical recovery of the veteran and the fiscal strength of the nation.

Much of what Progressive reformers knew about rehabilitation they learned from Europe. This was a time, as historian Daniel T. Rodgers tells us, when “American politics was peculiarly open to foreign models and imported ideas.” Germany, France, and Great Britain first introduced rehabilitation as a way to cope, economically, morally, and militarily, with the fact that millions of men had been lost to the war. Both the Allied and Central Powers instituted rehabilitation programs so that injured soldiers could be reused on the front lines and in munitions in order to meet the military and industrial demands of a totalizing war. Eventually other belligerent nations—Australia, Canada, India, and the United States—adopted programs in rehabilitation, too, in order to help their own war injured recover. Although these countries engaged in a transnational exchange of knowledge, each nation brought its own particular prewar history and culture to bear on the meaning and construction of rehabilitation. Going into the Great War, the United States was known to have the most generous veterans pension system worldwide. This fact alone makes the story of the rise of rehabilitation in the United States unique.

To make rehabilitation a reality, Woodrow Wilson appointed two internationally known and informed Progressive reformers, Judge Julian Mack and Julia Lathrop, to draw up the necessary legislation. Both Chicagoans, Mack and Lathrop moved in the same social and professional circles, networks dictated by the effort to bring about reform at the state and federal level. In July 1917, Wilson tapped Mack to help “work out a new program for compensation and aid  . . . to soldiers,” one that would be “an improvement upon the traditional [Civil War] pension system.” With the help of Lathrop and Samuel Gompers, Mack drafted a complex piece of legislation that replaced the veteran pension system with government life insurance and a provision for the “rehabilitation and re-education of all disabled soldiers.” The War Risk Insurance Act, as it became known, passed Congress on October 6, 1917, without a dissenting vote.

Although rehabilitation had become law, the practicalities of how, where, and by whom it should be administered remained in question. Who should take control of the endeavor? Civilian or military leaders? Moreover, what kind of professionals should be in charge? Educators, social workers, or medical professionals? Neither Mack nor Lathrop considered the hospital to be the obvious choice. The Veterans Administration did not exist in 1917. Nor did its system of hospitals. Even in the civilian sector at the time, very few hospitals engaged in rehabilitative medicine as we have come to know it today. Put simply, the infrastructure and personnel to rehabilitate an army of injured soldiers did not exist at the time that America entered the First World War. Before the Great War, caring for maimed soldiers was largely a private matter, a community matter, a family matter, handled mostly by sisters, mothers, wives, and private charity groups.

The Army Medical Department stepped in quickly to fill the legislative requirements for rehabilitation. Within months of Wilson’s declaration of war, Army Surgeon General William C. Gorgas created the Division of Special Hospitals and Physical Reconstruction, putting a group of Boston-area orthopedic surgeons in charge. Gorgas turned to orthopedic surgeons for two reasons. First, a few of them had already begun experimenting with work and rehabilitation therapy in a handful of the nation’s children’s hospitals. Second, and more important, several orthopedists had already been involved in the rehabilitation effort abroad, assisting their colleagues in Great Britain long before the United States officially became involved in the war.

Dramatic changes took place in the Army Medical Department to accommodate the demand for rehabilitation. Because virtually every type of war wound had become defined as a disability, the Medical Department expanded to include a wide array of medical specialties. Psychiatrists, neurologists, and psychologists oversaw the rehabilitation of soldiers with neurasthenia and the newly designated diagnosis of shell shock. Ophthalmologists took charge of controlling the spread of trachoma and of providing rehabilitative care to soldiers blinded by mortar shells and poison gas. Tuberculosis specialists supervised the reconstruction of men who had acquired the tubercle bacillus during the war. And orthopedists managed fractures, amputations, and all other musculoskeletal injuries.

Rehabilitation legislation also led to the formation of entirely new, female-dominated medical subspecialties, such as occupational and physical therapy. The driving assumption behind rehabilitation was that disabled men needed to be toughened up, lest they become dependent of the state, their communities, and their families. The newly minted physical therapists engaged in this hardening process with zeal, convincing their male commanding officers that women caregivers could be forceful enough to manage, rehabilitate, and make an army of ostensibly emasculated men manly again. To that end, wartime physical therapists directed their amputee patients in “stump pounding” drills, having men with newly amputated legs walk on, thump, and pound their residual limbs. When not acting as drill sergeants, the physical therapists engaged in the arduous task of stretching and massaging limbs and backs, but only if such manual treatment elicited a degree of pain. These women adhered strictly to the “no pain, no gain” philosophy of physical training. To administer a light touch, “feel good” massage would have endangered their professional reputation (they might have been mistaken for prostitutes) while also undermining the process of remasculinization. Male rehabilitation proponents constantly reminded female physical therapists that they needed to deny their innate mothering and nurturing tendencies, for disabled soldiers required a heavy hand, not coddling.

The expansion of new medical personnel devoted to the long-term care of disabled soldiers created an unprecedented demand for hospital space. Soon after the rehabilitation legislation passed in Congress, the US Army Corps of Engineers erected hundreds of patient wards as well as entirely novel treatment areas such as massage rooms, hydrotherapy units, and electrotherapy quarters. Orthopedic appliance shops and “limb laboratories,” where physicians and staff mechanics engineered and repaired prosthetic limbs, also became a regular part of the new rehabilitation hospitals. Less than a year into the war, Walter Reed Hospital, in Washington, DC, emerged as the leading US medical facility for rehabilitation and prosthetic limb innovation, a reputation the facility still enjoys today.

The most awe-inspiring spaces of the new military rehabilitation hospitals were the “curative workshops,” wards that looked more like industrial workplaces than medical clinics. In these hospital workshops, disabled soldiers repaired automobiles, painted signs, operated telegraphs, and engaged in woodworking, all under the oversight of medical professionals who insisted that rehabilitation was at once industrial training and therapeutic agent. Although built in a time of war, a majority of these hospital facilities and personnel became a permanent part of veteran care in both army general hospitals and in the eventual Veterans Administration hospitals for the remainder of the twentieth century. Taking its cue from the military, the post–World War I civilian hospital began to construct and incorporate rehabilitation units into its system of care as well. Rehabilitation was born as a Progressive Era ideal, took shape as a military medical specialty, and eventually became a societal norm in the civilian sector.

To read more about War’s Waste, click here.

21. Our free e-book for August: For the Love of It

0226065863

Wayne C. Booth (1921–2005) was the George M. Pullman Distinguished Service Professor Emeritus in English Language and Literature at the University of Chicago, one of the most renowned literary critics of his generation, and an amateur cellist who came to music later in life.  For the Love of It is a story not only of one intimate struggle between a man and his cello, but also of the larger conflict between a society obsessed with success and individuals who choose challenging hobbies that yield no payoff except the love of it. 

“Will be read with delight by every well-meaning amateur who has ever struggled.… Even general readers will come away with a valuable lesson for living: Never mind the outcome of a possibly vain pursuit; in the passion that is expended lies the glory.”—John von Rhein, Chicago Tribune

“If, in truth, Booth is an amateur player now in his fifth decade of amateuring, he is certainly not an amateur thinker about music and culture. . . . Would that all of us who think and teach and care about music could be so practical and profound at the same time.”—Peter Kountz, New York Times Book Review

“Wayne Booth, the prominent American literary critic, has written the only sustained study of the interior experience of musical amateurism in recent years, For the Love of It. [It] succeeds as a meditation on the tension between the centrality of music in Booth’s life, both inner and social, and its marginality. . . . It causes the reader to acknowledge the heterogeneity of the pleasures involved in making music; the satisfaction in playing well, the pride one takes in learning a difficult piece or passage or technique, the buzz in one’s fingertips and the sense of completeness with the bow when the turn is done just right, the pleasure of playing with others, the comfort of a shared society, the joy of not just hearing, but making, the music, the wonder at the notes lingering in the air.”—Times Literary Supplement
Download your copy here.

22. Malcolm Gladwell profiles On the Run

9780226136714

From a profile of On the Run by Malcolm Gladwell in this week’s New Yorker:

It was simply a fact of American life. He saw the pattern being repeated in New York City during the nineteen-seventies, as the city’s demographics changed. The Lupollos’ gambling operations in Harlem had been taken over by African-Americans. In Brooklyn, the family had been forced to enter into a franchise arrangement with blacks and Puerto Ricans, limiting themselves to providing capital and arranging for police protection. “Things here in Brooklyn aren’t good for us now,” Uncle Phil told Ianni. “We’re moving out, and they’re moving in. I guess it’s their turn now.” In the early seventies, Ianni recruited eight black and Puerto Rican ex-cons—all of whom had gone to prison for organized-crime activities—to be his field assistants, and they came back with a picture of organized crime in Harlem that looked a lot like what had been going on in Little Italy seventy years earlier, only with drugs, rather than bootleg alcohol, as the currency of innovation. The newcomers, he predicted, would climb the ladder to respectability just as their predecessors had done. “It was toward the end of the Lupollo study that I became convinced that organized crime was a functional part of the American social system and should be viewed as one end of a continuum of business enterprises with legitimate business at the other end,” Ianni wrote. Fast-forward two generations and, with any luck, the grandchildren of the loan sharks and the street thugs would be riding horses in Old Westbury. It had happened before. Wouldn’t it happen again?

This is one of the questions at the heart of the sociologist Alice Goffman’s extraordinary new book, “On the Run: Fugitive Life in an American City.” The story she tells, however, is very different.

That story—an ethnography set in West Philadelphia that explores how the War on Drugs turned one neighborhood into a surveillance state—contextualizes the all-too-common toll the presumption of criminality takes on young black men, their families, and their communities. And unlike the story of organized crime in the twentieth century, which saw “respectability” come within reach in a generation or two, Goffman’s fieldwork demonstrates how the “once surveilled, always surveilled” mentality that polices our inner-city neighborhoods engenders a cycle of stigma, suppression, limitation, and control—and its very real human costs. At the same time, as with the shift of turf and contraband that characterized last century’s criminal underworld in New York, we see a pattern enforced demographically; the real question becomes whether or not its constituents have any chance—literally and figuratively—to escape.

Read more about On the Run here.

23. Carl Zimmer on the Ebolapocalypse

9780226983363

Carl Zimmer is one of our most recognizable—and acclaimed—popular science journalists. Not only have his long-standing New York Times column, “Matter,” and his National Geographic blog, The Loom, helped us to digest everything from the oxytocin in our bloodstream to the genetic roots of mental illness in humans and animals, they also have helped to circulate cutting-edge science and global biological concerns to broad audiences.

One of Zimmer’s areas of journalistic expertise is providing context for the latest research on virology, or, as the back cover of his book A Planet of Viruses explains: “How viruses hold sway over our lives and our biosphere, how viruses helped give rise to the first life-forms, how viruses are producing new diseases, how we can harness viruses for our own ends, and how viruses will continue to control our fate for years to come.”

It shouldn’t come as any surprise, then, that amid recent predictions of an Ebolapocalypse, Zimmer stands ready to help us interpret and qualify risk with regard to Ebola and the biotech industry’s push for experimental medications and treatments.


At The Loom, Zimmer shows a strand of the Ebola virus as an otherworldly cul-de-sac against a dappled pink light. As he writes, we still have no antiviral treatment for some of our nastiest viruses, including this one: “viruses—which cause their own panoply of diseases from the common cold and the flu to AIDS and Ebola—are profoundly different from bacteria, and so they don’t present the same targets for a drug to hit.”

A Planet of Viruses takes this all a step further; in the chapter “Predicting the Next Plague: SARS and Ebola,” Zimmer advocates a cautionary—but not hysterical—approach:

There’s no reason to think that one of these new viruses will wipe out the human race. That may be the impression that movies like The Andromeda Strain give, but the biology of real viruses suggests otherwise. Ebola, for example, is a horrific virus that can cause people to bleed from all their orifices including their eyes. It can sweep from victim to victim, killing almost all its hosts along the way. And yet a typical Ebola outbreak only kills a few dozen people before coming to a halt. The virus is just too good at making people sick, and so it kills victims faster than it can find new ones. Once an Ebola outbreak ends, the virus vanishes for years.

With its profile rising daily, this most recent Ebola outbreak is primed to force us to rethink those assumptions and to reflect on the commingling of key issues at the intersection of biology, technology, and Big Pharma. As an article in today’s New York Times about a possible experimental medication points out, therapeutic treatment of the virus is already plagued by this overlap:

How quickly the drug could be made on a larger scale will depend to some extent on the tobacco company Reynolds American. It owns the facility in Owensboro, Ky., where the drug is made inside the leaves of tobacco plants. David Howard, a spokesman for Reynolds, said it would take several months to scale up.

Regardless of how the outbreak runs its course, we’ll look to Zimmer to help us digest what it means for our daily lives, whether we’re assembling a list of novels for the Ebolapocalypse, like The Millions, or standing in line at CVS for a preemptive vaccination.

Read more about A Planet of Viruses here.

 

24. Tom Koch on Ebola and the new epidemic


“Ebola and the new epidemic” by Tom Koch

Mindless but intelligent, viruses and bacteria want what we all want: to survive, to evolve, and then to procreate. That’s been their program since before there were humans. From the first influenza outbreak around 2500 BC to the current Ebola epidemic, we have created the conditions for microbial evolution, hosted their survival, and tried to live with the results.

These are early days for the Ebola epidemic, which was for some years constrained to a few isolated African sites, but has now advanced from its natal place to several countries, with outbreaks elsewhere. Since the first days of influenza, this has always been the viral way. Born in a specific locale, the virus hitches itself to a traveler who brings it to a new and fertile field of humans. The “epidemic curve,” as it is called, starts slowly but then, as the virus spreads and travels, spreads and travels, the numbers mount.

Hippocrates provided a fine description of an influenza pandemic in the fifth century BC, one that reached Greece from Asia. The Black Death that hastened the end of the Middle Ages traveled with Crusaders and local traders, infecting the then-known world. Cholera (with a mortality rate of over thirty percent) started in India in 1818 and by 1832 had infected Europe and North America.

Since the end of the seventeenth century, we’ve mapped these spreads in towns and villages located in provinces and nations. The first maps were of plague, but in the late eighteenth century that scourge was replaced in North American minds by yellow fever, which, in turn, was replaced by the global pandemic of cholera in the nineteenth century (and then, at the end of that century, came polio).

In attempting to combat these outbreaks, we face a question of scale. Early cases are charted on the streets of a city, the homes of a town. Can they be quarantined and those infected separated? And then, as the epidemic grows, the mapping pulls back to the nations in which those towns are located as travelers, infected but not yet symptomatic, move from place to place. Those local streets become bus and rail lines that become, as a pandemic begins, airline routes that gird the world.

There are lots of models for us to follow here. In the 1690s, Filippo Arrieta mapped a four-stage containment program that attempted to limit the passage of plague through the province of Bari, Italy, where he marshaled the army to create containment circles.

Indeed, quarantines have been employed, often with little success, since the days of plague. The sooner they are put in place, the better they seem to work. They are not, however, foolproof.

Complacently, we have assumed that our expertise in genetic profiling would permit rapid identification and the speedy production of vaccines, or at least curative drugs. We thought we were beyond viral attack. Alas, our microbial friends are faster than that. By the time we’ve genetically typed the virus and found a biochemical to counter it, it will most likely have been and gone. Epidemiologists talk about the “epidemic curve” as a natural phenomenon that begins slowly, rises fiercely, and then ends.

We have nobody to blame but ourselves.

Four factors promote the viral and bacterial evolution that results in pandemic diseases and their spread. First, there are deforestation and other man-made ecological changes that upset natural habitats, forcing microbes to seek new homes. Second, urbanization brings people together in dense fields of habitation whose residents become the microbe’s new hosts; when those people live in poverty, the field is even more fertile. Third, trade provides travelers to carry microbes, one way or another, to new places. And, fourth and finally, war always promotes the spread of disease among folk who are poor and stressed.


We have created this perfect context in recent decades, and the result has been a fast pace of viral and bacterial evolution to meet the stresses we impose and the opportunities we present as hosts. For their part, diseases must strike a balance between virulence (killing the host quickly) and longevity. The diseases that kill quickly usually moderate over time; they need their hosts, or something else, to help them move to new fields of endeavor. New diseases like Ebola are aggressive adolescents seeking the fastest, and thus deadliest, exchanges.

Will it become an “unstoppable” pandemic? Probably not, but we do not know for certain; we don’t know how Ebola will mutate in the face of our plans for resistance.

What we do know is that, as anxiety increases, the niceties of medical protocol and ethics developed over the past fifty years will fade away. There will now be heated discussions surrounding “ethics” and “justice,” as well as practical questions of quarantine and care. Do we try experimental drugs without the normal safety protocols? (The answer will be … yes, sooner if not later.) If something works and there is not enough for all, how do we decide who should receive it first?

For those like me who have tracked diseases through history and mapped their outbreaks in our world, Ebola, or something like it, is what we have feared would come. And when Ebola is contained, it will not be the end. We’re in a period of rapid viral and bacterial evolution brought on by globalization and its trade practices. Our microbial friends will, almost surely, continue to take advantage.

***

Tom Koch is a medical geographer and ethicist, and the author of a number of papers on the history of medicine and disease. His most recent book in this field is Disease Maps: Epidemics on the Ground, published by the University of Chicago Press.

 

25. Against Prediction: #Ferguson

 


Photo by Scott Olson/Getty Images, via Associated Press

From Bernard E. Harcourt’s Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age

***

The ratchet [also] contributes to an exaggerated general perception in the public imagination and among police officers of an association between being African American and being a criminal—between, in Dorothy Roberts’s words, “blackness and criminality.” As she explains,

One of the main tests in American culture for distinguishing law-abiding from lawless people is their race. Many, if not most, Americans believe that Black people are “prone to violence” and make race-based assessments of the danger posed by strangers they encounter. The myth of Black criminality is part of a belief system deeply embedded in American culture that is premised on the superiority of whites and inferiority of Blacks. Stereotypes that originated in slavery are perpetuated today by the media and reinforced by the huge numbers of Blacks under criminal justice supervision. As Jody Armour puts it, “it is unrealistic to dispute the depressing conclusion that, for many Americans, crime has a black face.”

Roberts discusses one extremely revealing symptom of the “black face” of crime, namely, the strong tendency of white victims and eyewitnesses to misidentify suspects in cross-racial situations. Studies show a disproportionate rate of false identifications when the person identifying is white and the person identified is black. In fact, according to Sheri Lynn Johnson, “this expectation is so strong that whites may observe an interracial scene in which a white person is the aggressor, yet remember the black person as the aggressor.” The black face has become the criminal in our collective subconscious. “The unconscious association between Blacks and crime is so powerful that it supersedes reality,” Roberts observes; “it predisposes whites to literally see Black people as criminals. Their skin color marks Blacks as visibly lawless.”

This, in turn, further undermines the ability of African Americans to obtain employment or pursue educational opportunities. It has a delegitimizing effect on the criminal justice system that may encourage disaffected youths to commit crime. It may also erode community-police relations, hampering law enforcement efforts as minority community members become less willing to report crime, to testify, and to convict. The feedback mechanisms, in turn, accelerate the imbalance in the prison population and the growing correlation between race and criminality.

And the costs are deeply personal as well. Dorothy Roberts discusses the personal harm poignantly, in a more private voice, in her brilliant essay “Race, Vagueness, and the Social Meaning of Order-Maintenance Policing,” sharing with the reader a conversation that she had with her sixteen-year-old son, who is African American:

In the middle of writing this Foreword, I had a revealing conversation with my sixteen-year-old son about police and loitering. I told my son that I was discussing the constitutionality of a city ordinance that allowed the police to disperse people talking on the sidewalk if any one of them looked as if he belonged in a gang. My son responded apathetically, “What’s new about that? The police do it all the time, anyway. They don’t like Black kids standing around stores where white people shop, so they tell us to move.” He then casually recounted a couple of instances when he and his friends were ordered by officers to move along when they gathered after school to shoot the breeze on the streets of our integrated community in New Jersey. He seemed resigned to this treatment as a fact of life, just another indignity of growing up Black in America. He was used to being viewed with suspicion: being hassled by police was similar to the way store owners followed him with hawk eyes as he walked through the aisles of neighborhood stores or women clutched their purses as he approached them on the street.

Even my relatively privileged son had become acculturated to one of the salient social norms of contemporary America: Black children, as well as adults, are presumed to be lawless, and that status is enforced by the police. He has learned that as a Black person he cannot expect to be treated with the same dignity and respect accorded his white classmates. Of course, Black teens in inner-city communities are subjected to more routine and brutal forms of police harassment.

To read more about Against Prediction, click here.

 

