The most recent issue of Commonweal includes “The River Runs On: Norman Maclean’s Christian Tragedies,” a long-form piece by Timothy B. Schilling, who goes on to read Maclean (expectedly, given the title) through both Christianity and tragedy—but most compellingly, through the author’s own often contradictory and ambivalent relationship to religion. You can read the piece in full here; a brief excerpt from Young Men and Fire that situates the Smokejumpers—first responders to the Mann Gulch fire of 1949, from which the book takes its name—in this context follows below.
Maclean tells us that most of the Smokejumpers believe in God. “You wouldn’t dare jump,” they say, “if it was empty out there.” But of the sixteen who descended to fight the fire, only three survived. What then—for them, for us—is the last word in this story? Does the Mann Gulch fire reveal the ultimate tragedy of all human experience? Or does it enjoin us to embrace the world’s faith traditions in looking for a life and a truth beyond death? As in A River Runs Through It, Maclean counters fatalism with Christian symbols and biblical allusions, including references to the Stations of the Cross, the Mass, Calvary, and the Book of Job. He also calls again on Psalm 23. But perhaps the most telling biblical reference in Young Men and Fire is the scorched deer included in the “Black Ghost” section. This image, reinforced in a photograph, calls to mind Psalm 42: “As a deer longs for running streams, so longs my soul for You, O God.” Here, as in Maclean’s earlier book, the river is a potential source of relief and rescue—not the Big Blackfoot this time, but the Missouri, sporadically “glaring” through the smoke and trees. The ambivalence is typical of both of his books and very much to Maclean’s point; it sharpens our sense of the tragedy without “answering” it in any definitive way.
Advance praise for Philip Ball’s forthcoming Patterns in Nature: Why the Natural World Looks the Way It Does (April 2016),
from Publishers Weekly:
Acclaimed English science writer Ball (Invisible: The Dangerous Allure of the Unseen) curates a visually striking, riotously colorful photographic display of the most dramatic examples of the “sheer splendor” of physical patterns in the natural world. He lightly ties the work together with snippets of scientific history, using bits of physics, chemistry, and mathematics to show that although patterns in living beings can offer clear, functional evolutionary advantages, the small set of design elements that we can see—symmetries, branching fractals, spirals, flowing swirls, spots, and stripes—come from a basic set of organizing properties of growth and equilibrium seeking. Ball ranges across the whole spectrum of creation—from the living to the nonliving, and from the macroscopic to the microscopic—for displays of nature’s patterned beauty. He finds symmetry in grains of pollen, drops of falling water, and owls’ eyes; fractals in leaf veins, lungs, and nebulae; spirals in seashells, sunflowers, and cyclones; and flow patterns in wood grain, flocks of birds, and dunes on Mars. This is formidable eye candy for the I-love-science crowd, sure to spark a sense of impressed wonder at the beauty of our universe and our ability to photograph it.
To read more about Patterns in Nature, click here.
As with so many environmental disasters, this one was preventable. Evidence suggests that the simple failure to use proper anti-corrosive agents led to the leaching of lead into the city’s water. It has also become apparent that the slow responses of local, state and federal officials to this crisis — as well as their penchant for obfuscation — prolonged the lead exposure.
It would be a mistake, however, to conclude that Flint’s predicament is simply the result of government mismanagement. It’s also the product of a variety of larger structural problems that are much more difficult to untangle and remedy.
Over the past three-quarters of a century, waves of deindustrialization, disinvestment and depopulation eviscerated Flint’s tax base, making it all but impossible to improve — or even maintain — the city’s crumbling infrastructure. Flint — which once claimed 200,000 residents — now contains fewer than 100,000, nearly half impoverished, more than half African American. The economic prospects of locals are grim. After decades of plant closures and layoffs, GM’s workforce in the area, which once surpassed 80,000, is less than 10,000. The hemorrhaging of jobs has produced unemployment rates that routinely reach into the double digits. . . .
If there was ever a canary in Flint’s coal mine, it may have been Ailene Butler. When she stepped forward in 1966, she crystallized the tight connections between environmental inequality and social injustice. To be sure, much has changed since Butler sounded the alarm half a century ago. Whereas in the 1960s it was the encroachment of industrial plants upon black neighborhoods that fueled local resentment, Flint’s current water crisis stems in many ways from the absence of those plants — and the jobs, taxes, services and infrastructure they supported. Still, looking ahead at Flint’s uncertain future, Butler’s message seems more relevant than ever.
To read more about Demolition Means Progress, click here.
Full of “blood and thunder”—words for the Lyric Opera of Chicago’s staging of Giuseppe Verdi’s Nabucco, an amalgamation of quasi-stories from the Book of Jeremiah and the Book of Daniel coalesced around a love triangle, here revived for the first time since 1998. On the heels of its opening—the full run is from January 23 to February 12—UCP hosted a talk and dinner featuring a lecture “Nabucco and the Verdi Edition” by Francesco Ives. That Verdi Edition, The Works of Giuseppe Verdi, is the most comprehensive critical edition of the composer’s works. In addition to publishing its many volumes, the University of Chicago Press also hosts a website devoted to all aspects of the project, which you can visit here; to do justice to the scope and necessity of the Verdi Edition, here’s an excerpt from “Why a Critical Edition?” on that same site:
The need for a new edition of Verdi’s works is intimately tied to the history of earlier publications of the operas and other compositions. When Verdi completed the autograph orchestral manuscript of an opera, manuscript copies were made by the theater that commissioned the work or by his publisher (usually Casa Ricordi). These copies were used in performance, and most of the autograph scores became part of the Ricordi archives. Copies of the copies were made, and orchestral materials were extracted for performances. With the possible exception of his last operas, Otello and Falstaff, Verdi played no part whatever in preparing the printed scores: almost all printed editions of his works were prepared by Ricordi after Verdi’s death in 1901.
Predictably, these copying and printing practices have yielded vocal and orchestral parts that differ drastically from the autograph scores. Indeed, the problem of operas performed using unreliable parts and scores dates to Verdi’s own lifetime. After the premieres of Rigoletto, Il trovatore, and La traviata, for example, Verdi wrote to Ricordi on 24 October 1855: “I complain bitterly of the editions of my last operas, made with such little care, and filled with an infinite number of errors.”
Copyists and musicians who prepared these errant printed editions were not consciously falsifying Verdi’s text. They merely glossed over particularities of Verdi’s notation (e.g., the simultaneous use of different dynamic levels—“p” and “pp”, for instance) and altered details of his orchestration, which differed considerably from the style of Puccini, whose music dominated Italian opera when the printed editions of Verdi’s works were prepared. These editions, which in certain details drastically compromise the composer’s original text, are the scores that are used today, except where the critical edition has made reliable scores available.
The critical edition of the complete works of Verdi undertaken jointly by the University of Chicago Press and Casa Ricordi is finally correcting this situation.
To read more about The Works of Giuseppe Verdi, click here.
The University of Chicago Press and Signs are pleased to announce the competition for the 2017 Catharine Stimpson Prize for Outstanding Feminist Scholarship. Named in honor of the founding editor of Signs: Journal of Women in Culture and Society, the Catharine Stimpson Prize is designed to recognize excellence and innovation in the work of emerging feminist scholars.
The Catharine Stimpson Prize is awarded biennially to the best paper in an international competition. Leading feminist scholars from around the globe will select the winner. The prizewinning paper will be published in Signs, and the author will be provided an honorarium of $1,000. All papers submitted for the Stimpson Prize will be considered for peer review and possible publication in Signs.
Eligibility: Feminist scholars in the early years of their careers (fewer than seven years since receipt of the terminal degree) are invited to submit papers for the Stimpson Prize. Papers may be on any topic that falls under the broad rubric of interdisciplinary feminist scholarship. Submissions must be no longer than 10,000 words (including notes and references) and must conform to the guidelines for Signs contributors.
Deadline for Submissions: March 1, 2016.
Please submit papers online at http://signs.edmgr.com. Be sure to indicate submission for consideration for the Catharine Stimpson Prize. The honorarium will be awarded upon publication of the prizewinning article.
Papers may also be submitted by post to
The Catharine Stimpson Prize Selection Committee
Signs: Journal of Women in Culture and Society
Northeastern University
360 Huntington Avenue
263 Holmes Hall
Boston, MA 02115
Exhilaration and anxiety, the yearning for community and the quest for identity: these shared, contradictory feelings course through Outside the Gates of Eden, Peter Bacon Hales’s ambitious and intoxicating new history of America from the atomic age to the virtual age.
Born under the shadow of the bomb, with little security but the cold comfort of duck-and-cover, the postwar generations lived through—and led—some of the most momentous changes in all of American history. Hales explores those decades through perceptive accounts of a succession of resonant moments, spaces, and artifacts of everyday life—drawing unexpected connections and tracing the intertwined undercurrents of promise and peril. From sharp analyses of newsreels of the first atomic bomb tests and the invention of a new ideal American life in Levittown; from the music emerging from the Brill Building and the Beach Boys, and a brilliant account of Bob Dylan’s transformations; from the painful failures of communes and the breathtaking utopian potential of the early days of the digital age, Hales reveals a nation, and a dream, in transition, as a new generation began to make its mark on the world it was inheriting.
Full of richly drawn set-pieces and countless stories of unforgettable moments, Outside the Gates of Eden is the most comprehensive account yet of the baby boomers, their parents, and their children, as seen through the places they built, the music and movies and shows they loved, and the battles they fought to define their nation, their culture, and their place in what remains a fragile and dangerous world.
To read more about Outside the Gates of Eden, click here.
Just a snippet from a fab piece by Jennifer Tyburczy for Artforum on the research informing her recent book Sex Museums: The Politics and Performance of Display, which places the museum in its spatial, political, and sexual contexts, each imbricated by the other, as well as our notions of public and private. You can read more from her “500 Words” piece here.
The big surprise, though, was that as soon as I started to write about sex museums, they started to close. The latter part of my book is dedicated to an ethnography of these spaces. It was disconcerting when I would plan out a visit to Los Angeles to see an erotic museum that then closed mere months before I could make the trip. Part of the book became about the failure of these ventures, and I don’t mean in a Jack Halberstam, Queer Art of Failure kind of way. Ultimately, many of these museums could not provide what visitors wanted, which was a really raw experience with sex drawn from the archive and arranged in displays. A lot of the museums I discuss—whether in New York, Denmark, or Spain—had an ingrained idea of who their normative visitor was and where their threshold of shock was located. Without fail, they always set the bar too low. People wanted more! The demands of being a twenty-first-century museum taking on the onus to display sex overwhelmed a lot of the museum planners. Typically they censored themselves in some way that visitors noted. The heartening message here is that we shouldn’t assume that people will be shocked and turned off by displays of diverse sexual cultures and people. Museum visitors are smart and savvy, and ready and willing to have that experience. My work makes an argument for the emotional and sexual intelligence of a viewer.
Taussig’s work is the sort of bewilderingly beautiful prose (one is often tempted to call it poetry) that’s able to operate on multiple intellectual levels. The first essay in the collection, “The Corn Wolf: Writing Apotropaic Texts”, immerses the reader fully and mercilessly in the style. It opens with a poor graduate student realizing that writing up their fieldwork is the most difficult and important task of graduate school, and also the one thing graduate school teaches you nothing about. Fieldwork and writing; “they are both rich, ripe, secret-society-type shenanigans. Could it be that both are based on impossible-to-define talents, intuitions, tricks, and fears?”
No wonder many careerist academics dislike him.
Of course the essay isn’t so much about graduate writing as about his own writing, and about the act of writing—the magical act of writing—itself.
For example, Taussig considers anthropology’s treatment of magic and shamanic sorcery: “Pulling the wool over one’s eyes is a simpler way of putting it… What we have generally done in anthropology is really pretty amazing in this regard, piggybacking on their magic and on their conjuring—their tricks—so as to come up with explanations that seem nonmagical and free of trickery.”
This seemingly nonmagical academic form of writing—or mode of production, as he calls it—is what he refers to as ‘agribusiness writing’: “Agribusiness writing is what we find throughout the university and everyone knows it when they don’t see it.” Against it he pitches the idea of ‘apotropaic writing’, a magic that connives with the prosaic to produce a counter-magic of its own.
When anthropologists demystify shamanic sorcery, for instance, the ‘wolfing’ moves of apotropaic magic would reveal the sorcery implicit in the act of the ‘scientific’ anthropologist’s recasting of shamanism. Indeed, the fact that the wonder and magic of the everyday world have been demystified by science is a sort of magical transformation itself. Is this how we re-enchant the world? By the use of story-telling and writing to re-position what seems like the boring, unmagical workaday world of everyday capitalist drudgery and expose it as the magical sleight-of-hand and tricksterism that it is? “I have long felt that agribusiness writing is more magical than magic ever could be and that what is required is to counter the purported realism of agribusiness writing with apotropaic writing as countermagic, apotropaic from the ancient Greek meaning the use of magic to protect one from harmful magic.”
The point of Silver’s statement rests on whether or not a Trump nomination would destroy the Republican Party. The book’s argument is that party elites—unelected insiders—control who ultimately ends up nominated at the convention, and that this decision is made many months before the primary campaign season even begins. Should anyone but Trump become the nominee (say, Marco Rubio or even Jeb Bush), then The Party Decides will have had it right all along; if Republicans put forward Trump, it may be less a sign that the book’s statistically supported argument is incorrect, and more a case of the possible dissolution of the Grand Old Party.
In the meantime, you can hear more about the book and what a Trump nomination might signify on today’s episode of The Brian Lehrer Show below:
The Pet Collector reminds us of the most fundamental role of language: the ability to name things, and by doing so, to make them belong to us, and we to them. (The naming of and “dominion over” animals are central to Adam’s role in the Garden of Eden.) But the Collector doesn’t just take possession of his adopted family of animals; in his excessive abundance of attachments, he is clearly also possessed, and appears to be a fearful hoarder of living things. Arlo, by contrast, only needs his one companion, Spot, and he is comfortable with letting Spot go when he finds a human family to join at the conclusion of the film.
All this reeks of what anthropologists used to call totemism, the adoption of natural things (animals and plants) as kinfolk and symbols of kinship in so-called primitive cultures. The problem is that dinosaurs were unknown to primitive cultures; they are a thoroughly modern discovery, never named, classified, or adopted until the British paleontologist Richard Owen proclaimed their existence in 1843. Could it be that modern cultures need totemism too? Freud’s Totem and Taboo argued that totemism was obsolete in the modern world, while taboos still abound. But he failed to consider the possibility of a distinctively modern totemism, in which the animal counterpart and companion to the human species is an extinct family of prehistoric animals discoverable only by modern science. Dinosaurs provide the perfect Darwinian allegory for the human race — namely, the possible (or should we say highly probable) prospect that human beings could wind up just like them — extinct. That, it seems to me, is the best explanation of the strange array of contradictory attitudes toward dinosaurs as popular icons. They are friends and companions, on the one hand, and feared enemies, on the other. They are ferocious wild animals and domestic pets, vicious predators and peaceful vegetarians. In short, they are a mirror of all the varieties of our own human species, distributed across a genus of extinct animals that exist only in the realms of unbridled imagination and biological science — a perfectly modern combination.
On January 6, 1941, Franklin Delano Roosevelt delivered the State of the Union address known as the “Four Freedoms” speech. Then recently elected to an unprecedented third presidential term, Roosevelt had run on a platform that included the promise to “not send American boys into any foreign wars.” In the days leading up to his speech, Nazi Germany had begun a bombing campaign on the coal port at Cardiff, Wales, and the Roosevelt administration had announced the Liberty Ship Program to build freighters for the war effort. A few days after the address, thousands of Jews were killed in a pogrom in Bucharest, Romania, and over the next several weeks, anti-Jewish measures spread across Eastern Europe.
This was the state of things that prompted Roosevelt to articulate “four essential human freedoms” as a basis for a secure world: freedom of expression; freedom of religion; freedom from want, which, he explained, “translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants everywhere in the world”; and freedom from fear, focusing on dramatic reductions in armaments to eliminate the possibility of wars of aggression. After the attack on Pearl Harbor, the Four Freedoms became a touchstone for American foreign policy. Memorialized in a famous series of Norman Rockwell paintings, they were later incorporated into the United Nations Declaration of Human Rights.
The broad acceptance of the Four Freedoms does not mean that they provoked no dissent. Roosevelt’s calls for freedom of expression and freedom of religion were largely uncontroversial, but his appeals for freedom from want and fear were received as partisan gambits intended to bolster the New Deal and advance a Democratic program. In later years, freedom from want came to define Roosevelt’s domestic agenda, notably when he called for a “Second Bill of Rights” to include employment, health care, housing, and education in his 1944 State of the Union address.
In the 2016 State of the Union last Tuesday, President Barack Obama presented the American people with four questions that resonate in some striking ways with Roosevelt’s Four Freedoms. The form of this State of the Union address, whose difference from the typical “laundry list” speech was much emphasized, offers a model for the kind of renewed citizenship that the President seeks to promote—one whose sources I trace in Imagining Deliberative Democracy in the Early American Republic. Rather than tell the nation what to do, or explicitly articulate national values as Roosevelt did, Obama has attempted to frame a discussion around the core questions that have animated his presidency.
Obama’s experience as a law professor, well-versed in the Socratic method, was clearly evident when he offered these four questions for discussion and debate:
First, how do we give everyone a fair shot at opportunity and security in this new economy?
Second, how do we make technology work for us, and not against us—especially when it comes to solving urgent challenges like climate change?
Third, how do we keep America safe and lead the world without becoming its policeman?
And finally, how can we make our politics reflect what’s best in us, and not what’s worst?
Capacious, timely questions, they offer important frames for discussion during this election year.
Three of Obama’s four questions arise from lack of consensus around the ideas of freedom from want and fear. This connection is clearest in the first question, where the phrase “opportunity and security” uses Latinate words to restate the absence of “want” (from Old Norse) and “fear” (from Old English). Opportunity and security are words that emphasize process, and so they are well suited to inquiry. Underlying the question, “How do we give everyone a fair shot at opportunity and security in this new economy?” is the assumption that there is general agreement that “everyone” should be given “a fair shot.” This framing invites discussion of whether the means to that end is a Second Bill of Rights, or some other set of policies. The President did not call for these values to be reconsidered but rather he sought to shore up an established consensus—one based on the wide popularity of Social Security, the New Deal program with the most sustained impact, and the success of later federal programs, including Medicare.
The second and third questions—involving technology and world leadership—highlight some of the most significant differences between Roosevelt’s day and our own. There is a striking gap between Roosevelt’s call for disarmament to create a world free from fear and the race to develop nuclear weapons that was already underway when he spoke. After the bombs were dropped on Hiroshima and Nagasaki in August 1945, nuclear weapons quickly emerged as the iconic representation of how science and technology did not just serve humanity—they also threatened its extinction. Climate change now has even greater symbolic force in this regard. Obama’s exhortation to figure out how to “make technology work for us, and not against us” speaks directly to the challenge of harnessing modern forms of power that compromise human agency, including the capacity for effective governance. The excruciatingly slow response by world leaders to the climate change crisis highlights how technology threatens to overwhelm human capacities for response.
Implicit in the third question is the same focus on directing events, rather than having them direct us: How can American leadership be effective while relying less on the military? Obama described his controversial foreign policy as offering “a smarter approach, a patient and disciplined strategy that uses every element of our national power. It says America will always act, alone if necessary, to protect our people and our allies; but on issues of global concern, we will mobilize the world to work with us, and make sure other countries pull their own weight.” He also evoked the “power of example,” particularly in connection with the need to resist Islamophobia, and he quoted Pope Francis’s remarks on tolerance in his speech to Congress last September, when the Pope said that “to imitate the hatred and violence of tyrants and murderers is the best way to take their place.”
Even as he extended Roosevelt’s freedom of religion to Muslims, President Obama largely ignored the way some groups—including many that are Catholic— have challenged his domestic policies on gay marriage and access to birth control and abortion as violations of their religious freedom. The closest he came to this theme was a reference to persistent disagreements over the Affordable Care Act, which are driven in no small part by provisions for women’s reproductive health. How does religious tolerance coexist with women’s agency and independence? Freedom of religion, largely uncontroversial in Roosevelt’s day, has become a source of profound conflict over social policy.
The fourth and final question—How can we make our politics reflect what’s best in us, and not what’s worst?—returns to a signature theme of the Obama presidency: the need to create a more constructive, less divisive politics. This theme has been a touchstone of his State of the Union addresses over the years, and it is one that he began to develop very early in his national career. As has been widely remarked, Obama came to national prominence in 2004 with a speech to the Democratic National Convention emphasizing commonalities: not blue states or red states, but United States. The focus on unity took on new dimensions in his March 2008 speech “A More Perfect Union,” which he began with the words “We the People”—a phrase that he used again in this State of the Union address. He went on to note that “Our Constitution begins with those three simple words, words we’ve come to recognize mean all the people, not just some; words that insist we rise and fall together, and that’s how we might perfect our Union.” There is consensus about ends, he insisted again: “The future we want—all of us want—opportunity and security for our families, a rising standard of living, a sustainable, peaceful planet for our kids, all that is within our reach. But it will only happen if we work together. It will only happen if we can have rational, constructive debates. It will only happen if we fix our politics.”
The President acknowledged substantive differences and structural barriers—many of them related to the outsize role of money, the consequences of gerrymandering, and the distorting effects of fragmented and conflict-driven news media—and called for trust building, compromise, and active citizenship. Cynicism and skepticism are easy, he observed. Real change is hard and requires what he called “our better selves,” echoing Abraham Lincoln’s evocation of “the better angels of our nature.”
For the last seven years, the national conversation that Obama had hoped to pursue about the appropriate roles for the private and public sectors has been overwhelmed by the cultural issues that he mostly wanted to sidestep. It was this post-Cold War conversation about economic models that he thought might bring Democrats and Republicans to the table. Instead, it earned him the label “neoliberal” from his party’s left wing, while the Republicans gave him the back of their collective hand. Meanwhile, identity politics has been resurgent on both the right and the left: there has not been such intense focus on matters of identity since the early 1990s.
Has the President succeeded in articulating the grounds of a new consensus that will permit “rational, constructive debates” about the four questions of economic justice, technological change, national security and global peacebuilding, and effective citizenship? There was a clear suggestion that a change in tone will require a shift in attitude—akin to what newly elected Canadian Prime Minister Justin Trudeau called a return to “sunny ways” (ways that Michelle Obama evoked with a marigold-colored dress).
Religious rhetoric runs through the President’s address. On two occasions he invoked a spirit of “unarmed truth and unconditional love,” a phrase from Martin Luther King, Jr.’s Nobel Peace Prize address. These words amplify the President’s message and introduce a spiritual dimension to his vision, with the aim of creating a sense of common purpose. Like his rendition of “Amazing Grace” at the memorial service for the victims of the Charleston shootings last June, these moments from the speech may help to bridge the religious divide and allow for the President’s consensus-building project to proceed. By presenting these questions now, and by infusing them with this spiritual element, he hopes to shape the 2016 campaign—and his legacy.
Sandra M. Gustafson is associate professor of English at the University of Notre Dame. She is the author of Imagining Deliberative Democracy in the Early American Republic and Eloquence Is Power: Oratory and Performance in Early America.
With a 2 percent annual growth rate, 5 percent unemployment, and zero inflation, the US economy is the envy of the world. Growth seems to be rising and unemployment seems to be falling, which means that most analysts expect an even better US economy in 2016. Throw in low gas prices and a strong dollar, and what’s not to like?
If the US economy is doing so well, why are ordinary people so unhappy with their own economic prospects?
The aggregate US economy may be growing but most people’s personal economies are not. Census Bureau data show that real per capita income is still below 2007 levels—despite six years of solid economic growth. And Bureau of Labor Statistics data show that despite today’s low unemployment rates the jobs still haven’t come back.
Back in 2006 the employment rate of the civilian population—the proportion of adults who had jobs—was over 63 percent. Allowing for people who are still in school, people who are retired, people who are disabled, and people who prefer not to work, that was just about everyone. When the economy is doing well, people who want jobs can get jobs.
Compare that with 2015. For all of 2015 to date the employment rate has been stuck below 60 percent. In fact, the employment rate has not risen above 60 percent since the technical beginning of the “recovery” in June 2009. Over the last six years, the economy has recovered. Employment has not.
The difference between the 63 percent employment rate of 2006 and the (well under) 60 percent employment rate of 2015 is roughly 7.5 million people. That’s the number of jobs missing in today’s roaring economy. Bringing today’s employment rate back up to 2006 levels would require the creation of more than 7.5 million new jobs.
What’s more, since the Global Financial Crisis there has been a shift from full-time to part-time employment. Some 2.5 million full-time jobs have disappeared, to be replaced by part-time employment. Assuming that people have basically the same preferences as they had before the recession hit, this means that the US economy is really short 10 million full-time jobs.
And remember, this is the economy at its best. The current “recovery” won’t last forever. It is already the fourth longest expansion of all time and about to overtake the World War II period to become the third longest. If the next recession hits while the economy is already 10 million jobs short of full employment, God help us.
The managers of the US economy don’t seem to be worried about this. On December 16, 2015 the Federal Reserve raised interest rates (albeit by a tiny amount) for the first time in seven years. The Fed expects that “economic activity will continue to expand at a moderate pace and labor market indicators will continue to strengthen.” In other words, the Fed expects more good news.
More good news for whom? As analyses from the Financial Times show, banks are increasingly parking their money at the Fed, not lending it out to businesses and consumers. Along with the Fed’s increase in lending rates (from 0 to 0.25 percent) came an increase in the interest rate the Fed pays banks on their own deposits at the Fed (from 0.25 percent to 0.5 percent).
For the last six years banks have parked trillions of dollars of excess funds in their accounts at the Federal Reserve. After all, they can earn 0.25 percent risk-free by borrowing money from the Fed and placing it directly in their own accounts at the Fed. Banks now hold some $2.5 trillion in excess reserves in these accounts. Those holdings give banks collectively an extra $6 billion in annual risk-free profits.
Before the Global Financial Crisis, US banks held virtually $0 in excess reserves in their Federal Reserve accounts.
What we see today is a US economy that is great for banks, great for bankers, and not so great for ordinary workers. Employment rates are down, employment hours are down, and wages are down. Bank profits are up, up, up to record levels. It’s no wonder that ordinary people are not as optimistic as the Board of Governors of the Federal Reserve System.
In the end, the Fed can’t fix the problems of the US economy. The Fed can help the banks (and the bankers who serve on its boards) but it can’t make companies hire more people. Only government can do that, and the US government has shown no willingness to create jobs in this recession, or even in this century.
The US government should be borrowing that cheap Fed money and using it to put people to work. Education, healthcare, and infrastructure could all absorb millions of workers to do jobs that desperately need to be done. President Obama should make this clear to Congress and put people to work. Fixing the jobs crisis can’t wait for the next president—or the next recession. It is already long overdue.
The controversy surrounding Alice Goffman’s On the Run is nothing new—the book’s appearance was met with both laudatory curiosity and defensive criticism, from within and outside academic sociology. On the Run offers an ethnographic account based on Goffman’s work in the field—and the field happens to be a mixed-income, West Philadelphia neighborhood, whose largely African American residents lived their lives under the persistent presence of the cops, whose pervasive policing left Goffman’s subjects, the members of her community, caught in a web of presumed criminality. The elephant(s) in the room: how does a privileged white woman engage in this kind of (often passé) participant-observer research without constantly self-checking her positionality? How can this type of book—and its more sensational elements—remain true to its word? Who has permission to write about whom? And what happens when these questions leave the back-and-forth behind the closed doors of the academy and raise very real questions about legal culpability, fabrication, and the politics of representation?
In a long-form piece for the New York Times Magazine, Gideon Lewis-Kraus assesses Goffman’s predicament and how her personal experiences shaped several of the more controversial aspects of the book’s account. All the while, he traces the book’s emergence during a crucial (and heated) moment for the history of sociology, when data-driven analysis has bumped the hybrid reportage/qualitative ethnography favored by Goffman into the margins of social science, and considers how the events following its publication played out in the media—and what all of this might mean for Goffman’s own future (and those of her subjects, neighbors, peers) and that of her discipline.
An excerpt follows below; you can read the piece in full here.
But what her critics can’t imagine is that perhaps both of the accounts she has given are true at the same time — that this represents exactly the bridging of the social gap that so many observers find unbridgeable. From the immediate view of a participant, this was a manhunt; from the detached view of an observer, this was a ritual. The account in the book was that of Goffman the participant, who had become so enmeshed in this community that she felt the need for vengeance ‘‘in my bones.’’ The account Goffman provided in response to the felony accusation (which read as if dictated by a lawyer, which it might well have been) was written by Goffman the observer, the stranger to the community who can see that the reason these actors give for their behavior — revenge — is given by the powerless as an attempt to save face; that though this talk was important, it was talk all the same.
The problem of either-or is one that is made perhaps inevitable by the metaphor of ‘‘immersion.’’ The anthropologist Caitlin Zaloom, who studies economic relationships, explained to me that it’s a metaphor her own field has long given up on. The metaphor asks us to imagine a researcher underwater — that is, imperiled, unreachable from above — who then returns to the sun and air, newly qualified to report on the darkness below because the experience has put a chill in her bones. This narrative of transformation is what strikes critics like Rios as so patronizing and self-congratulatory. But Goffman herself never understood her work to be ‘‘immersive’’ in that way. The almost impossible challenge Goffman thus set before herself is the representation of both these views — of drive as manhunt and drive as ritual — in all their simultaneity.
Goffman could have covered herself by adding another paragraph of analysis, one that would have contextualized but also undercut the scene as the participants experienced it. Almost all of her early readers thought she should do that. It would have made her life easier. But she didn’t. This was a book about men whose entire lives — whose whole network of relationships — had been criminalized, and she did not hesitate to criminalize her own. She threw in her lot.
Five hundred years after St. Teresa, and there are still very few models for women of how to live outside of coupledom, whether that is the result of a choice or just bad luck. I can’t remember the last time I saw a television show or a film about a single woman, unless her single status was a problem to be solved or an illustration of how deeply damaged she was. This continues even as more and more women are staying single longer and longer.
I’ve been single for the most part going on 11 years now, and so I have heard every derogatory, patronizing, demeaning thing said about single women. “There has to be someone for you,” a married woman friend once said exasperatedly after I recounted another bad date. Implying, unconsciously, that there must be one man somewhere on the planet who could stand to be around me for more than a few days at a time.
And so it’s hard to get people to understand why a woman would ever choose to live a life alone. We no longer have to choose between being a brain and a body, but I can’t help but think that we lose something when we couple up, and maybe that thing is worth preserving. I pointed out to a different friend that it was the nuns who were the most socially engaged, working with the world’s most vulnerable. My friend, married, asked “as devil’s advocate” whether they were simply compensating for the lack of romantic love and children with their social concern. Yes, I said, maybe. “But we all have needs that aren’t met, and we’re all looking for substitutes.”
To read more about The Dead Ladies Project, click here.
The sociologist Diane Vaughan coined the phrase the normalization of deviance to describe a cultural drift in which circumstances classified as “not okay” are slowly reclassified as “okay.” In the case of the Challenger space-shuttle disaster—the subject of a landmark study by Vaughan—damage to the crucial O‑rings had been observed after previous shuttle launches. Each observed instance of damage, she found, was followed by a sequence “in which the technical deviation of the [O‑rings] from performance predictions was redefined as an acceptable risk.” Repeated over time, this behavior became routinized into what organizational psychologists call a “script.” Engineers and managers “developed a definition of the situation that allowed them to carry on as if nothing was wrong.” To clarify: They were not merely acting as if nothing was wrong. They believed it, bringing to mind Orwell’s concept of doublethink, the method by which a bureaucracy conceals evil not only from the public but from itself.
More explicitly, for Vaughan, the O-ring deviation decision unfolded through the actions and observations of key NASA personnel and aeronautical engineers, who grew acclimated to a culture in which high risk was the norm—a culture that fostered an incremental descent into poor decision-making. As the book’s jacket (and Useem) note, “[Vaughan] reveals how and why NASA insiders, when repeatedly faced with evidence that something was wrong, normalized the deviance so that it became acceptable to them.”
You can read more about The Challenger Launch Decision here, and the Atlantic piece in full on their site.
In the early days of 1937, the Ohio River, swollen by heavy winter rains, began rising. And rising. And rising. By the time the waters crested, the Ohio and Mississippi had climbed to record heights. Nearly four hundred people had died, while a million more had run from their homes. The deluge caused more than half a billion dollars of damage at a time when the Great Depression still battered the nation.
Timed to coincide with the flood’s seventy-fifth anniversary, The Thousand-Year Flood is the first comprehensive history of one of the most destructive disasters in American history. David Welky first shows how decades of settlement put Ohio valley farms and towns at risk and how politicians and planners repeatedly ignored the dangers. Then he tells the gripping story of the river’s inexorable rise: residents fled to refugee camps and higher ground, towns imposed martial law, prisoners rioted, Red Cross nurses endured terrifying conditions, and FDR dispatched thousands of relief workers. In a landscape fraught with dangers—from unmoored gas tanks that became floating bombs to powerful currents of filthy floodwaters that swept away whole towns—people hastily raised sandbag barricades, piled into overloaded rowboats, and marveled at water that stretched as far as the eye could see. In the flood’s aftermath, Welky explains, New Deal reformers, utopian dreamers, and hard-pressed locals restructured not only the flood-stricken valleys, but also the nation’s relationship with its waterways, changes that continue to affect life along the rivers to this day.
A striking narrative of danger and adventure—and the mix of heroism and generosity, greed and pettiness that always accompany disaster—The Thousand-Year Flood breathes new life into a fascinating yet little-remembered American story.
Like many scientists, Dr. Packer, a professor of ecology, evolution and behavior at the University of Minnesota, has fought his share of battles in the pages of professional journals.
But he has also tangled with far more formidable adversaries than dissenting colleagues. He has sparred with angry trophy hunters, taken on corrupt politicians, fended off death threats and, in one case, thwarted a mugging. Like the lioness, his opponents discovered that he is unlikely to give ground.
“My reflex is to confront the danger and go right at it,” he said.
Dr. Packer’s boldness — he concedes some might call it naïveté — eventually led to the upheaval of his life in Tanzania, where for 35 years he ran the Serengeti Lion Project, dividing his time between Minnesota and Africa. Assisted by a bevy of graduate students, he conducted studies of lion behavior that have shaped much of what scientists understand about the big cats.
But in 2014, Tanzanian wildlife officials withdrew his research permit, accusing him of “tarnishing the image of the Government of Tanzania” by making derogatory statements about the trophy hunting industry in emails, according to a letter they sent him. And in April, while visiting the Serengeti to film a BBC documentary, a chief park warden informed him that he had been barred from the country. (Apparently, he had made it through customs by mistake.)
Dr. Packer described the events leading to his banishment in his recently published book, Lions in the Balance: Man-Eaters, Manes, and Men with Guns. It mixes episodes of spy novel intrigue with detailed descriptions of scientific studies and PowerPoint presentations.
To read more about Packer’s work published by the University of Chicago Press, click here.
To read more about Lions in the Balance, his latest book, click here.
Congrats to Swan Isle Press and Anthony Geist for their translation of The School of Solitude: Collected Poems by Luis Hernández, which was just announced as one of the longlist candidates for the 2016 PEN Literary Award for Poetry in Translation. The book collects the prolific work of the legendary (and legendarily troubled) Peruvian poet Luis Hernández, who published three collections of poetry by the age of twenty-four and never published again before his untimely death in 1977 at the age of thirty-six. Drawing upon the numerous notebooks he kept in the interim, The School of Solitude is the first book of Hernández’s writing to appear in English.
To read more about The School of Solitude, click here.
This e-book features the complete text found in the print edition of Dangerous Work, without the illustrations or the facsimile reproductions of Conan Doyle’s notebook pages.
In 1880 a young medical student named Arthur Conan Doyle embarked upon the “first real outstanding adventure” of his life, taking a berth as ship’s surgeon on an Arctic whaler, the Hope. The voyage took him to unknown regions, showered him with dramatic and unexpected experiences, and plunged him into dangerous work on the ice floes of the Arctic seas. He tested himself, overcame the hardships, and, as he wrote later, “came of age at 80 degrees north latitude.”
Conan Doyle’s time in the Arctic provided powerful fuel for his growing ambitions as a writer. With a ghost story set in the Arctic wastes that he wrote shortly after his return, he established himself as a promising young writer. A subsequent magazine article laying out possible routes to the North Pole won him the respect of Arctic explorers. And he would call upon his shipboard experiences many times in the adventures of Sherlock Holmes, who was introduced in 1887’s A Study in Scarlet.
Out of sight for more than a century was a diary that Conan Doyle kept while aboard the whaler. Dangerous Work: Diary of an Arctic Adventure makes this account available for the first time. With humor and grace, Conan Doyle provides a vivid account of a long-vanished way of life at sea. His careful detailing of the experience of Arctic whaling is equal parts fascinating and alarming, revealing the dark workings of the later days of the British whaling industry. In addition to the transcript of the diary, the e-book contains two nonfiction pieces by Doyle about his experiences, and two of his tales inspired by the journey.
To the end of his life, Conan Doyle would look back on this experience with awe: “You stand on the very brink of the unknown,” he declared, “and every duck that you shoot bears pebbles in its gizzard which come from a land which the maps know not. It was a strange and fascinating chapter of my life.” Only now can the legion of Conan Doyle fans read and enjoy that chapter.
Oldstone-Moore, a lecturer in history at Wright State University (and, at least as recently as his faculty head shot, a beard-wearer), approaches facial hair as an index of the vertiginous roil of masculinity itself. “Whenever masculinity is redefined, facial hairstyles change to suit,” he writes. “The history of men is literally written on their faces.” In considering the subject, Oldstone-Moore is in good company. The Supreme Court, the Roman Catholic Church, Rousseau and Plutarch have all weighed in on the subject.
He is monomaniacal in his attentions, charting the course of human history in the reflection of a razor. Like Zelig, at any given moment in history, beards were (or, as suggestively, weren’t) there. Oldstone-Moore finds them (and their corollary, mustaches) everywhere: in ancient Sumer and ancient Rome; in the Bayeux Tapestry, the plays of Shakespeare and the poems of Whitman; in the courts of Europe as well as its festering proletarian dens. (One of the book’s acknowledged shortcomings is the demographic limit of its focus, largely on Western Europe and the United States.)
Even in our current beardophile moment (Oldstone-Moore notes in his introduction that Gillette’s sales are down), to single out facial hair for sustained, scholarly investigation is to invite charges of triviality. Such accusations are hardly allayed by the beard-first myopia that allows Jesus Christ to be summarily described as “the most recognizable bearded man in Western civilization” or the sack of Rome in 1527 to be called “another turning point in beard history.” It is probably an overstatement to suggest that, where Hitler and Stalin were concerned, “an analysis of mustaches might have alerted the Western allies to the real possibility of German-Soviet agreement.”
But perhaps this is to give the author too little credit. Oldstone-Moore is a sensitive observer, who dispenses ironies with a light hand; tonsorially enthralled as he may be, he also seems in on the joke.
Whenever we’re in danger of forgetting that the modern Republican Party is captive to a movement, one new excitement or another will jolt us back to reality — whether it is a trio of high-flying presidential candidates who’ve collectively served not a single day in elective office or an uprising by congressional Jacobins giddily dethroning their own leader. Each new insurrection feels spontaneous even as it revives antique crusades to abolish the Internal Revenue Service, “get rid” of the Supreme Court or — most persistent of all — rejuvenate the Old South. Half a century before Rick Perry indicated secession might be an option for Texas, John Tower, the state’s first Republican senator since Reconstruction, accepted the warm greeting of his new colleague, Senator Richard Russell, the Georgia segregationist, who reportedly said, “I want to welcome Texas back into the Confederacy.”
Tower is one of the more statesmanlike figures in “Nut Country,” Edward H. Miller’s well-researched and briskly written account of Dallas’s transformation from Democratic stronghold to “perfect test kitchen” of a new politics of Republican protest that combined the libertarian cry for “freedom” with the states’ rights model of constitutional order.
A go-getting paradise with an economy enriched by government contracts (aerospace and defense), Dallas might seem a curious place for anti-Beltway insurgency. But dependency bred anxiety, and “wealth and fear” took form together, as the journalist Theodore H. White observed in 1954. The tide of newcomers, many from the Midwest, inhaled the fumes of “Texanism,” according to White “a synthetic faith that lets them oppose all the controls and exactions of the federal government in Washington as an invasion of sacred and immemorial rights, while at the same time providing, with its frontier and vigilante memories, a complete answer to the newer problems of minorities, labor and the complexities of city living.”
One of the new Dallas Republicans was Bruce Alger, a Princeton graduate and disciple of Ayn Rand, elected to the House of Representatives in 1954. Initially an Eisenhower supporter, he declined to sign the notorious “Southern manifesto,” with its defiant sneer at civil rights, but soon became an “artful champion of Jim Crow.” In November 1960, four days before the presidential election, he led a group of 300 protesters who converged on a downtown Dallas hotel and accosted Lyndon and Lady Bird Johnson when they entered the lobby. Television cameras captured the moment — along with Alger holding aloft a placard that read “L.B.J. Sold Out to Yankee Socialists” — helping to plant the image of Dallas as a “city of hate.”
Friday marks the anniversary of one of the most infamous legal decisions in the history of our country, Korematsu v. United States.
Seventy-three years ago, during World War II, the United States government forcibly removed 110,000 Japanese Americans from their homes and confined them in detention camps. Loyal citizens lost their property and liberty, based solely on their ancestry. The Korematsu decision validated that action: Relying on a deeply flawed evidentiary record — which included blatant racial animus, hyperbolized threats and misrepresentations by government lawyers — the Supreme Court ruled that the need to protect against the threat of espionage outweighed individual rights.
The federal government has since acknowledged the injustice of Japanese American internment, through reparations, monuments and even a “confession of error” from an acting solicitor general, Neal Katyal, in 2011. As grandchildren of detainees, we can attest to the value of these gestures.
But as law professors, we must acknowledge another truth: The high court has never formally overruled Korematsu, and indeed has declined opportunities to revisit its decision. In law-school-speak, Korematsu remains “good law,” despite widely acknowledged “bad facts.”
We wish we could say that in 2015, Korematsu is a relic—that it survives as a mere technicality and has no real import. But we know better. Our legal system relies heavily on precedent, meaning that even a discredited opinion is a danger if it remains on the books.
We have reached a crossroads in our history. For all the achievements and riches of our time, the world has never been more unequal or more unjust. A century ago, at the time of the First World War, the richest 20% of the world’s population earned eleven times more than the poorest 20%. By the end of the twentieth century they earned seventy-four times as much. Today, despite seven decades of international development, three decades of the Washington Consensus, and a decade and a half of Millennium Development Goals, our world is even more divided among the haves, the have-nots, and—as President George W. Bush once quipped in an after-dinner speech—the have-mores.
When it comes to wealth, rather than income, the picture is more extreme. Globally, the richest 1% now own nearly half of all the world’s wealth. The poorest 50% of the world, by contrast—fully 3 billion people—own less than 1% of its wealth. Anyone with assets of more than $10,000 is an exception to the global norm and is better off than 70% of everyone else alive. Yet most of us are so preoccupied by the relative few with more that we rarely stop to notice this. There is growing awareness today of the consequences in rich countries of rising income inequality: we know what it means to talk of the 1% there. But when it comes to the much greater gaps between rich and poor the world over, we confine ourselves still to talk of “global poverty”.
How often are we told that, if only we could see what life is like in a cramped slum in Dhaka or on some scrabble of land in rural Chad, we would be moved to help? But the problem is not one of our empathy. We are all familiar with the shape of a human body in hunger. The details, like glass paper, scarcely catch the imagination any more. It is not one of distance, either. A growing number of the wealthiest people in this world live in high-rise apartments that tower up and over the slums below—and they know only too well that before all the “beautiful forevers” will be lived a thousand impossible todays.
The problem, rather, is one of perspective, of what we choose not to see. There is no shortage of books telling us “why nations fail” or what “the bottom billion” on this planet must do to succeed, no shortage of policy papers from the World Bank or the International Monetary Fund saying much the same. But we still have not properly confronted how the poverty and suffering of a great many are connected to the wealth and privilege of a few. We are slow to admit that the problem is one not of poverty traps at the bottom of the pyramid but of a great confinement of wealth at the top. Total global wealth was estimated at $263 trillion in mid-2014, up from $117 trillion in 2000. That was the same year that the world agreed to bind itself to achieving the Millennium Development Goals by 2015 (with the headline ambition of halving the proportion of people living on less than $1.25 a day). Those goals end this year, in 2015, in many cases not having been met. Meanwhile, global wealth keeps on growing: by 8.3% from mid-2013 to mid-2014 alone.
There is a politics to this, but it is all too often ignored in a debate which to date has preferred to focus on the economics of who has what. It is time to paint this wider political context back into the picture, since our problems stem less from market forces than from the failed policies behind them. If this is partly cause for despair, then it is also cause for hope: our present predicaments are more amenable to change than we are often encouraged to believe.
But acting on this requires first grasping the full scale of the problem before us. Few of the world’s richest people intentionally exploit the world’s poor, it is obvious to note, and none of us is personally responsible for the plight of distant strangers. But some of us have not earned the base privilege we enjoy in this life: it is ours by fortune of inheritance and geographical luck, for the most part, and it comes at the cost of others.
Eduardo Lalo, as a review in Necessary Fiction notes, is a name familiar to very few English readers. “At the time of this review, a Google search of ‘Eduardo Lalo’ turns up very little in English—only a basic Wikipedia page. One hoping to read more about the author must brush up on one’s dusty Spanish skills.” The Cuban-born Lalo, however, began to gain more cosmopolitan acclaim with the publication of his book Simone, which won the Rómulo Gallegos International Novel Prize, an award that aims to “perpetuate and honor the work of the [titular] eminent novelist and also to stimulate the creative activity of Spanish language writers.” (The award is somewhat comparable, though much larger in scope, to the Man Booker Prize.) On the heels of the award, the book’s first English language translation, by David Frye, has recently been published by the University of Chicago Press. The plot arc of the novel is complex, and the book’s narrative fealty vacillates between the subject positions of a self-educated Chinese immigrant, a jaded novelist, and the eponymous Simone.
From Necessary Fiction, which manages to condense the core of what is at stake for Lalo:
Just when we have uncomfortably settled into the doomed love story, the book takes a significant turn. Toward the end of the novel, the narrator and a novelist friend of his interrogate a visiting Spanish writer about the literature of the peninsula, and the lower quality work—in their opinion—that many Spanish publishers publish. (There may be some continental agreement to that, as Javier Marías has stated that he had no desire to be what they call a “real Spanish writer.”) It is, at first, a strange shift. While the plot is held in abeyance, the book tries to make a larger point about the treatment of literature. In part, the point is that Puerto Rican writers have been unfairly ignored, while more maudlin and unoriginal writings from “real Spanish writers” have received outsized attention.
While the narrator obviously has significant pride in his Puerto Rico, it inevitably comes with a concomitant sense of resentment—part of the dark shadow that follows this novel sentence-by-sentence. Upon seeing the name “Colony Economy” on a carton of milk in a coffee shop, the narrator muses about how Puerto Rico’s history “overwhelms and defines” him. It is an apt lens through which to view Simone—characters who cannot quite escape the world they were born into, or the childhoods they were subjected to, a country shackled by the past and every extension of happiness undercut by sorrow. “What is left of the men and women of this country?” the narrator muses. “What remains but the coffee and the centuries, ground down and percolated, flowing through steel tubes, pouring from plastic spigots?”