Writing in the New York Times recently, art critic Holland Cotter lamented that the current billionaire-dominated market system “is shaping every aspect of art in the city; not just how artists live, but also what kind of art is made, and how art is presented in the media and in museums.” “Why,” he asks, “in one of the most ethnically diverse cities, does the art world continue to be a bastion of whiteness? Why are African-American curators and administrators, and especially directors, all but absent from our big museums? Why are there still so few black — and Latino, and Asian-American — critics and editors?”
It wasn’t always like this. During the 1930s under the New Deal, the arts were democratized, made accessible to ordinary people who lacked the means to buy paintings worth hundreds of thousands of dollars or to attend Broadway shows at over $100 a ticket. The New Deal’s support for the arts remains one of the most interesting and unusual episodes in the history of American public policy.
The federal arts programs initiated in the 1930s were intended to alleviate the economic hardships of unemployed cultural workers, to popularize art among a much wider segment of the population, and to boost public morale during a time of deep stress and pessimism, or as New Deal artist Gutzon Borglum remarked, to “coax the soul of America back to life.”
The best known of all the programs enacted during the Depression were the arts projects of the WPA (Works Progress Administration), which consisted of four distinct undertakings: a Federal Art Project, a Federal Writers’ Project, a Federal Theatre Project, and a Federal Music Project.
Paintings were given to government offices, while murals, sculptures, bas-reliefs, and mosaics were seen on the walls of schools, libraries, post offices, hospitals, courthouses, and other public buildings. Over the course of its eight years, the WPA commissioned over five hundred murals for New York City’s public hospitals alone. Among the now well-known artists supported by these programs were painters such as Thomas Hart Benton, Jackson Pollock, Willem de Kooning, Raphael and Moses Soyer, and the sculptor Louise Nevelson.
The print workshops set up by the WPA prepared the ground for the flowering of the graphic arts in the United States, which until that time had been limited in both media and expression. Moreover, since prints were portable and cheap, they became a vehicle for broadening the public’s understanding and appreciation of the creative arts.
Some 100 community art centers, which included galleries, classrooms, and community workshops, were established in twenty-two states, particularly where opportunities to experience and make art were scarce. Through this effort, individuals who might never have seen a large painted scene or a piece of sculpture were given the opportunity not only to experience a finished work of art but to participate in the creative process. In the New York City area alone, an estimated 50,000 people participated in classes under Federal Art Project auspices each week. According to Smithsonian author David A. Taylor, “the effect was electric. It jump-started people beginning careers in art amid the devastation.”
The Federal Writers’ Project provided employment and experience for editors, art critics, researchers, and historians, a number of whom later became famous for their novels and poetry, such as Richard Wright, Ralph Ellison, Studs Terkel, and Saul Bellow. They were put to work writing state and regional guidebooks that were to portray the social, economic, industrial, and historical background of the country. These guidebooks represented a vast treasury of Americana from the ground up, including facts and folklore, history and legend, and histories of the famous, the infamous, and the excluded. There were also seventeen volumes of oral histories of the last people who had lived under slavery. An additional set of folklore and oral histories of 10,000 people from all regions, occupations, and ethnic groups was collected and is now held in the American Folklife Center of the Library of Congress.
The Federal Theatre Project was the first and only attempt to create a national theatre in the United States, producing all genres of theater, including classical plays, circuses, puppet shows, musical comedies, vaudeville, dance performances, children’s theatre, and experimental plays. They were performed wherever people could gather—not only in theaters, but in parks, hospitals, convents, churches, schools, armories, circus tents, universities, and prisons. Touring companies brought theater to parts of the country where drama had been non-existent, and provided training and experience for thousands of aspiring actors, directors, stagehands, and playwrights, among them Orson Welles, Eugene O’Neill, and John Houseman.
The program emphasized preserving and promoting minority cultural forms. At a time of strict racial segregation with arts funding non-existent in African American communities, black theatre companies were established in many cities. Foreign language companies performed works in French, German, Italian, Spanish, and Yiddish.
The Federal Theatre Project also brought controversial issues to the foreground, making it one of the most embattled of all the New Deal programs. Its “Living Newspaper” section produced plays about labor disputes, economic inequality, racism, and similar issues, which infuriated a growing chorus of conservative critics who succeeded in eliminating the program in 1939.
The Federal Music Project employed 15,000 instrumentalists, composers, vocalists, and teachers, provided financial assistance for existing orchestras, and created new ones in places that had never had an orchestra. Many other musical forms, including opera, band concerts, choral music, jazz, and pop, were also performed. Most of the concerts were either free to the public or offered at very low cost, and free music classes were open to people of all ages and abilities.
In addition to the arts programs, the Farm Security Administration’s photography program oversaw the production of more than 80,000 photographs as part of the effort to make the nation aware of the plight of displaced rural populations. These images, produced by photographers such as Walker Evans, Gordon Parks, and Dorothea Lange, helped humanize the verbal and statistical reports of the terrible poverty and turmoil in the agricultural sector of the economy and brought documentary photography into the cultural pantheon of the nation.
Between 1933 and 1942, ten thousand artists produced some 100,000 easel paintings, 18,000 sculptures, over 13,000 prints, 4,000 murals, over 1.6 million posters, and thousands of photographs. Over a thousand towns and cities came to boast federal buildings embellished with New Deal murals and sculpture. Some 6,686 writers produced more than a thousand books and pamphlets, and the Federal Theatre Project staged thousands of plays. More important than the quantity of the output, however, was the way in which these programs shaped Americans’ understanding of who they were as a people and of their country’s possibilities. Before the New Deal, the notion that government should support the arts was unheard of, but thanks to the New Deal, art had been democratized and, for a time, de-commodified, made accessible to the great majority of the American people.
Perhaps Roosevelt himself best summed up the significance of the New Deal arts programs:
A few generations ago, the people of this country were often taught . . . to believe that art was something foreign to America and to themselves . . . But . . . within the last few years . . . they have discovered that they have a part. . . . They have seen in their own towns, in their own villages, in schoolhouses, in post offices, in the back rooms of shops and stores, pictures painted by their sons, their neighbors—people they have known and lived beside and talked to. . . some of it good, some of it not so good, but all of it native, human, eager, and alive–all of it painted by their own kind in their own country, and painted about things that they know and look at often and have touched and loved. The people of this country know now . . . that art is not something just to be owned but something to be made: that it is the act of making and not the act of owning that is art. And knowing this they know also that art is not a treasure in the past or an importation from another land, but part of the present life of all the living and creating peoples—all who make and build; and, most of all, the young and vigorous peoples who have made and built our present wide country.
New Deal support for the arts had coaxed the soul of America back to life, but we are in danger of losing it again. Under the obsession with deficits, arts programs in the public schools are being cut, federal funding for the arts has dropped dramatically, and even private funding has been reduced. Without art, we are ill-equipped as a people with the collective imagination that is needed if we are to resolve the enormous challenges that confront us in the twenty-first century. Who or what will there be to coax this generation back to life?
Sheila D. Collins is Professor of Political Science Emerita, William Paterson University and editor/author with Gertrude Schaffner Goldberg of When Government Helped: Learning from the Successes and Failures of the New Deal. She serves on the speakers’ bureau of the National New Deal Preservation Association, the Research Board of the Living New Deal and the board of the National Jobs for All Coalition, is a member of the Global Ecological Integrity Group and co-chairs two seminars at Columbia University.
Tensions in the South and East China Seas are high and likely to keep on rising for some time, driven by two powerful factors: power (in the form of sovereignty over and influence in the region) and money (from the rich mineral deposits that lurk beneath the disputed waters). Incidents, such as the outcry over China’s recently announced Air Defence Identification Zone, have come thick and fast over the last few years. One country’s historic right is another country’s attempt at annexation. Every new episode in turn prompts a wave of scholarly soul-searching as to the lawfulness of actions taken by the different countries and the ways that international law can, or cannot, help resolve the conflicts.
In order to help keep track of debate in blogs, journals, and newspapers on the international law aspects of the various disputes, we have created a debate map which indexes who has said what and when. It follows on from our previous maps on the use of force against Syria and the prosecution of heads of state and other high-profile individuals at the International Criminal Court. Blog posts in particular have a tendency to disappear off the page once they are a few days old, which often means that their contribution to the debate is lost. The debate maps reflect a belief that these transient pieces of analysis and commentary deserve to be remembered, both as a reflection of the zeitgeist and as important scholarly contributions in their own right.
To help readers make up their own minds about the disputes, the map also includes links to primary documents, such as the official positions of the countries involved and their submissions to the UN Commission on the Limits of the Continental Shelf.
One striking aspect of the map is how old some of the articles are, with some dating from the early 1970s. Controversies that seem new actually go back some 40 years. In conflicts such as these, which cannot be understood without their history and where grievances often go back centuries, this awareness is key.
Another surprising feature is the uncertainty surrounding the legal basis of China’s claim to sovereignty over most of the South China Sea—its famous nine-dash line. Semi-official or unofficial statements by Chinese civil servants, or in one case by the Chinese Judge at the International Court of Justice, are seized on as indications of what China’s justifications are for its expansive maritime claims. A clearer official position, and more input from Chinese scholars, would significantly improve the debate.
Ultimately, the overlapping maritime claims and sovereignty disputes in the South and East China Seas are unlikely to be solved any time soon, and will keep commentators busy for years to come. We will keep the map up to date to facilitate and archive the debate. Your help is indispensable: please get in touch if you have any suggestions for improvements or for new blog posts and articles we can link to.
Oxford Public International Law is a comprehensive, single location providing integrated access across all of Oxford’s international law services. Oxford Reports on International Law, the Max Planck Encyclopedia of Public International Law, and Oxford Scholarly Authorities on International Law are ground-breaking online resources working to speed up research and provide easy access to authoritative content, essential for anyone working in international law.
As far as we know, the first African American woman to earn a PhD in chemistry was Dr. Marie Daly, in 1947. I am still searching for an earlier one.
Women chemists, especially minority women chemists, have always been the underdogs in science and chemistry. African American women were not allowed to pursue a PhD degree in chemistry until well into the twentieth century, while white women were pursuing that degree in the late nineteenth and early twentieth centuries.
Racial prejudice was a major factor. Many African American men were denied access to this degree in the United States. The list of those who were able to receive a PhD in chemistry is short. The Knox brothers were able to receive PhDs in chemistry from MIT and Harvard in the 1930s. Some men had to go abroad to get a degree; Percy Julian obtained his from the University of Vienna in Austria.
In 1975, the American Association for the Advancement of Science sponsored a meeting of minority women scientists to explore what it was like to be both a woman and a minority in science. The meeting resulted in a report entitled The Double Bind: The Price of Being a Minority Woman in Science. Most of the women experienced strong negative influences associated with race or ethnicity as children and teenagers but felt more strongly the handicaps for women as they moved into post-college training in graduate schools or later in their careers. When the women entered their career stage, they encountered both racism and sexism.
STS-47 Mission Specialist Mae Jemison in the center aisle of the Spacelab Japan (SLJ) science module aboard the Earth-orbiting Endeavour, Orbiter Vehicle (OV) 105. NASA. Public domain via Wikimedia Commons.
This is still true today in some respects, but it is often unconscious. For example, the organizers of an International Conference for Quantum Chemistry recently posted a list of the speakers. They were all men (the race of the speakers is not known). Three women who are pillars in the field protested and started a petition to add women to the speakers list. The organizers retracted the speaker list.
In 2009 the National Science Foundation sponsored a Women of Color conference. When I attended the meeting and listened to the speakers, it sounded as if not much had changed for women in science. There is still racism and sexism. Even Asian-American women, who do not constitute a minority within the field, were experiencing the same problems.
The 2010 Bayer Facts of Science Education XIV Survey polled 1,226 female and minority chemists and chemical engineers about their childhood, academic, and workplace experiences. The report stated that girls are not encouraged to study STEM (science, technology, engineering, and mathematics) fields early in school, that 60% of colleges and universities discourage women in science, and that 44% of professors discourage female students from pursuing STEM degrees.
The top three reasons for the underrepresentation are:
Lack of quality education in math and science in poor school districts
Stereotypes that STEM isn’t for girls
Financial problems related to the cost of college education
In spite of all the negative information in these reports, women are pursuing STEM careers. Women now dominate the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE), an organization that men dominated years ago. The current vice president of the organization is a woman chemical engineer who is striving to make the organization better. Many of the NOBCChE female members went to Historically Black Colleges and Universities (HBCUs) for their undergraduate degrees before getting into major universities to obtain their PhDs. The HBCUs are a savior for African American students because the professors and administration strive to help them succeed in college.
I am amazed at all that these African American women scientists have done in spite of racism and sexism — succeeding and thriving in industry, working as professors and department chairs in major research universities, and providing role models to young women and men who are contemplating a STEM career.
Jeannette Elizabeth Brown is the author of African American Women Chemists. She is a former Faculty Associate at the New Jersey Institute of Technology. She is the 2004 Société de Chimie Industrielle (American Section) Fellow of the Chemical Heritage Foundation, and consistently lectures on African American women in chemistry.
For 20 years, 14 of those in England, I’ve been giving lectures about the social power afforded to dictionaries, exhorting my students to discard the belief that dictionaries are infallible authorities. The students laugh at my stories about nuns who told me that ain’t couldn’t be a word because it wasn’t in the (school) dictionary and about people who talk about the Dictionary in the same way that they talk about the Bible. But after a while I realized that nearly all the examples in the lecture were, like me, American. At first, I could use the excuse that I’d not been in the UK long enough to encounter good examples of dictionary jingoism. But British examples did not present themselves over the next decade, while American ones kept streaming in. Rather than laughing with recognition, were my students simply laughing with amusement at my ridiculous teachers? Is the notion of dictionary-as-Bible less compelling in a culture where only about 17% of the population consider religion to be important to their lives? (Compare the United States, where 3 in 10 people believe that the Bible provides literal truth.) I’ve started to wonder: how different are British and American attitudes toward dictionaries, and to what extent can those differences be attributed to the two nations’ relationships with the written word?
Our constitutions are a case in point. The United States Constitution is a written document that is extremely difficult to change; the most recent amendment took 202 years to ratify. We didn’t inherit this from the British, whose constitution is uncodified — it’s an aggregation of acts, treaties, and tradition. If you want to freak an American out, tell them that you live in a country where ‘[n]o Act of Parliament can be unconstitutional, for the law of the land knows not the word or the idea’. Americans are generally satisfied that their constitution — which is just about seven times longer than this blog post — is as relevant today as it was when first drafted and last amended. We like it so much that a holiday to celebrate it was instituted in 2004.
Dictionaries and the law
But with such importance placed on the written word of law comes the problem of how to interpret those words. And for a culture where the best word is the written word, a written authority on how to interpret words is sought. Between 2000 and 2010, 295 dictionary definitions were cited in 225 US Supreme Court opinions. In contrast, I could find only four UK Supreme Court decisions between 2009 and now that mention dictionaries. American judicial reliance on dictionaries leaves lexicographers and law scholars uneasy; most dictionaries aim to describe common usage, rather than prescribe the best interpretation for a word. Furthermore, dictionaries differ; something as slight as the presence or absence of a the or a usually might have a great impact on a literalist’s interpretation of a law. And yet US Supreme Court dictionary citation has risen roughly tenfold since the 1960s.
No particular dictionary is America’s Bible—but that doesn’t stop the worship of dictionaries, just as the existence of many Bible translations hasn’t stopped people citing scripture in English. The name Webster is not trademarked, and so several publishers use it on their dictionary titles because of its traditional authority. When asked last summer how a single man, Noah Webster, could have such a profound effect on American English, I missed the chance to say: it wasn’t the man; it was the books — the written word. His “Blue-Backed Speller”, a textbook used in American schools for over 100 years, has been called ‘a secular catechism to the nation-state’. At a time when much was unsure, Webster provided standards (not all of which, it must be said, were accepted) for the new English of a new nation.
American dictionaries, regardless of publisher, have continued in that vein. British lexicography from Johnson’s dictionary to the Oxford English Dictionary (OED) has excelled in recording literary language from a historical viewpoint. In more recent decades British lexicography has taken a more international perspective with serious innovations and industry in dictionaries for learners. American lexicographical innovation, in contrast, has largely been in making dictionaries more user-friendly for the average native speaker.
The Oxford English Dictionary, courtesy of Oxford Dictionaries. Do not use without permission.
Local attitudes: marketing dictionaries
By and large, lexicographers on either side of the Atlantic are lovely people who want to describe the language in a way that’s useful to their readers. But a look at the way dictionaries are marketed belies their local histories, the local attitudes toward dictionaries, and assumptions about who is using them. One big general-purpose British dictionary’s cover tells us it is ‘The Language Lover’s Dictionary’. Another is ‘The unrivalled dictionary for word lovers’.
Now compare some hefty American dictionaries, whose covers advertise ‘expert guidance on correct usage’ and ‘The Clearest Advice on Avoiding Offensive Language; The Best Guidance on Grammar and Usage’. One has a badge telling us it is ‘The Official Dictionary of the ASSOCIATED PRESS’. Not one of the British dictionaries comes close to such claims of authority. (The closest is the Oxford tagline ‘The world’s most trusted dictionaries’, which doesn’t make claims about what the dictionary does, but about how it is received.) None of the American dictionary marketers talk about loving words. They think you’re unsure about language and want some help. There may be a story to tell here about social class and dictionaries in the two countries, with the American publishers marketing to the aspirational, and the British ones to the arrived. And maybe it’s aspirationalism and the attendant insecurity that makes America the land of the codified rule, the codified meaning. By putting rules and meanings onto paper, we make them available to all. As an American, I kind of like that. As a lexicographer, it worries me that dictionary users don’t always recognize that English is just too big and messy for a dictionary to pin down.
Lynne Murphy, Reader in Linguistics at the University of Sussex, researches word meaning and use, with special emphasis on antonyms. She blogs at Separated by a Common Language and is on Twitter at @lynneguist.
No, the image to the left is not a newly discovered picture of Jane Austen. The image was taken from my copy of The Complete Letter Writer, published in 1840, well after Jane Austen’s death in 1817. But letter writing manuals were popular throughout Jane Austen’s lifetime, and the text of my copy is very similar to that of much earlier editions of the book, published from the mid-1750s on. It is possible then that Jane Austen might have had access to one. Letter writing manuals contained “familiar letters on the most common occasions in life”, and showed examples of what a letter might look like to people who needed to learn the art of letter writing. The Complete Letter Writer also contains an English grammar, with rules of spelling, a list of punctuation marks and an account of the eight parts of speech. If Jane Austen had possessed a copy, she might have had access to this feature as well.
But I doubt if she did. Her father owned an extensive library, and Austen was an avid reader. But in genteel families such as hers letter writing skills were usually handed down within the family. “I have now attained the true art of letter-writing, which we are always told, is to express on paper what one would say to the same person by word of mouth,” Jane Austen wrote to her sister Cassandra on 3 January 1801, adding, “I have been talking to you almost as fast as I could the whole of this letter.” But I don’t think George Austen’s library contained any English grammars either. He did teach boys at home, to prepare them for further education, but he taught them Latin, not English.
So Jane Austen didn’t learn to write from a book; she learnt to write just by practicing, from a very early age on. Her Juvenilia, a fascinating collection of stories and tales she wrote from around the age of twelve onward, have survived, in her own hand, as evidence of how she developed into an author. Her letters, too, illustrate this. She is believed to have written some 3,000 letters, only about 160 of which have survived, most of them addressed to Cassandra. The first letter that has come down to us reads a little awkwardly: it has no opening formula, contains flat adverbs – “We were so terrible good as to take James in our carriage”, which she would later employ to characterize her so-called “vulgar” characters – and even has an unusual conclusion: “yours ever J.A.”. Could this have been her first letter?
Cassandra wasn’t the only one she corresponded with. There are letters to her brothers, to friends, to her nieces and nephews, as well as to her publishers and some of her literary admirers, with whom she slowly developed a slightly more intimate relationship. There is even a letter to Charles Haden, the handsome apothecary with whom she is believed to have been in love. Her unusual ending, “Good bye”, suggests a kind of flirting on paper. The language of the letters shows how she varied her style depending on who she was writing to. She would use the word fun, considered a “low” word at the time, only to the younger generation of Austens. Jane Austen loved linguistic jokes, as shown by the reverse letter to her niece Cassandra Esten: “Ym raed Yssac, I hsiw uoy a yppah wen raey”, and she recorded her little nephew George’s inability to pronounce his own name: “I flatter myself that itty Dordy will not forget me at least under a week”.
It’s easy to see how the letters are a linguistic goldmine. They show us how she loved to talk to relatives and friends and how much she missed her sister when they were apart. They show us how she, like most people in those days, depended on the post for news about friends and family, how a new day wasn’t complete without the arrival of a letter. At a linguistic level, the letters show us a careful speller, even if she had different spelling preferences from what was general practice at the time, and someone who was able to adapt her language, word use and grammar alike, to the addressee.
All her writing, letters as well as her fiction, was done at a writing desk, just like the one on the table in the image from The Complete Letter Writer, and just like my own. A present from her father on her nineteenth birthday, the desk, along with the letters written upon it, is on display as one of the “Treasures of the British Library”. The portable desk traveled with her wherever she went. “It was discovered that my writing and dressing boxes had been by accident put into a chaise which was just packing off as we came in,” she wrote on 24 October 1798. A near disaster, for “in my writing-box was all my worldly wealth, 7l”.
Image credits: (1) Image of Jane Austen from The Complete Letter Writer, public domain via Ingrid Tieken-Boon van Ostade (2) Photo of writing desk, Ingrid Tieken-Boon van Ostade.
Finishing a book is a burden lifted, accompanied by a sense of loss. At least it is for some. Academic authors, stalked by the REF in Britain and assorted performance metrics in the United States, have little time these days for either emotion. For emeriti, however, there is still a moment for reflecting upon the newly completed work in context—what were its origins, what might it contribute, how does it fit in? The answer to this last query for an historian of colonial America with a collateral interest in Britain of the same period is “oddly.” Somehow the renascence of interest in the British Empire has managed to coincide with a decline in commitment in the American academy to the history of Great Britain itself. The paradox is more apparent than real, but dissolving it simply uncovers further paradoxes nested within each other like so many homunculi.
Begin with the obvious. If Britain is no longer the jumping off point for American history, then at least its Empire retains a residual interest thanks to a supra-national framework, (mostly inadvertent) multiculturalism, and numerous instances of (white) men behaving badly. The Imperial tail can wag the metropolitan dog. But why this loss of centrality in the first place? The answer is also supposed to be obvious. Dramatic changes, actual and projected, in the racial composition of late twentieth and early twenty-first century America require that greater attention be paid to the pasts of non-European cultures. Members of such cultures have in fact been in North America all along, particularly the indigenous populations of North America at the time of European colonization and the African populations transported there to do the heavy work of “settlement.” Both are underrepresented in the traditional narratives. There are glaring imbalances to be redressed and old debts to be settled retroactively. More Africa, therefore, more “indigeneity,” less “East Coast history,” fewer things British or European generally.
The British Colonies in North America 1763 to 1776
The all but official explanation has its merits, but as it now stands it has no good account of how exactly the respective changes in public consciousness and academic specialization are correlated. Mexico and people of Mexican origin, for example, certainly enjoy a heightened salience in the United States, but it rarely gets beyond what in the nineteenth century would have been called The Mexican Question (illegal immigration, drug wars, bilingualism). Far more people in America can identify David Cameron or Tony Blair than Enrique Peña Nieto or even Vicente Fox. As for the heroic period of modern Mexican history, its Revolution, it was far better known in the youth of the author of this blog (born 1942), when it was still within living memory, than it is at present. That conception was partial and romantic, just as the popular notion of the American Revolution was and is, but at least there was then a misconception to correct and an existing interest to build upon.
One could make very similar points about the lack of any great efflorescence in the study of the Indian Subcontinent or the stagnation of interest in Southeast Asia after the end of the Vietnam War despite the increasing visibility of individuals from both regions in contemporary America. Perhaps the greatest incongruity of all, however, is the state of historiography for the period when British and American history come closest to overlapping. In the public mind Gloriana still reigns: the exhibitions, fixed and traveling, on the four hundredth anniversary of the death of Elizabeth I drew large audiences, and Henry VIII (unlike Richard III or Macbeth) is one play of Shakespeare’s that will not be staged with a contemporary American setting. The colonies of early modern Britain are another matter. In recent years whole issues of the leading journal in the field of early American history have appeared without any articles that focus on the British mainland colonies, and one number on a transnational theme carries no article on either the mainland or a British colony other than Canada in the nineteenth century. Although no one cares to admit it, there is a growing cacophony in early American historiography over what is comprehended by early and American and, for that matter, history. The present dispensation (or lack thereof) in such areas as American Indian history and the history of slavery has seen real and on more than one occasion remarkable gains. These have come, however, at a cost. Early Americanists no longer have a coherent sense of what they should be talking about or—a matter of equal or greater significance–whom they should be addressing.
Historians need not be the purveyors of usable pasts to customers preoccupied with very narrow slices of the present. But for reasons of principle and prudence alike they are in no position to entirely ignore the predilections and preconceptions of educated publics who are not quite so educated as they would like them to be. In the world’s most perfect university an increase in interest in, say, Latin America would not have to be accompanied by a decrease in the study of European countries except in so far as they once possessed an India or a Haiti. In the current reality of rationed resources this ideal has to be tempered with a dose of “either/or,” considered compromises in which some portion of the past the general public holds dear gives way to what is not so well explored as it needs to be. Instead, there seems to be an implicit, unexamined indifference to an existing public that knows something, is eager to know more, and, therefore, can learn to know better. Should this situation continue, outside an ever more introverted Academy the variegated publics of the future may well have no past at all.
Image credit: The British Colonies 1763 to 1776. By William R. Shepherd, 1923. Public domain via Wikimedia Commons.
In the 1860s, the introduction of its first named series of education books, the ‘Clarendon Press Series’ (CPS), encouraged Oxford University Press to standardize its payments to authors. Most of them were offered a very generous deal: 50 or 60% of net profits. These payments were made annually and were recorded in the minutes of the Press’s newly established Finance Committee. The list of payments lengthened every year, as new titles were published and very few were ever allowed to go out of print. Some authors did very well from their association with the Press, but most earned very modest sums. Many of the books in the Clarendon Press Series yielded almost nothing to publisher or author; once we exclude the handful of exceptional cases, typical payments were in the range of £5 to £15 a year.
W. Aldis Wright.
The outstanding financial successes of the Clarendon Press Series were the editions of separate plays of Shakespeare intended for school pupils and (increasingly) university students. The first to be published was Macbeth in 1869, but it was the next to appear – Hamlet in 1873 – which became something like a bestseller. In its first year, Hamlet sold 3,380 copies; 20 years and five editions later, 73,140 copies had been accounted for to the editor, W. Aldis Wright (a fellow of Trinity College, Cambridge), who received over the years some £1,400 for this play alone. The whole CPS Shakespeare venture brought Wright an income of about £1,000 a year throughout much of the 1880s. To put this in context, the total of all royalties paid to authors in the late 1880s and early 1890s was about £5000 a year; in some years Wright was taking about 20% of that for his editions of Shakespeare alone.
A broader view of the Press’s payments to its authors on the Learned side can be gained by looking at three sample years: 1875, 1885, and 1895. In November 1875, the Finance Committee minutes listed 99 titles for which authors were being paid annual incomes; the total sum paid out was £2,216. In November 1885, near the peak of publishing activity in the Clarendon Press Series, the Finance Committee minutes listed 238 titles generating revenue for their authors; they earned £4,740 between them. In November 1895, there were 240 titles leading to payments of £5,076. For most authors, their individual incomes were modest; in 1875, the median income was £7 16s, in 1885 it was £7 18s. However, in 1875 four authors and editors earned more than £100: Liddell and Scott received £372 each (for their Greek Lexicon), Aldis Wright received £220 (for various editions of Shakespeare’s plays), and Bishop Charles Wordsworth £152 (for his Greek Grammar). In 1885, eleven were earning more than £100, including Aldis Wright earning £934, Liddell and Scott each earning £350, Skeat earning £270 (for philological works), and Benjamin Jowett earning £261 (for editions of Plato’s works). In 1895, there were ten, including Aldis Wright with £578, J. B. Allen with £542 (for works on Latin grammar), and Liddell and Scott with £389 each.
These authorial incomes should be set against average academic incomes in Oxford. In the later nineteenth century, although there was much variation, the average annual income for a college fellow would be in the order of £600, usually made up of the fellowship dividend plus the tutorial stipend. In the wake of the Selborne Commission, in the early 1880s a reader would be paid £500, a sum that might well be augmented by a fellowship dividend; professorships attracted £900 per annum. It is clear that, although most authors’ incomes were extremely small, the most successful authors, both inside and outside the Clarendon Press Series, were at their height earning a significant addition to their salaries through payments from the Press.
The incomes of the most successful were far in excess of what they would have earned had they sold their copyrights outright. On the other hand, those around the median probably earned less than a lump sum payment would have brought in or, at least, they had to wait longer for it. As a minor compensation to those who were paid small annual sums during this period – though it is unlikely that they would have known it – the purchasing power of the pound was rising between the mid-1860s and the mid-1890s, so their later small payments would have bought them more than their earlier small payments. The pound in a person’s pocket was actually worth more at the end of the nineteenth century than it had been at the beginning.
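To make the real-versus-nominal point concrete, here is a minimal sketch in Python. The price-index values are purely illustrative assumptions (no index is quoted above); only the roughly £7 16s median payment, 7.8 in decimal pounds, comes from the figures given earlier.

def real_value(nominal_pounds, price_index, base_index=100.0):
    """Deflate a nominal sum to base-year purchasing power."""
    return nominal_pounds * base_index / price_index

# A median-ish annual payment of £7 16s = 7.8 in decimal pounds (16s = 0.8).
payment = 7.8
print(real_value(payment, price_index=100.0))  # mid-1860s base year: 7.80
print(real_value(payment, price_index=85.0))   # assumed lower mid-1890s price level: ~9.18

On these assumed figures, the same nominal £7 16s would buy roughly a sixth more in the mid-1890s than in the mid-1860s, which is all the passage above claims.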
Image credit: William Aldis Wright (1831-1914), editor, Shakespeare Plays, the Clarendon Press Series (Walter William Ouless, 1887). (The Master and Fellows of Trinity College Cambridge) OUP Archives. Do not reproduce without permission.
Way back in 2007, when Twittering truly was for the birds, a far-sighted editor at the Oxford Dictionary of National Biography piped up: maybe people would like to listen as well as read? So was devised the Oxford DNB’s biography podcast, which this week released its 200th episode—the waggerly tale of Charles Cruft (1852-1938), founder of the eponymous dog show held annually in early March.
Over the last seven years we’ve offered two episodes of the podcast per month. Each lasts between 10 and 25 minutes and follows a set format: the reading aloud of a single biography of a historical figure, taken from the Oxford DNB and chosen by Dictionary editors. The structure of an ODNB biography is ideal for the podcast format: dictionary entries are concise, rounded accounts of a life (personal as well as public), told chronologically and written by specialist authors. Notable writers whose work appears in the podcast list include Will Self on J.G. Ballard, Bernard Crick on George Orwell, David Lodge on Malcolm Bradbury, and Anthony Thwaite on Philip Larkin.
Since 2007 many episodes have been commissioned to mark noteworthy anniversaries. For example, Captain Edward Smith and the bandleader Wallace Hartley on the centenary, in 2012, of the sinking of the Titanic; or Ludwig Guttmann, creator of the Paralympics, for the London Games later that year. Others mark notable birthdays (the centenary of the birth of Alan Turing in June 2012, for instance); or dates in the British history calendar (the extraordinary story of Guy Fawkes for 5 November and Fred Perry for Wimbledon fortnight); or one-off events such as the enthronement in March 2013 of Justin Welby, the 105th archbishop of Canterbury, with the story of the first incumbent, St Augustine.
A great many of the 200 episodes—all of which are available free in the archive—chart the lives of well-known people: Anita Roddick, Roald Dahl, Scott of the Antarctic, Dr Crippen, Wallis Simpson, and so on. There are many more familiar names we’d love to include. However, the restrictions of the podcast format (a 25-minute recording allows an upper limit of c.3000 words for a script) means that this isn’t, unfortunately, the place for a Dickens or a Darwin whose ODNB entries run to more than 20,000 words. Even so, it’s possible to touch on major historical figures through the lives of those with whom they spent time: the story of Nora Joyce sheds light on James; that of Alice Liddell (of ‘Wonderland’ fame) on Lewis Carroll.
Photographic study “Pomona” (Alice Liddell as a young woman) by Julia Margaret Cameron, 1872. Public domain via Wikimedia Commons.
A few episodes, among them Orwell and Diana, princess of Wales, have been reduced from the original Oxford DNB article for reading aloud. Likewise, a handful of episodes take the form of dual lives comprising two Dictionary entries fused together: 15 minutes with the motor-car designer Charles Rolls just wouldn’t seem right without the accompanying story of Henry Royce; and so too the combined talents of Fortnum & Mason, Mills & Boon, or Eric & Ernie. Aside from these edits, what’s read aloud is pretty close to what you’ll find in the Oxford DNB for that individual. People with complex lives tend not to receive the podcast treatment: complicated, multi-layered stories are hard to untangle in 15-20 minutes. More suitable are recognizable people who dedicated themselves to a particular purpose (Alexander Fleming and penicillin, for instance) or lesser-known individuals closely associated with a familiar event or artefact, such as Charles Lucas, first recipient of the Victoria Cross.
Over the course of a year, we hope to put out a mix of episodes covering a range of time periods, topics, and tones. Our earliest life is Boudicca (d.60/61 AD), the most recent (in terms of date of death) is Beryl Bainbridge (1932-2010). In between there’s plenty for the medievalist as well as the modernist—the life of Emperor Hadrian is much more than the story of wall-building, while that of the hermit St Godric is an ear-catching account of the privations of an 11th-century anchorite. Some of the chosen stories make for difficult listening. Try, for instance, Margaret Roper or Annie Darwin, daughters of Thomas More and Charles Darwin respectively. Others, like the scandalous medieval cleric, Bogo de Clare, or the raffish socialite Neil ‘Bunny’ Roger, are pure pleasure.
Entertainment is important, of course. But the podcast also provides an alternative route to historical biography for school teachers and pupils—many of whom, it’s fair to say, would not otherwise turn to a work of academic reference like the Oxford DNB. Episodes on Wilfred Owen, the abolitionist Olaudah Equiano, or the suffragette Emily Davison relate to aspects of the UK’s national curriculum. Hopefully, the series can also spring a few surprises on older listeners, be they the Hanoverian female soldier Hannah Snell; the doyen of pigeon racing, Albert Osman; or Charles Isham, bringer of garden gnomes to England.
About 650,000 episodes are downloaded annually from the ODNB podcast. Three things may account for this. First, there are our readers, Paul and Lynne—professional voice actors who have brought to life the words and worlds of writers, politicians, criminals, inventors, eccentrics, and—with Elizabeth Parsons—a would-be ghost. Then there’s the London studio where each episode is recorded, edited, and polished to a high standard.
Finally, and most importantly, there’s our common love of human stories, and of other people’s business—as testified by popular BBC radio series, such as “Great Lives”, “Last Word”, or the “New Elizabethans”. The Oxford DNB biography podcast makes a modest contribution to our fascination with real lives, albeit one that spans nearly 2000 years of British history and offers more than 50 hours listening time. That you can—while cooking dinner or walking the dog—be in the company of Mrs Beeton or, now, Charles Cruft seems rather wonderful.
Philip Carter is Publication Editor of the Oxford Dictionary of National Biography.
Oxford Dictionary of National Biography is a collection of 59,003 life stories of noteworthy Britons, from the Romans to the 21st century. The Oxford DNB online is freely available via public libraries across the UK, and many libraries worldwide. Libraries offer ‘remote access’ allowing members to gain access free, from home (or any other computer), 24 hours a day. You can also sample the ODNB with its changing selection of free content: in addition to the podcast, a topical Life of the Day, and historical people in the news via Twitter @odnb. A new e-brochure offers more on the Oxford DNB podcast, along with selected content. All 200 episodes are available as free downloads in the Archive. New episodes in the podcast are available on alternate Wednesdays as ‘Oxford Biographies’ via iTunes.
Sunday, 9 February 2014 marked the 50th anniversary of the American television broadcast of the Beatles on the Ed Sullivan Show. For many writers on pop music, the appearance on the Sullivan show not only marked the debut of the Beatles in the United States, but also launched their career as international pop music superstars. The mass exposure to millions of television viewers rocketed the Fab Four to national prominence in the United States, and created a chain reaction for stardom in the entire world.
The Beatles, Stockholm, 1963
While the charisma and quality of the Beatles’ music drew great popularity in 1964, the group’s success was assisted by the entrepreneurial skills of American television, notably the expertise of Ed Sullivan. However, several other television broadcasts predated the Sullivan show appearance and laid the groundwork for the Beatles’ stardom in the United States. In particular, two news stories about the Beatles aired in November 1963, nearly three months before the Sullivan appearance. These stories, plus film footage of the group aired by another entrepreneur, NBC’s Jack Paar, paved the way for the Beatles’ American breakthrough.
The Ed Sullivan Show
Ed Sullivan began his career as a journalist in the 1920s and worked his way into the position of theater columnist for the New York Daily News when Walter Winchell left the paper in the early 1930s. Sullivan was also a host for vaudeville theaters, serving as master of ceremonies for a number of shows during World War II. He broke into television as host of telecasts of New York’s Harvest Moon Ball on CBS, and was asked to host a weekly variety show called Toast of the Town in 1948. The show would be renamed The Ed Sullivan Show in 1955.
With his journalistic experience, Sullivan was able to use his contacts to attract a wide range of celebrities to the show. He attracted comedians such as Dean Martin and Jerry Lewis, Broadway stars like Julie Andrews, jazz greats like Dizzy Gillespie and Ella Fitzgerald, and even opera singers like Maria Callas and Robert Merrill. However, Sullivan may be best known for bringing rock‘n’roll to the small screen. He had Elvis Presley on the show on 6 January 1957, and rockers such as Buddy Holly, Fats Domino, and Bo Diddley thereafter.
Sullivan’s embrace of (or at least tolerance for) rock music paved the way for the Beatles. Sullivan reportedly heard (or heard of) the Beatles during a trip to London and decided to put them on his show. He offered the band $10,000 to appear, a figure that, adjusted for inflation, would be a somewhat modest $75,000 in today’s dollars.
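For the curious, that inflation adjustment can be reproduced with a simple price-index ratio. This is a sketch only: the CPI values below are approximate assumptions of mine, not figures from the original article.

CPI_1964 = 31.0    # assumed approximate US consumer price index, 1964
CPI_2014 = 236.7   # assumed approximate US consumer price index, 2014

def adjust_for_inflation(amount, cpi_then, cpi_now):
    """Scale a historical dollar amount by the ratio of price indices."""
    return amount * cpi_now / cpi_then

print(adjust_for_inflation(10_000, CPI_1964, CPI_2014))  # ~76,000, close to the quoted $75,000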
As the show opened on that historic night in 1964, Sullivan reported that Elvis Presley and his manager, Colonel Tom Parker, had sent a telegram to the Beatles wishing them luck. In his introduction, Sullivan also used the increased viewership to plug some of his other acts from previous shows, notably Topo Gigio (the Italian/Spanish mouse puppet created by Maria Perego), Van Heflin, Ella Fitzgerald, and Sammy Davis, Jr. But the anticipation for the Beatles was palpable, and he segued into a commercial quickly, promising the Beatles after the break.
The appearance by the Beatles almost didn’t happen. George Harrison reportedly had a sore throat the week before but had recovered by the broadcast. So the Beatles went live with their full line-up, performing five songs that night: “All My Loving,” “Till There Was You,” “She Loves You,” “I Saw Her Standing There,” and “I Want To Hold Your Hand.”
While the Ed Sullivan appearance marked the Beatles’ first live US TV appearance, the groundwork for introducing the band to the United States had already been laid a few months earlier. NBC News did a four-minute story on the Beatles that was broadcast on The Huntley-Brinkley Report on 16 November 1963, almost three months before the Sullivan show. The feature was narrated by reporter Edwin Newman, who would later anchor NBC News broadcasts.
A second story was taped by Alexander Kendrick, CBS’s London bureau chief. It showed footage of the Beatles performing in England and ended with Kendrick ruminating on the social significance of the group, representing England’s youth, or at least England’s youth as they “wanted to be.”
The Jack Paar Program
Also predating the Sullivan show, the first prime-time film footage of the Beatles actually aired on 3 January 1964. The person responsible was another entrepreneur—NBC’s Jack Paar. Like Ed Sullivan, Paar was not a TV celebrity “natural.” After World War II, he appeared in a few low-budget films and made his way to television as a game show host. He was chosen as the regular replacement for Steve Allen as the host of NBC’s Tonight Show in 1957. Paar did not have Allen’s musical talent, nor his talent for sketch comedy or practical jokes, but was able to surround himself with unusual talent to market his show. While not as “wooden” on stage as Sullivan, Paar tended to be low-key and conversational, rather than charismatic and presentational. Like Sullivan, Paar also had a flair for discovering unique talent and is often credited with discovering, or at least popularizing, such off-beat characters as comedians Jonathan Winters, Bill Cosby, and Bob Newhart. Paar left the Tonight Show (ushering in the Johnny Carson era) in 1962, but went on to host a weekly variety show called The Jack Paar Program, which aired on Friday nights on NBC. It was on this program that he introduced the Beatles to the United States.
Like Sullivan, Paar had heard of the Beatles while in London and decided to show some film footage of the band as a joke. “I thought it was funny,” he recalled later on a television retrospective. He admitted that he had no idea that the band would change the course of music history. On the broadcast, after showing the footage, he quipped: “Nice to know that England has risen to our [American] cultural level.”
The episode with the footage was taped on 16 November 1963, the same date as the NBC news story (undoubtedly the story was fed to Paar from the network news bureau), but was not aired until 3 January 1964, almost certainly delayed by the Kennedy assassination. Paar’s film clip still predates the Sullivan appearance by more than a month.
Would the Beatles have made it as superstars without the entrepreneurial efforts of Ed Sullivan and Jack Paar to give them TV coverage? The answer is undoubtedly yes. But the mass exposure they received through American TV broadcasts by Sullivan and Paar (as well as NBC and CBS news) laid the groundwork for the Beatles’ success by presenting the group to millions of television viewers in the United States, and the world.
Ron Rodman is Professor of Music at Carleton College, where he teaches courses in the music and cinema and media studies departments. He has published numerous articles on tonal music theory, film music, and music in new media. He is author of Tuning In: American Narrative Television Music.
Image: The Beatles i Hötorgscity 1963, Public Domain via Wikimedia Commons.
The law is always news. It plays a central role in our social, political, moral, and economic life. But what is this thing called law? Does it consist of a set of universal moral principles in accordance with nature? Or is it merely a collection of largely man-made rules, commands, or norms? Does the law have a specific purpose, such as the protection of individual rights, the attainment of justice, or economic, political, and sexual equality? Can the law change society, as it has done in South Africa?
Nelson Mandela, the first President of a democratic South Africa, with the author Raymond Wacks, following Mandela’s release from 27 years of imprisonment.
Even sensationalist criminal trials, real or imagined and a staple of movie and television fare, capture features of the law that routinely vex legal philosophers. They spawn awkward questions about moral and legal responsibility, the justifications of punishment, the concept of harm, the judicial function, due process, and many more. The philosophy of law, in other words, is by no means exclusively an abstract, intellectual pursuit. Indeed, several legal philosophers contribute to important contemporary discussions about highly controversial questions such as abortion, euthanasia, pornography, and human rights.
No society can properly be understood or explained without a coherent conception of its law and legal doctrine. The social, moral, and cultural foundations of the law, and the theories which both inform and account for them, are no less important than the law’s ‘black letter’. Among the many topics within legal theory’s spacious borders is that of the definition of law itself: before we can begin to explore the nature of law, we need to clarify what we mean by this often elusive concept.
Ronald Dworkin (1931-2013) sought to show that law is inextricably bound up with moral values.
One question that continues to dominate legal philosophy is the seemingly intractable problem of the relationship between law and morals. Can law be as neutral and value-free as legal positivists seek to demonstrate, or is law steeped in inescapable moral values? Can law be analytically severed from morality? Or is the pursuit of neutrality and objectivity by legal positivists—from John Austin and Jeremy Bentham to the Realists and their modern followers—a sanguine will o’ the wisp? Is a ‘science of law’ (exemplified by Hans Kelsen’s ‘Pure Theory’) a fantasy? Is HLA Hart’s focus upon the ‘municipal legal system’ still helpful in our age of globalization and pluralism? If law does have a purpose, what might it be? Can it secure greater justice for all who share our planet?
None of these questions has a simple answer. But it is in their asking—and careful reflection upon them—that we might better understand the nature and purpose of law, and thereby perhaps secure a more just society. In the face of injustice, it is easy to descend into vague oversimplification and rhetoric when reflecting upon the proper nature and function of the law. Analytical clarity and scrupulous jurisprudential deliberation on the fundamental nature of law, justice, and the meaning of legal concepts are indispensable tools. Legal philosophy has a decisive role to play in defining and defending the values and ideals that sustain our way of life.
Image credit: (1) By Raymond Wacks: Nelson Mandela with Raymond Wacks. Do not reuse without express permission. (2) By David Shankbone. CC-BY-SA-3.0 via Wikimedia Commons
On 11 September 2013, an unusually long and bright impact flash was observed on the Moon. Its peak luminosity was equivalent to a stellar magnitude of around 2.9.
What happened? A meteorite with a mass of around 400 kg hit the lunar surface at a speed of over 61,000 kilometres per hour.
Rocks often collide with the lunar surface at high speed (tens of thousands of kilometres per hour) and are instantaneously vaporised at the impact site. This gives rise to a thermal glow that can be detected by telescopes from Earth as short duration flashes. These flashes, in general, last just a fraction of a second.
The extraordinary flash in September was recorded from Spain by two telescopes operating in the framework of the Moon Impacts Detection and Analysis System (MIDAS). These devices were aimed at the same area on the night side of the Moon. With a duration of over eight seconds, this is the brightest and longest confirmed impact flash ever recorded on the Moon.
Our calculations show that the impact, which took place at 20:07 GMT, created a new crater with a diameter of around 40 metres in Mare Nubium. The rock itself had a diameter ranging between 0.6 and 1.4 metres. The impact energy was equivalent to over 15 tons of TNT under the assumption of a luminous efficiency of 0.002 (the fraction of kinetic energy converted into visible radiation as a consequence of the hypervelocity impact).
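For readers who want to check the arithmetic, here is a minimal sketch of the energy calculation in Python, using only the round figures quoted above (the published estimate rests on more precise mass and speed values, so the result differs slightly):

```python
# Back-of-the-envelope check of the lunar impact energy, using the
# approximate figures quoted in the text (not the study's exact inputs).

mass_kg = 400.0              # meteorite mass, "around 400 kg"
speed_m_s = 61000 / 3.6      # "over 61,000 km/h", converted to m/s
TON_TNT_J = 4.184e9          # joules in one ton of TNT

kinetic_energy = 0.5 * mass_kg * speed_m_s ** 2
print(f"Kinetic energy: {kinetic_energy:.2e} J")
print(f"TNT equivalent: {kinetic_energy / TON_TNT_J:.1f} tons")

# With a luminous efficiency of 0.002, only this fraction of the
# kinetic energy emerges as the visible flash seen from Earth:
print(f"Visible flash:  {0.002 * kinetic_energy:.2e} J")
```

These round numbers give roughly 14 tons of TNT; with the slightly larger mass and speed values adopted in the study, the estimate rises to the “over 15 tons” quoted above.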
The detection of impact flashes is one of the techniques suitable for analysing the flux of bodies striking the Earth. One limitation of the lunar impact monitoring technique is that it is not possible to unambiguously associate an impact flash with a given meteoroid stream. Nevertheless, our analysis shows that the most likely scenario is that the impactor had a sporadic origin (i.e., it was not associated with any known meteoroid stream). From the analysis of this event we have learnt that one-metre-sized objects may strike our planet about ten times as often as previously thought.
Monthly Notices of the Royal Astronomical Society is one of the world’s leading primary research journals in astronomy and astrophysics, as well as one of the longest established. It publishes the results of original research in astronomy and astrophysics, both observational and theoretical.
Let’s be clear about one thing right from the word go: this is not in any useful sense a historical movie. It references a couple of major historical events but is not interested in ‘getting them right’. It uses historical characters but abuses them for its own dramatic, largely techno-visual ends. It wilfully commits the grossest historical blunders. This is in fact a historical fantasy-fiction movie and should be viewed and judged only as such. But in case any classroom teachers of Classical civilization or Classical history should be tempted to use it as a teaching aid: caveant magistri — let the teachers beware! Here are just five ways in which the movie is at best un-historical, at worst anti-historical.
(1) Error sets in with the very title: the ‘300’ bit is a nod to Zack Snyder’s infinitely more successful 2006 movie to which this is a kind of sequel, and there is not just allusion to but bodily lifting of a couple of scenes from the predecessor. But which Empire is supposed to be on the rise here? I suppose that it’s meant to be, distantly, the ‘Athenian Empire’, but that didn’t even begin to rise until at least two years after the events the movie focuses on: the sea-battles of Artemisium and Salamis that both took place in 480 BCE.
(2) The movie gets underway with a wondrously unhistorical javelin-throw — cast by Athenian hero Themistokles (note the pseudo-authentic spelling of his name with a Greek ‘k’) on the battlefield of Marathon near Athens in 490 BCE, a cast which kills none other than Persian Great King Darius I, next to whom is standing his son and future successor Xerxes. Actually, though Darius had indeed launched the Persian expedition that came to grief at Marathon, he was not himself present there, nor was Xerxes.
Themistocles, on the other hand, was indeed present, but rather than carrying and throwing a javelin he was fighting in a dense phalanx formation and wielding a long, heavy pike armed with a fearsome iron tip made for thrusting into the Persian enemy hand-to-hand.
(3) From the Persians’ Marathon defeat, which (historically) accounts for their return revenge expedition under Xerxes, the scene shifts to the Persians’ fleet — in fact, a whole decade later. Connoisseurs of 300 will have been prepared for the digitally-enhanced, multiply-pierced and bangled Rodrigo Santoro reprising his role of ‘god-king’ Xerxes. (Actually Persian king-emperors were not regarded or worshipped as gods.) Even they, though, will not necessarily have expected the Persian fleet to be under the command of a woman, and a Greek woman at that: Queen Artemisia of Halicarnassus (modern Bodrum), who is represented (in the exceedingly fetching person of Eva Green) as the equal if not superior of Xerxes himself, with her own court of fawning and thuggish male attendants, all hunks of beefcake.
Here the filmmakers are indeed drawing on a properly historical well of evidence: Artemisia — so we learn from Herodotus, her contemporary, fellow-countryman, and historian of the Graeco-Persian Wars — was indeed a Greek queen, who did fight for Xerxes and the Persians at Salamis. She did allegedly earn high praise from Xerxes as well as from Herodotus for the ‘manly’ quality of her personal bravery and her sage tactical and strategic advice.
But she was far from being admiral-in-chief of the entire Persian navy. She contributed a mere handful of warships out of the total of 600 or so, and those ships of hers could have made no decisive difference to the outcome of Salamis one way or the other.
(4) For some reason — perhaps because they were conscious of the extreme sameness of most of their material, a relentless succession of ultra-gory, stylised slayings, to the accompaniment of equally relentless drum’n’bass background thrummings — the filmmakers of this movie, unlike those of 300, have felt the desire or even the need to include one rather prolonged and really quite explicit heterosexual sex-encounter. Understandably, perhaps, this is not between, say, Themistokles and his wife (or a slave-girl), or between Xerxes and a member of his (in historical fact, extensive) harem.
But — utterly and completely fantastically — it is between Themistokles and Artemisia in the interim between the battles of Artemisium (presented as a Greek defeat; actually it was a draw) and Salamis. Cue the baring of Eva Green’s considerable pectoral assets, cue some exceptionally violent and degrading verbal sparring, and cue virtual rape — encouraged by Artemisia at the time but later thrown back by her in Themistokles’s face as having been inadequate on the virility front.
(5) The crowning, climactic historical absurdity, however, is not the deeply unpleasant coupling between Themistokles and Artemisia, but the notion that in order for Themistokles and his Athenians to defeat the Persian fleet at Salamis they absolutely required the critical assistance of the massive Spartan navy which — echoes here of the US cavalry in countless westerns — turned up just in the nick of time, commanded by another Greek woman and indeed queen, Gorgo (widow of Leonidas, the hero of 300), again played by Lena Headey.
Actually, Sparta contributed a mere 16 warships to the united Greek fleet of some 400 ships at Salamis, and like Artemisia’s they made absolutely no difference to the outcome, which was resoundingly and incontestably an Athenian victory. The truly Spartan contribution to the overall defeat of the Persian invasion was made in very different circumstances, on land and by the heavy-infantry Spartan hoplites, at the battle of Plataea in the following summer of 479 BCE. But that is quite another story, one in which the un- or anti-historical filmmakers show not even a particle or scintilla of interest.
Image credit: 300: Rise of An Empire. (c) Warner Bros. via 300themovie.com
On Saturday, 8 March, we celebrate International Women’s Day. But is there really anything to celebrate?
Last year, the United Nations declared its theme for International Women’s Day to be: “A promise is a promise: Time for action to end violence against women.” But in the United Kingdom in 2012, the government’s own figures show that around 1.2 million women suffered domestic abuse, over 400,000 women were sexually assaulted, 70,000 women were raped, and thousands more were stalked.
In a nutshell, this means that men’s violence against women is simply the most extreme manifestation of a continuum of male privilege, starting with domination of public discourse and decision-making, taking the lion’s share of global income and assets, and finally, controlling women’s actions and agency by force if necessary.
Throughout history and in most cultures, violence against women has been an accepted way in which men maintain power. In this country, the traditional right of a husband to inflict moderate corporal punishment on his wife in order to keep her “within the bounds of duty” was only removed in 1891. Our lingering ambivalence over the rights and wrongs of intervening in the face of domestic violence (“It’s just a domestic” as the police used to say) continues more than a century later. An ICM poll in 2003 found more people would call the police if someone was mistreating their dog than if someone was mistreating their partner (78% versus 53%). Women recognise this culture of condoning and excusing violence against them in their reluctance even today to exert their legal rights and make an official complaint. The most recent figures from the Ministry of Justice show that only 15% of women who have been raped report it to the police. And when they do, they’re likely to be disbelieved: the ‘no-crime’ rate (where a victim reports a crime but the police decide that no crime took place) for overall police recorded crime is 3.4%; for rape it’s 10.8%. All this adds up to a culture of impunity in which violence can continue.
And it’s exacerbated by our media. When the End Violence against Women Coalition, along with some of our members, was invited to give evidence to the Leveson Inquiry, we argued that:
“reporting on violence against women which misrepresents crimes, which is intrusive, which sensationalises and which uncritically blames ‘culture’, is not simply uninformed, trivial or in bad taste. It has real and lasting impact – it reinforces attitudes which blame women and girls for the violence that is done to them, and it allows some perpetrators to believe they will get away with committing violence. Because such news reporting is critical to establishing what behaviour is acceptable and what is regarded as ‘real’ crime, in the long term and cumulatively, this reporting affects what is perceived as crime, which victims come forward, how some perpetrators behave, and ultimately who is and is not convicted of crime.”
When do states become responsible for private acts of violence against women?
The UN Committee on the Elimination of All Forms of Discrimination against Women (CEDAW) says in its General Recommendation No. 19 that states may be responsible for private acts “if they fail to act with due diligence to prevent violations of rights or to investigate and punish acts of violence.”
Due diligence means that states must show the same level of commitment to preventing, investigating, punishing and providing remedies for violence against women as they do other crimes of violence. Arguably, our poor rates of reporting and prosecution suggest that the UK is not fulfilling this obligation.
What are some possible policy solutions to eliminate violence against women?
The last Government developed a national strategy to tackle this problem and the current Government has followed suit, adopting a national action plan that aims to coordinate action at the highest level. This has had the single-minded backing of the Home Secretary, Theresa May — who of course happens to be a woman. Under this umbrella, steps have been taken to focus on what works — although much more needs to be done, for example on the key issue of prevention: changing the attitudes that create a conducive environment for violence. Research by the UN in a number of countries recently showed that 70-80% of men who raped said they did so because they felt entitled to; they thought they had a right to sex. Research with young people by the Children’s Commissioner has highlighted the sexual double standard that rewards young men for having sex while passing negative judgment on young women who do so. We need to rethink constructions of gender, particularly of masculinity.
What will the End Violence Against Women Campaign focus on this year?
End Violence Against Women welcomes the fact that the main political parties now recognise that this is a key public policy issue, and we’ll be using the upcoming local and national elections in 2014 and 2015 to question candidates on their practical proposals for ending violence against women and girls. We need to make sure that women’s support services are available in every area. And we’ll be working on our long-term aim of changing the way people talk and think about violence against women and girls — starting in schools, where children learn about gender roles and stereotypes — much earlier than we think. We hope Michael Gove will back our Schools Safe 4 Girls campaign. We also look forward to a historic milestone in April, when the UN special rapporteur on violence against women makes a visit to the UK to assess progress.
On International Women’s Day this year, what is the most urgent issue for the world to focus on?
As Nelson Mandela said: “For every woman and girl violently attacked, we reduce our humanity. Every woman who has to sell her life for sex we condemn to a lifetime in prison. For every moment we remain silent, we conspire against our women.” While women across the world are raped and murdered, systematically beaten, trafficked, bought and sold, ending this “undeclared war on women” has to be our top priority.
Janet Veitch is a member of the board of the End Violence against Women Coalition, a coalition of activists, women’s rights and human rights organisations, survivors of violence, academics and front line service providers calling for concerted action to end violence against women. She is immediate past Chair of the UK Women’s Budget Group. She was awarded an OBE for services to women’s rights in 2011.
On 22 March 2014, the University of Nottingham Human Rights Law Centre will be hosting the 15th Annual Student Human Rights Conference ‘Mind the Gender Gap: The Rights of Women,’ and Janet Veitch will be among the experts on the rights of women who will be speaking. Full details are available on the Human Rights Law Centre webpage.
Human Rights Law Review publishes critical articles that consider human rights in their various contexts, from global to national levels, book reviews, and a section dedicated to analysis of recent jurisprudence and practice of the UN and regional human rights systems.
Oxford University Press is a leading publisher in international law, including the Max Planck Encyclopedia of Public International Law, latest titles from thought leaders in the field, and a wide range of law journals and online products. We publish original works across key areas of study, from humanitarian to international economic to environmental law, developing outstanding resources to support students, scholars, and practitioners worldwide. For the latest news, commentary, and insights follow the International Law team on Twitter @OUPIntLaw.
“March 8 is Women’s Day, a legal holiday,” I wrote to my mother from Moscow. “This is one of the many cute cards that is on sale now, all with flowers somewhere on them. We hope March 8 finds you well and happy, and enjoying an early spring! Alas, here it is -30° C again.”
I spent the 1978-79 academic year working in Moscow in the Soviet Academy of Science’s Institute of Crystallography. I’d been corresponding with a scientist there for several years and when I heard about the exchange program between our nations’ respective Academies, I applied for it. Friends were horrified. The Cold War was raging, and Afghanistan rumbled in the background. But scientists understand each other, just like generals do. I flew to Moscow, family in tow, early in October. The first snow had fallen the night before; women in wool headscarves were sweeping the airport runways with birch brooms.
None of us spoke Russian well when we arrived; this was immersion. We lived on the fourteenth floor of an Academy-owned apartment building with no laundry facilities and an unreliable elevator. It was a cold winter even by Russian standards, plunging to -40° on the C and F scales (they cross there). On weekdays, my daughters and I trudged through the snow to the broad Leninsky Prospect. The five-story brick Institute sat on the near side, and the girls went to Soviet public schools on the far side, behind a large department store. The underpass was a thriving illegal free market where pensioners sold hard-to-find items like phone books, mushrooms, and used toys. Nearing the schools, we ran the ever-watchful Grandmother Gauntlet. In this country of working mothers, bundled bescarved grandmothers shopped, cooked, herded their charges, and bossed everyone in sight: Put on your hat! Button up your children!
At the Institute, I was supposed to be escorted to my office every day, but after a few months the guards waved me on. I couldn’t stray in any case: the doors along the corridors were always closed. Was I politically untouchable?
But the office was a friendly place. I shared it with three crystallographers: Valentina, Marina, and the professor I’d come to work with. We exchanged language lessons and took tea breaks together. Colleagues stopped by, some to talk shop, some for a haircut (Marina ran a business on the side). Scientists understand each other. My work took new directions.
I also tried to work with a professor from Moscow State University. He was admired in the west and I had listed him as a contact on my application. But this was one scientist I never understood. He arrived late for our appointments at the Institute without excuses or apologies. I was, I soon surmised, to write papers for him, not with him. I held my tongue, as I thought befits a guest, until the February afternoon he showed up two weeks late. Suddenly the spirit of the grandmothers possessed me. “How dare you!” I yelled in Russian. “Get out of here and don’t come back!” “Take some Valium,” Valentina whispered; wherever had she found it? But she was as proud as she was worried. The next morning I was untouchable no more: doors opened wide and people greeted me cheerily, “Hi! How’s it going?”
International Women’s Day, with roots in suffrage, labor, and the Russian Revolution, became a national holiday in Russia in 1918, and is still one today. In 1979, the cute postcards and flowers looked more like Mother’s Day cards, but men still gave gifts to the women they worked with. On 7 March I was fêted, along with the Institute’s female scientists, lab technicians, librarians, office staff, and custodians. I still have the large copper medal, unprofessionally engraved in the Institute lab. “8 марта” — 8 March — it says on one side, the lab initials and the year on the other. The once-pink ribbon loops through a hole at the top. Maybe they gave medals to all of us, or maybe I earned it for throwing the professor out of the Institute.
Women’s Day medal, courtesy of Marjorie Senechal.
I’ve returned to Russia many times; I’ve witnessed the changes. Science is changing too; my host, the Academy of Sciences founded by Peter the Great in 1724, may not reach its 300th birthday. But my friends are coping somehow, and I still feel at home there. A few years ago I flew to Moscow in the dead of winter for Russia’s gala nanotechnology kickoff. A young woman met me at the now-ultra-modern airport. She wore smart boots, jeans, and a parka to die for. “Put your hat on!” she barked in English as she led me to the van. “Zip up your jacket!”
In a unanimous decision, New York’s Court of Appeals, the Empire State’s highest court, recently held that John Gaied was not a New York resident for income tax purposes because he had no New York home.
Mr. Gaied was domiciled in New Jersey and had a business on Staten Island to which he commuted daily. He purchased a multi-family apartment building near his business in New York, both as an investment and to house his parents who lived in the building’s first floor apartment.
New York’s tax commissioner claimed that this Staten Island building made Mr. Gaied a New York resident for tax purposes. The New York Tax Appeals Tribunal and the New York Appellate Division affirmed the commissioner’s determination that this building constituted Mr. Gaied’s “permanent place of abode” in New York – even though Mr. Gaied personally did not live there.
The good news is that Mr. Gaied ultimately prevailed. The bad news is that he had to fight his way to New York’s highest court to prevail. As that court held, “in order for a taxpayer to have maintained a permanent place of abode in New York, the taxpayer must, himself, have a residential interest in the property.” Since it was Mr. Gaied’s parents who lived in the first floor apartment, not Mr. Gaied himself, he was not a New York resident for tax purposes.
Mr. Gaied’s lawyer, Timothy P. Noonan of Hodgson Russ, LLP, is entitled to be proud of this victory for tax sanity in New York. The problem is that such sanity is all too rare. Mr. Gaied had to go to New York’s highest court to establish the common sense proposition that a “place of abode” is a location at which the taxpayer actually lives.
Unfortunately, the kind of irrationality manifested by New York’s tax commissioner in Gaied is endemic to New York’s tax system. Consider, for example, New York’s insistence that the modest beach house owned and used by Mr. John J. Barker for a handful of vacation days each year transforms Mr. Barker into a New York resident, even though his permanent home is in Connecticut. Or consider New York’s “convenience of the employer” doctrine under which New York taxes the income earned by nonresident telecommuters on the days such telecommuters work at their out-of-state homes and don’t set foot in the Empire State. There is much that is irrational and self-destructive in New York tax policy.
Governor Cuomo has eloquently proclaimed that New York can no longer be “the tax capital” of the United States. The Governor is right. Hopefully, Gaied will signal to New York’s policymakers the need to reform New York’s self-destructive approach to personal income taxation. Repairing New York’s definition of residence and abolishing the “convenience of the employer” doctrine would be good places to start.
During November 2012 hundreds of thousands of people across Europe took to the streets. The protesters were, by and large, complaining about government policies that increased taxes and lowered government spending. This initially sounds like a familiar story of popular protests against government austerity programmes, but there is a twist to the tale. Many of the people protesting were not aiming their ire at the national governments making the cuts in spending, but rather at the European Union. In Portugal, people carried effigies of their prime minister on strings and claimed he was a ‘puppet of the EU’; in Greece people burned the EU flag and shouted ‘EU out’; and in Italy people threw stones at the European Parliament offices. It was, at least for some people on the streets, not the incumbent national politicians in Lisbon, Athens, and Rome who were to blame for the problem of the day, but rather politicians and bureaucrats thousands of miles away in Brussels.
The economic crisis in Europe has illustrated that citizens are increasingly blaming not just their national governments, but also ‘Europe’ for their woes. This raises the question of whether citizens can hold European politicians to account for the outcomes for which they are thought to be responsible. The notion of democratic accountability relies on the critical assumption that voters are able to assign responsibility for policy decisions and outcomes, and sanction the government in elections if it is responsible for outcomes not seen to be ‘in their best interest’. This process, however, is clearly complicated in the multilevel system of the European Union where responsibility is not only dispersed across multiple levels of government, but there are also multiple mechanisms for sanctioning governments.
Democratic accountability in multilevel systems can be viewed as a two-step process, where specific requirements need to be met at each step to allow voters to hold governments to account. The first step is one where voters decide which level of government, if any, is responsible for specific policy outcomes and decisions. This depends on the clarity of institutional divisions of powers across levels of government, and the information available about the responsibilities of these divisions. The second step is one where voters should be able to sanction the government in an election on the basis of performance. This depends on government clarity: that is the ability of voters to identify a cohesive political actor that they can sanction accordingly.
Both of these steps are important. Assignment of responsibility to a particular level of government is a necessary, but not sufficient, condition to be able to punish an incumbent at the polls. To do so, voters also need to know which party or individual to vote for or against. Yet, the EU lacks a clear and identifiable government. Executive power is shared between the European Council and the European Commission, and legislative power is shared between the Council of the EU and the European Parliament. The primary mechanism through which citizens can hold EU institutions to account is via elections to the European Parliament. Unlike in national parliamentary systems, the majority in the European Parliament does not ‘elect’ the EU executive, however. Despite the formal powers of the European Parliament over the approval and dismissal of the European Commission there is only a tenuous link between the political majority in the Parliament and the policies of the Commission, not least since there is no clear government-opposition division in the Parliament. Despite current attempts to present rival candidates for the post of Commission president prior to the European Parliament elections in May, there is still no competition between candidates with competing policy agendas and different records at the EU level. Without this kind of politicised contest it is simply not possible for voters to identify which parties are responsible for the current policy outcomes and which parties offer an alternative.
As a consequence, the classic model of electoral accountability cannot be applied to European Parliament elections. Even if citizens think the EU is responsible for poor policy performance in an area, they find it difficult to identify which parties are ‘governing’ and punish, or reward, them at the ballot box. This has broader implications for trust and legitimacy. When people hold the EU responsible for poor performance, but cannot hold it accountable for that performance, they become less trusting of the EU institutions as a whole. Thus the danger for the EU is that every time the system fails to deliver — such as during the Eurozone crisis — the result is declining levels of trust and a crisis of confidence in the regime as a whole, because voters lack the opportunity to punish an incumbent and elect an alternative. In other words, the lack of mechanisms to hold EU policymakers to account may lead to a more fundamental legitimacy crisis in the European Union.
When I began work on my book, I knew I would be fortunate enough to experience a few moments of “Pinch me. This can’t really be happening.” There were, as it turned out, so many that I’d be black and blue if there was actual pinching going on. But of all of those moments, I think the highlight would have to be spending a day at Disneyland with Carol Channing and her late husband, Harry, who were then 90 and 91 respectively.
I had interviewed Carol the day before in front of an adoring audience at the annual Gay Days at Disneyland. But it had been decades since Carol had been in the park and the last time she was, her tour guide was, um, Walt Disney. She had a picture to prove it. Carol, Walt, and Maurice Chevalier on Main Street, USA! I couldn’t exactly beat that, but I did what I could. I mapped out the day with a full complement of attractions starting gently enough with “Great Moments with Mr. Lincoln,” an indoor show at which a robotic Abe recites the Gettysburg Address. Carol was moved to tears. “It’s Walt!” she exclaimed. “This whole attraction is his spirit. Exactly who he was.” We emerged just in time to hear the Disneyland Marching Band emphatically playing “When the Saints Go Marching In.” We clapped along before we hopped on “The Disneyland Railroad,” a steam train that circles the park. Carol grabbed my hand as we approached and began singing at full voice, “Put on your Sunday clothes when you feel down and out…,” the song from Hello, Dolly! that culminates with the full company boarding a similar train. We sang together as we chugged along. I died.
Mickey Mouse bows to Carol Channing. Photo courtesy of Eddie Shapiro.
We rode the Peter Pan ride and the tea cups, we met Mickey Mouse (who literally got on his knees and bowed down to Carol), and we had our own boat on “It’s a Small World.” It was all just as I had planned it until… the unexpected. As we were walking through Fantasyland, Harry kept staring in the direction of the carousel. I hadn’t planned on an attraction as simple as the carousel because, well, it’s a carousel. But I couldn’t help but notice Harry’s interest. “Harry,” I asked, “did you want to ride the carousel?” “I’m lookin’ at it,” came the reply. “Well Harry,” I said, “we’re here! If you want to ride it, let’s ride it.” We boarded and I went off in search of a nice bench for Carol and Harry. Carol seated herself but Harry was determined to mount a horse. At 91, however, he needed a hand or two, so I put my shoulder under his lower back and hoisted him up there. I then ran around to the other side and manually swung his leg astride the horse.
Harry, Carol Channing’s husband, on the carousel. Photo courtesy of Eddie Shapiro.
He was beaming, positively giddy. And in that moment, I realized that I was getting a major life lesson here. Carol and Harry were frail (he, in fact, passed less than three months later); one misstep could have been hugely consequential. A jostle from someone in the crowd could have been dire. But here they were, not just tasting everything life had to offer, but gobbling it up. If there was life to live, they were going to live it. And I thought to myself, “How does one become lucky enough to age into these people? Is it genetic? Is it a choice? What can I do to ensure that when my golden years are upon me, I make them as golden as I can? Because these people have figured it out. They are who I aspire to be.”
When the sun was finally setting, we headed back to the hotel. I left them sitting in the lobby next to the grand piano while I went up to the room to retrieve their luggage. I returned just as the pianist was arriving for his set. He spied Carol and in no time he was gently tinkling the notes of “Hello, Dolly!” Before I knew what was happening, Carol was on her feet, one hand on the piano, the other aloft, belting out “Hello, Dolly!” for anyone who happened to be passing through the lobby of the Grand Californian Hotel at 4:30 in the afternoon. It was something to behold and a moment I will never, ever forget.
For months afterward, Harry would call me, just to say hello. “You don’t know the gift you gave us that day,” he would always end with. “Harry,” I’d always reply, “you don’t know the gift you gave me.”
Author Eddie Shapiro, Carol Channing, and her husband Harry on the tea cup ride at Disneyland. Photo courtesy of Eddie Shapiro.
Amores was Ovid’s first complete work of poetry, and is one of his most famous. The poems in Amores document the shifting passions and emotions of a narrator who shares Ovid’s name, and who is in love with a woman he calls Corinna. She is of a higher class and therefore unattainable, but the poems show the progression from infatuation to love to affair to loss. In these excerpts, we see two sides of the affair — a declaration of love, and a hot afternoon spent with Corinna. Our translator here is Jane Alison, author of Change Me: Stories of Sexual Transformation from Ovid, a new translation of Ovid’s love poetry.
It’s only fair: the girl who snared me should love me, too,
or keep me in love forever.
Oh, I want too much: if she’ll just endure my love,
Venus will have granted my prayers.
Please take me. I’d be your slave year after long year.
Please take me. I know how to love true.
I might not be graced with a grand family name,
only knight-blood runs in my veins,
my acres might not need ploughs ad infinitum,
my parents count pennies, are tight—
but I’ve got Apollo, the Muses, and Bacchus,
and Amor, who sent me your way,
plus true fidelity, unimpeachable habits,
barest candor, blushingest shame.
I don’t chase lots of girls—I’m no bounder in love.
Trust me: you’ll be mine forever.
I want to live with you each year the Fates spin me
and die with you there to mourn.
Give me yourself—a subject perfect for poems—
they’ll spring up, adorning their source.
Poems made Io (horrified heifer-girl) famous,
plus that girl led on by a “swan”
and the one who set sail on a make-believe bull,
his lilting horn tight in her fist.
We too will be famous, sung all over the world:
my name bound forever to yours.
Scorching hot, and the day had drifted past noon;
I spread out on my bed to rest.
Some slats of the windows were open, some shut,
the light as if in a forest
or like the sinking sun’s cool glow at dusk
or when night wanes, but dawn’s not come.
It was the sort of light that nervous girls love,
their shyness hoping for shadows.
And oh—in slips Corinna, her thin dress unsashed,
hair rivering down her pale neck,
just as lovely Semiramis would steal into a bedroom,
they say, or Lais, so loved by men.
I pulled at her dress, so scant its loss barely showed,
but still she struggled to keep it.
Though she struggled a bit, she did not want to win:
undone by herself, she gave in.
When she stood before me, her dress on the floor,
her body did not have a flaw.
Such shoulders I saw and touched—oh, such arms.
The form of her breast firm in my palm,
and below that firm fullness a belly so smooth—
her long shapely sides, her young thighs!
Why list one by one? I saw nothing not splendid
and clasped her close to me, bare.
Who can’t guess the rest? And then we lay languid.
Oh, for more middays just so.
Jane Alison is author of Change Me: Stories of Sexual Transformation from Ovid. Her previous works on Ovid include her first novel, The Love-Artist (2001) and a song-cycle entitled XENIA (with composer Thomas Sleeper, 2010). Her other books include a memoir, The Sisters Antipodes (2009), and two novels, Natives and Exotics (2005) and The Marriage of the Sea (2003). She is currently Professor of Creative Writing at the University of Virginia.
“Organized” and “innovation” are words rarely heard together. But an organized approach to innovation is precisely what America needs today, argue Steve Currall, Ed Frauenheim, Sara Jansen Perry, and Emily Hunter. We sat down with the authors of Organized Innovation: A Blueprint for Renewing America’s Prosperity to discuss why America ought to organize its innovation efforts.
Why does America need a more organized innovation system today?
An “innovation gap” has emerged in recent decades, in which US universities focus on basic research while industry concentrates on incremental product development. At the same time, the stakes have risen around technology invention and commercialization. Innovation has become more central to the economic health of nations, but the rate of US innovation is slowing while that of other nations is accelerating. Since 2008, the number of foreign-origin patents that the US Patent and Trademark Office has granted annually has surpassed the number of domestic-origin patents. Between 1999 and 2009, the US share of global research and development spending dropped, while the share of Asia as a whole rose and exceeded the US share in 2009.
What’s behind this innovation gap?
In a nutshell, history and a set of myths held by many in the United States. The gap dates to the 1970s and 1980s, as big US companies retreated from basic research and focused on incremental product development. The shift had to do with a greater focus on short-term financial results, as well as increased competitive pressures. Research fell to the universities, but academic research often remains within particular disciplines, conducted in a vacuum removed from societal needs. Too often academic research does not make the leap beyond the lab to the real world. For years, observers have noticed the widening gap, but it has not been addressed. We think that has much to do with three myths—that innovation is about lone geniuses, the free market, and serendipity. These myths blind us to the fact that we tolerate an unorganized, less-than-optimal system of innovation.
What do you propose as a solution?
We call it Organized Innovation. It is a blueprint for better coordinating the key players in the US innovation ecosystem: universities, businesses, and government. The solution taps the power of both the private and public sectors to generate groundbreaking innovations—the kinds of new technologies that create good jobs and improve life for everyone.
The solution has three main pillars:
Channeled Curiosity: steering researchers’ fundamental inquiries toward real-world problems.
Boundary-Breaking Collaboration: tearing down walls between academic disciplines, and between universities and the private sector to better generate novel, high-impact technologies.
Orchestrated Commercialization: coordinating the various players involved in technology commercialization—including scholars, university administrators, entrepreneurs, venture capitalists, and corporations—to translate research insights into real-world benefits.
The Organized Innovation framework has already proven effective in closing the innovation gap. It is inspired by our nearly decade-long study of a highly successful but little-known federal initiative, the National Science Foundation’s Engineering Research Centers. These university-based centers require researchers to link basic science to social and market demand, require interdisciplinary and industry-academic collaboration, and encourage the creation of proofs-of-concept to demonstrate that a lab-based technology has commercial potential. From 1985 to 2009, about $1 billion in federal funding was invested in the centers. They have returned more than 10 times that amount in a wide variety of technology innovations.
What is your favorite example about new technology generated from the Engineering Research Center program?
Our favorite case is about Mark Humayun and his artificial retina. Humayun is a fascinating individual, and his team developed a device that captures video from a camera embedded in eyeglasses and wirelessly relays digital signals to an implant placed directly on the retina. The artificial retina, called Argus II, is approved for use in the European Union and won US FDA approval in early 2013. Humayun’s device is changing lives — restoring useful vision to people blinded by retinal diseases.
You propose that the US government change its approach to funding research and development. What is your message to policy makers?
We propose that federal and state funding agencies devote funds to research programs that embody Organized Innovation principles, which may translate into more funding for research with practical significance or innovation outcomes. The key advantage of our model is that it maximizes the public’s return on research and development investments. Both political parties can support this approach; it is fundamentally bipartisan.
Organized Innovation goes against the grain of widespread doubts about the ability of universities, business, and government to work together to solve problems, especially amid growing public deficits. But we’re convinced Americans will have the courage to see the value of such investments in our future.
Steve Currall, Ed Frauenheim, Sara Jansen Perry, and Emily Hunter are the authors of Organized Innovation: A Blueprint for Renewing America’s Prosperity. Steven C. Currall is Dean and Professor of Management in the Graduate School of Management at the University of California, Davis. Ed Frauenheim is an author, speaker, and associate editorial director of Workforce magazine, where he writes about the intersection of people management, technology and business strategy. Sara Jansen Perry, Assistant Professor of Management in the College of Business at the University of Houston-Downtown, earned her Ph.D. in Industrial-Organizational Psychology at the University of Houston. Emily M. Hunter is Assistant Professor of Management and Entrepreneurship in the Hankamer School of Business at Baylor University after earning her Ph.D. in Industrial-Organizational Psychology at the University of Houston.
Hands up if you’ve heard of National Voter Registration Day. And in the somewhat unlikely event that you have, did you realise that it took place last month?
If this momentous milestone passed you by, you’re not alone. Whatever 5 February means to the people of the United Kingdom, it’s safe to assume that electoral participation doesn’t figure prominently. This is not a surprise; it reflects a deep-seated public disengagement from politics, as indicated by the fact that only two thirds of eligible voters in the 2010 general election actually voted. Throughout the twentieth century, general election turnouts almost always exceeded 70%, but that’s a level of participation that has not been seen since 1997. Incidentally, the highest turnout since 1900 was 86.8% in January 1910, though only rate-paying men over the age of 21 could vote.
Low voter turnout is clearly a problem, but arguably a much greater worry is the growing inequality of that turnout. As a recent report from the Institute for Public Policy Research makes clear, the United Kingdom is very much a ‘divided democracy’, with electoral participation among the young and the poor declining dramatically. In the 1987 general election, for example, the turnout rate for the poorest income group was 4% lower than for the wealthiest. By 2010 the gap had grown to a staggering 23 points. A similar pattern is observable in relation to age groups. In 1970 there was an 18-point gap in turnout rates between 18–24-year-olds and those aged over 65; by 2005 this gap had more than doubled to over 40 points, before narrowing slightly to 32 points in 2010. “If we focus on participation within these age-groups,” the IPPR report concludes, “we can see that at the 2010 general election the turnout rate for a typical 70-year-old was 36 percentage points higher than that of a typical 20-year-old.”
As if this isn’t bad enough, there is little evidence that young people will simply start voting as they get older. On the contrary, the IPPR’s research suggests that “younger people today are less likely than previous generations to develop the habit of voting as they move into middle age.” These trends mean that politicians tend to address themselves to the older and richer sections of society – the people, in other words, who are most likely to vote. This, in turn, reinforces the views of the young and the poor that politicians don’t care about them. And that, naturally, leads to even greater political estrangement.
So what’s the solution? How do we re-establish a connection between ordinary people and politicians? In particular, how do we persuade the young and the poor that the political system really does have something to offer them?
The answers lie not in quick fixes or technological solutions – such as the introduction of compulsory voting, changing the ballot paper or promoting ‘digital democracy’ – but in adopting a fundamentally deeper, richer and more creative approach to democratic engagement. People will only vote – be they young or old, rich or poor – when they understand why democratic politics matters and what it can deliver. Therefore, to increase electoral participation we must focus on promoting the public understanding of politics from all perspectives (conservative, traditional, radical, etc.) in a way that demonstrates that individual responses to collective social challenges are rarely likely to be effective. It’s this deeper understanding – the notion of political literacy promoted by Sir Bernard Crick and defined as ‘a compound of knowledge, skills and attitudes’ – that citizens can use to navigate the complex social and political choices that face us all. Political literacy can be seen as a basic social requirement that empowers people to become politically aware, effective, and engaged while also being respectful of differences of opinion or belief.
In this regard, the message from survey after survey is a dismal one. Large sections of the British public appear to know very little about the political system. Even relatively basic questions such as “What do MPs do?” or “What’s the difference between Parliament and the Executive?” tend to elicit a mixture of mild embarrassment and complete bafflement.
Given that levels of political literacy are so low, it’s little surprise that many people choose not to vote. They’re unaware of the very real benefits the political system delivers for them (clean water, social protection, healthcare, education, etc.) and they no longer believe that they can become the engine of real social change. And yet they can. Worse, by opting out of elections they risk diminishing their representation as politicians focus their messages on the groups that do vote. Young people are constantly reminded that to be “uneducated” – let alone innumerate or illiterate – is to risk deprivation and vulnerability, but in many ways to be politically illiterate brings with it exactly the same risks. Moreover, the impact of declining political literacy isn’t only felt at the individual level. With so many people in society alienated from politics, democracy itself is weakened.
Such arguments are by no means abstract concerns. On 7 May 2015, a General Election will be held on the basis of individual voter registration rather than the previous system of household voter registration. Research suggests that although this transition is likely to increase electoral security it may also result in a considerable decline in levels of electoral participation amongst – yes, you’ve guessed it – the young and the poor. This is not a reason to turn back from individual registration but it is a reason to step back and acknowledge that if we’re really serious about healing a divided democracy, then we need to focus on promoting engaged citizenship through different channels and processes. We need to take some risks and stir things up, but most of all we need a long-term plan for fostering political literacy.
Within months of being introduced in 2009, enthusiasts were hailing bitcoin, the digital currency and peer-to-peer payment system, as the successor to the dollar, euro, and yen as the world’s most important currency.
The collapse of the Mt. Gox bitcoin exchange last month has dulled some of the enthusiasm for the online currency. According to bitcoincharts.com, the price of bitcoin, which had peaked at over $1100 in December, tumbled to about half of that in the wake of the Mt. Gox failure, leading a number of commentators to suggest that bitcoin is finished.
Others remain bullish on the currency, arguing that the collapse will lead to greater scrutiny of the system and the reemergence of a stronger, more secure bitcoin. Although the price of bitcoin has declined since the Mt. Gox collapse and volatility remains high, rallies are not unheard of. On 3 March 2014, for example, bitcoin began the day trading around $580 and peaked at over $700 before falling back into the upper $600s (data from bitcoincharts.com).
I have argued elsewhere that if bitcoin were to replace the leading world currencies, the results would be catastrophic. The most important objection is that—when it works according to plan—bitcoin mimics the gold standard. The total number of bitcoins that can be created (“mined” in bitcoin terminology, just to maintain the image of gold) is fixed and cannot be altered. Adopting a bitcoin standard would make it virtually impossible for central bankers to undertake aggressive monetary measures—as the Fed and European Central Bank have done—to bolster a flagging economy and a financial system on the point of collapse.
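The fixed supply follows from the protocol’s reward schedule: miners earn a block subsidy that halves every 210,000 blocks, so total issuance converges on a hard ceiling of about 21 million coins. A minimal sketch of that geometric series follows (ignoring the satoshi-level rounding the real protocol performs):

```python
# Sum bitcoin's halving block rewards to show the hard supply cap.
# Sketch only: the actual protocol rounds each reward down to whole
# satoshis, so the true maximum is marginally below 21 million.

reward_btc = 50.0             # initial block reward
blocks_per_halving = 210_000  # blocks between reward halvings
total_btc = 0.0

while reward_btc >= 1e-8:     # one satoshi is the smallest unit
    total_btc += reward_btc * blocks_per_halving
    reward_btc /= 2

print(f"Maximum possible supply: {total_btc:,.0f} BTC")  # ~21,000,000
```

No authority can expand that ceiling, which is precisely what makes the gold-standard comparison apt.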
Another public policy downside of bitcoin is that because it is peer-to-peer, without a centralized monitoring authority, it allows funds to be transferred away from the prying eyes of government. This famously came to light last fall when the online drug bazaar Silk Road—which conducted much of its business in bitcoin—was shut down by the FBI and its proprietor arrested on drug and computer charges. Needless to say, the attractiveness of a payments system like bitcoin to criminals and terrorists should dampen the fervor of even the most enthusiastic bitcoin devotee.
Is there anything to like about bitcoin?
Yes. Bitcoin—or, more precisely, a system with some of bitcoin’s attributes—would give a boost to commerce.
Moving money with bitcoin is cheaper than using PayPal, credit cards, or bank transfers, all of which charge one or both parties fees. The savings on international transactions are even greater, since these transactions, when carried out with traditional currencies, typically involve both higher fees for moving the money as well as additional charges for converting from one currency to another. Denominating the transaction in bitcoin eliminates the currency conversion fee altogether.
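A toy comparison makes the point; the fee rates below are purely illustrative assumptions, not quotes from any actual provider:

```python
# Hypothetical fee comparison for a $1,000 international payment.
# All rates are illustrative assumptions for the sake of the example.

amount_usd = 1000.00

card_fee = amount_usd * 0.029 + 0.30   # assumed card processing fee
fx_markup = amount_usd * 0.03          # assumed currency-conversion markup
traditional_total = card_fee + fx_markup

bitcoin_network_fee = 0.10             # assumed flat network fee, in USD

print(f"Card + FX conversion: ${traditional_total:.2f}")  # ~$59.30
print(f"Bitcoin network fee:  ${bitcoin_network_fee:.2f}")
```

Because the transaction is denominated in bitcoin end to end, the conversion line simply disappears.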
Eliminating fees associated with commercial transactions is the most compelling argument in favor of bitcoin, as anyone who has ever used a credit card overseas, tried to transfer money, or used an out-of-network ATM will attest. The disadvantages of bitcoin far outweigh its benefits. Still, its ability to facilitate cheaper trade is appealing. The sooner someone figures out how to adopt that aspect of bitcoin for safer, more adaptable traditional currencies, the better for all of us.
Image credit: Bitcoin banknote by CASASCIUS. Creative Commons License via Wikimedia Commons.
The final sentence in the essay posted in January was not a statement but a question. We had looked at several hypotheses on the origin of the verb beg and found that none of them carried conviction. It also remained unclear whether beg was a back formation on beggar or whether beggar arose as a noun agent from the verb. Today we will examine the ideas connecting beggar with the religious order of the Beguines.
The order appeared in the thirteenth century and was active for at least three hundred years. Its modern descendants will not interest us here. As the form of the French word Beguine shows, we are dealing with a feminine noun, and, when Latinized, it was also feminine. The order took care of widows, unmarried women, and the many solitary wives left at home by their crusading husbands. The male counterpart of the Beguines was called the Beghards. In the detective story that is now unfolding (and a good etymology is always a thriller), the denouement will come next week. But it is not too early to reveal some facts. The word beggar has been tentatively derived from Beguine. However, there is a problem with this derivation: the Beguines were, at least initially, not a mendicant order — the women worked all day long. It is not even certain that, when beggars swarmed Europe and called themselves (or were called) Beguines, the connection between their occupation and the name was justified. Therefore, assuming that such a connection existed, it seems to have been established after the fact. We have to explore the etymology of the name Beguine, to see whether its inner form could suggest disapproval or perhaps a reference to the practice of asking for alms. The picture I am going to lay out is well-known, but the end result (beggars, buggers, and bigots) will be partly new.
One guess traces Beguine to French beige “gray.” This idea has little to recommend it. Even if the Beguines and Beghards wore gray clothes, this color could not have been distinctive enough to give the orders their name. Monks (and the Beguines/Beghards were not nuns and monks) and many other people preaching moderation and the virtues of early Christianity, quite naturally, did not parade flamboyant apparel. Think of the gray monks, associated with the Benedictines (and, if you are tired of etymology and need a really depressing thriller, reread Chekhov’s “The Black Monk”). To repeat, it is most unlikely that the Beguines were recognized mainly because they wore gray clothes.
The founder of the sisterhood of the Beguines was Lambert le Bègue. French still has the word bègue (être bègue “to stammer”). However, it is not known whether Lambert was a stammerer. The word might refer to an impediment of speech or be an ironic reference to an endless repetition (mumbling) of prayers. Not improbably, people invented the nickname Bègue in retrospect, to provide a link between the name and the order the man founded. Medieval nicknames are tricky, and their origin sometimes poses insurmountable difficulties. Even in the Middle Ages the name Beguine needed an explanation, and suggestions about its etymology did not go beyond intelligent guessing. References to the color and to stuttering, stammering, mumbling resemble exercises in folk etymology.
In my exposition, I am strongly influenced by a series of articles by Jozef van Mierlo, who wrote them between the mid-twenties and the mid-forties of the twentieth century. His conclusions were supported by Jozef Vercoullie, a distinguished historical linguist and the author of the first modern etymological dictionary of Dutch. The names of Van Mierlo and Vercoullie say nothing to non-specialists and little to anyone outside the circle of Germanic etymologists, except of course in the Netherlands, because both scholars wrote only in Dutch (at any rate, I have not seen anything by them in French or German).
Van Mierlo traced the word Beguine to Albigenses. This was not an original idea, but we should return to it because today, as in the past, few people share it. I am not going into a discussion of the Albigensian heresy. Suffice it to say that the sect was eventually crushed by the Albigensian Crusades (1209-1229). It should be borne in mind that all the events surrounding the origin of the word beggar happened in the thirteenth century, and we depend on records whose dating does not shed enough light on linguistic reconstruction. For example, if a word surfaced in texts in the 1210s, it does not mean that it was unknown several decades earlier.
In any case, with the destruction of the Albigenses, their name became a term of abuse. The loss of the first syllable in such long words is common, and there are no serious arguments against tracing Beguine to Albigen-. We need to discover the origin and spread of Beguine, to understand why it gave rise to beggar (if it did!). Presumably, bigen-, the stump of Albigen-, circulated widely as an indiscriminate term of abuse (and the more frequent a word, the greater the chance that it will shed syllables). It assumed various forms, and the similarity between Beghard and beggar is strong. But to make the derivation convincing, we should take note of an intermediate step. The (Al)bigenses stood for the most detested heretics. The Beguines and Beghards did not, but they too stayed outside the mainstream and were therefore often singled out for the opprobrium of the population. Religious or any other type of tolerance was not among the most conspicuous virtues of the Middle Ages.
The label derived from “Bigensians” developed in several directions. It could acquire the senses “hypocrite” and “parasite.” This is probably how the Beguines and Beghards became “beggars.” Curiously, even today we sometimes use the word beggar to express contempt, as in poor little beggar. If my story is credible, the events developed as follows. A word for a certain heresy broadened its sphere and began to express abhorrence unconnected with religion. That word was Albigenses, known well in France and the Netherlands, from where it spread to England. It lost its first syllable, and the stump began to serve as a vague term of abuse. Among other things, it yielded the French source of beggar, an English innovation. The connection between the religious order and beggar “mendicant” is real but indirect. Given this scenario, beg was a back formation from beggar, but here too the picture may be more complicated than it seems.
Image credit: Picture of a beguine woman, from Des dodes dantz, printed in Lübeck in 1489. Public domain via Wikimedia Commons.
The annual Academy Awards ceremony draws weeks of media attention, hours of live television coverage beginning with stars strolling down the red carpet, and around 40 million viewers nationwide on Oscar night. The Academy of Motion Picture Arts and Sciences relegates the awards for technical achievement to a separate ceremony a couple of weeks earlier, a sedate affair in a hotel ballroom rather than the spectacular setting of the Dolby Theatre. While this division between the arts and sciences is clear in awards season, the boundary has almost disappeared in the movies themselves, as computer-generated imagery and digital 3-D now occupy a prominent position in most major studio productions.
Academy Award for Toot, Whistle, Plunk and Boom at the Walt Disney Family Museum. Photo by Loren Javier. CC BY-ND 2.0 via Flickr.
For almost a century popular American cinema has been primarily a storytelling medium, with the motion picture sciences playing a secondary role, but the distinction between the popular arts of Hollywood and the engineering of Silicon Valley is blurring. The movie business is being incorporated into a TED world where technology and design are the cornerstones of most big-budget entertainment.
For the first three hours of Sunday’s broadcast, Alfonso Cuarón’s Gravity seemed to be soaring toward a Best Picture Oscar, a victory that would have marked a new stage in this transformation of the American movie industry. A tour de force of technological innovation, Gravity won a total of seven Academy Awards, including the bellwether prizes for Best Editing and Best Director, and the voters appeared on the verge of bestowing their top honor on one of the first films to utilize the full potential of 3-D, a film that creates an almost visceral, stomach-dropping sensation of weightlessness as the camera and bodies appear to bob and drift through space. At other times the camera hurtles forward and the storyline rushes us from one space vehicle to another, propelled by an accidental explosion or the blast of a strategically deployed fire extinguisher. In those moments the weakness of Gravity is as unmistakable as its technical prowess: its virtuoso, gravity-defying feats are accompanied by an almost absurdly insubstantial and implausible plot, even by the standards of Hollywood, where happy endings have been arriving on cue for decades and most cars seem to have a magical sixth gear that allows them to fly over rising drawbridges. The narrative seems almost like an afterthought in Gravity, a pretext to link together one floating space platform and the next and to celebrate cinematic technology in itself, untethering it from earthly concerns like the plot.
But the Academy voters obviously had a different narrative in mind when they submitted their ballots, and in keeping with a long tradition of last-minute plot twists, they managed to compose a far more heartening conclusion to the year in film. In your average year, the Academy Awards are, to borrow the title of one of this year’s Best Picture contenders, an “American hustle.” Every March, we anticipate the canonization of a new Citizen Kane or Vertigo, half-forgetting that these films, among the most revered American movies ever made, won a grand total of one Oscar (Herman Mankiewicz and Orson Welles, for the screenplay for Citizen Kane). Kane was nominated in nine categories and lost eight of them, and Hitchcock and the other makers of Vertigo left the Pantages Theater empty-handed in 1959.
The list of regrettable Academy Award decisions and omissions (for example, Hitchcock’s career-long snub in the Best Director category or the single statuette given to Stanley Kubrick in his lifetime, for visual effects in 2001) is at least as long as Oscar’s triumphs. While viewers tune in for the glitz, glamor, comedy, fashion, and, on occasion, a genuinely moving acceptance speech (or a train wreck taking place at the podium), the ceremony also promises to provide an annual assessment of the state of American cinema. The opulent spectacle arrives each year without fail, but the Academy almost habitually overlooks the truly vibrant pictures and artists working in the film industry in the United States. What does Oscar reward instead?
The recipients of the major awards are usually not the most lucrative blockbusters (which have already received their rewards at the box office), nor are they the kind of formally innovative and idiosyncratic pictures that enter the canon retrospectively. The films that tend to be overrated by the Academy are well-meaning films that appear to address an important social issue while discovering some heroes and reasons for hope in an otherwise trying situation (Slumdog Millionaire, Crash, and Million Dollar Baby, to name three of the last ten Best Picture winners). Films by recognized American auteurs like Martin Scorsese, the Coen brothers, or Kathryn Bigelow have also fared well (see, for example, The Departed in 2006, No Country for Old Men the following year, and The Hurt Locker in 2009), as have historical films that depict a triumph over hardship, with the formula for contemporary cinema—adversity, heroism, survival, and even a measure of vindication—retooled for use in the past. (See The King’s Speech in 2010 for the most recent example, but note also the run of five consecutive awards beginning in 1993 for Schindler’s List, Forrest Gump, Braveheart, The English Patient, and Titanic, which together established the historical film as one of the surest paths to the podium.) What matters at Oscar time is the appearance of importance and a willingness to return to historical tragedies or to glance at contemporary social ills.
Viewed in retrospect, the Academy Awards perform something of a bait and switch, as instead of recognizing the best films created in the previous year they provide a barometer of the social and historical problems that continue to haunt us, including (to focus on this year’s nominees) political corruption, the excesses of Wall Street, uneven development, slavery and racism, the AIDS crisis, and the persistence of homophobia. This year’s Best Picture nominees have been justly scrutinized precisely because they seem so intimately linked with the problems they address. Four of the nine nominees are based on actual events drawn from the very recent past, another (Philomena) recounts a true story spanning a 50-year period from the middle of the twentieth century to the present, and 12 Years a Slave retells the autobiography of Solomon Northup, a free African-American from New York who was kidnapped and sold into bondage in Louisiana. Add Gravity to this strong group of films, and oddsmakers were predicting the tightest contest in recent memory, with these many returns to history pitted against an immersive, high-tech cinematic experience of the future.
In The Wolf of Wall Street, Jordan Belfort, a real-life financial scam artist played by Leonardo DiCaprio, finds himself unable to drive home after an overdose of Quaaludes leaves him prostrate on the front steps of his country club. Summoning all his strength, he manages to slither across the driveway, hoist himself into his gull-winged sports car, and steer through a series of obstacles unscathed. Or at least that’s how the events unfold the first time, in what appears to be Jordan’s experience of reality. Immediately after that sequence, we see the police arrive, and Scorsese presents us with a revisionist version, with wrecked cars and flattened signposts left in his wake. Hollywood’s approach to the past often resembles the first, more delusional of these scenes, with the heroic figure emerging triumphant from history.
In 12 Years a Slave the historical devastation caused by slavery is more frightening because the damage is all-pervasive, because nothing is left uncorrupted by a system that frames every interaction through the lens of property. Screenwriter John Ridley and director Steve McQueen had the courage to let Solomon Northup’s story remain largely unchanged from the original autobiography and to frame the most searing images in the simplest, most direct way, as in the agonizingly long take where a near lynching unfolds almost in slow motion. And in the best tradition of classical Hollywood cinema, McQueen manages to combine a compelling narrative with a series of subtle character portraits, as Northup travels through a looking glass from his prior existence as an accomplished musician and family man in New York to what seems like an alternative universe, where survival depends on the stripping away of those markers of identity and humanity. Rather than present slavery as an incomprehensible evil from another time, the film also chronicles the everyday rationalizations that allow the master to accept depravity as a way of life and the foundation of an economic order.
In most years the Oscars ceremony performs this bait and switch: we await the announcement of the year’s best films and hear the name of a soon-to-be-forgotten one. But the Academy Awards also remind us why we continue to care about movies and ascribe to them a social significance and power out of all proportion to the relatively modest ambitions of even the Best Picture nominees, let alone the more standard studio fare. The Oscars are an advertisement for the potential of cinema to engage with traumatic historical and contemporary realities, even if we usually have to look elsewhere for the films that address those issues in all of their complexity. 12 Years a Slave, one of the few masterpieces also to win the award for Best Picture, reminds us that sometimes those films can come straight from Hollywood.
Nationalist, conservative, and anti-immigration parties and political movements have risen or grown stronger all over Europe in the aftermath of the EU’s financial crisis and its alleged solution, the politics of austerity. This development has been similar in countries like Greece, Portugal, and Spain, where radical cuts to public services such as social security and health care have been implemented as a precondition for the bailout loans arranged by the European Central Bank and the International Monetary Fund, and in countries such as Finland, France, and the Netherlands, which have contributed to the bailout while struggling with the crisis themselves. Together, the downturn initiated by the crisis and its management through austerity politics have created an enormous potential for discontent, despair, and anger among Europeans. These collective emotions have fueled protests against the governments held responsible for unpopular decisions.
Protests in Greece after austerity cuts in 2008
However, the financial crisis alone cannot fully explain these developments, since they have also gained momentum in countries like Britain, Denmark, Norway, and Sweden that do not belong to the Eurozone and have not directly participated in the bailout programs. Another unresolved question is why protest is channeled (once again) through the political right rather than the left, which had benefited from popular dissatisfaction over previous decades. And how is it that political debate across Europe makes increasing use of stereotypes and populist arguments, fueling nationalist resentments?
A protester with Occupy Wall Street
One way to look at these issues is through the complex affective processes that intertwine with personal and collective identities as well as with fundamental social change. A particularly obvious building block consists of fear and insecurity regarding environmental, economic, cultural, or social changes. At the collective level, both are constructed and shaped in discourse, with political parties and various interest groups strategically stirring the emotions of millions of citizens. At the individual level, insecurities manifest themselves as fear of not being able to live up to salient social identities and their inherent values, many of which originate from more secure and affluent times, and as shame about this anticipated or actual inability, especially in competitive market societies where responsibility for success and failure is attributed primarily to the individual. Under these conditions, many people tend to distance themselves emotionally from the social identities that inflict shame and other negative feelings, seeking meaning and self-esteem instead in those aspects of identity perceived to be stable and immune to transformation, such as nationality, ethnicity, religion, language, and traditional gender roles, many of which are emphasized by populist and nationalist parties.
The urgent need to better understand the various kinds of collective emotions and their psychological and social repercussions is evident not only from the European crisis and the re-emergence of nationalist movements throughout Europe. Across the globe, collective emotions have been at the center of major social movements and political transformations, with Occupy Wall Street and the Arab Spring being just two further vivid examples. Unfortunately, our knowledge of the collective emotional processes underlying these developments is still sparse, in part because the social and behavioral sciences have only recently begun to address collective emotions systematically in both individual and social terms. The relevance of collective emotions in recent political developments, both in Europe and around the globe, suggests that it is time to extend the “emotional turn” of the sciences to these affective phenomena as well.
Christian von Scheve is Assistant Professor of Sociology at Freie Universität Berlin, where he heads the Research Area Sociology of Emotion at the Institute of Sociology. Mikko Salmela is an Academy Research Fellow at the Helsinki Collegium for Advanced Studies and a member of Finnish Center of Excellence in the Philosophy of Social Sciences. Together they are the authors of Collective Emotions published by Oxford University Press.
Since their introduction in the United States in the 1940s, artificial fluoridation programmes have been credited with reducing tooth decay, particularly in deprived areas. They are acknowledged by the US Centers for Disease Control and Prevention as one of the ten great public health achievements of the 20th century (alongside vaccination and the recognition of tobacco use as a health hazard). Such plaudits, however, have only fuelled what is an extremely polarised ‘water fight’. Those opposed to artificial fluoridation continue to claim it causes a range of health conditions and diseases, such as reduced IQ in children, reduced thyroid function, and increased risk of bone cancer. Regardless of the controversy, the one thing that everyone agrees upon is that little or no high-quality research is available to confirm or refute any public concerns. The York systematic review of water fluoridation previously highlighted the weakness of the evidence base by acknowledging that the quality of the research included in the review was low to moderate.
Fluoride changes the structure of tooth enamel, making it more resistant to acid attack, and can reduce the incidence of tooth decay. This is why it is added to drinking water as part of artificial fluoridation programmes. The aim is to dose naturally occurring fluoride up to a level that provides optimum benefit for the prevention of dental caries. The optimum range can depend on temperature but falls within 0.7-1.2 parts per million (ppm) for Great Britain. Levels lower than 0.7ppm are considered to provide little or no benefit. Drinking water standards are set so that the level of fluoride must not exceed 1.5ppm, in accordance with national regulations that derive directly from EU law.
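To make these thresholds concrete, here is a minimal sketch in Python that classifies a measured concentration against the figures quoted above; the band labels and function name are shorthand invented for illustration, not regulatory terminology.

```python
# Thresholds quoted in the text (all values in ppm)
OPTIMUM_LOW = 0.7   # below this: little or no dental benefit
OPTIMUM_HIGH = 1.2  # upper end of the optimum range for Great Britain
LEGAL_MAX = 1.5     # drinking water standard derived from EU law

def classify_fluoride(ppm: float) -> str:
    """Place a measured fluoride level into one of four bands."""
    if ppm > LEGAL_MAX:
        return "exceeds legal limit"
    if ppm > OPTIMUM_HIGH:
        return "above optimum, within legal limit"
    if ppm >= OPTIMUM_LOW:
        return "within optimum range"
    return "sub-optimal: little or no benefit"

print(classify_fluoride(1.0))  # within optimum range
print(classify_fluoride(0.4))  # sub-optimal: little or no benefit
```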
Severn Trent Water, Northumbrian Water, South Staffordshire Water, United Utilities, and Anglian Water are the only water companies in Great Britain that artificially fluoridate their water supply to a target level of 1 ppm. The legal agreements to fluoridate currently sit with the Secretary of State, acting through Public Health England, although local authorities are the ultimate decision makers when it comes to establishing, maintaining, adjusting, or terminating artificial fluoridation programmes. Because fluoridation is a programme dedicated to improving oral health, all of the associated costs come from the public health budget. It is therefore important to know that the money is being spent in the most effective way.
Our study has, for the first time, enabled an in-depth examination of the relationship between fluoride levels in drinking water and the incidence of two of the most common types of bone cancer found in children and young adults, osteosarcoma and Ewing sarcoma, across the whole of Great Britain. We combined case data from population-based cancer registries, fluoride monitoring data from water companies, and census data within a computerised geographic information system, enabling us to carry out sophisticated geo-statistical analyses.
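The study’s actual pipeline is not reproduced here, but a minimal, hypothetical sketch of this kind of spatial linkage, using the Python geopandas library with invented file and column names, might look as follows.

```python
# Hypothetical sketch: link case records (points) to water supply zones
# (polygons). File and column names are invented for illustration only.
import geopandas as gpd

cases = gpd.read_file("cancer_cases.geojson")        # case_id, diagnosis, geometry
zones = gpd.read_file("water_supply_zones.geojson")  # zone_id, mean_fluoride_ppm, geometry

# Put both layers in the same coordinate reference system before joining
cases = cases.to_crs(zones.crs)

# Spatial join: attach each case to the zone whose polygon contains it
cases_in_zones = gpd.sjoin(cases, zones, how="inner", predicate="within")

# Case counts per zone, ready to combine with census denominators for rates
counts = cases_in_zones.groupby("zone_id").size().rename("n_cases")
print(counts.sort_values(ascending=False).head())
```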
The study found no evidence of an association between fluoride in drinking water and osteosarcoma or Ewing sarcoma. The study also found no evidence that those who lived in an area of Great Britain with artificially fluoridated drinking water, or who were supplied with drinking water containing naturally occurring fluoride at a level within the optimal range, were at an increased risk of osteosarcoma or Ewing sarcoma.
It is important to note that finding no evidence of an association between the geographical occurrence of osteosarcoma or Ewing sarcoma and fluoride levels in drinking water does not necessarily mean there is no association. Indeed, intake of fluids and food products that contain fluoride will not be the same for everyone, and not taking this variation into consideration is one of the limitations of our study. Nevertheless, the methodologies we have developed could be used in the future to examine fluoride exposure over time and to take other risk factors into consideration at an individual level. Such an approach could help the controversy surrounding artificial fluoridation ebb rather than flow.
Another important, although unexpected, finding arose from our use of fluoride monitoring data. We found that the fluoridation levels of approximately one third of the artificially fluoridated water supply zones were below 0.7ppm (the minimum of the optimum range). This finding reinforces that it is incorrect to assume an artificially fluoridated area is dosed up to 1ppm; in reality, it may be a lot less. A number of previous studies have mistakenly made this assumption, making their conclusions unreliable. Our study shows that you cannot guarantee that fluoride levels in all artificially fluoridated water supply zones are close to the target level of 1ppm. Assuming that water fluoridation is a safe practice and that the evidence behind the recommended dosage is reliable, this finding has economic implications for public health. If public money is paying for artificial fluoridation, shouldn’t the water supply zones be dosed up to a level that will provide the greatest benefit? If they aren’t, could it be that public money is merely being thrown down the drain?
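The arithmetic behind that headline figure is simple to reproduce; a toy version with invented readings (not the study’s monitoring data) might look like this.

```python
# Invented per-zone mean fluoride readings (ppm), for illustration only
readings = [1.02, 0.55, 0.91, 0.63, 0.98, 0.47, 1.05, 0.72, 0.58]

below_minimum = [r for r in readings if r < 0.7]  # under the 0.7 ppm minimum
share = len(below_minimum) / len(readings)
print(f"{len(below_minimum)} of {len(readings)} zones ({share:.0%}) fall below 0.7 ppm")
```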
The International Journal of Epidemiology is an essential requirement for anyone who needs to keep up to date with epidemiological advances and new developments throughout the world. It encourages communication among those engaged in the research, teaching, and application of epidemiology of both communicable and non-communicable disease, including research into health services and medical care.