Viewing Blog: OUPblog, Most Recent at Top
Introducing brilliant authors to the blogosphere. The official blog of Oxford University Press.
1. Bimonthly etymology gleanings for August and September 2014. Part 1

I was out of town at the end of this past August and have a sizable backlog of unanswered questions and comments. It may take me two or even three weeks to catch up with them. I am not complaining: on the contrary, I am delighted to have correspondents from Sweden to Taiwan. Today I will deal with the questions only about the two most recent posts.

Kiss

Our regular correspondent Mr. John Larsson took issue with my remark that kiss has nothing to do with chew and cited some arguments in favor of the chew connection. We should distinguish between the “institute of kissing” and the word for the action. As could be expected, no one knows when people invented kissing, but, according to one theory, everything began with mothers chewing their food and passing it on to their babies from mouth to mouth. I am not an anthropologist and can have no opinion about such matters. But the oldest form of the Germanic verb for “chew” must have sounded approximately like German kauen (initial t in Old Norse tyggja is hardly original). The distance between kauen and kussjan cannot be bridged.

Also from Scandinavia, Mr. Christer Wallenborg informs me that in Sweden two words compete: kyssa is a general term for kissing, while for informal purposes pussa is used. I know this and will now say more about the verbs used for kissing in the Germanic-speaking world. Last time I did not travel farther than the Netherlands (except for mentioning the extinct Goths). My survey comes from an article by the distinguished philologist Theodor Siebs (1862-1941). It was published in the journal of the society for the promotion of Silesian popular lore (Mitteilungen der Schlesischen Gesellschaft für Volkskunde) for 1903. Modern dialect atlases may contain more synonyms.

Below I will list only some of the words and phrases, without specifying the regions. Germany: küssen, piepen, snüttern (long ü), -snudeln (long u), slabben, flabben, smacken, smukken, smatschen, muschen, bussen, bütsen, pützschen, pupen (some of these words are colloquial, some verge on the vulgar). Many words for “kiss” (both the verb and the noun) go back to Mund and Maul “mouth,” for example, mundsen, mul ~ mull, müll, mill, and the like. Mäulchen “little mouth” is not uncommon for “a kiss,” and Goethe, who was born in Frankfurt, used it. With regard to their sound shape, most verbs resemble Engl. puss, pipe, smack, flap, and slap.

Friesland (Siebs was an outstanding specialist in the modern dialects and history of Frisian): æpke (æ has the value of German ä) ~ apki, make ~ mæke, klebi, totje, kükken, and a few others, borrowed from German and Dutch. Dutch: zoenen, poenen (both mentioned in my previous blog on kiss), kussen, kissen, smokken, smakken, piper geven, and tysje.

Siebs became aware of Nyrop’s book (see again my previous blog on kiss about it) after his own work had been almost completed and succeeded in obtaining a copy of it only because Nyrop sent him one. He soon realized that his predecessor had covered a good deal of the material he had been collecting, but Nyrop’s book did not make Siebs’s 19-page article redundant, because Nyrop’s focus was on the situations in which people kiss (a friendly kiss, a kiss of peace, an erotic kiss, etc.), while Siebs dealt with the linguistic aspect of his data. It appeared that kiss usually goes back to the words for the mouth and lips; for something sweet (German gib mir ’nen Süssen “give me a sweet [thing]”); for love (so in Greek, in Slavic, and in Old Icelandic minnask, literally “to love one another”), and for embracing (as in French embrasser). Some words for kissing are onomatopoeic, and some developed from various metaphors or expanded their original sense (I mentioned the case of Russian: from “be whole” to “kiss”; Nyrop cited several similar examples). We can see that chewing has not turned up in this small catalog.

Tristram and Isolde by John William Waterhouse, 1916. Public domain via WikiArt

Siebs also ventured an etymology of kiss and included this word in his first group. In his opinion, Gothic kukjan “to kiss” retained the original form of Old Engl. kyssan, Old Norse kyssa, and their cognates. In Old Frisian, kokk seems to have meant “speaker” and “mouth” and may thus be related to Old Icelandic kok “throat.” Siebs went on to explain how the protoform guttús yielded kyssan. Specialists know this reconstruction, but everything in it is so uncertain that the origin of kiss cannot be considered solved.

In the picture, chosen to illustrate this post, you will see the moment when Tristan and Isolde drink the fateful love potion. Two quotations from Gottfried’s poem in A. T. Hatto’s translation will serve us well: “He kissed her and she kissed him, lovingly and tenderly. Here was a blissful beginning for Love’s remedy: each poured and quaffed the sweetness that welled up from their hearts” (p. 200), and “One kiss from one’s darling’s lips that comes stealing from the depths of her heart—how it banished love’s cares!” (p. 204).

The color brown and brown animals

The protoform of beaver must have been bhebrús or bhibhrús. This looks like an old formation because it has reduplication (bh-bh) and is a -u stem. The form does not contain the combination bher-bher “carry-carry.” Beavers are famous for building dams rather than for carrying logs from place to place. Francis A. Wood, apparently the only scholar who offered an etymology of beaver different from the current one, connected the word with the Indo-European root bheruo- ~ bhreu- “press, gnaw, cut,” as in Sanskrit bhárvati “to gnaw; chew” (note our fixation on chewing in this post!). His idea has been ignored, rather than refuted (a usual case in etymological studies). Be that as it may, “brown” underlies many names of animals (earlier I mentioned the bear and the toad; I still think that the brown etymology of the bear is the best there is) and plants. Among the plants are, most probably, the Slavic name of the mountain ash (rowan tree) and the Scandinavian name of the partridge.

American Beaver by John James Audubon, 1844. Public domain via WikiArt.

And of course I am fully aware of the trouble with the Greek word for “toad.” I have read multiple works by Dutch scholars that purport to show how many Dutch and English words go back to the substrate (the enigmatic initial a, nontraditional ablaut, and so forth). It is hard for me to imagine that in prehistoric times the bird ouzel (German Amsel), the lark, the toad, and many other extremely common creatures retained their indigenous names. According to this interpretation, the invading Indo-Europeans seem to have arrived from places almost devoid of animal life and vegetation. It is easier to imagine all kinds of “derailments” (Entgleisungen) in the spirit of Noreen and Levitsky than this scenario. Words for “toad” and “frog” are subject to taboo all over the world (some references can be found in the entry toad in my dictionary), which further complicates a search for their etymology. But this is no place to engage in a serious discussion on the pre-Indo-European substrate. I said what I could on the subject in my review of Dirk Boutkan’s etymological dictionary of Frisian. Professor Beekes wrote a brief comment on my review.

Anticlimax: English grammar (Mr. Twitter, a comedian)

I once commented on the abuse of as clauses unconnected with the rest of the sentence. These quasi-absolute constructions often sound silly. In a letter to a newspaper, a woman defends the use of Twitter: “As someone who aspires to go into comedy, Twitter is an incredible creative outlet.” Beware of unconscious humor: the conjunction as is not a synonym of the preposition for.

The post Bimonthly etymology gleanings for August and September 2014. Part 1 appeared first on OUPblog.

0 Comments on Bimonthly etymology gleanings for August and September 2014. Part 1 as of 10/1/2014 9:09:00 AM
2. Can Cameron capture women’s votes?

After the Scottish Independence Referendum, the journalist Cathy Newman wrote of the irony that Cameron – the man with the much-reported ‘problem’ with women – in part owes his job to the female electorate in Scotland. As John Curtice’s post-referendum analysis points out, women seemed more reluctant than men to vote ‘yes’ due to relatively greater pessimism about the economic consequences of a yes vote.

The Scottish vote should remind Cameron and the Conservative strategists who advise him of a very clear message: ignore women voters at your peril.

For several decades after UK women won the right to vote, Conservatives could rely on women’s votes, and the gender gap in voting was consistently in double figures. However, in recent decades this gap has diminished, particularly amongst younger women, and party competition to mobilize female voters has become more important. Of course women voters have many diverse interests, but understanding the concerns of different groups of women voters is crucial, as female voters often make their voting decisions closer to the election.

So what does Cameron need to do to firmly secure women’s votes at the general election? We argue the Conservative Party needs to make sure it represents women descriptively, substantively, and symbolically. On all three counts we see problems with Cameron’s strategy to win women’s votes.

Pre-election rhetoric and pledges to feminise the party through women’s descriptive representation have not been matched with clear and tangible outcomes. Cameron tried to increase the number of women MPs, but the share of women among Conservative MPs in the House of Commons is still just 16%. As the latest Sex and Power Report highlights, this looks unlikely to increase significantly in GE2015, as so few women have been selected to stand in safe Conservative seats despite the campaigning and support work undertaken by Women2Win.

Prime Minister David Cameron talks about the future of the United Kingdom following the Scottish Referendum result. Photographer: Arron Hoare. Photo: Crown copyright via Number 10 Flickr.

Even where Cameron has strong power and autonomy to improve women’s presence – by fulfilling his pledge that one-third of his government would be women by the end of the parliament – he has managed just 22%. Last July’s reshuffle did not erase the impression that women are not included at Cameron’s top table.

Without enough women representatives in Parliament and in Government to advise on policy proposals in development, there have been many problematic policy initiatives, such as the disastrous proposal to raise child care ratios. The Government’s approach to addressing public debt through austerity has been detrimental to women by reducing incomes, public services, and jobs; even female Conservative supporters are more likely to express concerns about these effects.

Cameron’s Conservatives in government also lack the institutional capacity to get policies right for women. There are still not enough women in strategically significant places: the Coalition ‘Quad’ of Cameron, Osborne, Clegg, and Alexander, for example, controls policy-making. The gender equality machinery set up by the last government to monitor and address gender inequality in a strategic and long-term way has been stripped out. Even at the emergency post-referendum meeting at Chequers to discuss the UK’s constitutional future there was just one woman at the table.

Although the gender gap in voting, which currently favours Labour, is likely to narrow as the election approaches, the Conservatives have, we argue, inflicted significant psephological damage on themselves in their strategies to attract women’s votes: by not promoting women into politics, by not protecting women from austerity, and by stripping out the governmental institutions which give voice to women and promote gender equality.

Cameron’s political face may have been saved by Scottish women last month, but for the reasons outlined in this blog post, we suggest that in the critical contest for women’s votes at the 2015 general election there are long-standing weaknesses in the Conservative Party’s strategy for mobilising women’s votes and restoring the Party’s historical dominance among women voters.


3. On the importance of independent, objective policy analysis

I have written about the dangers of making economic policy on the basis of ideology rather than cold, hard economic analysis. Ideologically-based economic policy has laid the groundwork for many of the worst economic disasters of the last 200 years.

  • The decision to abandon the first and second central banks in the United States in the early 19th century led to chronic financial instability for much of the next three quarters of a century.
  • Britain’s re-establishment of the gold standard in 1925, which encouraged other countries to do likewise, contributed to the spread and intensification of the Great Depression.
  • Europe’s decision to adopt the euro, despite the fact that economic theory and history suggested that it was a mistake, contributed to the European sovereign debt crisis.
  • President George W. Bush’s decision to cut taxes three times during his first term while embarking on substantial spending connected to the wars in Afghanistan and Iraq was an important driver of the macroeconomic boom-bust cycle that led to the subprime crisis.

In each of these four cases, a policy was adopted for primarily ideological, rather than economic reasons. In each case, prominent thinkers and policy makers argued forcefully against adoption, but were ignored. In each case, the consequences of the policy were severe.

So how do we avoid excessively ideological economic policy?

One way is by making sure that policy-makers are exposed to a wide range of opinions during their deliberations. This method has been taken on board by a number of central banks, where many important officials are either foreign-born or have considerable policy experience outside of their home institution and/or country. Mark Carney, a Canadian who formerly ran that country’s central bank, is the first non-British governor of the Bank of England in its 320-year history. Stanley Fischer, who was born in southern Africa and has been governor of the Bank of Israel, is now the vice chairman of the US Federal Reserve. The widely respected governor of the Central Bank of Ireland, Patrick Honohan, spent nearly a decade at the World Bank in Washington, DC. One of Honohan’s deputies is a Swede with experience at the Hong Kong Monetary Authority; the other is a Frenchman.

Money cut in pieces, by Tax Credits (TaxCredits.net). CC-BY-2.0 via Flickr.

But isn’t it unreasonable to expect politicians to come to the policy-making process without any ideological bent whatsoever? After all, don’t citizens deserve to see a grand contest of ideas between those who propose higher taxes and greater public spending and those who argue for less of both?

In fact, we do expect, and want, our politicians to come to the table with differing views. Nonetheless, politicians often support their arguments with unfounded assertions that their policies will lead to widespread prosperity, while those of their adversaries will lead to doom. The public needs to be able to subject those competing claims to cold, hard economic analysis.

Fortunately, the United States and a growing number of other countries have established institutions that are mandated to provide high quality, professional, non-partisan economic analysis. Typically, these institutions are tasked with forecasting the budgetary effects of legislation, making it difficult for one side or the other to tout the economic benefits of their favorite policies without subjecting them to a reality check by a disinterested party.

In the United States, this job is undertaken by the Congressional Budget Office (CBO) which offers well-regarded forecasts of the budgetary effects of legislation under consideration by Congress. [Disclaimer: The current director of the CBO is a graduate school classmate of mine.]

The CBO is not always the most popular agency in Washington. When the CBO calculates that the cost of a congressman’s pet project is excessive, that congressman can be counted on to take the agency to task in the most public manner possible.

According to the New York Times, the CBO’s “…analyses of the Clinton-era legislation were so unpopular among Democrats that [then-CBO Director Robert Reischauer] was referred to as the ‘skunk at the garden party.’ It has since become a budget office tradition for the new director to be presented with a stuffed toy skunk.”

For the most part, however, congressional leaders from both sides of the aisle hold the CBO and its work in high regard, as do observers of the economic scene from the government, academia, journalism, and the private sector.

The CBO, founded in 1974, is one of the oldest of such agencies, predated only by the Netherlands Bureau for Economic Policy Analysis (1945) and the Danish Economic Council (1962). More recent additions to the growing ranks of these agencies include Australia’s Parliamentary Budget Office (2012), Canada’s Parliamentary Budget Officer (2006), South Korea’s National Assembly Budget Office (2003), and the UK’s Office for Budget Responsibility (2010).

These organizations each have their own institutional history and slightly different responsibilities. For the most part, however, they are constituted to be non-partisan, independent agencies of the legislative branch of government. We should be grateful for their existence.


4. Setting the scene of New Orleans during Reconstruction

The Reconstruction era was a critical moment in the history of American race relations. Though Abraham Lincoln’s Emancipation Proclamation made great strides towards equality, the aftermath was a not-quite-integrated society, greatly conflicted and rife with racial tension. At the height of Radical Reconstruction, in June 1870, seventeen-month-old Irish-American Mollie Digby was kidnapped from her home in New Orleans — allegedly by two Afro-Creole women. In The Great New Orleans Kidnapping Case: Race, Law, and Justice in the Reconstruction Era, Michael A. Ross offers the first full account of this historic event and the subsequent investigation that electrified the South. The following images set the scene of New Orleans during this time period of racial amalgamation, social friction, and tremendous unease.

Featured image: The City of New Orleans, Louisiana, Harper’s Weekly, May 1862. Public Domain via Wikimedia Commons.


5. Austin City Limits through the years

Austin City Limits is the longest-running musical showcase in the history of television, spanning over four decades and showcasing the talents of musicians from Willie Nelson and Ray Charles to Arcade Fire and Eminem. The show is a testament to the evolution of media and popular music, to the audience’s relationship to that music, and to the city of Austin, Texas. In Austin City Limits: A History, author Tracey E. W. Laird takes us behind the scenes with interviews, anecdotes, and personal photographs to pay homage to this landmark festival. In doing so, she also illuminates the overarching discussion of US public media and its influence on the broadcasting and funding of music and culture. This year, the festival celebrates its 40th anniversary with guests such as Bonnie Raitt, Jimmie Vaughan, Sheryl Crow, and Alabama Shakes, in a special that will air on PBS on Oct. 3 at 9pm ET.

Featured image: Night view of Austin skyline and Lady Bird Lake as seen from Lou Neff Point. Photo by LoneStarMike. CC BY 3.0 via Wikimedia Commons.


6. Keeping caffeinated for International Coffee Day

Of all the beverages favored by Oxford University Press staff, coffee may be the lifeblood of our organization. From the coffee bar in the Fairway of our Oxford office to the coffee pots on every floor of the New York office, we’re wired for work. Here’s a brief gallery of employees with their preferred roast — grabbed from a street cart, made to order, or part of an elaborate weekly routine.


Oxford staff are ready for their next meeting!


“My cappuccino from the OUP coffee bar in my Shakespeare insults mug – so I can fire creative insults and keep caffeinated at the same time…you canker-blossom!”
Hannah Charters, Senior Marketing Executive, Online Product Marketing


“An Americano from the OUP espresso bar. The mug shows the mantra I like living by!”
Rachel Fenwick, Associate Marketing Manager, Online Product Marketing


“Tall Pike in a Grande cup from Starbucks”
Jennifer Bernard, Assistant Online Marketing Manager


“A Vacuum Pot Coffee at Edison Food and Drink Lab, Tampa Florida”
Erin Rabbit, Designer, Creative Services, Marketing


“Grabbing coffee with a friend is one of my favorite pastimes. Good conversation over an even better coffee is the best! I’m a huge fan of locally owned coffee shops, so I always find myself recommending the Stumptown Coffee on E 8th downtown. I splurge and get the largest latte I can—iced or hot, depending on the season. The flavor is so strong! It’s a kick in the face. Otherwise, my typical go to is a cup, (or 2…or 3) of any flavored Keurig coffee in the OUP office. No match to Stumptown, but it does the job. I grew up in the south, so I like my coffee southern-style—lots of sugar and cream. Props to Mom and Dad for the sweet mug!”
Ryan Cury, Assistant Marketing Manager


“Always opt for an espresso mid-afternoon for two equally important reasons. Firstly, it provides the boost I need to conquer the remains of the day and, secondly, it makes me feel like a giant when drinking.”
Dan Parker, Social Media Marketing Executive


“I enjoy a standard Americano with cold milk. Because: I can’t be done with faffy coffee.”
Kirsty Doole, Publicity Manager


“Mine was a salted caramel mochaccino.”
Simon Thomas, Content Marketing Executive, Dictionaries


“Mine was a lovely frothy milky latte – filling and delicious!”
Kate Farquhar-Thomson, Publicity Director


“Decaf filter coffee, for those times when you think three coffees in three hours might be too much.”
Nicola Burton, Publicity Manager


“Put that pungent brew to your lips and feel the satisfaction.”
Sam Blum, Publicity Assistant and member of the Fresh Pots Society


7. CERN: glorious past, exciting future

Sixty years ago today, the visionary convention establishing the European Organization for Nuclear Research – better known by its French acronym, CERN – entered into force, marking the beginning of an extraordinary scientific adventure that has profoundly changed science, technology, and society, and that is still far from over.

Like other pan-European institutions established in the late 1940s and early 1950s — such as the Council of Europe and the European Coal and Steel Community — CERN had the same founding goal: to coordinate the efforts of European countries after the devastating losses and large-scale destruction of World War II. Europe had in particular lost its scientific and intellectual leadership, and many scientists had fled to other countries. The time had come for European researchers to join forces towards creating a world-leading laboratory for fundamental science.

Sixty years after its foundation, CERN is today the largest scientific laboratory in the world, with more than 2000 staff members and many more temporary visitors and fellows. It hosts the most powerful particle accelerator ever built. It also hosts exhibitions, lectures, shows, meetings, and debates, providing a forum of discussion where science meets industry and society.

What has happened in these six decades of scientific research? As a physicist, I should probably first mention the many ground-breaking discoveries in Particle Physics, such as the discovery of some of the most fundamental building blocks of matter, like the W and Z bosons in 1983; the measurement of the number of neutrino families at LEP in 1989; and of course the recent discovery of the Higgs boson in 2012, which earned Peter Higgs and Francois Englert the Nobel Prize in Physics in 2013.

But looking back at the glorious history of this laboratory, much more comes to mind: the development of technologies that found medical applications such as PET scans; computer science advances such as globally distributed computing, used in fields ranging from genetic mapping to economic modeling; and the World Wide Web, which was developed at CERN as a network to connect universities and research laboratories.

“CERN Control Center (2)” by Martin Dougiamas – Flickr: CERN control center. Licensed under CC BY 2.0 via Wikimedia Commons.

If you’ve ever asked yourself what such a laboratory may look like, especially if you plan to visit it in the future and expect to see buildings with a distinctive sleek, high-tech look, let me warn you that the first impression may be slightly disappointing. When I first visited CERN, I couldn’t help noticing the old buildings, dusty corridors, and the overall rather grimy look of the section hosting the theory institute. But it was when an elevator brought me down to visit the accelerator that I realized what was actually happening there, as I witnessed the colossal size of the detectors and the incredible degree of sophistication of the technology used. ATLAS, for instance, is a detector 25 meters high, 25 meters wide, and 45 meters long, and it weighs about 7,000 tons!

The 27-km long Large Hadron Collider is currently shut down for planned upgrades. When new beams of protons are circulated in it at the end of 2014, they will have almost twice the energy reached in the previous run. There will be about 2,800 bunches of protons in its orbit, each containing several hundred billion protons, separated by 250 billionths of a second (as in a car race, the distance between bunches can be expressed in units of time). The energy of each proton will be comparable to that of a flying mosquito, but concentrated in a single elementary particle. And the energy of an entire bunch of protons will be comparable to that of a medium-sized car launched at highway speed.
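Those comparisons are easy to sanity-check with a little arithmetic. The sketch below is only an order-of-magnitude check: the 7 TeV proton energy, the 1.15×10^11 protons per bunch, and the mosquito and car figures are illustrative assumptions, not official LHC parameters.

```python
# Order-of-magnitude check of the proton/mosquito and bunch/car comparisons.
# All inputs are rough, assumed values chosen for illustration.

E_PROTON_TEV = 7.0           # assumed energy per proton, TeV
TEV_TO_JOULES = 1.602e-7     # 1 TeV expressed in joules
PROTONS_PER_BUNCH = 1.15e11  # "several hundred billion" protons per bunch

e_proton = E_PROTON_TEV * TEV_TO_JOULES    # kinetic energy of one proton, J
e_bunch = e_proton * PROTONS_PER_BUNCH     # kinetic energy of one bunch, J

# A flying mosquito: roughly 2.5 mg moving at 0.5 m/s
e_mosquito = 0.5 * 2.5e-6 * 0.5**2

# A medium-sized car at highway speed: roughly 1500 kg at 30 m/s (~110 km/h)
e_car = 0.5 * 1500 * 30**2

print(f"one proton: {e_proton:.2e} J   (mosquito: {e_mosquito:.1e} J)")
print(f"one bunch:  {e_bunch:.2e} J   (car:      {e_car:.1e} J)")
```

Both ratios come out within an order of magnitude of one, which is all the comparisons in the text claim.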

Why these high energies? Einstein’s E=mc2 tells us that energy can be converted to mass, so by colliding two protons with very high energy, we can in principle produce very heavy particles, possibly new particles that we have never observed before. You may wonder why we would expect such new particles to exist. After all, we have already successfully created Higgs bosons through very high-energy collisions; what can we expect to find beyond that? Well, that’s where the story becomes exciting.
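A back-of-the-envelope version of that E=mc2 argument shows just how much mass the collision energy could, in principle, create. The 14 TeV figure below assumes two 7 TeV protons colliding head on; the constants are standard textbook values.

```python
# Back-of-the-envelope E = m c^2: how many proton rest masses' worth of
# energy does one high-energy collision make available?

C = 2.998e8                  # speed of light, m/s
M_PROTON = 1.673e-27         # proton rest mass, kg
TEV_TO_JOULES = 1.602e-7     # 1 TeV expressed in joules

e_rest = M_PROTON * C**2             # rest-mass energy of one proton, J
e_collision = 14 * TEV_TO_JOULES     # assumed total collision energy, J

n_proton_masses = e_collision / e_rest
print(f"~{n_proton_masses:,.0f} proton rest masses available per collision")
```

Roughly fifteen thousand proton masses' worth of energy per collision: that is why particles far heavier than anything previously observed are, at least energetically, within reach.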

Some of the best-motivated theories currently under scrutiny in the scientific community – such as Supersymmetry – predict not only that new particles should exist, but that they could explain one of the greatest mysteries in Cosmology: the presence of large amounts of unseen matter in the Universe, known as Dark Matter, which seems to dominate the dynamics of all structures in the Universe, including our own Milky Way galaxy.

Identifying in our accelerators the substance that permeates the Universe and shapes its structure would represent an important step forward in our quest to understand the Cosmos, and our place in it. CERN, 60 years old and still going strong, is rising to the challenge.

Headline image credit: An example of simulated data modeled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Image by Lucas Taylor, CERN. CC BY-SA 3.0 via Wikimedia Commons.


8. Celebrating 60 years of CERN

2014 marks not just the centenary of the start of World War I and the 75th anniversary of the start of World War II: on 29 September it is also 60 years since the establishment of CERN, the European Centre for Nuclear Research or, in its modern form, Particle Physics. Less than a decade after European nations had been fighting one another in a terrible war, 12 of those nations had united in science. Today, CERN is a world laboratory, famed for having been the home of the World Wide Web, brainchild of then-CERN scientist Tim Berners-Lee; of several Nobel Prizes for physics, although not (yet) for Peace; and most recently, of the discovery of the Higgs boson. The origin of CERN, and its political significance, are perhaps no less remarkable than its justly celebrated status as the greatest laboratory of scientific endeavour in history.

Its life has spanned a remarkable period in scientific culture. The paradigm shifts in our understanding of the fundamental particles and the forces that control the cosmos, which have occurred since 1950, are in no small measure thanks to CERN.

In 1954, the hoped-for simplicity in matter, where the electron and neutrino partner the neutron and proton, had been lost. Novel relatives of the proton were proliferating. Then, exactly 50 years ago, the theoretical concept of the quark was born, which explains the multitude as bound states of groups of quarks. By 1970 the existence of this new layer of reality had been confirmed by experiments at Stanford, California, and at CERN.

During the 1970s our understanding of quarks and the strong force developed. On the one hand this was thanks to theory, but also due to experiments at CERN’s Intersecting Storage Rings: the ISR. Head on collisions between counter-rotating beams of protons produced sprays of particles, which instead of flying in all directions, tended to emerge in sharp jets. The properties of these jets confirmed the predictions of quantum chromodynamics – QCD – the theory that the strong force arises from the interactions among the fundamental quarks and gluons.

CERN had begun in 1954 with a proton synchrotron, a circular accelerator with a circumference of about 600 metres, which was vast at the time, although trifling by modern standards. This was superseded by a super-proton synchrotron, or SPS, some 7 kilometres in circumference. This fired beams of protons and other particles at static targets, its precision measurements building confidence in the QCD theory and also in the theory of the weak force – QFD, quantum flavourdynamics.

The Globe of Science and Innovation. CC0 via Pixabay

QFD brought the electromagnetic and weak forces into a single framework. This first step towards a possible unification of all forces implied the existence of W and Z bosons, analogues of the photon. Unlike the massless photon, however, the W and Z were predicted to be very massive, some 80 to 90 times more than a proton or neutron, and hence beyond reach of experiments at that time. This changed when the SPS was converted into a collider of protons and anti-protons. By 1984 experiments at the novel accelerator had discovered the W and Z bosons, in line with what QFD predicted. This led to Nobel Prizes for Carlo Rubbia and Simon van der Meer, in 1984.

The confirmation of QCD and QFD led to a marked change in particle physics. Where hitherto it had sought the basic templates of matter, from the 1980s it turned increasingly to understanding how matter emerged from the Big Bang. For CERN’s very high-energy experiments replicate conditions that were prevalent in the hot early universe, and theory implies that the behaviour of the forces and particles in such circumstances is less complex than at the relatively cool conditions of daily experience. Thus began a period of high-energy particle physics as experimental cosmology.

This raced ahead during the 1990s with LEP – the Large Electron Positron collider, a 27 kilometre ring of magnets underground, which looped from CERN towards Lake Geneva, beneath the airport and back to CERN, via the foothills of the Jura Mountains. Initially designed to produce tens of millions of Z bosons, in order to test QFD and QCD to high precision, by 2000 it was able to produce pairs of W bosons. The precision was such that small deviations were found between these measurements and what theory implied for the properties of these particles.

The explanation involved two particles, whose subsequent discoveries have closed a chapter in physics. These are the top quark, and the Higgs Boson.

As gaps in Mendeleev’s periodic table of the elements in the 19th century had identified new elements, so at the end of the 20th century a gap in the emerging pattern of particles was discerned. To complete the menu required a top quark.

The precision measurements at LEP could be explained if the top quark exists, too massive for LEP to produce directly, but nonetheless able to disturb the measurements of other quantities at LEP courtesy of quantum theory. Theory and data would agree if the top quark mass were nearly two hundred times that of a proton. The top quark was discovered at Fermilab in the USA in 1995, its mass as required by the LEP data from CERN.

As the 21st century dawned, all the pieces of the “Standard Model” of particles and forces were in place, but one. The theories worked well, but we had no explanation of why the various particles have their menu of masses, or even why they have mass at all. Adding mass into the equations by hand is like a band-aid, capable of allowing computations that agree with data to remarkable precision. However, we can imagine circumstances, where particles collide at energies far beyond those accessible today, where the theories would predict nonsense — infinity as the answer for quantities that are finite, for example. A mathematical solution to this impasse had been discovered fifty years ago, and implied that there is a further massive particle, known as the Higgs Boson, after Peter Higgs who, alone of the independent discoverers of the concept, drew attention to some crucial experimental implications of the boson.

Discovery of the Higgs Boson at CERN in 2012 following the conversion of LEP into the LHC – Large Hadron Collider – is the climax of CERN’s first 60 years. It led to the Nobel Prize for Higgs and Francois Englert, theorists whose ideas initiated the quest. Many wondered whether the Nobel Foundation would break new ground and award the physics prize to a laboratory, CERN, for enabling the experimental discovery, but this did not happen.

CERN has been associated with other Nobel Prizes in Physics, such as to Georges Charpak, for his innovative work developing methods of detecting radiation and particles, which are used not just at CERN but in industry and hospitals. CERN’s reach has been remarkable. From a vision that helped unite Europe, through science, we have seen it breach the Cold War, with collaborations in the 1960s onwards with JINR, the Warsaw Pact’s scientific analogue, and today CERN has become truly a physics laboratory for the world.

The post Celebrating 60 years of CERN appeared first on OUPblog.

9. Why Britain should leave the European Union

With the next General Election on the horizon, the Conservatives’ proposed European Union ‘In/Out’ referendum, slated for 2017, has become a central issue. Scotland chose to stay part of a larger union – would the same decision be taken by the United Kingdom?

In the first of a pair of posts, some key legal figures share their views on why Britain should leave the European Union.

* * * * *

“[The] EU as I see it is an anti-democratic system of governance that steadily drains decision-making power from the people and their elected national and sub-national representatives and re-allocates it to a virtually non-accountable Euro-elite. In many ways, this is its purpose. … Important policy decisions in sensitive areas of civil liberties … [are] taken by government officials and ministers with minimal input from parliaments and virtually unremarked by the media and general public.

“The European Commission started life as a regulatory agency attached to a trade bloc, which rapidly turned its regulatory powers on the Member States that had put it in place. Much the same can be said of the centralising Court of Justice, a significant policy-maker in the EU system.

Democracy, in Robert Dahl’s sense of popular control over governmental policies and decisions or as a broad array of the rights, freedoms and – most important – the opportunities that tend to develop among people who govern themselves democratically, is out of the reach of the EU system of regulatory governance.”

Carol Harlow, Emeritus Professor of Law, London School of Economics, and author of Accountability in the European Union and State Liability: Tort Law and Beyond

European Bills. Photo by Images_of_Money, TaxRebate.org.uk. CC BY-SA 2.0 via Images_of_Money Flickr.

* * * * *

“There is little if any direct trade advantage for remaining a member of the EU on the present terms. The direct financial burden of EU membership is some £17bn gross (£11bn net) and rising. … 170 countries in the world now operate in a global market based on trade according to “rules of origin”, and the UK now trades mostly with them, not with the EU.

“Advocates of the EU always present the “single market” as indispensable to the UK. Is this really so? The EU Single Market is the never ending pretext for the EU’s harmonisation of standards and laws across the EU. EU Single Market rules now extend well beyond what was the Single European Act 1986, and far beyond what is necessary to enable borderless trading within the EU.

“As the sixth largest trading nation in the world, were the UK to leave the EU Single Market, we would be joining the 170 other nations who trade freely in the global single market. We would regain control of our own markets and over our trade with the rest of the globe.”

Bernard Jenkin, MP for Harwich and North Essex and Chair of the House of Commons Public Administration Select Committee

* * * * *

 

“The single currency is the crux. We did opt out of the Euro, but we can’t escape the Euro. The deflationary bias in the Eurozone, the catastrophic effects of a single monetary policy across such disparate economies and societies, culminating in banking and government debt crises, all continue to bear down on our exports and our overall economic performance.

“As the eighteen Eurozone countries meet apart from the non-Euro members of the EU to determine major issues of financial and fiscal policy, so we are increasingly marginalized within the EU, while having to live with the consequences of decisions in which we’ve had no part. … The EU will continue to be dominated by the Eurozone countries. They will do their best to salvage the single currency and will probably succeed, at least for some years to come. If British policy is to be characterized by more than passivity and fatalism, we will either have to establish new terms of membership of the EU (well-nigh impossible to achieve on a meaningful basis when the unanimous agreement of the EU is required), or find a way to split the existing EU into two unions of different kinds, or leave altogether.”

Alan Howarth, Baron Howarth of Newport, former Member of Parliament

* * * * *

“Whatever the merits of the European Union from an economic or political perspective, its legal system is unfit for purpose. In the United Kingdom we expect our statutory laws to be clear and the means by which these laws are made to be transparent. We equally expect our court processes to be efficient and to deliver unambiguous judgments delineating clear legal principles. We expect there to be a clear demarcation between those who make the laws and those who interpret them. … All [matters] are quite absent within the legal institutions of the EU.

“It is perhaps understandable that an institution which is seeking to unite 28 divergent legal traditions, with multiple different languages, struggles to produce an effective legal system. However, the EU legal system sits above and is constitutionally superior to the domestic UK one. … The failures of the EU legal system are so fundamental that they constitute a flagrant violation of the rule of law. Regardless of the position of the UK within the EU, these institutions should be radically and urgently reformed.”

Dan Tench, Partner, Olswang LLP and co-founder of UKSCblog – the Supreme Court blog

* * * * *

“The Euro has faced serious difficulties for the last five years. The economic crisis exposed flaws in the basic design, while the effort to save the currency union has led to recession and high unemployment, especially youth unemployment, in the weaker nations. This has moved the focus of the EU from the single market to the economically more important project of saving the Euro.

“The effect of this on the UK is that the direction of the EU has become more integrationist and has subordinated the interests of the non-Euro states. Currently this covers ten countries but only the UK and Denmark have a permanent opt out. The protocol being developed to ensure that the eighteen do not force their will on the ten will need revising when it becomes twenty-six versus two. The EU will not be willing to give the UK and Denmark a veto over all financial regulation. Inevitably, this will need some form of renegotiation as the UK has a disproportionately valuable banking sector which cannot be expected to accept rules designed entirely for the advancement of the Euro.”

Jacob Rees-Mogg, MP, North East Somerset

* * * * *

“The reluctance of the European Court of Human Rights (ECtHR) to find violations of human rights in sensitive matters affecting States’ interests raises the question whether subscribing to the European Convention on Human Rights (‘ECHR’) should be a pre-requisite of European Union membership, as is now expected under the Treaty of Lisbon. … [T]he decisions of the ECtHR are accorded a special significance in the EU by the European Court of Justice because the ECHR is part of the EU’s legal system.

“This was recently demonstrated in S.A.S. v France, concerning an unnamed 24-year-old French woman of Pakistani origin who wore both the burqa and the niqab. In 2011, France introduced a ‘burqa ban’, arguing that facial coverings interfere with identification, communication, and women’s freedoms. … A British Judge has said, “I reject the view … that the niqab is somehow incompatible with participation in public life.” The ECtHR held [that] France’s burqa ban encouraged citizens to “live together” this being a “legitimate aim” of the French authorities. … Britain could leave the ECHR and make its own decisions but then, insofar as the EU continues to accord special significance to ECtHR decisions, still effectively be bound by them.”

Satvinder Juss, Professor of Law, King’s College London and Barrister-at-Law, Gray’s Inn

The post Why Britain should leave the European Union appeared first on OUPblog.

10. Who decides ISIS is a terrorist group?

Mounting media coverage of the Islamic State in Iraq and Syria (ISIS) has evoked fear of impending terrorist threats in the minds of many. I spoke with Colin Beck, Assistant Professor of Sociology at Pomona College, to gain his thoughts on the group, as well as the designations and motivations of terrorism. Beck is the author of “Who Gets Designated a Terrorist and Why,” recently published in Social Forces. His article examines formal terrorism designations by governments through the lens of organization studies research on categorization processes.

Why are we so concerned with ISIS now, relative to other terrorist groups and terrorist threats?

While there are a number of militant groups in Syria that foreign governments could focus on, ISIS has three things that make it appear a pressing threat. First, ISIS’s sudden advances in Iraq were an unanticipated event, and consequently created a media spectacle. No one really expected the Iraqi central government or Kurdish authorities to lose control of major cities and sites so quickly. Once they did, there was a major story there. Second, and related, the group has territorial control. While ISIS had controlled territory in Syria and Iraq previously, the declaration of an Islamic State in late June creates a clear target. There is little evidence that the Islamic State intends to directly attack outside of Iraq and Syria, but territorial control signals capability and threat, in the same way that aviation attacks do, as Miner and I argued in our study. Finally, ISIS engages in classically “terrorist” behavior—beheadings of captives and attacks on civilian populations. In essence, it’s the combination of sudden success, territorial control, and markers of terrorism that bring attention to the Islamic State.

None of these are sufficient explanations by themselves. For instance, compare ISIS to the recent reports about the Khorasan militant group located in Syria; in the media and even government accounts it takes on a secondary importance even though it has been suggested that Khorasan was planning direct attacks against Euro-American targets. And other Syrian Islamist groups, like al-Nusra Front, control territory but have not expanded their area so dramatically.

To what extent does the release of videos showing the beheading of victims help define ISIS as a terrorist group in the eyes of Americans?

Even before the video-taped beheadings, the attacks on Yezidis and other religious minorities seemed to signify international terrorism to the American public. There’s a seemingly odd confusion here in public opinion. While the Taliban in Afghanistan never carried out international terrorism, they were the target of the American response to September 11th just as much as Al-Qaeda was. Similarly, in Iraq, various militant groups were seen as international terrorists even without action beyond the context of the Iraqi Insurgency. Americans have thus learned to think of any militant Islamic group as terrorists; all the group needs to do is reveal its Islamicness. Attacks on religious minorities certainly do that. In this environment, beheading hostages is just another marker, especially as it echoes the acts of militants previously defined as terrorists—Al Qaeda’s beheading of Daniel Pearl in 2002 or the frequent beheadings of captives by Al Qaeda in Iraq during the Insurgency.

Destination Damascus. © gmutlu via iStock.

Why do you think that ISIS beheaded Americans so publicly? To what extent is this attempt to consolidate their strength in their region?

I am speculating here, but I wonder if the beheadings are actually more a product of cross-militant competition than a message to the outside world. The Islamic State’s leadership is not imprudent, so they must have known that attacking the citizens of western countries would create a response. (This, in fact, was one of the non-surprising results of our study of terrorism designations.) So why do it? The Islamic State could believe that the response will not actually imperil their organization and its gains. Or, possibly, that beheadings would encourage other governments to pay for hostages, which have been a lucrative source of funding in recent years. More importantly, I think it is also likely that the Islamic State was trying to prove its bona fides. ISIS has been fighting with other Islamist groups among the Syrian rebels, and, in 2013, struggled with Zawahiri of Al-Qaeda over who best represents Islamist interests in the Syrian conflict. This sort of cross-Islamist conflict is quite typical, as Charles Kurzman discusses in his book The Missing Martyrs. So, perhaps, beheading hostages is a way to establish their credibility with other militants.

Is there anything else you think we can learn about terrorism from the case of ISIS?

ISIS really demonstrates the large amount of variation there is among “terrorist” groups. There are lots of different ideologies, lots of different goals, and lots of different types of groups among militants. While policymakers and the public tend to view certain forms, such as transnational networks of Islamists, as threatening, organizational forms might be best seen as different ways of solving resource dilemmas and meeting goals. I take this point up extensively in my book Radicals, Revolutionaries, and Terrorists that will be published next year by Polity Press. Also, ISIS illustrates that groups might strategically seek to conform to, or avoid, perceptions of what constitutes terrorism for various reasons. I am exploring how this might work for media labeling of terrorism in my next project.

Headline image credit: Map of the claim to power of the organization Islamic State. Created by Fiver, der Hellseher. CC BY-SA 4.0 via Wikimedia Commons.

The post Who decides ISIS is a terrorist group? appeared first on OUPblog.

11. The pros and cons of research preregistration

Research transparency is a hot topic these days in academia, especially with respect to the replication or reproduction of published results.

There are many initiatives that have recently sprung into operation to help improve transparency, and in this regard political scientists are taking the lead. Research transparency has long been a focus of The Society for Political Methodology, and of the journal that I co-edit for the Society, Political Analysis. More recently the American Political Science Association (APSA) has launched an important initiative in Data Access and Research Transparency. It’s likely that other social sciences will be following closely what APSA produces in terms of guidelines and standards.

One way to increase transparency is for scholars to “preregister” their research. That is, they can write up their research plan and publish that prior to the actual implementation of their research plan. A number of social scientists have advocated research preregistration, and Political Analysis will soon release new author guidelines that will encourage scholars who are interested in preregistering their research plans to do so.

However, concerns have been raised about research preregistration. In the Winter 2013 issue of Political Analysis, we published a Symposium on Research Registration. This symposium included two longer papers outlining the rationale for registration: one by Macartan Humphreys, Raul Sanchez de la Sierra, and Peter van der Windt; the other by Jamie Monogan. The symposium included comments from Richard Anderson, Andrew Gelman, and David Laitin.

In order to facilitate further discussion of the pros and cons of research preregistration, I recently asked Jamie Monogan to write a brief essay that outlines the case for preregistration, and I also asked Joshua Tucker to write about some of the concerns that have been raised about how journals may deal with research preregistration.

*   *   *   *   *

The pros of preregistration for political science

By Jamie Monogan, Department of Political Science, University of Georgia

 

Howard Tilton Library Computers, Tulane University by Tulane Public Relations. CC-BY-2.0 via Wikimedia Commons.

Study registration is the idea that a researcher can publicly release a data analysis plan prior to observing a project’s outcome variable. In a Political Analysis symposium on this topic, two articles make the case that this practice can raise research transparency and the overall quality of research in the discipline (Humphreys, de la Sierra, and van der Windt 2013; Monogan 2013).

Together, these two articles describe seven reasons that study registration benefits our discipline. To start, preregistration can curb four causes of publication bias, or the disproportionate publishing of positive, rather than null, findings:

  1. Preregistration would make evaluating the research design more central to the review process, reducing the importance of significance tests in publication decisions. Whether the decision is made before or after observing results, releasing a design early would highlight study quality for reviewers and editors.
  2. Preregistration would help the problem of null findings that stay in the author’s file drawer because the discipline would at least have a record of the registered study, even if no publication emerged. This will convey where past research was conducted that may not have been fruitful.
  3. Preregistration would reduce the ability to add observations to achieve significance because the registered design would signal in advance the appropriate sample size. Without preregistration, a researcher could monitor the analysis and stop data collection only once a positive result emerges; a registered design would prevent that.
  4. Preregistration can prevent fishing, or manipulating the model to achieve a desired result, because the researcher must describe the model specification ahead of time. By sorting out the best specification of a model using theory and past work ahead of time, a researcher can commit to the results of a well-reasoned model.

Additionally, there are three advantages of study registration beyond the issue of publication bias:

  1. Preregistration prevents inductive studies from being written up as deductive studies. Inductive research is valuable, but the discipline is being misled if findings that are observed inductively are reported as if they were hypothesis tests of a theory.
  2. Preregistration allows researchers to signal that they did not fish for results, thereby showing that their research design was not driven by an ideological or funding-based desire to produce a result.
  3. Preregistration provides leverage for scholars who face result-oriented pressure from financial benefactors or policy makers. If the scholar has committed to a design beforehand, the lack of flexibility at the final stage can prevent others from influencing the results.

Overall, there is an array of reasons why the added transparency of study registration can serve the discipline, chiefly the opportunity to reduce publication bias. Whatever you think of this case, though, the best way to form an opinion about study registration is to try it by preregistering one of your own studies. Online study registries are available, so you are encouraged to try the process yourself and then weigh in on the preregistration debate with your own firsthand experience.

*   *   *   *   *

Experiments, preregistration, and journals

By Joshua Tucker, Professor of Politics (NYU) and Co-Editor, Journal of Experimental Political Science

 
I want to make one simple point in this blog post: I think it would be a mistake for journals to come up with any set of standards that involves publicly recognizing some publications as having “successfully” followed their pre-registration design while identifying other publications as not having done so. This could include a special section for articles that matched their pre-registration design, an A, B, C type rating system for how faithfully articles had stuck with the pre-registration design, or even an asterisk for articles that passed a pre-registration faithfulness bar.

Let me be equally clear that I have no problem with the use of registries for recording experimental designs before those experiments are implemented. Nor do I believe that these registries should not be referenced in published works featuring the results of those experiments. On the contrary, I think authors who have pre-registered designs ought to be free to reference what they registered, as well as to discuss in their publications how much the eventual implementation of the experiment might have differed from what was originally proposed in the registry and why.

My concern is much narrower: I want to prevent some arbitrary third party from being given the authority to “grade” researchers on how well they stuck to their original design and then to be able to report that grade publicly, as opposed to simply allowing readers to make up their own minds in this regard. My concerns are three-fold.

First, I have absolutely no idea how such a standard would actually be applied. Would it count as violating a pre-design registry if you changed the number of subjects enrolled in a study? What if the original subject pool was unwilling to participate for the planned monetary incentive, and the incentive had to be increased, or the subject pool had to be changed? What if the pre-registry called for using one statistical model to analyze the data, but the author eventually realized that another model was more appropriate? What if a survey question that was registered on a 1-4 scale was changed to a 1-5 scale? Which, if any of these, would invalidate the faithful application of the registry? Would all of them together? It seems to me the only truly objective way to rate compliance is to have an all-or-nothing approach: either you do exactly what you said you would do, or you didn’t follow the registry. Of course, then we are lumping “p-value fishing” in the same category as applying a better statistical model or changing the wording of a survey question.

This brings me to my second point, which is a concern that giving people a grade for faithfully sticking to a registry could lead to people conducting sub-optimal research — and stifle creativity — out of fear that it will cost them their “A” registry-faithfulness grade. To take but one example, those of us who use survey experiments have long been taught to pre-test questions precisely because sometimes some of the ideas we have when sitting at our desks don’t work in practice. So if someone registers a particular technique for inducing an emotional response and then runs a pre-test and figures out their technique is not working, do we really want the researcher to use the sub-optimal design in order to preserve their faithfulness to the registered design? Or consider a student who plans to run a field experiment in a foreign country that is based on the idea that certain last names convey ethnic identity. What happens if the student arrives in the field and learns that this assumption was incorrect? Should the student stick with the bad research design to preserve the ability to publish in the “registry faithful” section of JEPS? Moreover, research sometimes proceeds in fits and starts. If as a graduate student I am able to secure funds to conduct experiments in country A but later as a faculty member can secure funds to replicate these experiments in countries B and C as well, should I fear including the results from country A in a comparative analysis because my original registry was for a single country study? Overall, I think we have to be careful about assuming that we can have everything about a study figured out at the time we submit a registry design, and that there will be nothing left for us to learn about how to improve the research — or that there won’t be new questions that can be explored with previously collected data — once we start implementing an experiment.

At this point a fair critique to raise is that the points in the preceding paragraph could be taken as an indictment of registries generally. Here we venture more into simply a point of view, but I believe that there is a difference between asking people to document what their original plans were and giving them a chance in their own words — if they choose to do so — to explain how their research project evolved, as opposed to having to deal with a public “grade” of whatever form that might take. In my mind, the former is part of producing transparent research, while the latter — however well intentioned — could prove paralyzing in terms of making adjustments during the research process or following new lines of interesting research.

This brings me to my final concern, which is that untenured faculty would end up feeling the most pressure in this regard. For tenured faculty, a publication without the requisite asterisks noting registry compliance might not end up being too big a concern — although I’m not even sure of that — but I could easily imagine junior faculty being especially worried that publications without registry asterisks could be held against them during tenure considerations.

The bottom line is that registries bring with them a host of benefits — as Jamie has nicely laid out above — but we should think carefully about how to best maximize those benefits in order to minimize new costs. Even if we could agree on how to rate a proposal in terms of faithfulness to registry design, I would suggest caution in trying to integrate ratings into the publication process.

The views expressed here are mine alone and do not represent either the Journal of Experimental Political Science or the APSA Organized Section on Experimental Research Methods.

Heading image: Interior of Rijksmuseum research library. Rijksdienst voor het Cultureel Erfgoed. CC-BY-SA-3.0-nl via Wikimedia Commons.

The post The pros and cons of research preregistration appeared first on OUPblog.

12. Cinematic tragedies for the intractable issues of our times

Tragedies certainly aren’t the most popular types of performances these days. When you hear a film is a tragedy, you might think “outdated Ancient Greek genre, no thanks!” Back in those times, Athenians thought it their civic duty to attend tragic performances of dramas like Antigone or Agamemnon. Were they on to something that we have lost in contemporary Western society? Is there something specifically valuable in a tragic performance that a spectator doesn’t get from other types of performances, such as those of our modern genres of comedy, farce, and melodrama?

Since films reach a greater audience in our culture than plays, after updating Aristotle’s Poetics for the twenty-first century, we analyzed what we call “cinematic tragedies”: films that demonstrate the key components of Aristotelian tragedy. We conclude that a tragedy must consist in the representation of an action that: (1) is complete; (2) is serious; (3) is probable; (4) has universal significance; (5) involves a reversal of fortune (from good to bad); (6) includes recognition (a change in epistemic state from ignorance to knowledge); (7) includes a specific kind of irrevocable suffering (in the form of death, agony, or a terrible wound); (8) has a protagonist who is capable of arousing compassion; and (9) is performed by actors. The effects of the tragedy must include: (10) the arousal in the spectator of pity and fear; and (11) a resolution of pity and fear that is internal to the experience of the drama.

Unlike melodrama (which we hold is the most common film genre), tragedy calls on spectators to ponder thorny moral issues and to navigate them with their own moral compass. One such cinematic tragedy — Into The Wild, 2007, directed by Sean Penn — thematizes the preciousness and precariousness of human life alongside environmental problems, raising questions about human beings’ apparent inability to live on earth without despoiling the beauty and integrity of the biosphere. Other cinematic tragedies deal with a variety of problems with which our modern societies must grapple.

One such topic is illegal immigration, a highly politicized issue that is far more complex than national governments seem equipped to handle, and seemingly beyond the powers of the two parties in the American system. Cinematic tragedies that deal with this issue have been produced over several decades, involving immigration into various Western countries, especially the United States; these include Black Girl (France, 1966), El norte (US/UK, 1983), and Sin nombre (Mexico, 2009), the last of which we will expand on here.

Paulina Gaitan (left) and Edgar Flores (right) star in writer/director Cary Joji Fukunaga’s epic dramatic thriller Sin Nombre, a Focus Features release. Photo credit: Cary Joji Fukunaga via Focus Features

In US director Cary Fukunaga’s Sin nombre (which means “Nameless” but which was released in the United States under the Spanish title), Hondurans escaping from their harsh political and economic realities risk their lives in order to make it to the United States, through Mexico, on the tops of rail cars. They travel in this manner because, for most of these foreign citizens, there is no legal way to come to the United States. Over the course of the journey, the immigrants endure terrible suffering or die at the hands of gang members who rob, rape, and even kill some of them.

The film focuses on just a few of the multitudes atop the trains: on a teenage Honduran girl, Sayra, migrating with her father and uncle; and on a few of the gang members. One of them, Casper, has had a change of heart and is no longer loyal to the gang after its leader killed Casper’s girlfriend, having tried to rape her. Casper and other gang members are atop the train robbing the migrants, but he defends Sayra by killing the leader when he tries to rape her. Ultimately, Sayra will arrive in the United States. However, she realizes that the cost has been too great—her father has died falling off the train, and she has lost Casper, who is, ironically, shot to death by the pre-pubescent boy whom he himself had trained in the ways of the gang in the opening scenes of the film.

The tremendous losses, and the scenes of suffering, rape, and murder, make it unlikely that the spectator will feel that Sayra’s arrival constitutes a happy ending. In some other aesthetic treatment, Casper’s ultimate death might have been melodramatized as redemptive selflessness for the sake of his new girlfriend. But in Fukunaga’s film, the juxtaposed images imply a continuing cycle of despair and death: Casper’s young killer in Mexico is promoted up the ranks of the gang with a new tattoo, while Sayra’s uncle, back in Honduras after being deported from Mexico, starts the voyage to the United States all over again. Sayra too may face deportation in the future. Following the scene of the reinvigoration of the criminal gang system, as its new young leader gets his first tattoo, the viewer sees Sayra outside a shopping mall in the American southwest. The teenage girl has arrived in the United States and may aspire to participate in advanced consumer capitalism, yet she has lost so much and suffered so undeservingly.

This aesthetic juxtaposition prompts the spectator to attend to the failure of Western political leaders to create a humane system of immigration for the twenty-first century, one which cannot be reached with the entrenched politicized views of the “two sides of the aisle” who miss the human story of immigrants’ plight. This film—like all tragedies—promotes active pondering; that is, it challenges the spectator to respond in some way.

In the tradition of philosophers as various as Aristotle, Seneca, Schopenhauer, Nietzsche, Martha Nussbaum, and Bernard Williams, we find that tragedies bring to conscious awareness the most significant moral, social, political, and existential problems of the human condition. A film such as Sin nombre, through its tragic performance, points to one of these terrible necessities with which our contemporary Western culture must grapple. While it doesn’t offer an answer, this cinematic tragedy prompts us to recognize and deal with a seemingly intractable problem that needs to move beyond the current impasse of political debate, as we in the industrialized nations continue to shop for and watch movies in the comfort of our malls.

The post Cinematic tragedies for the intractable issues of our times appeared first on OUPblog.

13. World War I in the Oxford Dictionary of Quotations

Coverage of the centenary of the outbreak of the First World War has made us freshly familiar with many memorable sayings, from Edward Grey’s ‘The lamps are going out all over Europe’, to Wilfred Owen’s ‘My subject is War, and the pity of War. The Poetry is in the pity’, and Lena Guilbert Ford’s exhortation to ‘Keep the Home-fires burning’.

But as I prepared the new edition of the Oxford Dictionary of Quotations, I was aware that numerous other ‘quotable quotes’ also shed light on aspects of the conflict. Here are just five.

One vivid evocation of the conflict comes not from a War Poet but from an American novelist writing in the 1930s. In F. Scott Fitzgerald’s Tender is the Night (1934), Dick Diver describes the process of trench warfare:

See that little stream—we could walk to it in two minutes. It took the British a month to walk it—a whole empire walking very slowly, dying in front and pushing forward behind. And another empire walked very slowly backward a few inches a day, leaving the dead like a million bloody rugs.

This was, of course, on the Western Front, but there were other theatres of war. One such was the Gallipoli Campaign of 1915–16, where many ‘Anzacs’ lost their lives. In 1934, a group of Australians visited Anzac Cove, Gallipoli, and heard an address by Kemal Atatürk—Commander of the Turkish forces during the war, and by then President of Turkey. Speaking of the dead on both sides, he said:

There is no difference between the Johnnies and the Mehmets to us where they lie side by side in this country of ours. You, the mothers, who sent their sons from faraway countries, wipe away your tears. Your sons are now lying in our bosom and are in peace. After having lost their lives on this land, they have become our sons as well.

Atatürk’s words were subsequently inscribed on the memorial at Gallipoli, and on memorials in Canberra and Wellington.

World War I is often seen as a watershed, after which nothing could be the same again. (The young Robert Graves’s autobiography, published in 1929, was entitled Goodbye to All That.) Two quotations from ODQ look ahead from the end of the war to what might be the consequences. For Jan Christiaan Smuts, Prime Minister of South Africa, the moment was one of promise. He saw the setting up of the League of Nations in the aftermath of the war as a hope for better things:

Mankind is once more on the move. The very foundations have been shaken and loosened, and things are again fluid. The tents have been struck, and the great caravan of humanity is once more on the march.

However, a much less optimistic, and regrettably more prescient, comment had been recorded in 1919 by Marshal Foch on the Treaty of Versailles:

This is not a peace treaty, it is an armistice for twenty years.

Not all ‘war poems’ are immediately recognizable as such. In 1916, the poet and army officer Frederick William Harvey was made a prisoner of war (the Oxford Dictionary of National Biography tells us that he went on to experience seven different prison camps). Returning from a period of solitary confinement, he apparently noticed the drawing of a duck on water made by a fellow-prisoner. This inspired what has become a very well-loved poem.

From troubles of the world
I turn to ducks
Beautiful comical things.

How many people, encountering the poem today, consider that the ‘troubles’ might include a world war?

Headline image credit: A message-carrying pigeon being released from a port-hole in the side of a British tank, near Albert, France. Photo by David McLellan, August 1918. Imperial War Museums. IWM Non-Commercial License via Wikimedia Commons.

The post World War I in the Oxford Dictionary of Quotations appeared first on OUPblog.

14. Why do you love the VSIs?

The 400th Very Short Introduction, ‘Knowledge’, was published this week. In order to celebrate this remarkable series, we asked various colleagues at Oxford University Press to explain why they love the VSIs:

*   *   *   *   *

“Why do I love the VSIs? They’re an easy, yet comprehensive way to learn about a topic. From general topics like Philosophy to more specific ones like Alexander the Great, I finish the book after a few trips on the train and I feel smarter. VSIs also help to quickly fill knowledge gaps that I may have–I never took a chemistry class in college but in just 150 pages, I can have a better understanding of physical chemistry should it ever come up during a trivia challenge. It’s true, VSIs give you the knowledge so you can lead your team to victory at your next pub trivia challenge.”

Brian Hughes, Senior Platform Marketing Manager

*   *   *   *   *

“They’re very effective knowledge pills after taking which I feel so much better equipped for exploring new disciplines. Each ends with a very helpful bibliography section which is a great guide for getting more and more interested in the subject. They’re concise, authoritative and fun to read, and that’s precisely why I love them so much!”

Anna Ready, Online Project Manager

*   *   *   *   *

“I love VSIs because it’s like talking to an expert who is approachable and personable, and doesn’t mind if it takes you a while to understand what they’re saying! They walk you through difficult ideas and concepts in an easily understandable way and you come away feeling like you have a deeper understanding of the topic, often wanting to find out more.”

Hannah Charters, Senior Marketing Executive

‘VSI 400 cake’, by Jack Campbell-Smith. Image used with permission.

*   *   *   *   *

“With the VSI series, you can expect to see a clear explanation of the subject matter presented in a consistent style.”

Martin Buckmaster, Data Engineer

*   *   *   *   *

“A book is a gift. The precious gift of knowledge hard earned by humankind through generations of experience, deep contemplation, and bursts of single-minded desire to push the very limits of curiosity. But I’m a postmodern man in a postmodern world; my attention span is wrecked and, presented with all the information in the world at my fingertips, the best I can manage is to look up pictures of cats. I don’t know what I need to know from what I don’t or even where to start. What I need is a starting point, a rock solid foundation of just what I need to know on the topic of my choice, enough to know if I want to know more, enough to light that old spark of curiosity and easily enough to win an argument down the pub. Not just the gift of knowledge, but the gift of time. That’s why I love VSIs.”

Anonymous

*   *   *   *   *

“I love the VSIs because there is a never-ending supply of interesting topics to learn more about. When I found out I would be taking on the Religion & Theology list, I raided my neighbors’ cubicles for any religion-themed VSIs to read. Whenever I’m out of a book for the train ride home, I go next door to the VSI Marketing Manager’s cubicle to see what new VSIs she has that I can borrow. They’re the perfect book to fit in your purse and go.”

Alyssa Bender, Marketing Coordinator

*   *   *   *   *

“I told Mrs Dalloway’s this week that purchasing the VSIs from Oxford was just like printing money. They’re smaller than an electronic reading device and fit in my cargo shorts, I mean blazer pocket. I can’t wait for Translation: A Very Short Introduction.”

George Carroll, Commissioning Rep from Great Northwest, USA

*   *   *   *   *

“I love the VSI series because it is so wonderfully wide-ranging. With almost any topic that comes to mind, if I wonder ‘is there a VSI to that?’, the answer is usually yes. It’s a great way to learn a little more about an area you’re already interested in, or as a first foray into one which is entirely new. Long live VSIs!”

Simon Thomas, Oxford Dictionaries Marketing Executive

*   *   *   *   *

“VSIs allow me to sound like I know a lot more about a subject than I actually do, in a very short space of time. An essential cheat for job interviews, pub quizzes, dates etc.”

Rachel Fenwick, Associate Marketing Manager

*   *   *   *   *

“I love the VSIs because they make such broad subjects immediately accessible. If you ever want to understand a subject in its entirety or fill in the gaps in your knowledge, the VSIs should always be your first port of call. From my University studies to my morning commute, the VSIs have, without fail, filled in the gaping holes in my knowledge and allowed me to converse with much smarter people about subjects I would never have previously understood. For that, I’m very grateful!”

Daniel Parker, Social Media Executive

The post Why do you love the VSIs? appeared first on OUPblog.

15. Do health apps really matter?

Apps are all the rage nowadays, including apps to help fight rage. That’s right, the iTunes app store contains several dozen apps designed to manage anger or reduce stress. Smartphones have become such a prevalent component of everyday life that it’s no surprise a demand has risen for phone programs (also known as apps) that help us manage some of life’s most important elements, including personal health. But do these programs improve our ability to manage our health? Do health apps really matter?

Early apps for patients with diabetes demonstrate how a proposed app idea can sound useful in theory but provide limited tangible health benefits in practice. First generation diabetes apps worked like a digital notebook, in which apps linked with blood glucose monitors to record and catalog measured glucose levels. Although doctors and patients were initially charmed by high tech appeal and app convenience, the charm wore off as app use failed to improve patient glucose monitoring habits or medication compliance.

Fitness apps are another example of rough starts among early health app attempts. Initial running apps served as electronic pedometers, recording the number of steps and/or the total distance run. These apps again provided a useful convenience over a conventional pedometer, but were unlikely to lead to increased exercise levels or to appeal to individuals who didn’t already run. Apps for other health-related topics such as nutrition, diet, and air pollution ran into similar limitations in improving healthy habits. For a while, it seemed as if the initial excitement among the life sciences community for e-health simply couldn’t be translated into tangible health benefits among target populations.

Image credit: Personal Health Apps for Smartphones.jpg, by Intel Free Press. CC-BY-2.0 via Wikimedia Commons.

Luckily, recent changes in app development ideology have led to noticeable increases in health app impacts. Health app developers are now focused on providing useful tools, rather than collections of information, to app users. The diabetes app ManageBGL.com, for example, predicts when a patient may develop hypoglycemia (low blood sugar levels) before the visual/physical signs and adverse effects of hypoglycemia occur. The running app RunKeeper connects to friends’ running profiles to share information, provide suggested running routes, and encourage runners to speed up or slow down to reach a target pace. Air pollution apps let users set customized warning levels, and then predict and warn users when they’re heading towards an area with air pollution that exceeds those levels. Health apps are progressing beyond providing mere convenience towards a state where they can help the user make informed decisions or perform actions that positively affect and/or protect personal health.

So, do health apps really matter? It’s unlikely that the next generation of health apps will have the same popularity as Facebook or the widespread utility of Google Maps. The impact, utility, and popularity of health apps, however, are increasing at a noticeable rate. As health app developers deepen their understanding of health apps’ strengths and limitations, and as upcoming technologies that can improve health apps, such as miniaturized sensors and smart glasses, become available, the importance of health-related apps and the proportion of the general public interested in them are only going to grow.

The post Do health apps really matter? appeared first on OUPblog.

16. Plagiarism and patriotism

Thou shalt not plagiarize. Warnings of this sort are delivered to students each fall, and by spring at least a few have violated this academic commandment. The recent scandal involving Senator John Walsh of Montana, who took his name off the ballot after evidence emerged that he had copied parts of his master’s thesis without attribution, shows how plagiarism can come back to haunt.

But back in the days of 1776, plagiarism did not appear as a sign of ethical weakness or questionable judgment. Indeed, as the example of Mercy Otis Warren suggests, plagiarism was a tactic for spreading Revolutionary sentiments.

An intimate of American propagandists such as Sam Adams, Warren used her rhetorical skill to pillory the corrupt administration of colonial Massachusetts. She excelled at producing newspaper dramas that savaged the governor, Thomas Hutchinson, and his cast of flunkies and bootlickers. Her friend John Adams helped arrange for the anonymous publication of satires so sharp that they might well have given readers paper cuts.

An expanded version soon followed, replete with new scenes in which patriot leaders inspired crowds to resist tyrants. Although the added scenes use her characters and echo her language, they were not written by Warren. As she tells the story, her original drama was “taken up and interlarded with the productions of an unknown hand. The plagiary swelled” her satirical sketch into a pamphlet.

Portrait of Mercy Otis Warren, American writer, by John Singleton Copley (1763). Public domain via Wikimedia Commons.

But Warren didn’t seem to mind the trespass all that much. Her goal was to disseminate the critique of colonial government. There’s evidence that she intentionally left gaps in her plays so that readers could turn author and add new scenes to the Revolutionary drama.

Original art was never the point; instead art suitable for copying formed the basis of her public aesthetic. In place of authenticity, imitation allowed others to join the cause and continue the propagation of Revolutionary messages.

Could it be that plagiarism was patriotic?

Thankfully, this justification is not likely to hold up in today’s classroom. There’s no compelling national interest that requires a student to buy and download a paper on Heart of Darkness.

Warren’s standards are woefully out of date. And yet she does offer a lesson about political communication that still has relevance. Where today we see plagiarism, she saw a form of dissent made available to others.

Headline image credit: La balle a frappé son amante, gravé par L. Halbou. Library of Congress.

The post Plagiarism and patriotism appeared first on OUPblog.

17. Do children make you happier?

A new study shows that women who have difficulty accepting that they can’t have children following unsuccessful fertility treatment have worse long-term mental health than women who are able to let go of their desire for children. It is the first study to look at a large group of women (over 7,000) to try to disentangle the different factors that may affect women’s mental health over a decade after unsuccessful fertility treatment. These factors include whether or not they have children, whether they still want children, their diagnosis, and their medical treatment.

It was already known that people who have infertility treatment and remain childless have worse mental health than those who do manage to conceive with treatment. However, most previous research assumed that this was due exclusively to having children or not, and did not consider the role of other factors. Alongside my research colleagues from the Netherlands, where the study took place, we found only that there is a link between an unfulfilled wish for children and worse mental health, not that the unfulfilled wish causes the mental health problems. This is due to the nature of the study, in which the women’s mental health was measured at only one point in time rather than continuously since the end of fertility treatment.

We analysed answers to questionnaires completed by 7,148 women who started fertility treatment at any of 12 IVF hospitals in the Netherlands between 1995 and 2000. The questionnaires were sent out to the women between January 2011 and 2012, meaning that for most women their last fertility treatment would have been 11 to 17 years earlier. The women were asked about their age, marital status, education and menopausal status, whether the infertility was due to them, their partner, both or of unknown cause, and what treatment they had received, including ovarian stimulation, intrauterine insemination, and in vitro fertilisation / intra-cytoplasmic sperm injection (IVF/ICSI). In addition, they completed a mental health questionnaire, which asked them how they felt during the past four weeks. The women were asked whether or not they had children, and, if they did, whether they were their biological children or adopted (or both). They were also asked whether they still wished for children.

The majority of women in the study had come to terms with the failure of their fertility treatment. However, 6% (419) still wanted children at the time of answering the study’s questionnaire and this was connected with worse mental health. We found that women who still wished to have children were up to 2.8 times more likely to develop clinically significant mental health problems than women who did not sustain a child-wish. The strength of this association varied according to whether women had children or not. For women with no children, those with a child-wish were 2.8 times more likely to have worse mental health than women without a child-wish. For women with children, those who sustained a child-wish were 1.5 times more likely to have worse mental health than those without a child-wish. This link between a sustained wish for children and worse mental health was irrespective of the women’s fertility diagnosis and treatment history.

Happy family photo by Vera Kratochvil. Public domain via Wikimedia Commons.

Our research found that women had better mental health if the infertility was due to male factors or had an unknown cause. Women who started fertility treatment at an older age had better mental health than women who started younger, and those who were married or cohabiting with their partner reported better mental health than women who were single, divorced, or widowed. Better educated women also had better mental health than the less well educated.

This study improves our understanding of why childless people have poorer adjustment. It shows that poorer adjustment is more strongly associated with an inability to let go of the desire to have children than with childlessness itself. It is quite striking to see that women who do have children but still wish for more children report poorer mental health than those who have no children but have come to accept it. The findings underline the importance of psychological care for infertility patients; in particular, more attention should be paid to their long-term adjustment, whatever the outcome of the fertility treatment.

The possibility of treatment failure should not be avoided as a topic during treatment, and a consultation at the end of treatment should always take place, whether the treatment is successful or unsuccessful, to discuss future implications. This would enable fertility staff to identify patients more likely to have difficulties adjusting in the long term, by assessing the women’s capacity to come to terms with their unfulfilled child-wish. These patients could be advised to seek additional support from mental health professionals and patient support networks.

It is not known why some women may find it more difficult to let go of their child-wish than others. Psychological theories would claim that how important the goal is for the person would be a relevant factor. The availability of other meaningful life goals is another relevant factor. It is easier to let go of a child-wish if women find other things in life that are fulfilling, like a career.

We live in societies that embrace determination and persistence. However, there is a moment when letting go of unachievable goals (be it parenthood or other important life goals) is a necessary and adaptive process for well-being. We need to consider if societies nowadays actually allow people to let go of their goals and provide them with the necessary mechanisms to realistically assess when is the right moment to let go.

Featured image: Baby feet by Nina-81. Public Domain via Pixabay.

The post Do children make you happier? appeared first on OUPblog.

18. What’s so great about being the VSI commissioning editor?

With the 400th Very Short Introduction, on the topic of ‘Knowledge’, publishing this month, I’ve been thinking about how long this series has been around, and how long I have been a commissioning editor for it: from before the 200th VSI was published (number 163 – Human Rights in fact), through numbers 300 and 400, and undoubtedly I’ll still be here for the 500th VSI!

Having previously been an editor for law, tax, and accountancy lists, and latterly the OUP Police list, the opportunity to be the VSI editor was one that I simply could not pass up. I already owned, and had read, several VSIs, so I understood broadly what the series was trying to do and who the series was aimed at. I liked the idea of working across a wide range of topics (except science – these VSIs are commissioned by my esteemed colleague Latha Menon) and with a vast array of different authors. I also liked the idea that I would learn lots of new things and be a pub quiz team whizz. Unfortunately, in order to be good at pub quizzes you have to be able to retain and recall information and details quickly. I like to think that if someone were able to explore deep inside my brain they would find hundreds of fascinating facts about hundreds of topics that are buried in there somewhere. (On occasion, though, I have been able to answer a University Challenge question, causing much excitement.)

I naively thought that authors would be able to write 35,000 (or so) words easily and quickly, and therefore that they would deliver perfect manuscripts on time which would be easy to edit and a pleasure to read. For the most part I think this is true, but in my seven years as editor, I think I’ve seen and heard it all. ‘The dog ate my homework’ excuses, authors taking eight or nine years to deliver their manuscripts, and one author delivering a 70,000 word manuscript thinking that we could just ‘cut it a little’. There’s never a dull moment. I’ve seen ebooks come to fruition, an online service being launched, and new editions of old and popular VSIs come into being. Marketing has changed too, from the traditional brochure and bookshop displays, to YouTube videos, Facebook pages, and blog posts.

‘VSI 400’ image courtesy of the VSI editorial team.

I often get asked what I do all day. The myth is that I do a lot of wining and dining, drinking coffee, putting my feet up on the desk reading manuscripts, and jetting to conferences. The reality is that I do a bit of everything and it doesn’t involve enough wining and dining – the tax authors ten years ago were far worse for this! I decide (with input from sales, marketing, the US VSI editor, and the science VSI editor) what topics to commission, I seek out the best authors I possibly can, I negotiate contracts, I talk to agents, I read manuscripts, I look at cover blurbs, and I panic about the size of my overflowing inbox.

People also ask me what my favourite VSI is, which is a very difficult question to answer. The first VSI I ever read was Mary Beard and John Henderson’s Classics (number 1 in the series) and I still think it’s a wonderful book. Of those I’ve commissioned, I love Angels and English Literature. And who is my favourite author? Now that would be telling, but I have passed countless happy hours with many of my authors. And that’s the best thing about being the VSI editor. I get to meet so many different authors and help them turn their vast amount of knowledge (and sometimes their lifetime’s work) into a short book that they can be proud of. My favourite quote from an author is, ‘now my children, grandchildren and friends might finally understand what I do’!

The post What’s so great about being the VSI commissioning editor? appeared first on OUPblog.

0 Comments on What’s so great about being the VSI commissioning editor? as of 9/26/2014 4:52:00 AM
Add a Comment
19. The Hunger Games and a dystopian Eurozone economy

The following is an extract from ‘Europe’s Hunger Games: Income Distribution, Cost Competitiveness and Crisis‘, published in the Cambridge Journal of Economics. In this section, Servaas Storm and C.W.M. Naastepad are comparing The Hunger Games to Eurozone economies:

Dystopias are trending in contemporary popular culture. Novels and movies abound that deal with fictional societies within which humans, individually and collectively, have to cope with repressive, technologically powerful states that do not usually care for the well-being or safety of their citizens, but instead focus on their control and extortion. The latest resounding dystopian success is The Hunger Games—a box-office hit located in a nation known as Panem, which consists of 12 poor districts, starved for resources, under the absolute control of a wealthy centre called the Capitol. In the story, competitive struggle is carried to its brutal extreme, as poor young adults in a reality TV show must fight to the death in an outdoor arena controlled by an authoritarian Gamemaker, until only one individual remains. The poverty and starvation, combined with terror, create an atmosphere of fear and helplessness that pre-empts any resistance based on hope for a better world.

We fear that part of the popularity of this science fiction action-drama, in Europe at least, lies in the fact that it has a real-life analogue: the Spectacle—in Debord’s (1967) meaning of the term—of the current ‘competitiveness game’ in which the Eurozone economies are fighting for their survival. Its Gamemaker is the European Central Bank (ECB), which—completely stuck to Berlin’s hard line that fiscal profligacy in combination with rigid, over-regulated labour markets has created a deep crisis of labour cost competitiveness—has been keeping the pressure on Eurozone countries so as to let them pay for their alleged fiscal sins. The ECB insists that there will be ‘no gain without pain’ and that the more one is prepared to suffer, the more one is expected to prosper later on.

The contestants in the game are the Eurozone members—each one trying to bootstrap its economy out of the throes of the most severe crisis in living memory. The audience judging each country’s performance is not made up of reality TV watchers but of financial (bond) markets and credit rating agencies, whose supposedly rational views can make or break any economy. The name of the game is boosting cost-competitiveness and exports—and its rules were carved in stone in March 2011 in a Euro Plus ‘Competitiveness Pact’ (Gros, 2011).

The Hunger Games, by Kendra Miller. CC-BY-2.0 via Flickr.

Raising competitiveness here means reducing costs, and more specifically cutting labour costs, which means lowering the wage share by means of reducing employment protection, lowering minimum wages, raising retirement ages, lowering pensions and, last but not least, cutting real wages. Economic inequality, poverty and social exclusion will all initially increase, but don’t worry: structural reforms hurt in the beginning, but their negative effects will be offset over time by changes in ‘confidence,’ boosting spending and exports. But it will not work, and the damage done by austerity and structural reforms is enormous; sadly, most of it was and is avoidable. The wrong policies follow from ‘design faults’ built into the Euro project right from the start—the creation of an ‘independent’ European Central Bank being the biggest ‘fault’, as it precluded the necessary co-ordination of fiscal and monetary policy and disabled the central banking system from providing support to national governments (Arestis and Sawyer, 2011). But as Palma (2009) reminds us, it is wrong to think about these ‘faults’ as being caused by perpetual incompetence—the monetarist Euro project should instead be read as a purposeful ‘technology of power’ to transform capitalism into a rentiers’ paradise. This way, one can understand why policy makers persist in abandoning the unemployed.

The post The Hunger Games and a dystopian Eurozone economy appeared first on OUPblog.

0 Comments on The Hunger Games and a dystopian Eurozone economy as of 1/1/1900
Add a Comment
20. Atheism and feminism

At first glance, atheism and feminism look like two sides of the same coin.

After all, the most passionate criticism of patriarchy has come from religious (or formerly religious) female scholars. First-hand experience of male domination in such contexts has led many to translate their views into direct political activism. As a result, the fight for women’s rights has often been inseparable from the critique of organised religion.

For example, a nineteenth-century campaigner for civil rights, Ernestine Rose, began by rebelling against an arranged marriage at the tender age of 16, and then gradually added other injustices she witnessed during her travels around Europe and the United States to her list of causes.

Rose was born into a Jewish family, and her religious background certainly affected her subsequent life in two distinct ways. Judaism fostered an inquisitive and critical attitude to the world around her, while at the same time making her aware of the gender inequalities in her own and other religious traditions. She went to the United States in 1836 where she soon started to give public lectures on ending slavery, religious freedom and women’s rights. After one such public appearance, she was described by the local paper as a ‘female Atheist … a thousand times below a prostitute’.

Negative publicity meant that Rose’s popularity grew significantly, although her speeches were met with such outrage that she had to flee the more conservative towns. She continued to make appearances at women’s rights conventions across the United States, although her outspoken atheism caused unease to both men and women.

It did not, however, stop her from becoming the president of the National Women’s Rights Convention in 1854. She worked and made friends with other politically involved women of her time, such as Elizabeth Cady Stanton, Susan B. Anthony, and Sojourner Truth. Rose’s atheism was not exactly at the forefront of her struggle for justice but it implicitly informed her views and actions. For example, she blamed both organised religion and capitalism for the inferior status of women.

Flower offerings. By mckaysavage. CC-BY-2.0 via Wikimedia Commons

Well over a century later the number and variety of female atheists are growing. Nonetheless, atheism remains a male-dominated affair. Data collected by the Atheist Alliance International (2011) show that in Britain, women account for 21.6% of atheists (as opposed to 77.9% men). In the United States men make up 70% of Americans who identify as atheist. In Poland, 32% of atheists are female, and similarly in Australia it is 31.5%.

On the rare occasions when female atheists appear in the media, they are invariably feminist activists. This is hardly a problem in itself, but unfortunately it leads to a conflation of feminist activism and atheism, which in turn makes the ‘everyday’ female atheists invisible. It also encourages stereotyping of the most simplistic sort, whereby the feminist stance becomes the primary focus while the atheism is treated as an add-on. But the two do not necessarily go together, and the women may not see them as equally central to their lives.

As significant progress has been made with regard to gender equality, and traditional religion has largely lost its influence over women’s lives, the connection between atheism and feminism has become more complicated.

My current project involves talking to self-identified female atheists from Britain, Poland, Australia, and the United States. Times may have changed but the core values held by these women closely resemble those espoused by Ernestine Rose, and the passion with which they speak about global and local injustice indicates a very particular atheism, far removed from the detached, rational and scientific front presented by some of the famous (male) faces of the atheist movement.

Two themes have emerged. The first is the ease with which an atheist identity can be combined with an ethics of care and altruism (thus demonstrating the compatibility of non-belief with goodness). The second is discrimination against women within the atheist movement.

Ernestine Rose. Public domain via Wikimedia Commons

The latter reminds me of a paper I once heard at a Gender and Religion conference in Tel Aviv. The presenter compared two synagogues in Paris: a progressive and liberal one which had a female rabbi, and a conservative one which preserved the strict division of gender roles. The paradox lay in the fact that more instances of discrimination against women, including overt sexism and sexual harassment, were reported among the members of the liberal synagogue.

Clearly, nobody looks for sexism in a place defined as non-sexist. A similar paradox applies to atheists. An activist in the atheist community told me that she received the worst abuse from her fellow (male) atheists, not religious hardliners.

One of the explanations for women’s greater religiosity is their need for community, emotional support, and a guiding light in life. Conservative religions perform this role very well, but so do alternative spiritualities where traditional religion is in decline and women suffer from emotional, not material, deprivation.

Atheism does the same for my interviewees. The task of a sociologist is to de-familiarise the familiar and to find the unexpected in the everyday through the grace of serendipity. Female atheists find empowerment and means of expression in their atheism, while at the same time defining it for themselves, rather than relying on the prominent male figures in the atheist community. While on the surface they lack the structure present in religious communities of women, they create networks of support with other women where atheism is but one, albeit a crucial one, feature of their self-definition.

The openness provides a more inclusive and flexible starting point for coming together and fighting for equality and justice, not necessarily on the barricades. Activism is inspiring, but values spread more effectively in everyday, mundane activities. In this sense, deeply religious and deeply atheist women have a lot in common. Both find fulfilment and joy in forging connections with other people and creating a safe haven for themselves and those close to them.

The female atheist activists all say the same thing: ‘I do it because I want to help’. A modest statement which can achieve a lot in the long run.

The post Atheism and feminism appeared first on OUPblog.

0 Comments on Atheism and feminism as of 1/1/1900
Add a Comment
21. A Study in Brown and in a Brown Study, Part 1

Color words are among the most mysterious ones to a historian of language and culture, and brown is perhaps the most mysterious of them all. At first blush (and we will see that it can have a brownish tint), everything is clear. Brown is produced by mixing red, yellow, and black. Other authorities suggest: orange and black. In any case, it has two sides: dark (black) and bright (red or orange). This color name does not seem to occur in the New Testament, and that is why of all the Old Germanic languages only Gothic lacks it (in Gothic a sizable part of a fourth-century translation of the New Testament has been preserved). In the Old Testament, the word appears only rarely. Genesis XXX: 32, 35, and 40 describes the division of Laban’s cattle. According to Verse 35 from the Authorized Version, “…he removed that day the he goats that were ringstraked and spotted, and all the she goats that were speckled and spotted, and every one that had some white in it, and all the brown among the sheep, and gave them into the hand of his sons.” Those sheep were indeed brown, but the situation is not always so clear. For example, an Old English poet called waves brown, and brown is a common epithet attached to swords in early Germanic poetry. Were waves and swords really brown, like Laban’s sheep?

In Old Germanic languages, brown had the form brun, with a long vowel (that is, with the vowel of Modern Engl. boo), and we can be fairly certain that the ancient Indo-Europeans had the same hue in mind we do, because at least three unmistakably brown animals were called brown. One of them is the bear, also known as Bruin (the word is pure Dutch). People were afraid of pronouncing the terrible beast’s name and coined a euphemism (“the brown one”). When they said brown, the bear could no longer think that it was summoned and would not come. The other animal with a “brown” name is beaver. If bears and beavers were called “brown” and the biblical Laban had brown sheep, why then brown waves and brown swords? We’ll have to wait rather long for the answer: this blog is a serial.

Let us first look at etymology. Those who have read the relatively recent posts on gray may remember that that Germanic color name made its way into Romance languages. The same holds for brown (vide French brun and Italian bruno). Later, as happened more than once, Old French brun returned to Middle English and reinforced the native word; compare also brunet(te), from French, with reference to people with chestnut-colored or black (!) hair. In the posts on gray, I mentioned two current explanations of why gray, brown, and some other color names enjoyed such popularity outside their country of origin. Allegedly, Germanic mercenaries brought them to the Romance-speaking territory with either the words for their horse breeds or for their shields. There must have been something special about both. The root of brown can also be seen in Engl. burnish. The suffix -ish was added to the root of Old French burnir, from brunir. “To make brown” acquired the meaning “polish (metal) by friction.” This returns us to the brown weapons of Old Germanic.

Abkaou, reçoit ses offrandes. 11e dynastie. Louvre Museum. Photo by Rama. CC BY-SA 2.0 FR via Wikimedia Commons.

The origin of bear and beaver from brown, though highly probable, is not absolutely assured, but the derivation of the Greek word phryne “toad” (stress on the first syllable) can hardly be put into question. Phryne looks like a perfect cognate of brown. (The famous hetaera Phryne is said to have received this nickname for her sallow skin, but other prostitutes were often called the same, and I have my own explanation of this fact; see below.) Toads, detested by some for all kinds of reasons, have occupied a conspicuous place in the superstitions of the whole world, beginning with at least the ancient Egyptian times. In Egypt, far from being shunned, they stood for fertility, and an amulet in the form of a toad supposedly replicated the uterus. Hequet was a goddess with the head of a frog.

Stories about frogs and toads are countless. One is especially famous. It is about a young man (prince) marrying a frog, which turns into a beautiful maiden. The Grimms knew a short and uninspiring version of this story (it is the opening one in their collection). In it the frog that insists on sleeping in the girl’s bed becomes a handsome prince, which is a variant of “Beauty and the Beast”; as a rule, in such tales the frog or the toad is a female. I would like to suggest that the nickname Phryne had nothing to do with the hetaera’s skin. All other prostitutes who were called this could not have had the same tint. Since in the popular imagination toads and fertility went together and since Egyptian mythology and beliefs exercised a strong influence on the Greek mind, calling prostitutes toads would have made good sense.

Thus, as we can see, toads (brown creatures) were associated with things bad and good. On the one hand, they were feared for their supposed ugliness and identified with witches. On the other, they were venerated and thought to promote fertility. In that capacity, they frequently received votive offerings. From Egypt we should go to the British Isles, for whose sake I have told my story. As far as I can judge, no accepted etymology of brownie “imp” exists. The books at my disposal only say that brownies, benevolent imps, originated in Scotland and were brown. The earliest citations go back to the early seventeenth century. I have as little trust in brown brownies as in the brown-skinned Phryne among the Greeks. The name must have had magic connotations, but whether positive or negative is open to question. As time goes on, such creatures often change their attitude toward the houses they haunt. They can be friendly if treated well and hostile if offended. By contrast, brownies, chocolate cakes with nuts, are always brown and sweet (chocolate-colored, by definition).

My second example is literary. In Dickens’s novel Dombey and Son, Mr. Dombey’s little daughter Florence is abducted by an ugly old rag and bone vendor. When the girl asks the woman about her name, it is given to her as Mrs. Brown and amended to Good Mrs. Brown. “She was a very ugly old woman, with red rims round her eyes, and a mouth that mumbled and chattered of itself when she was not speaking.” This is how she introduced herself to Florence: “…don’t vex me. If you don’t, I tell you I won’t hurt you. But if you do, I’ll kill you. I could have killed you at any time—even if you was in your own bed at home.” I am sure somewhere in the immense literature on Dickens the folklore of Mrs. Brown was explained long ago. In any case, Dickens must have had a reason for calling the witch Mrs. Brown and adding ominously the ironic epithet good to the name, to reinforce the impression.

And here is a final flourish for today. I will be grateful for some reliable information on the origin of the last name Brown ~ Braune. Dictionaries say that the name goes back to the color of its bearers. I find this explanation puzzling. It is as though thousands of our neighbors were bears, beavers, and toads.

To be continued.

The post A Study in Brown and in a Brown Study, Part 1 appeared first on OUPblog.

0 Comments on A Study in Brown and in a Brown Study, Part 1 as of 9/24/2014 11:14:00 AM
Add a Comment
22. Intergenerational perspectives on psychology, aging, and well-being

Why are people afraid to get old? Research shows that having a bad attitude toward aging at a young age is only detrimental to the young person’s health and well-being in the long run. Contrary to common wisdom, our sense of well-being actually increases with age–often even in the presence of illness or disability. Mindy Greenstein, PhD, and Jimmie Holland, MD, debunk the myth that growing older is something to fear in their new book Lighter as We Go: Virtues, Character Strengths, and Aging. In the following videos, Dr. Greenstein and Dr. Holland are joined by Holland’s granddaughter Madeline in a thought-provoking discussion of their different perspectives on aging and well-being.

The Relationship between Wisdom and Age

The Bridge between Older People and Younger Generations

On Fluctuations in Well-Being throughout Life

The Vintage Readers Book Club

Headline image credit: Cloud Sky over Brest. Photo by Luca Lorenzi. CC BY-SA 3.0 via Wikimedia Commons

The post Intergenerational perspectives on psychology, aging, and well-being appeared first on OUPblog.

0 Comments on Intergenerational perspectives on psychology, aging, and well-being as of 9/24/2014 11:14:00 AM
Add a Comment
23. The Oxford DNB at 10: new perspectives on medieval biography

September 2014 marks the tenth anniversary of the publication of the Oxford Dictionary of National Biography. Over the next month a series of blog posts explore aspects of the Dictionary’s online evolution in the decade since 2004. In this post, Henry Summerson considers how new research in medieval biography is reflected in ODNB updates.

Today’s publication of the Oxford Dictionary of National Biography’s September 2014 update—marking the Dictionary’s tenth anniversary—contains a chronological bombshell. The ODNB covers the history of Britons worldwide ‘from the earliest times’, a phrase which until now has meant since the fourth century BC, as represented by Pytheas, the Marseilles merchant whose account of the British Isles is the earliest known to survive. But a new ‘biography’ of the Red Lady of Paviland—whose incomplete skeleton was discovered in 1823 in Wales, and which today resides in Oxford’s Museum of Natural History—takes us back to distant prehistory. As the earliest known site of ceremonial human burial in western Europe, Paviland expands the Dictionary’s range by over 32,000 years.

The Red Lady’s is not the only ODNB biography pieced together from unidentified human remains (Lindow Man and the Sutton Hoo burial are others), while the new update also adds the fifteenth-century ‘Worcester Pilgrim’ whose skeleton and clothing are on display at the city’s cathedral. However, the Red Lady is the only one of these ‘historical bodies’ whose subject has changed sex—the bones having been found to be those of a pre-historical man, and not (as was thought when they were discovered), of a Roman woman.

The process of re-examination and re-interpretation which led to this discovery can serve as a paradigm for the development of the DNB, from its first edition (1885-1900) to its second (2004), and its ongoing programme of online updates. In the case of the Red Lady the moving force was in its broadest sense scientific. In this, ‘he’ is not unique in the Dictionary. The bones of the East Frankish queen Eadgyth (d.946), discovered in 2008, provide another example of human remains giving rise to a recent biography. But changes in analysis have more often originated in more conventional forms of historical scholarship. Since 2004 these processes have extended the ODNB’s pre-1600 coverage by 300 men and women, so bringing the Dictionary’s complement for this period to more than 7000 individuals.

In part, these new biographies are an evolution of the Dictionary as it stood in 2004 as we broaden areas of coverage in the light of current scholarship. One example is the 100 new biographies of medieval bishops that, added to the ODNB’s existing selection, now provide a comprehensive survey of every member of the English episcopacy from the Conquest to the Reformation—a project further encouraged by the publication of new sources by the Canterbury and York Society and the Early English Episcopal Acta series.

Taken together these new biographies offer opportunities to explore the medieval church, with reference to incumbents’ background and education, the place of patronage networks, or the shifting influence of royal and papal authority. That William Alnwick (d.1449), ‘a peasant born of a low family’, could become bishop of Norwich and Lincoln is, for example, indicative of the growing complexity of later medieval episcopal administration and its need for talented men. A second ODNB project (still in progress) focuses on late-medieval monasticism. Again, some notable people have come to light, including the redoubtable Elizabeth Cressener, prioress of Dartford, who opposed even Thomas Cromwell with success.

Magna Carta, courtesy of the British Library. Public domain via Wikimedia Commons.
Magna Carta, courtesy of the British Library. Public domain via Wikimedia Commons.

Away from religious life, recent projects to augment the Dictionary’s medieval and early modern coverage have focused on new histories of philanthropy—with men like Thomas Alleyne, a Staffordshire clergyman whose name is preserved by three schools—and of royal courts and courtly life. Hence first-time biographies of Sir George Blage, whom Henry VIII used to address as ‘my pig’, and at a lower social level, John Skut, the tailor who made clothes for most of the king’s wives: ‘while Henry’s queens came and went, John Skut remained.’

Alongside these are many included for remarkable or interesting lives which illuminate the past in sometimes unexpected ways. At the lowest social level, such lives may have been very ordinary, but precisely because they were commonplace they were seldom recorded. Where a full biography is possible, figures of this kind are of considerable interest to historians. One such is Agnes Cowper, a Southwark ‘servant and vagrant’ in the years around 1600; attempts to discover who was responsible for her maintenance shed a fascinating light on a humble and precarious life, and an experience shared by thousands of late-Tudor Londoners. Such light falls only rarely, but the survival of sources, and the readiness of scholars to investigate them, have also led to recent biographies of the Roman officers and their wives at Vindolanda, based on the famous ‘tablets’ found at Chesterholm in Northumberland; the early fourteenth-century anchorite Christina Carpenter, who provoked outrage by leaving her cell (but later returned to it), and whose story has inspired a film, a play and a novel; and trumpeter John Blanke, whose fanfares enlivened the early Tudor court and whose portrait image is the only identifiable likeness of a black person in sixteenth-century British art.

While people like Blanke are included for their distinctiveness, most ODNB subjects can be related to the wider world of their contemporaries. A significant component of the Dictionary since 2004 has been an interest in recreating medieval and early modern networks and associations; they include the sixth-century bringers of Christianity to England, the companions of William I, and the enforcers of Magna Carta. Each establishes connections between historical figures, sets the latter in context, and charts how appreciations of these networks and their participants have developed over time—from the works of early chroniclers to contemporary historians. Indeed, in several instances, notably the Round Table knights or the ‘Merrie Men’, it is this (often imaginative) interpretation and recreation of Britain’s medieval past that is to the fore.

The importance of medieval afterlives returns us to the Red Lady of Paviland. His biography presents what can be known, or plausibly surmised, about its subject, alongside the ways in which his bodily remains (and the resulting life) have been interpreted by successive generations—each perceptibly influenced by the cultural as well as scholarly outlook of the day. Next year sees the 800th anniversary of the granting of Magna Carta, a centenary which can be confidently expected to bring further medieval subjects into the Oxford Dictionary of National Biography. It is unlikely that the historians responsible will be unaffected by considerations of the long-term significance of the Charter. Nor, indeed, should they be—it is the interaction of past and present which does most to bring historical biography to life.

The post The Oxford DNB at 10: new perspectives on medieval biography appeared first on OUPblog.

0 Comments on The Oxford DNB at 10: new perspectives on medieval biography as of 9/25/2014 6:42:00 AM
Add a Comment
24. Q&A with Jake Bowers, co-author of 2014 Miller Prize Paper

Despite what many of my colleagues think, being a journal editor is usually a pretty interesting job. The best part about being a journal editor is working with authors to help frame, shape, and improve their research. We also have many chances to honor specific authors and their work for being of particular importance. One of those honors is the Miller Prize, awarded annually by the Society for Political Methodology for the best paper published in Political Analysis in the preceding year.

The 2013 Miller Prize was awarded to Jake Bowers, Mark M. Fredrickson, and Costas Panagopoulos, for their paper, “Reasoning about Interference Between Units: A General Framework.” To recognize the significance of this paper, it is available for free online access for the next year. The award committee summarized the contribution of the paper:

“…the article tackles a difficult and pervasive problem—interference among units—in a novel and compelling way. Rather than treating spillover effects as a nuisance to be marginalized over or, worse, ignored, Bowers et al. use them as an opportunity to test substantive questions regarding interference … Their work also brings together causal inference and network analysis in an innovative and compelling way, pointing the way to future convergence between these domains.”

In other words, this is an important contribution to political methodology.

I recently posed a number of questions to one of the authors of the Miller Prize paper, Jake Bowers, asking him to talk more about this paper and its origins.

R. Michael Alvarez: Your paper, “Reasoning about Interference Between Units: A General Framework” recently won the Miller Prize for the best paper published in Political Analysis in the past year. What motivated you to write this paper?

Jake Bowers: Let me provide a little background for readers not already familiar with randomization-based statistical inference.

Randomized designs provide clear answers to two of the most common questions that we ask about empirical research: The Interpretation Question: “What does it mean that people in group A act differently from people in group B?” and The Information Question: “How precise is our summary of A-vs-B?” (Or, more defensively, “Do we really have enough information to distinguish A from B?”).

If we have randomly assigned some A-vs-B intervention, then we can answer the interpretation question very simply: “If group A differs from group B, it is only because of the A-vs-B intervention. Randomization ought to erase any other pre-existing differences between groups A and B.”

In answering the information question, randomization alone also allows us to characterize other ways that the experiment might have turned out: “Here are all of the possible ways that groups A and B could differ if we re-randomized the A-vs-B intervention to the experimental pool while entertaining the idea that A and B do not differ. If few (or none) of these differences are as large as the one we observe, we have a lot of information against the idea that A and B do not differ. If many of these differences are as large as the one we see, we don’t have much information to counter the argument that A and B do not differ.”

Of course, these are not the only questions one should ask about research, and interpretation should not end with knowing that an input created an output. Yet, these concerns about meaning and information are fundamental and the answers allowed by randomization offer a particularly clear starting place for learning from observation. In fact, many randomization-based methods for summarizing answers to the information question tend to have validity guarantees even with small samples. If we really did repeat the experiment in all the possible ways that it could have been done, and repeated a common hypothesis test many times, we would reject a true null hypothesis no more than α% of the time even if we had observed only eight people (Rosenbaum 2002, Chap 2).
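The exhaustive re-randomization Bowers describes can be sketched in a few lines of Python. This is a minimal illustration with made-up outcomes, not code from the paper: for a small experiment we can enumerate every possible assignment and compute an exact p-value for the sharp null hypothesis of no effect.

```python
from itertools import combinations

def diff_in_means(outcomes, treated_idx):
    """Difference in mean outcomes between treated and control units."""
    treated = [outcomes[i] for i in treated_idx]
    control = [outcomes[i] for i in range(len(outcomes)) if i not in treated_idx]
    return sum(treated) / len(treated) - sum(control) / len(control)

def exact_randomization_p(outcomes, treated_idx):
    """One-sided exact p-value for the sharp null of no effect.

    Under the sharp null, every unit's outcome is fixed regardless of
    assignment, so we enumerate every way the treatment could have been
    assigned and count how often the difference in means is at least as
    large as the one actually observed.
    """
    n, k = len(outcomes), len(treated_idx)
    observed = diff_in_means(outcomes, treated_idx)
    stats = [diff_in_means(outcomes, set(c)) for c in combinations(range(n), k)]
    return sum(s >= observed for s in stats) / len(stats)

# Eight units, four treated: the same scale as the eight-city experiment.
outcomes = [5.0, 6.0, 7.0, 8.0, 1.0, 2.0, 3.0, 4.0]
p = exact_randomization_p(outcomes, {0, 1, 2, 3})
# Here the observed split is the most extreme of all 70 assignments,
# so p = 1/70, roughly 0.014.
```

With eight units and four treated there are only 70 possible assignments, so no test statistic can ever yield a p-value below 1/70; this is one reason small randomized studies pair naturally with exact tests of this kind.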

In fact a project with only eight cities impelled this paper. Costas Panagopoulos had administered a field experiment of newspaper advertising and turnout to eight US cities, and he and I began to discuss how to produce substantively meaningful, easy to interpret, and statistically valid answers to the question about the effect of advertising on turnout. Could we hypothesize that, for example, the effect was zero for three of the treated cities, and more than zero for one of the treated cities? The answer was yes.

I realized that hypotheses about causal effects do not need to be simple, and, furthermore, they could represent substantive, theoretical models very directly. Soon, Mark Fredrickson and I started thinking about substantive models in which treatment given to one city might have an effect on another city. It seemed straightforward to write down these models. We had read Peter Aronow’s and Paul Rosenbaum’s papers on the sharp null model of no effects and interference, and so we didn’t think we were completely off base to imagine that, if we side-stepped estimation of average treatment effects and focused on testing hypotheses, we could learn something about what we called “models of interference”. But, we had not seen this done before. So, in part because we worried about whether we were right about how simple it was to write down and test hypotheses generated from models of spillover or interference between units, we wrote the “Reasoning about Interference” paper to see if what we were doing with Panagopoulos’ eight cities would scale, and whether it would perform as randomization-based tests should. The paper shows that we were right.
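The kind of hypothesis just described (zero effect for three treated cities, a specified nonzero effect for one) is a sharp hypothesis, and one standard way to test it is to subtract the hypothesized effects from the treated outcomes and then re-run the randomization test on the adjusted outcomes. The sketch below illustrates that general logic; the turnout numbers and the hypothesized effects are invented, and this is not the paper's actual code.

```python
import itertools

import numpy as np

# Hypothetical turnout outcomes for eight cities, four treated.
y = np.array([44.0, 51.0, 47.0, 55.0, 40.0, 42.0, 41.0, 43.0])
z = np.array([True, True, True, True, False, False, False, False])

# Sharp hypothesis: effect 0 for three treated cities, 8 for one.
# (The model must specify an effect for every unit; controls get 0 here.)
tau = np.array([0.0, 0.0, 0.0, 8.0, 0.0, 0.0, 0.0, 0.0])

# Under the hypothesis, removing the effect from treated units recovers
# the uniformity-trial outcomes, which would not depend on assignment.
y0 = y - z * tau

def stat(assign, y0_, tau_):
    # Outcomes we would observe under this assignment if the hypothesis held.
    y_obs = y0_ + assign * tau_
    return y_obs[assign].mean() - y_obs[~assign].mean()

observed = y[z].mean() - y[~z].mean()

dist = []
n = len(y)
for idx in itertools.combinations(range(n), 4):
    a = np.zeros(n, dtype=bool)
    a[list(idx)] = True
    dist.append(stat(a, y0, tau))

p_value = np.mean(np.abs(dist) >= abs(observed))
```

A small p-value counts as evidence against this particular sharp hypothesis; scanning over many hypothesized effect vectors traces out which hypotheses the data are compatible with.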

R. Michael Alvarez: In your paper, you focus on the “no interference” assumption that is widely discussed in the contemporary literature on causal models. What is this assumption and why is it important?

Jake Bowers: When we say that some intervention, Z_i, caused some outcome for some person, i, we often mean that the outcome we would have seen for person i when the intervention was not active, Z_i = 0 — written as y_{i, Z_i=0} — would have been different from the outcome we would have seen if the intervention were active for that same person (at that same moment in time), Z_i = 1 — written as y_{i, Z_i=1}. Most people would say that the treatment had an effect on person i when i would have acted differently under the intervention than under the control condition, such that y_{i, Z_i=1} ≠ y_{i, Z_i=0}. David Cox (1958) noticed that this definition of causal effects involves an assumption that an intervention assigned to one person does not influence the potential outcomes for another person. (Henry Brady’s piece, “Causation and Explanation in Social Science,” in the Oxford Handbook of Political Methodology provides an excellent discussion of the no-interference assumption and of Don Rubin’s formalization and generalization of Cox’s no-interference assumption.)

As an illustration of the confusion that interference can cause, imagine we had four people in our study, i ∈ {1, 2, 3, 4}. When we write that the intervention had an effect for person i = 1, y_{1, Z_1=1} ≠ y_{1, Z_1=0}, we are saying that person 1 would act the same when Z_1 = 1 regardless of how the intervention was assigned to any other person, such that

y_{1, {Z_1=1, Z_2=1, Z_3=0, Z_4=0}} = y_{1, {Z_1=1, Z_2=0, Z_3=1, Z_4=0}} = y_{1, {Z_1=1, …}}

If we do not make this assumption, then we cannot write down a treatment effect in terms of a simple comparison of two groups. Even if we randomly assigned the intervention to two of the four people in this little study, each person would have six potential outcomes rather than only two (you can see two of person 1’s six potential outcomes in the display above). Randomization does not help us decide what a “treatment effect” means, and six counterfactuals per person pose a challenge for the conceptualization of causal effects.
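The counting behind "six potential outcomes" is easy to verify: with four units and two treated, there are 4-choose-2 = 6 possible assignment vectors, and under unrestricted interference each person's outcome may depend on the whole vector. A purely illustrative check:

```python
from itertools import combinations

units = [1, 2, 3, 4]

# Every way to assign the intervention to exactly two of the four units.
assignments = list(combinations(units, 2))

# Under unrestricted interference, person 1's outcome may depend on the
# entire assignment vector, so person 1 has one potential outcome per
# possible assignment -- six in all, not just y_{1,Z_1=1} and y_{1,Z_1=0}.
n_potential_outcomes_per_person = len(assignments)
```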

So, interference is a problem with the definition of causal effects. It is also a problem with estimation. Many folks know about what Paul Holland (1986) calls the “Fundamental Problem of Causal Inference” that the potential outcomes heuristic for thinking about causality reveals: we cannot ever know the causal effect for person (i) directly because we can never observe both potential outcomes. I know of three main solutions for this problem, each of which have to deal with problems of interference:

  • Jerzy Neyman (1923) showed that if we change our substantive focus from individual level to group level comparisons, and to averages in particular, then randomization would allow us to learn about the true, underlying, average treatment effect using the difference of means observed in the actual study (where we only see responses to intervention for some but not all of the experimental subjects).
  • Don Rubin (1978) showed a Bayesian predictive approach — with a probability model of the outcomes of your study and a probability model for the treatment effect itself, you can predict the unobserved potential outcomes for each person in your study and then take averages of those predictions to produce an estimate of the average treatment effect.
  • Ronald Fisher (1935) suggested another approach which maintained attention on the individual level potential outcomes, but did not use models to predict them. He showed that randomization alone allows you to test the hypothesis of “no effects” at the individual level. Interference makes it difficult to interpret Neyman’s comparisons of observed averages and Rubin’s comparison of predicted averages as telling us about causal effects because we have too many possible averages.

It turns out that Fisher’s sharp null hypothesis test of no effects is simple to interpret even when we have unknown interference between units. Our paper starts from that idea and shows that, in fact, one can test sharp hypotheses about effects rather than only no effects.

Note that there has been a lot of great recent work trying to define and estimate average treatment effects under interference by folks like Cyrus Samii and Peter Aronow, Neelanjan Sircar and Alex Coppock, Panos Toulis and Edward Kao, Tyler VanderWeele, Eric Tchetgen Tchetgen and Betsy Ogburn, Michael Sobel, and Michael Hudgens, among others. I also think that interference poses a smaller problem for Rubin’s approach in principle — one would add a model of interference to the list of models (of outcomes, of intervention, of effects) used to predict the unobserved outcomes. (This approach has been used, without formalization in terms of counterfactuals, in both the spatial and network modeling worlds.) One might then focus on posterior distributions of quantities other than simple differences of averages, or interpret such differences as reflecting the kinds of weightings used in the work that I gestured to at the start of this paragraph.

R. Michael Alvarez: How do you relax the “no interference” assumption in your paper?

Jake Bowers: I would say that we did not really relax an assumption, but rather side-stepped the need to think of interference as an assumption. Since we did not use the average causal effect, we were not facing the same problems of requiring that all potential outcomes collapse down to two averages. However, what we had to do instead was use what Paul Rosenbaum might call Fisher’s solution to the fundamental problem of causal inference. Fisher noticed that, even if you couldn’t say that a treatment had an effect on person (i), you could ask whether we had enough information (in our design and data) to shed light on a question about whether or not the treatment had an effect on person (i). In our paper, Fisher’s approach meant that we did not need to define our scientifically interesting quantity in terms of averages. Instead, we had to write down hypotheses about no interference. That is, we did not really relax an assumption, but instead we directly modelled a process.

Rosenbaum (2007) and Aronow (2011), among others, had noticed that the hypothesis that Fisher is most famous for, the sharp null hypothesis of no effects, in fact does not assume no interference, but rather implies no interference (i.e., if the treatment has no effect for any person, then it does not matter how treatment has been assigned). So, in fact, the assumption of no interference is not really a fundamental piece of how we talk about counterfactual causality, but a by-product of a commitment to the use of a particular technology (simple comparisons of averages). We took a next step in our paper and realized that Fisher’s sharp null hypothesis implied a particular, and very simple, model of interference (a model of no interference). We then set out to see if we could write other, more substantively interesting models of interference. So, that is what we show in the paper: one can write down a substantive theoretical model of interference (and of the mechanism for an experimental effect to come to matter for the units in the study) and then this model can be understood as a generator of sharp null hypotheses, each of which could be tested using the same randomization inference tools that we have been studying for their clarity and validity previously.

R. Michael Alvarez: What are the applications for the approach you develop in your paper?

Jake Bowers: We are working on a couple of applications. In general, our approach is useful as a way to learn about substantive models of the mechanisms for the effects of experimental treatments.

For example, Bruce Desmarais, Mark Fredrickson, and I are working with Nahomi Ichino, Wayne Lee, and Simi Wang on how to design randomized experiments to learn about models of the propagation of treatments across a social network. If we think that an experimental intervention on some subset of Facebook users should spread in some certain manner, then we are hoping to have a general way to think about how to design that experiment (using our approach to learn about that propagation model, but also using some of the new developments in network-weighted average treatment effects that I referenced above). Our very early work suggests that, if treatment does propagate across a social network following a common infectious disease model, then you might prefer to assign relatively few units to direct intervention.

In another application, Nahomi Ichino, Mark Fredrickson, and I are using this approach to learn about agent-based models of the interaction of ethnicity and party strategies of voter registration fraud using a field experiment in Ghana. To improve our formal models, another collaborator, Chris Grady, is going to Ghana to do in-depth interviews with local party activists this fall.

R. Michael Alvarez: Political methodologists have made many contributions to the area of causal inference. If you had to recommend to a graduate student two or three things in this area that they might consider working on in the next year, what would they be?

Jake Bowers: As for advice for graduate students, here are some of the questions I would love to learn about.

  • How should we move from formal, equilibrium-oriented, theories of behavior to models of mechanisms of treatment effects that would allow us to test hypotheses and learn about theory from data?
  • How can we take advantage of estimation-based procedures or procedures developed without specific focus on counterfactual causal inference if we want to make counterfactual causal inferences about models of interference? How should we reinterpret or use tools from spatial analysis like those developed by Rob Franzese and Jude Hays or tools from network analysis like those developed by Mark Handcock to answer causal inference questions?
  • How can we provide general advice about how to choose test-statistics to summarize the observable implications of these theoretical models? We know that the KS-test used in our article is pretty low-powered. And we know from Rosenbaum (Chap 2, 2002) that certain classes of test statistics have excellent properties in one-dimension, but I wonder about general properties of multi-parameter models and test statistics that can be sensitive to multi-way differences in distribution between experimental groups.
  • How should we apply ideas from randomized studies to the observational world? What does adjustment for confounding/omitted variable bias (by matching, “controlling for,” or weighting) mean in the context of social networks or spatial relations? How should we do and judge such adjustment? And what might Rosenbaum-inspired sensitivity analysis or Manski-inspired bounds analysis mean when we move away from testing one parameter or estimating one quantity?
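On the test-statistic question raised above: the two-sample Kolmogorov–Smirnov statistic used in the article is simply the maximum vertical gap between the two groups' empirical CDFs, and it can serve as the test statistic inside the same randomization test. A minimal sketch with invented data:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample KS statistic: max distance between the empirical CDFs."""
    grid = np.sort(np.concatenate([x, y]))
    ecdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    ecdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(ecdf_x - ecdf_y))

treated = np.array([1.2, 2.4, 3.1, 4.8])
control = np.array([0.9, 1.1, 1.4, 1.6])
d = ks_statistic(treated, control)
```

Because it responds to any difference in distribution rather than concentrating power on, say, a shift in means, the KS statistic tends to be low-powered against specific alternatives, which is exactly the trade-off flagged in the bullet above.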

R. Michael Alvarez: You do a lot of work with software tool development and statistical computing. What are you working on now that you are most excited about?

Jake Bowers: I am working on two computationally oriented projects that I find very exciting. The first involves using machine learning/statistical learning for optimal covariance adjustment in experiments (with Mark Fredrickson and Ben Hansen). The second involves collecting thousands of hand-drawn maps on Google maps as GIS objects to learn about how people define and understand the places where they live in Canada, the United Kingdom, and the United States (with Cara Wong, Daniel Rubenson, Mark Fredrickson, Ashlea Rundlett, Jane Green, and Edward Fieldhouse).

When an experimental intervention has produced a difference in outcomes, comparisons of treated to control outcomes can sometimes fail to detect this effect, in part because the outcomes themselves are naturally noisy in comparison to the strength of the treatment effect. We would like to reduce the noise that is unrelated to treatment (say, remove the noise related to background covariates, like education) without ever estimating a treatment effect (or testing a hypothesis about a treatment effect). So far, people shy away from using covariates for precision enhancement of this type because every model in which they soak up noise with covariates is also a model in which they look at the p-value for their treatment effect. This project learns from the growing literature in machine learning (aka statistical learning) to turn specification of the covariance-adjustment part of a statistical model over to an automated system focused on the control group only, which thus bypasses concerns about data snooping and multiple p-values.
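The control-group-only idea can be sketched simply: fit a flexible model of outcome on covariates using control units alone, residualize everyone's outcome against that fit, and use the residuals as the less noisy outcome in the treatment comparison. The sketch below is hypothetical (a plain linear fit on simulated data, standing in for whatever statistical-learning procedure the project actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: outcome = covariate signal + noise + effect.
n = 200
x = rng.normal(size=n)              # background covariate (e.g. education)
z = rng.permutation(n) < n // 2     # randomized treatment indicator
y = 2.0 * x + rng.normal(size=n) + 1.0 * z

# Fit the covariance-adjustment model on the CONTROL group only, so the
# treatment effect never enters the specification search.
coef = np.polyfit(x[~z], y[~z], deg=1)
resid = y - np.polyval(coef, x)

# The residualized outcome carries the same treatment signal with less
# covariate-driven noise; compare the two treated-vs-control contrasts.
raw_diff = y[z].mean() - y[~z].mean()
adj_diff = resid[z].mean() - resid[~z].mean()
```

Because the fit never sees treated outcomes, no p-value for the treatment effect is consulted during model selection, which is the sense in which data snooping is bypassed.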

The second project involves using Google maps embedded in online surveys to capture hand-drawn maps representing how people respond when asked to draw the boundaries of their “local communities.” So far we have over 7000 such maps from a large survey of Canadians, and we plan to have data from this module carried on the British Election Study and the US Cooperative Congressional Election Study within the next year. We are using these maps and associated data to add to the “context/neighborhood effects” literature to learn how psychological understandings of place by individuals relates to Census measurements and also to individual level attitudes about inter-group relations and public goods provision.

Headline image credit: Abstract city and statistics. CC0 via Pixabay.

The post Q&A with Jake Bowers, co-author of 2014 Miller Prize Paper appeared first on OUPblog.

0 Comments on Q&A with Jake Bowers, co-author of 2014 Miller Prize Paper as of 9/25/2014 9:28:00 AM
25. Seven fun facts about the ukulele

The ukulele, a small four-stringed instrument of Portuguese origin, was patented in Hawaii in 1917, deriving its name from the Hawaiian word for “leaping flea.” Immigrants from the island of Madeira first brought a pair of Portuguese instruments to Hawaii in the late 1870s, from which the ukulele eventually developed. Trace the origins of the ukulele, follow its evolution and path to present-day popularity, and explore interesting facts about this instrument with Oxford Reference.

1. Developed from a four-string Madeiran instrument and built from Hawaiian koa wood, ukuleles were popular among the Hawaiian royalty in the late 19th century.

2. The 1893 World’s Columbian Exposition in Chicago saw the first major performance of Hawaiian music with ukulele on the US mainland.

3. By 1916, Hawaiian music had become a national craze, and the ukulele was incorporated into popular American culture soon afterwards.

4. Singin’ In The Rain vocalist Cliff Edwards was also known as Ukulele Ike, and was one of the best known ukulele players during the height of the instrument’s popularity in the United States.

Cliff Edwards playing ukulele with phonograph, 1947. Photography from the William P. Gottlieb Collection. Public domain via Wikimedia Commons.

5. When its sales reached millions in the 1920s, the ukulele became an icon of the decade in the United States.

6. Ernest Ka’ai wrote the earliest known ukulele method, The Ukulele, A Hawaiian Guitar and How to Play It (1906).

7. George Formby, the highest-paid entertainer and top box-office attraction in Britain during the 1930s and ’40s, popularized the ukulele in the United Kingdom.

Headline image credit: Ukuleles. Photo by Ian Ransley. CC BY 2.0 via design-dog Flickr.

The post Seven fun facts about the ukulele appeared first on OUPblog.

0 Comments on Seven fun facts about the ukulele as of 9/25/2014 9:28:00 AM
