Viewing: Blog Posts Tagged with: journals, Most Recent at Top
Results 1 - 25 of 179
1. Early blues and country music

Beginning in the early 1920s, and continuing through the mid 1940s, record companies separated vernacular music of the American South into two categories, divided along racial lines: the “race” series, aimed at a black audience, and the “hillbilly” series, aimed at a white audience. These series were the precursors to the also racially separated Rhythm & Blues and Country & Western charts, and arguably the source of the frequent racial divisions of today’s recording industry. But a closer examination reveals that the two populations rely heavily on many of the same musical resources, and that early blues and country music exhibit thorough interpenetration.

Many admirers of early blues and country music observe that black and white musicians from the 1920s to the 1940s share much with respect to repertoire and genre, and that the separation of the two on commercial recordings grew out of the prejudices of record companies. It becomes even more apparent how deeply intertwined the two traditions are when we examine blues and country musicians’ shared stock of schemes. Schemes are preexisting harmonic grounds and melodic structures that are common resources for the creation of songs. A scheme generates multiple distinct songs, with different lyrics and titles. Many schemes generated songs in both blues and country music.

There are several different types of blues and country schemes. One type is a harmonic progression that combines with one particular tune. The “Trouble In Mind” scheme, for example, generates both Bertha “Chippie” Hill’s “Trouble in Mind” (1) and the Hackberry Ramblers’ “Fais Pas Ça” (2). Both use the same harmonic progression, and the two melodies vary only slightly. Hill recorded for the “race” series, and the Hackberry Ramblers for the “hillbilly” series.

1. Bertha “Chippie” Hill, “Trouble in Mind” (Bertha “Chippie” Hill—Document Records)

2. Hackberry Ramblers, “Fais Pas Ça” (Jolie Blonde—Arhoolie Productions)

A second type of scheme is a preexisting harmonic progression that musicians associate primarily with a specific tune, which they set to lyrics about various subjects, but which they also use to support original melodies. In the “Frankie and Johnny” scheme, the same melody combines with lyrics about Frankie’s shooting of Johnny (or Albert) (3), the Boll Weevil infestation at the turn of the twentieth century (4), and the gambler Stack O’Lee, who shot and killed fellow gambler Billy Lyons (5). Singers also use the harmonic progression to support original melodies, with lyrics about Frankie (6), Stack O’Lee (7), or another subject (8).

In all of the examples, the same correspondence between lyrics and harmony is evident in the harmonic shift that accompanies the completion of the opening rhyming couplet, on the words “above” (3), “your home” (4), “road” (5), “beer” (6), the first “Stack O’Lee” (7), and “that line” (8), and in the harmonic shifts that accompany emphasized words in the refrain, on the words “man” and “wrong” (3, 5, and 6), “no home” and “no home” (4), “bad man” and “Stack O’Lee” (7), and “bad” and “bad” (8). Four of the recordings given here are from the “race” labels, and two are from the “hillbilly” labels, but the same scheme generates all of them.

Jimmie Rodgers. Public domain via Wikimedia Commons.

3. Jimmie Rodgers, “Frankie and Johnny” (The Essential Jimmie Rodgers—Sony)

4. W. A. Lindsey, “Boll Weevil” (People Take Warning—Tomkins Square)

5. Ma Rainey, “Stack O’Lee Blues” (Ma Rainey’s Black Bottom—Yazoo)

6. Charley Patton, “Frankie and Albert” (Charley Patton Complete Recordings—JSP Records)

7. Mississippi John Hurt, “Stack O’Lee” (Before the Blues—Yazoo)

8. Henry Thomas, “Bob McKinney” (Texas Worried Blues—Document Records)

A third type of scheme is a preexisting harmonic progression that musicians use primarily to support original melodies. This type of scheme is the most productive, often supporting countless melodies. The best-known and most productive example is the standard twelve-bar blues scheme. All seven of the following recordings (9–15)—four from the “race” series and three from the “hillbilly” series—combine original melodies with the standard twelve-bar blues harmonic progression, and all demonstrate the AAB poetic form that typically accompanies the scheme, in which singers state the opening A line of a couplet twice and follow it with one statement of the rhyming B line.

9. Ida Cox, “Lonesome Blues” (Ida Cox Complete Recorded Works—Document Records)

10. Charley Patton, “Moon Going Down” (Charlie Patton Founder of the Delta Blues—Mastercopy Pty Ltd)

11. Jesse “Babyface” Thomas, “Down in Texas Blues” (The Stuff that Dreams are Made Of)

12. Lonnie Johnson, “Mr. Johnson’s Blues No. 2” (A Smithsonian Collection of Classic Blues Singers—Sony/Smithsonian)

13. W. Lee O’Daniel & His Hillbilly Boys, “Dirty Hangover Blues” (White Country Blues—Sony)

14. Jesse “Babyface” Thomas, “Down in Texas Blues” (White Country Blues—Sony)

15. Carlisle & Ball, “Guitar Blues” (White Country Blues—Sony)
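For readers who find a grid clearer than prose, the standard twelve-bar scheme behind recordings 9–15 can be sketched bar by bar. This is an illustrative sketch only: the key of C and the dominant-seventh voicings are assumptions chosen for the example, not details taken from these recordings.

```python
# The standard twelve-bar blues scheme as a bar-by-bar grid of scale degrees.
# "I", "IV", and "V" denote the tonic, subdominant, and dominant harmonies.
TWELVE_BAR_BLUES = [
    "I", "I", "I", "I",    # bars 1-4: tonic
    "IV", "IV", "I", "I",  # bars 5-8: subdominant, returning to tonic
    "V", "IV", "I", "I",   # bars 9-12: dominant, subdominant, cadence on tonic
]

def realize(scheme, chords):
    """Map a Roman-numeral grid onto concrete chords for one key."""
    return [chords[degree] for degree in scheme]

# An illustrative realization in C (the seventh chords are an assumption):
print(realize(TWELVE_BAR_BLUES, {"I": "C7", "IV": "F7", "V": "G7"}))
```

The point of the separation between `TWELVE_BAR_BLUES` and `realize` mirrors the article’s argument: the scheme is a shared resource, while each recording is one concrete realization of it.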

A fourth type of scheme is a preexisting melodic structure whose harmonizations display considerable variance and yet also certain requirements. The following four examples—two by black musicians and two by white musicians—are all realizations of the “Sitting on Top of the World” scheme, and use the same melodic structure. Their harmonizations are in some ways quite similar—for example, all four harmonize the beginning of the second, rhyming line with the same harmony, and accelerate the rate of harmonic change going into the cadence—but the harmonizations vary more than the melodic structure.

16. Tampa Red, “Things ‘Bout Coming My Way No. 2” (Tampa Red the Guitar Wizard—Sony)

17. Bill Broonzy, “Worrying You Off My Mind” (Big Bill Broonzy Good Time Tonight—Sony)

18. Bob Wills & His Texas Playboys, “Sittin’ on Top of the World” (Bob Wills & His Texas Playboys Anthology—Puzzle Productions)

19. The Carter Family, “I’m Sitting on Top of the World” (On Border Radio—Arhoolie)

Finally, a fifth type of scheme is a preexisting melodic structure for which performers have little shared conception of the harmonic progression. The last four examples—one by a black musician and three by white musicians—are all realizations of the “John Henry” scheme, and use the same melodic structure, but very different harmonic progressions. Riley Puckett, in his instrumental version, uses only one harmony throughout (20). Woody Guthrie uses two harmonies (21). The Williamson Brothers & Curry also use two harmonies, but arrive at a much different harmonization than Guthrie (22). Leadbelly uses three harmonies (23).

20. Riley Puckett, “A Darkey’s Wail” (White Country Blues—Sony)

21. Woody Guthrie, “John Henry” (Woody Guthrie Sings Folk Songs—Smithsonian Folkways Recordings)

22. Williamson Brothers & Curry, “Gonna Die with My Hammer in My Hand” (Anthology of American Folk Music—Smithsonian Folkways Recordings)

23. Leadbelly, “John Henry” (Lead Belly’s Last Sessions—Smithsonian Folkways Recordings)

Record companies presented American vernacular music in the context of a racial divide, but examining the common stock of schemes helps to reveal how extensively black and white musical traditions are intertwined. There are stylistic differences between blues and country music, but many differences lie on the surface, while on a deeper level the two populations frequently rely on the same musical foundations.

Headline image credit: Fiddlin’ Bill Hensley. Asheville, North Carolina. Public domain via Library of Congress.

The post Early blues and country music appeared first on OUPblog.

2. Corporate short-termism, the media, and the self-fulfilling prophecy

The business press and general media often lament that firm executives exhibit “short-termism”, succumbing to pressure from stock market investors to maximize quarterly earnings while sacrificing long-term investment and innovation. In our new article in the Socio-Economic Review, we suggest that this complaint is partly accurate, but partly not.

What seems accurate is that the maximization of short-term earnings by firms and their executives has become somewhat more prevalent in recent years, and that some of the roots of this phenomenon trace back to stock market investors. What is inaccurate, though, is the assumption that investors – even if they are “short-term traders” – inherently attend to short-term quarterly earnings when making trading decisions. Even “short-term trading” (i.e. buying stocks with the aim of selling them after a few minutes, days, or months) does not require a “short-term earnings focus”, i.e., making trading decisions based on short-term earnings (let alone on short-term earnings only). This means that when the media observe – or executives perceive – that stock market investors pressure firms to focus on short-term earnings, that pressure is partly illusory.

The illusion, in turn, rests on a “vociferous minority”: a minority of stock investors may focus on short-term earnings, producing a weak correlation between short-term earnings and stock price jumps or drops. The illusion is born when this is interpreted as if most or all investors (i.e., the majority) were focusing on short-term earnings only. Such an interpretation may, in dynamic markets, become a self-fulfilling prophecy – an increasing number of investors join the vociferous minority and focus increasingly on short-term earnings (even if a majority of investors still do not focus on short-term earnings alone). More importantly – or more unfortunately – firm executives may also start to maximize short-term earnings, under the (inaccurate) illusion that the majority of investors prefer it.

Rolls Royce, by Christophe Verdier. CC-BY-2.0 via Flickr.

A final paradox is the role of the media. Of course, the media have good intentions in lamenting short-termism in the markets, trying to draw attention to an unsatisfactory state of affairs. However, such lamenting stories may actually contribute to the self-fulfilling prophecy. Despite their lamenting tone, the articles still emphasize that market participants focus just on short-term earnings. This feeds the illusion that all investors focus on short-term earnings only – which in turn may lead ever more investors and firms to join the minority’s bandwagon, in the belief that everyone else is doing so too.

Should the media do something different, then? Well, we suggest that in this case, the media should report more on “positive stories”, or cases whereby firms have managed to create great innovations with a patient, longer-term focus. The media could also report on an increasing number of investors looking at alternative, long-term measures (such as patents or innovation rates) instead of short-term earnings.

So, more stories like this one about Rolls-Royce – but without claiming or lamenting that most investors just want “quick results” (i.e., without portraying cases like Rolls-Royce as rare exceptions). Such positive stories could, in the best scenario, contribute to a reverse self-fulfilling prophecy – whereby more and more investors, and thereafter firm executives, would shed some of the excessive focus on short-term earnings they might currently have.

The post Corporate short-termism, the media, and the self-fulfilling prophecy appeared first on OUPblog.

3. Questions surrounding open access licensing

Open access (OA) publishing stands at something of a crossroads. OA is now part of the mainstream. But with increasing success and increasing volume come increasing complexity, scrutiny, and demand. There are many facets of OA which will prove to be significant challenges for publishers over the next few years. Here I’m going to focus on one — licensing — and discuss how the arguments seen over licensing in recent months shine a light on the difference between OA as a movement, and OA as a reality.

Today’s authors face a number of conflicting pressures. Publish in a high impact journal. Publish in a journal with the correct OA options as mandated by your funder. Publish in a journal with the correct OA options as mandated by your institution. Publish your article in a way which complies with government requirements on research excellence. They are then met by a wide array of options, and it’s no wonder we at OUP sometimes receive queries from authors confused as to which OA option they should choose.

One of the most interesting aspects of the various surveys Taylor & Francis (T&F) have conducted on open access over the past year or two has been the divergence between what authors say they want and what their funders/governments mandate. The T&F findings imply that, whilst there is generally a shared consensus as to what is meant by accessible, funders and researchers hold divergent positions on what constitutes reasonable reuse. T&F’s surveys always reveal the most restrictive licences in the Creative Commons (CC) suite, such as Creative Commons Attribution Non-Commercial No-Derivs (CC BY-NC-ND), to be the most popular, with the liberal Creative Commons Attribution (CC BY) licence coming in last. This squares neither with the mandates of funders, which are usually, but not always, pro CC BY, nor with author behaviour at OUP, where CC BY-NC-ND usually comes in a resounding third behind CC BY and CC BY-NC where the latter is available. It’s not a dramatic logical step to think that proliferation may lead to confusion, but given the conflicting evidence and demand, and the potential for change, it’s logical for publishers to offer myriad options. At the same time, elsewhere in the OA space we have a recent example of pressure to remove choice.

Creative Commons. Image by Giulio Zannol. CC BY 2.0 via giuli-o Flickr.

In July 2014, the International Association of Scientific, Technical and Medical Publishers (STM) released their ‘model licences’ for open access. These were, at their core, a series of alternatives to, and extensions of, the terms of the established CC licences. STM’s new addition did not go down well in OA circles, and a ‘Global Coalition’ subsequently called for their withdrawal. One of the interesting elements of the Coalition’s call was that, in amongst some very valid points about interoperability, etc., it fell back on the kind of language more commonly associated with a sermon to make the STM actions seem incompatible with some fundamental precepts about the practice of science: “let us work together in a world where the whole sum of human knowledge… is accessible, usable, reusable, and interoperable.” At root, the Coalition could be read as using positive terminology to frame an essentially negative action – barring a new entry to the market. Personally, I don’t have a strong opinion on the new STM licences. We don’t have any plans to adopt them at OUP (we use CC). But it was odd and striking that rather than letting a competitor to the CC status quo exist and in all likelihood fail, some serious OA players felt the need to call for that competitor’s withdrawal.

This illustrates one of the central challenges of the dichotomy of OA. On one hand you have OA as a political movement seeking to replace commercial interests with self-organized and self-governed communities of interest – a bottom-up aspiration for the common good, often applied in quite restricted ways, usually adhering to the Berlin, Budapest, and Bethesda declarations. On the other you have OA as a top-down pragmatic means to an end, aiming to improve the flow of research and, by extension, economic performance. The OA pragmatist might suggest that it’s fine for an author to be given the choice of liberal or less liberal OA licences, as long as they meet the basic criteria of being free to read and easy to re-use. The OA dogmatist might only be satisfied with the most liberal licence, and with OA on the terms they’ve come to believe is the correct interpretation of its core precepts. The danger of this approach is that there is a ‘right’ and a ‘wrong’, and, as can be seen from the language of the Global Coalition in responding to the STM licences, that can very easily translate into: “If you’re not with us, you’re against us.”

Against this backdrop, publishers find themselves in a thorny position. Do you (a) respect author choice, but possibly at some expense of simplicity, or do you (b) offer fewer options, but potentially leave members of the scholarly community feeling dissatisfied or disenfranchised by your standard option?

Oxford University Press at the moment chooses option (a), as we feel this is the more inclusive way to proceed. To me at least it feels right to give your customers choice. But there is an argument for streamlining processes, avoiding confusion, and giving users consistent knowledge of what to expect. Nature Publishing Group (NPG), for example, recently announced that as part of their move to full OA for Nature Communications they would be making CC BY their default, allowing other options only on request. This is notable inasmuch as it’s a very strong steer in a particular direction, while not ruling out everything else. NPG has done more than most to examine the choice issue – changing the order of their licences to see what authors select, sometimes varying charges, and so on. Empirical evidence such as this is essential for a viable and credible resolution to the future of OA licensing. Perhaps the Global Coalition should have given a more considered and less emotional response to the STM licences. Was repudiation necessary in a broad OA community which should be able to recognise and accept different variants of OA? It would be a shame if all the positive impacts of open access for the consumer came hand in hand with a diminution of scholarly freedom for the producer.

The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.

The post Questions surrounding open access licensing appeared first on OUPblog.

4. Race, sex, and colonialism

As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.


After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.

The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.


George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.

Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.


The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.

This sense of conflict and connection and of categorical uncertainty is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love?” By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.

All images courtesy of Carina Ray.

The post Race, sex, and colonialism appeared first on OUPblog.

5. Neurology and psychiatry in Babylon

How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.

It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine – for example, epilepsy, “the sacred disease” (Hippocrates), or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.

There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).

The Babylonians made important contributions to mathematics, astronomy, law and medicine conveyed in the cuneiform script, impressed into clay tablets with reeds, the earliest form of writing which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire cuneiform writing was replaced by Aramaic and simpler alphabetic scripts and was only revived (translated) by European scholars in the 19th century AD.

The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts held in museums in London, Paris, Berlin, and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive-compulsive disorder (OCD), psychopathic behaviour, depression, and anxiety. For example, they described most of the common seizure types we know today (e.g. tonic-clonic, absence, and focal motor seizures), as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion), and even a comprehensive account of the schizophrenia-like psychoses of epilepsy.

Epilepsy Tablet and the Dying Lioness, reproduced with kind permission of The British Museum.

Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.

The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders e.g. stroke or facial palsy had a physical basis requiring the attention of the physician or asû, using a plant and mineral based pharmacology. Most disorders, such as epilepsy, psychoses and depression were regarded as supernatural due to evil demons and spirits, or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias and psychopathic behaviour were viewed as a mystery, yet to be resolved, revealing a surprisingly open-minded approach.

From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.

A major and intriguing omission from these entirely objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. Such subjective phenomena became a field of description and enquiry only relatively recently, in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness” – a concept that became the province of a professional medical discipline, psychiatry, only in the last 200 years.

The post Neurology and psychiatry in Babylon appeared first on OUPblog.

6. Five key moments in the Open Access movement in the last ten years

In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.

2004/05 – Nucleic Acids Research (NAR) converts to OA

At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.

NAR Revenue Streams 2004
NAR Revenue Streams 2013

2008 – National Institutes of Health (NIH) Mandate Introduced

Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always align exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc., and not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its then-voluntary OA policy for authors. What happened next was hugely significant for the progress of open access. As part of an Act which passed through the US legislature, it was made mandatory for all NIH-funded authors to make their works available 12 months after publication. This was transformative in two ways: it meant thousands of articles published from NIH research became available through PubMed Central (PMC), and perhaps just as importantly it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.

2008 – Springer buys BioMed Central (BMC)

BioMed Central was the first for-profit open access publisher – and from its inception in 2000 it was closely watched in the industry to see if it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was proving to be more than just an interesting sideline. It also reflected the big players in the industry starting to take OA very seriously, and has been followed by other deals – for example Nature’s investment in Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC they have also massively increased their fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals back to subscription access.

2007 on – Growth of PLOS ONE

The head and shoulders of Janet Finch, pictured on the platform as a guest speaker at the 11 November 2003 General Meeting of the Keele University Students’ Union. KUSU Ballroom, Keele, Staffordshire, UK. Public domain via Wikimedia Commons.

The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Top journals have typically had their niche and been selective: a journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review is methodological (i.e. is the basic science sound?) rather than judging novelty or importance. This enabled PLOS ONE to rapidly become the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA which enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal: the price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals, and more than any other title it has energised the development of OA, dispelling previously-held notions of what could and couldn’t be done in journals publishing.

2012 – The ‘Finch’ Report

It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a working group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt as time progresses, through 2014’s Higher Education Funding Council for England (HEFCE) policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers, as it directly ties the assessment of a university’s funding to its success in ensuring its authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).

Headline image credit: Storm of Stars in the Trifid Nebula. NASA/JPL-Caltech/UCLA

The post Five key moments in the Open Access movement in the last ten years appeared first on OUPblog.

7. Political Analysis Letters: a new way to publish innovative research

There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.

This is why we have started up a new type of research publication at Political Analysis, Letters.

Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”

We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.

We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.

I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.

Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?

Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief: only 2-4 pages, or roughly 1,500-3,000 words.

Why is Political Analysis taking this new direction, looking for shorter submissions?

Political Analysis is taking this new direction to publish important results that do not traditionally fit in the longer format of journal articles that is currently the standard in the social sciences, but fit well with the shorter format that is often used in the sciences to convey important new findings. In this regard the role model for Political Analysis Letters is the similar short format used in top general interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.

What is the main difference between a Letter and a Research Paper?

The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.

If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?

Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.

You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?

Letters provides us an opportunity to create an outlet for important work in Political Methodology. It also gives us the opportunity to develop a new format that we hope will enhance the quality and speed of the academic debates in the social sciences.

Headline image credit: Letters, CC0 via Pixabay.

The post Political Analysis Letters: a new way to publish innovative research appeared first on OUPblog.

8. Biologists that changed the world

Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.

Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.

If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.

Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.

The post Biologists that changed the world appeared first on OUPblog.

9. Going inside to get a taste of nature

For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature: what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.

Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh cut grass, and force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress and improving mood to increasing attentional capacity and facilitating social bonds. A variety of work supports the idea that nature is healing and health-promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.

I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?

Crater Lake Garfield Peak Trail View East by Markgorzynski. CC-BY-SA-3.0 via Wikimedia Commons.

Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.

Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing; it’s experiencing it fully and being immersed in a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and help me feel that same clarity and calmness that I did on the lake.

On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up inside, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyle, we cannot always afford the time and money to step away. Integrating nature into our indoor environment is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health-promoting and healing properties of nature, we can instill in individuals an appreciation of the significance of our natural world.

As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.

Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.

The post Going inside to get a taste of nature appeared first on OUPblog.

10. Celebrating World Anaesthesia Day 2014

World Anaesthesia Day commemorates the first successful demonstration of ether anaesthesia at the Massachusetts General Hospital on 16 October 1846. This was one of the most significant events in medical history, enabling patients to undergo surgical treatments without the associated pain of an operation. To celebrate this important day, we are highlighting a selection of British Journal of Anaesthesia podcasts so you can learn more about anaesthesia practices today.

Fifth National Audit Project on Accidental Awareness during General Anaesthesia

Accidental awareness during general anaesthesia (AAGA) is a rare but feared complication of anaesthesia. Studying such rare occurrences is technically challenging, but following in the tradition of previous national audit projects, the results of the fifth national audit project have now been published, receiving attention from both the academic and national press. In this BJA podcast Professor Jaideep Pandit (NAP5 Lead) summarises the results and main findings from another impressive and potentially practice-changing national anaesthetic audit. Professor Pandit highlights areas of AAGA risk in anaesthetic practice, discusses some of the factors (both technical and human) that lead to accidental awareness, and describes the review panel’s findings and recommendations to minimise the chances of AAGA.
October 2014 || Volume 113 – Issue 4 || 36 Minutes


Pre-hospital Anaesthesia

Emergency airway management in trauma patients is a complex and somewhat contentious issue, with opinions varying on both the timing and delivery of interventions. London’s Air Ambulance is a service specialising in the care of the severely injured trauma patient at the scene of an accident, and has produced one of the largest data sets focusing on pre-hospital rapid sequence induction. Professor David Lockey, a consultant with London’s Air Ambulance, talks to the BJA about LAA’s approach to advanced airway management, which patients benefit from pre-hospital anaesthesia and the evolution of RSI algorithms. Professor Lockey goes on to discuss induction agents, describes how to achieve a 100% success rate for surgical airways and why too much choice can be a bad thing, as he gives us an insight into the exciting world of pre-hospital emergency care.
August 2014 || Volume 113 – Issue 2 || 35 Minutes


Fluid responsiveness: an evolution in our understanding

Fluid therapy is a central tenet of both anaesthetic and intensive care practice, and has been a solid performer in the medical armamentarium for over 150 years. However, mounting evidence from both surgical and medical populations is starting to demonstrate that we may be doing more harm than good by infusing solutions of varying tonicity and pH into the arms of our patients. As anaesthetists we arguably monitor our patients’ response to fluid-based interventions more closely than most, but in emergency departments and on intensive care units this monitoring may be unavailable or misleading. For this podcast Dr Paul Marik, Professor and Division Chief of Pulmonary Critical Care at Eastern Virginia Medical Center, delivers a masterclass on the physiology of fluid optimisation, tells us which monitors to believe and, importantly, under which circumstances, and reviews some of the current literature and thinking on fluid responsiveness.
April 2014 || Volume 112 – Issue 4 || 43 Minutes


Post-operative Cognitive Decline

Post-operative cognitive decline (POCD) has been detected in some studies in up to 50% of patients undergoing major surgery. With an ageing population and an increasing number of elective surgeries, POCD may represent a major public health problem. However, POCD research is complex and difficult to perform, and the current literature may not tell the full story. Dr Rob Sanders from the Wellcome Department of Imaging Neuroscience at UCL talks to us about the methodological limitations of previous studies and the important concept of a cognitive trajectory. In addition, Dr Sanders discusses the risk factors and role of inflammation in causing brain injury, and reveals the possibility that certain patients may in fact undergo post-operative cognitive improvement (POCI).
March 2014 || Volume 112 – Issue 3 || 20 Minutes


Needle Phobia – A Psychological Perspective

For anaesthetists, intravenous cannulation is the gateway procedure to an increasingly complex and risky array of manoeuvres, and as such becomes more a reflex arc than a planned motor act. For some patients, however, that initial feeling of needle penetrating epidermis, dermis and then vessel wall is a dreaded event, and the cause of more anxiety than the surgery itself. Needle phobia can be a deeply debilitating condition, causing patients not to seek help even under the most dire circumstances. Dr Kate Jenkins, a hospital clinical psychologist, describes both the psychology and physiology of needle phobia, what we as anaesthetists need to be aware of, and how we can better serve our patients for whom ‘just a small scratch’ may be their biggest fear.
July 2014 || Volume 113 – Issue 1 || 32 Minutes


For more information, visit the dedicated BJA World Anaesthesia Day webpage for a selection of free articles.

Headline image credit: Anaesthesia dreams, by Tc Morgan. CC-BY-SA-2.0 via Flickr.

The post Celebrating World Anaesthesia Day 2014 appeared first on OUPblog.

11. The life of a bubble

They might be short-lived, but between the time a bubble is born (Fig 1 and Fig 2a) and pops (Fig 2d-f), the bubble can interact with surrounding particles and microorganisms. The consequence of this interaction not only influences the performance of bioreactors, but can also disseminate particles, minerals, and microorganisms throughout the atmosphere. The interaction between microorganisms and bubbles has been appreciated in our civilizations for millennia, most notably in fermentation. During some of these metabolic processes, microorganisms create gas bubbles as a byproduct. Indeed, the interplay of bubbles and microorganisms is captured in the origin of the word fermentation, which is derived from the Latin word ‘fervere’, to boil. More recently, the importance of bubbles in the transfer of microorganisms has been appreciated. In the 1940s, scientists linked red tide syndrome to toxins aerosolized by bursting bubbles in the ocean. Other, more deadly illnesses, such as Legionnaires’ disease, have been linked since.

Figure 1: Bubble formation during wave breaking resulting in the white foam made of a myriad of bubbles of various sizes. (Walls, Bird, and Bourouiba, 2014, used with permission)

Bubbles are formed whenever gas is completely surrounded by an immiscible liquid. This encapsulation can occur when gas boils out of a liquid or when gas is injected or entrained from an external source, such as a breaking wave. The liquid molecules are attracted to each other more than they are to the gas molecules, and this difference in attraction leads to a surface tension at the gas-liquid interface. This surface tension minimizes surface area so that bubbles tend to be spherical when they rise and rapidly retract when they pop.
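The role of surface tension sketched above can be made quantitative. The following is a standard textbook relation (not from the original post): the pressure inside a submerged bubble exceeds that of the surrounding liquid by the Young-Laplace jump,

```latex
% Young-Laplace pressure jump across a single gas-liquid interface,
% with sigma the surface tension and R the bubble radius:
\Delta p \;=\; p_{\text{gas}} - p_{\text{liquid}} \;=\; \frac{2\sigma}{R}
```

and because a sphere minimizes surface area, and hence the surface energy \(E = \sigma A\), for a given volume, rising bubbles relax toward the spherical shape described above.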

Figure 2: Schematic example of Bubble formation (a), rise (b), surfacing (c), rupture (d), film droplet formation (e), and finally jet droplet formation (f) illustrating the life of bubbles from birth to death. (Bird, 2014, used with permission)

When microorganisms are near a bubble, they can interact in several ways. First, a rising bubble can create a flow that can move, mix, and stress the surrounding cells. Second, some of the gas inside the bubble can dissolve into the surrounding fluid, which can be important for respiration and gas exchange. Microorganisms can likewise influence a bubble by modifying its surface properties. Certain microorganisms secrete surfactant molecules, which like soap move to the liquid-gas interface and can locally lower the surface tension. Microorganisms can also adhere and stick on this interface. Thus, a submerged bubble travelling through the bulk can scavenge surrounding particulates during its journey, and lift them to the surface.

When a bubble reaches a surface (Figure 2c), such as the air-sea interface, it creates a thin, curved film that drains and eventually pops. In Figure 3, a sequence of images shows a bubble before (Fig 3a), during, and after rupture (Fig 3b). The schematic diagrams displayed in Fig 2c-f complement this sequence. Once a hole nucleates in the bubble film (Fig 2d), surface tension causes the film to rapidly retract and centripetal acceleration acts to destabilize the rim so that it forms ligaments and droplets. For the bubble shown, this retraction process occurs over a time of 150 microseconds, where each microsecond is a millionth of a second. The last image of the time series shows film drops launching into the surrounding air. Any particulates that became encapsulated into these film droplets, including all those encountered by the bubble on its journey through the water column, can be transported throughout the atmosphere by air currents.
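The 150-microsecond retraction quoted above is consistent with the classical Taylor-Culick speed for a punctured thin film, a standard result in free-surface flows (the film thickness used in the estimate below is an assumed illustrative value, not a figure from the post):

```latex
% Taylor-Culick retraction speed of a punctured film of thickness h,
% surface tension sigma, and liquid density rho:
v \;=\; \sqrt{\frac{2\sigma}{\rho h}}
% For water (sigma ~ 0.07 N/m, rho ~ 1000 kg/m^3) and an assumed
% film thickness h ~ 1 micrometre, v ~ 12 m/s, so a millimetre-scale
% bubble cap retracts in on the order of 100 microseconds.
```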

Figure 3: Photographs, before, during, and after bubble ruptures. The top panel illustrated the formation of small film droplets; the bottom panel illustrates the formation of larger jet drops. (Bird, 2014, used with permission)

Another source of droplets occurs after the bubble has ruptured (Fig 3b). The events occurring after the bubble ruptures are presented in the second time series of photographs. Here the time between photographs is one millisecond, or 1/1000th of a second. After the film covering the bubble has popped, the resulting cavity rapidly closes to minimize surface area. The liquid filling the cavity overshoots, creating an upward jet that can break up into vertically propelled droplets. These jet drops can also transport any nearby particulates, including those scavenged by the bubble on its journey to the surface. Although both film and jet drops can vary in size, jet drops tend to be bigger.

Whether for better or worse, bubbles are ubiquitous in our everyday life. They can expose us to diseases and harmful chemicals, or tickle our palate with fresh scents and yeast aromas, such as those distinctly characterizing a glass of champagne. Bubbles are the messengers that connect the depths of the waters to the air we breathe, and they illustrate the inherent interdependence and connectivity that we have with our surrounding environment.

The post The life of a bubble appeared first on OUPblog.

12. Gangs: the real ‘humanitarian crisis’ driving Central American children to the US

The spectacular arrival of thousands of unaccompanied Central American children at the southern frontier of the United States over the last three years has provoked a frenzied response. President Obama calls the situation a “humanitarian crisis” on the United States’ borders. News interviews with these vulnerable children appear almost daily in the global news media alongside official pronouncements by the US government on how it intends to stem this flow of migrants.

But what is not yet recognised is that these children represent only the tip of the iceberg of a deeper new humanitarian crisis in the region. Of course, recent figures for unaccompanied children (UAC) arriving in the US from the three countries of the Northern Triangle of Central America (El Salvador, Guatemala and Honduras) are alarming.

Apprehensions of unaccompanied children from these countries rose from about 3,000-4,000 per year to 10,000 in the 2012 financial year and then doubled again in the same period in 2013 to 20,000.

But it’s important to pull back and look at the bigger picture, which is that there has been a steep increase in border guard apprehensions of nationals from the three Northern Triangle countries – not just unaccompanied children, but adults and families as well.

The unaccompanied children we’ve been hearing so much about are not exceptional but represent just one strand (albeit a more photogenic and newsworthy strand) of a broader – and massive – increase in irregular migration to the US from El Salvador, Guatemala and Honduras.

If you dig a little deeper into US government data, it helps to explain why so many people are trying to get across the border any way they can. There’s been a striking rise, over the same period, in the number of people reporting that they are too scared to return to their country: from 3,000-4,000 claims from Northern Triangle nationals in previous years, the figure leaped to 8,500 for 2012 and then almost tripled to more than 23,000 in 2013.

It may be tempting to dismiss this fear of returning – “they would say that, wouldn’t they?” – but this increase is particular to the Northern Triangle, is not generally found among other asylum-seekers in the United States, and has increasingly been judged credible by US officials.

Fleeing gang violence

This official data correlates with my ESRC-funded research in El Salvador, Guatemala and Honduras last year, which identified a dramatic increase in forced displacement generated by organised crime in these countries from around 2011.

As such, the timing of the increased numbers of UACs (and adults) arriving in the US corresponds closely to the explosion of people being forced from their homes by criminal violence in the Northern Triangle. The changing violent tactics of organised criminal groups are thus the principal motor driving the increased irregular migration to the US from these countries.

In all three countries, street gangs of Californian origin such as the Mara Salvatrucha and Barrio 18 have consolidated their presence in urban locations, particularly in the poorer parts of bigger cities.


Gang violence: a member of the Mara Salvatrucha in a Honduran jail. EPA/Stringer

In recent years these gangs have become more organised, criminal and brutal. Thus, for instance, whereas the gangs used to primarily extort only businesses, in the last few years they have begun to demand extortion monies from many local householders as well. This shift in tactics has fuelled a surge of people fleeing their homes in zones where the gangs are present.

There is variation within this bigger picture. For instance, the presence of gangs is extraordinary in El Salvador, quite extensive in Honduras and relatively confined in Guatemala. This fact explains why, per head of population, the recent US government figures for Salvadorians claiming to fear return on arrival to the US are almost double those of Hondurans, which are almost double those of Guatemalans.

It also explains why 65% of Salvadorian UACs interviewed in the United States in 2013 mentioned gangs as their reason for leaving, as compared to 33% for Honduran UACs and 10% for Guatemalan UACs.

Worse than Colombia

What is not yet properly appreciated in the current debate is that these violent criminal dynamics are generating startling levels of internal displacement within these countries. If we take El Salvador as an example, we see that in 2012 some 3,300 Salvadorian children arrived in the US and 4,000 Salvadorians claimed to fear returning home.

By contrast, survey data for 2012 indicates that around 130,000 people were internally displaced within El Salvador due to criminal violence in just that one year.

The number of people seeking refuge in the United States fades in significance against this new reality in the region.

Proportionally, 2.1% of Salvadorians were forced to flee their homes in 2012 as a result of criminal violence and intimidation. Almost one-third of these people were displaced twice or more within the same period. Even in the worst years of gang-related violence in Colombia, the annual rate of internal displacement barely reached 1% of the population. Incredibly, the rates of forced displacement in countries such as El Salvador thus seem to surpass those of active war zones like Colombia.

The explosion of forced displacement caused by organised criminal groups in El Salvador, Guatemala and Honduras (not to mention Mexico) is the region’s true “humanitarian crisis”, of which the unaccompanied children are but one symptom.

Knee-jerk efforts by the US government to stop children arriving at its border miss this bigger picture and are doomed to failure. It would almost certainly be a better use of funds to help Central American governments to provide humanitarian support to the many uprooted families for whom survival in the resource-poor economies of the Northern Triangle is now an everyday struggle.

This article was originally published on The Conversation. Read the original article.

The post Gangs: the real ‘humanitarian crisis’ driving Central American children to the US appeared first on OUPblog.

13. Domestic violence and the NFL. Are players at greater risk for committing the act?

As the domestic violence controversy in the NFL has captured the attention of fans and global media, it has become the No. 1 off-field issue for the league. To gain further perspective on domestic violence and the current NFL situation, I spoke with Greta Friedemann-Sánchez, PhD, and Rodrigo Lovatón, authors of the article “Intimate Partner Violence in Colombia: Who Is at Risk?,” published in Social Forces, which explores the prevalence of intimate partner violence and the risk factors that increase its likelihood.

What do you think of the recent media coverage of domestic violence in the NFL?

In 2010, the Centers for Disease Control and Prevention (CDC) estimated that in the United States 24% of women and 13% of men have experienced severe physical violence by an intimate partner at some point during their life. Furthermore, the Bureau of Justice Statistics (Department of Justice) calculates that domestic violence accounted for 21% of all violent victimizations between 2003 and 2012 and about 1.5 million cases in 2013. If emotional abuse and stalking are taken into account, the prevalence rates increase. In some countries the prevalence is even higher. In Colombia, for example, 39% of women have experienced physical violence in their lifetimes. The recent media coverage of domestic violence shows that this is an important policy issue that has not received adequate attention in the United States or internationally. Unfortunately, this is a missed opportunity to educate the public on the high prevalence rates and the negative effects domestic violence has, not only for the victim but for all the members of a family. Equally invisible in the coverage is the fact that domestic violence is an “equal opportunity” event, meaning that it is present in families regardless of socioeconomic status, race, ethnic affiliation, and so on. Domestic violence, and more specifically intimate partner violence, can be just as present in the families of NFL players who are in the eye of the public as it can be in any other family. The issue, however, remains hidden for the most part. It takes a celebrity to be involved for the issue to gain visibility. In that sense, we are glad the media covered it. This is a policy issue that needs to be appropriately analyzed and addressed.

What do you think is an appropriate punishment for an NFL player who is convicted of domestic violence?

We agree that a professional sports organization that has extensive media coverage and a large audience, including children and adolescents, should not allow a player who is convicted of domestic violence to participate. Organized sports organizations sell more than just games; they sell the personalities and lives of their players. Players are often held up as role models; their careers and lives are admired. To allow a player to continue playing would endorse and normalize violent behavior. Intimate partner violence has long-term negative physical, emotional, and economic consequences for the victims, which are often overlooked. In fact, children who witness violence at home have negative emotional and educational outcomes too. Witnessing violence as a child or being a victim of violence as a child are some of the strongest predictors for becoming a victim or a perpetrator of violence later in life. Therefore, the NFL or any sports organization should reject this kind of behavior by disallowing domestic violence offenders from participating in any of their activities.

Do you think that giving a person who commits domestic violence a more severe punishment will decrease the chances that the person will commit violence again?

Types and intensity of violence are varied, and so are the legal mechanisms in place to protect victims and punish batterers. Victims do not always get the support they need from law enforcement. Furthermore, protective and punitive laws are not always enforced in an adequate manner; consequently, victims risk being re-victimized and re-traumatized, as perpetrators become even more violent as a result of the victims’ reporting. The proportion of domestic violence crimes reported to the police represents about 50% of all identified cases between 2003 and 2012 in the United States, according to the Bureau of Justice Statistics, Department of Justice. These issues are mutually reinforcing. The experience for victims outside of the United States can be even more dire, as domestic violence legislation may be in its infancy.

Do you think that the recent media attention surrounding domestic violence will increase or decrease the likelihood of other victims coming forward to report abuse?

Neither. Resolving intimate partner violence requires a multi-pronged approach. Increased visibility of the problem afforded by the recent media coverage might propel better law enforcement, increased funding for research, and implementation of prevention pilot programs that engage men and boys, just to name a few. We need better and more preventive, protective, and punitive mechanisms in place. In addition, the mechanisms in place need to be evaluated for effectiveness in responding to the issue. Until some of these steps happen, simply having more media attention will not have an effect on reporting.

Abandoned child’s shoe on balcony with diffuse filter. © sil63 via iStock.

What are some of the reasons women tend to stay in domestic violence situations?

Why do perpetrators exercise violence against their intimate partners? These questions go hand in hand, yet it is usually the first that is asked, although both are increasingly in the scope of research given the increase in violence against women worldwide.

Women’s economic dependence on their partners, which gets amplified when children are present, contributes to women being locked into violent situations. Lack of employment options, unemployment, and low-wage employment make women financially dependent on their partners. Lack of affordable day care, day care with limited hours, and school schedules without after-school programs limit women’s participation in employment. Even women who are employed and earn livable wages might find it hard to leave if temporary shelters and affordable housing are not available.

But the barriers to exiting a violent relationship are not only material. Being abused is a stigmatizing experience. Victims are reluctant to be shamed by their family, friends, and society at large. In addition, the controlling and humiliating behaviors of batterers have the effect of lowering the victims’ self-esteem and self-efficacy. Victims may doubt their capacity to survive on their own and with their children. Controlling behaviors also include batterers effectively sabotaging the victims’ efforts to access their social support networks, to gain employment, or to arrange an alternative living place. In many instances, episodes of abuse are interspersed with weeks or months of relative calm, and victims may believe their partners have changed, only to find themselves in the same or a worse situation.

In addition, societies have cultural scripts of what is included in the marital contract, which may justify violence under certain circumstances. Gender norms give men the right to control their intimate partner’s behavior, to exert influence, and to resolve disputes with violence. Furthermore, women are socialized to prioritize the children and family “unity” over their own welfare; women may perceive that the children will be negatively affected by a separation, not knowing the negative effects the children may already be experiencing.

Who is most at risk of being a victim of domestic violence?

Several factors contribute to the risk of being a victim of intimate partner violence. While there are general patterns, the specifics may vary by country. In our recent study using data from Colombia’s Demographic and Health Surveys, we found that the highest risk factors were the maltreatment of a woman’s partner when he was a child, and current child maltreatment by the woman’s partner. Higher risk is also associated with lower educational status of both partners, lower socioeconomic status (for physical violence only), younger age of the woman, and women working outside of the home. This last factor is especially interesting given the role that income plays in household negotiation dynamics. Gender differences in power among family members affect each member’s economic choices and behavior, including individuals’ bargaining over the allocation of material and time resources within the household, over gender norms, and even over how much abuse to exert or resist. It has long been hypothesized that income provides women with strong leverage in family negotiations. But our results and those found in studies in other countries are revealing that the dynamics of negotiation and violence may be heavily mediated by gender norms. In effect, gender norms about women’s socially acceptable behavior, including working for pay, might trump the leverage they gain with income. In addition, we do not know the effect of the relative wages of both partners on violence. What is known for the United States is that economic stress in a family increases the risk for violence. Gender norms of masculinity that prescribe men as the breadwinners have an effect: men who are unemployed are at greater risk of being perpetrators of violence. The same is true for men who endorse rigid views of masculinity, including the norms that men should dominate women.

How can we best help those most at risk of domestic violence?

Interventions at the individual and community level that address gender-equitable norms and the construction of gender relations via socialization are simultaneously protective (batterer intervention programs) and preventive. In the same vein, it helps to promote boys’ and men’s participation in activities considered feminine under rigid norms of masculinity, such as caring for children and for the sick and disabled, and doing domestic work. Another line of response is to work on those risk factors that can be shaped by public policies, such as promoting equitable access to employment for women and extending access to education to the population in general. In addition, special care is required for those groups that are at greater risk of suffering violence, such as households with lower socioeconomic status, with younger women, with more children, and where the partners have a previous history of maltreatment. Workshops on parenting skills and non-violent forms of disciplining children can also help. Last, a policy response should also include better mechanisms for victims to come forward and report the problem, support systems to help them escape from abusive domestic environments, and psychological services for trauma recovery.

Is there anything else you think we can learn about domestic violence in the United States from the recent NFL cases?

From the way the media covered it, it is clear that the general public is not well informed about intimate partner violence. More education will help de-stigmatize the issue.

Headline image credit: Grass. CC0 via Pixabay.

The post Domestic violence and the NFL. Are players at greater risk for committing the act? appeared first on OUPblog.

14. Falling out of love and down the housing ladder?

Since World War II, homeownership has developed into the major tenure in almost all European countries. This democratization of homeownership has turned owned homes from luxury items available to a lucky few into inherent and attainable life goals for many. In the general perception, owning is often associated with better homes with larger gardens, in better neighbourhoods with better schools. To rent, in contrast, is considered pouring money down the drain. Therefore, especially as people marry and children are planned, homeownership becomes the preferred choice of tenure. This choice has been strongly subsidized by governments and has become the norm in countries such as Australia, Britain, Belgium, and the United States. Once people have better jobs or more children, they move to ever bigger and better homes. This has been described as people moving up the housing ladder.

However, the underlying idea of a stable, married family – which has been the standard convention for most of the twentieth century – is outdated. Many marriages (though a declining number) end in separation today. Besides the emotional turmoil that marital separation causes, this event has profound effects on the chances of both ex-partners remaining in homeownership. Generally at least one partner, if not both, will leave the previously shared dwelling. As separation often involves a loss of financial resources, people may have a hard time re-entering homeownership. After falling out of love and separating, a fall down the housing ladder may follow, as we show in a study recently published in European Sociological Review.

Figure 1: Average ownership rate before and after separation in Britain. Source: Lersch/Vidal 2014

How drastic this fall will be depends very much on the housing market environment (see Figures 1 and 2). In the past in Britain, easy access to housing finance and high supply facilitated (re-)entry into homeownership for ex-partners even under house price inflation in the 1990s and early 2000s. In tight housing markets ex-partners will face more difficulties, and once access to mortgages becomes restricted, as happened in Britain after the recent crash in the housing market, problems may arise. So in the past British ex-partners could return to homeownership at some point in their lives because access to mortgages was easy – and they needed to return because alternatives in the private and social rental sector were and are unattractive. This may no longer work in future. Ex-partners may increasingly face similar problems that new market entrants currently encounter, for which the term generation rent has already been coined.

To better understand what may happen to British ex-partners, we can consider the example of Germany. The German housing market is in many ways different from the British, not least because private rental accommodation is an attractive alternative to homeownership. Access to mortgages is also more restricted than in Britain, even after the recent tightening of regulations there. High down payments are the rule in Germany. In this market environment, homeownership is a once-in-a-lifetime opportunity for many, while a considerable share of people will never enter homeownership. After separation, very few Germans will be able to return to homeownership (see Figure 2). Ex-partners will be less likely to be in homeownership throughout their lives post-separation. This scenario may foreshadow the British situation in the near future.

Figure 2: Average ownership rate before and after separation in Germany. Source: Lersch/Vidal 2014

Being excluded from homeownership in the German context is not as consequential as it may turn out to be in Britain, however. First, more Germans than Britons will accept renting after separation, because attractive and, most of all, secure accommodation is available at costs that are reasonable by international standards. Second, the German public pension system is relatively generous for those who have worked continuously throughout their lives. Building up private wealth as a cushion for old age is not as necessary as in Britain. In Britain, where individuals are expected to privately invest in financial products and property to build an individual safety net – an idea called asset-based welfare – people who experience a separation may lose this safety net. This may result in stark disparities in old age between the separated and those remaining married.

Homeownership may offer many advantages for families. At the same time, homeownership is a long-term investment that does not necessarily fit well with the dynamics of modern partnership and family life. Everybody needs suitable and secure accommodation. Such accommodation may sometimes be better provided in the private and social rental sector, which need not result in less security or quality than homeownership, as can be seen in Germany. To make this work, people need decent options to build up a safety net for rainy days outside of the housing market. However, people should also have reasonable tenure choice, which is not currently the case for many ex-partners in Germany.

Headline image credit: Keys. CC0 via Pixabay.

The post Falling out of love and down the housing ladder? appeared first on OUPblog.

15. Examining 25 years of public opinion data on gay rights and marriage

Over the past decade, the debate over same-sex marriage has dominated the news cycle in the United States and other nations around the world. As public opinion polls have shown, a majority of Americans now support gay marriage. In fact, 55% of Americans support same-sex marriage according to a May 2014 Gallup poll.

Traditionally framed by the news media as a debate between moral and religious objections and equal rights, marriage equality is just one of a range of civil rights issues that remain important to members of the LGBT community. Many of these issues, including employment nondiscrimination, second-parent adoption, and open service in the military, have been eclipsed by the almost singular focus on marriage equality by interest groups, the media, and public opinion pollsters.

As the chart below shows, 51.5% of Americans expressed support for firing known homosexual teachers when Pew first started collecting data in 1987. By 2012, only 21% of Americans still expressed support for the practice.

Graph courtesy of Amy B. Becker, via “Employment Discrimination, Local School Boards, and LGBT Civil Rights: Reviewing 25 Years of Public Opinion Data” in International Journal of Public Opinion Research.

Who are the 21%?

These individuals — the 21% — are what researchers call the hard core, those who retain minority political viewpoints in the face of majority opposition. As the results of the data analysis show, this 21% or the hard core tend to be older males who are less educated, more religious, more conservative in their politics, and more likely to have old-fashioned values when it comes to marriage and family.

The analyses look at what factors explain variation in support for employment discrimination over time. Not surprisingly, the influence of religious and ideological value predispositions matters most. Demographics (e.g., gender, age, and level of education) are also important, as are key cultural values like having old-fashioned views on marriage and family. Much like the same-sex marriage debate, the importance of partisanship (e.g., being a Democrat vs. Republican) has waned over time and is no longer a significant factor driving opinions after 2002.

When it comes to change over time, the results show that the influence of year matters more between 2002 and 2012 than between 1987 and 2002, indicating that, much like the same-sex marriage debate, opinion on this issue has shifted more rapidly in recent years.

Thus while we’ve been primarily focusing our attention on marriage equality, opinions have shifted on other LGBT civil rights issues as well.

In July 2014, President Obama issued an executive order barring discrimination on the basis of sexual orientation or gender identity among federal workers. It is estimated that this action alone extended protections to 20% of the US labor force. Federal legislation, however, still falls short.

At present, the US Congress has yet to add sexual orientation or gender identity to the Employment Nondiscrimination Act. While the US Senate supported the measure this past November, the bill stalled given a lack of support in the US House of Representatives. At the time of the article’s drafting, fully 29 states failed to offer protections against employment discrimination on the basis of sexual orientation.

So while opinions may have shifted, just as in the case of marriage equality, and while President Obama has continued his “evolution” on issues of gay rights, federal legislation on employment nondiscrimination still lags behind.

Headline image credit: Two women at sunset. CC0 Public Domain via Pixabay.

The post Examining 25 years of public opinion data on gay rights and marriage appeared first on OUPblog.

16. Childhood obesity and maternal employment

It is well known that obesity rates have been increasing around the Western world.

The American obesity prevalence was less than 20% in 1994. By 2010, the prevalence was greater than 20% in all states, and 12 states had a prevalence of 30%. Approximately 17% of American children aged 2-19 were obese in 2011-2012. In the UK, the prevalence of obesity was similar to the US numbers: between 1993 and 2012, it increased from 13.2% to 24.4% for men and from 16.4% to 25.1% for women. The prevalence is around 18% for children aged 11-15 and 11% for children aged 2-10.

Policy makers, researchers, and the general public are concerned about this trend because obesity is linked to an increased likelihood of health conditions such as diabetes and heart disease, among others. The increase in the obesity prevalence among children is of concern because of the possibility that obesity during childhood will increase the likelihood of being obese as an adult, thereby leading to even higher rates of these health conditions in the future.

Researchers have investigated many possible causes for this trend including lower rates of participation in physical activity and easier access to fast food. Anderson, Butcher, and Levine (2003) identified maternal employment as a possible culprit when they noticed that in the US the timing of these two trends was similar. While the prevalence of obesity was increasing for children so was the employment rate of mothers. Other researchers have found similar results for other countries – more hours of maternal employment is related to a higher likelihood of children being obese.

What could be the relationship between a mother’s hours of work and childhood obesity? When mothers work they have less time to devote to activities around the home, which may mean less concern about nutrition, more meals eaten outside of the home or less time devoted to physical activities. On the other hand, more maternal employment could mean more income and an ability to purchase more nutritious food or encourage healthy activities for children.

Child playing with dreidels, by Dana Friedlander for Israel Photo Gallery. CC-BY-SA-2.0 via Flickr

We looked at this relationship for Canadian children 12-17 years old – an older group of children than studied in earlier papers. For youths aged 12 to 17 in Canada, the obesity prevalence was 7.8% in 2008. We analysed not only the relationship between maternal employment and child obesity, but also the possible reasons why maternal employment may affect child obesity.

We find that the effect of hours of work differs from the effect of weeks of work. More hours of maternal work are related to activities we expect to be related to higher rates of obesity – more television viewing, less likely to eat breakfast daily, and a higher allowance. On the other hand, more weeks of maternal employment are related to behaviour expected to lower obesity – less television viewing and more physical activity. This difference between hours and weeks of work raises some interesting questions. How do families adapt to different aspects of the labour market? When mothers work for more weeks does this indicate a more regular attachment to the labour force? Do these families have schedules and routines that allow them to manage their child’s weight?

Unlike other studies that focus on younger children, we do not find a relationship between maternal employment and likelihood of obesity for adolescents. Does the impact of maternal employment at younger ages not last into adolescence? Is adolescence a stage during which obesity status is difficult to predict?

The debate over appropriate policy remedies should not focus on whether mothers should work, but rather on what children are doing when mothers are working. What can be done to reduce the obesity prevalence in adolescents? Some ideas include working with the education system and local communities to create an environment for adolescents that fosters healthy weight status, supporting families with quality childcare, providing viable and high-quality alternative activities, and offering flexible work hours. Programs or policies that help families establish a healthy routine are important. It may not be a case of simply providing activities for adolescents, but of making these activities easy for families to attend on a regular basis.

The post Childhood obesity and maternal employment appeared first on OUPblog.

17. The power of oral history as a history-making practice

This week, we have a special podcast with managing editor Troy Reeves and Oral History Review 41.2 contributor Amy Starecheski. Her article, “Squatting History: The Power of Oral History as a History-Making Practice,” explores the ways in which an intergenerational group of activists has used oral history to pass on knowledge through public discussions about the past. In the podcast, Starecheski discusses her motivation for the project and her involvement in the upcoming Annual Meeting of the Oral History Association. Check out the podcast below.

 

https://soundcloud.com/oral-history-review/the-power-of-oral-history-as-a-history-making-practice/

You can learn more about the Annual Meeting of the Oral History Association in the Meeting Program. If you have any trouble playing the podcast, you can download the mp3.

Headline image credit: Courtesy of Amy Starecheski.

The post The power of oral history as a history-making practice appeared first on OUPblog.

18. Can Cameron capture women’s votes?

After the Scottish Independence Referendum, the journalist Cathy Newman noted the irony that Cameron, the man with a much-reported ‘problem’ with women, partly owes his job to the female electorate in Scotland. As John Curtice’s post-referendum analysis points out, women seemed more reluctant than men to vote ‘yes’, owing to relatively greater pessimism about the economic consequences of a ‘yes’ vote.

The Scottish vote should remind Cameron and the Conservative strategists who advise him of a very clear message: ignore women voters at your peril.

For several decades after UK women won the right to vote, the Conservatives could rely on women’s votes, and the gender gap in voting was consistently in double figures. In recent decades, however, this gap has diminished, particularly amongst younger women, and party competition to mobilize female voters has become more important. Women voters of course have many diverse interests, but understanding the concerns of different groups of women is crucial, as female voters often make their voting decisions closer to the election.

So what does Cameron need to do to firmly secure women’s votes at the general election? We argue the Conservative Party needs to make sure it represents women descriptively, substantively, and symbolically. On all three counts we see problems with Cameron’s strategy to win women’s votes.

Pre-election rhetoric and pledges to feminise the party through women’s descriptive representation have not been matched with clear and tangible outcomes. Cameron has tried to increase the number of women MPs, but the share of women among Conservative MPs in the House of Commons is still just 16%. As the latest Sex and Power Report highlights, this looks unlikely to increase significantly in GE2015, as so few women have been selected to stand in safe Conservative seats despite the campaigning and support work undertaken by Women2Win.

Prime Minister David Cameron talks about the future of the United Kingdom following the Scottish Referendum result. Photographer: Arron Hoare. Photo: Crown copyright via Number 10 Flickr.

Even where Cameron has strong power and autonomy to improve women’s presence – by fulfilling his pledge that one-third of his government would be women by the end of parliament – he has managed just 22%. Last July’s reshuffle did not erase the impression that women are not included at Cameron’s top table.

Without enough women representatives in Parliament and in Government to advise on policy proposals in development, there have been many problematic policy initiatives, such as the disastrous proposal to raise childcare ratios. The Government’s approach to addressing public debt through austerity has been detrimental to women, reducing incomes, public services, and jobs; even female Conservative supporters are relatively likely to express concern about these effects.

Cameron’s Conservatives in government also lack the institutional capacity to get policies right for women. There are still not enough women in strategically significant places: in the Coalition, for example, the ‘Quad’ of Cameron, Osborne, Clegg, and Alexander controls policy-making. The gender equality machinery set up by the last government to monitor and address gender inequality in a strategic and long-term way has been stripped out. Even at the emergency post-referendum meeting at Chequers to discuss the UK’s constitutional future, there was just one woman at the table.

Although the gender gap in voting, which currently favours Labour, is likely to narrow as the election approaches, the Conservatives have, we argue, inflicted significant psephological damage on themselves in their strategies to attract women’s votes: by not promoting women into politics, by not protecting women from austerity, and by stripping out the governmental institutions which give voice to women and promote gender equality.

Cameron’s political face may have been saved by Scottish women last month, but for the reasons outlined in this post we suggest that, in the critical contest for women’s votes at the 2015 general election, the Conservative Party’s strategy for mobilising women voters and restoring its historical dominance among them suffers from long-standing weaknesses.

The post Can Cameron capture women’s votes? appeared first on OUPblog.

19. Learning with body participation through motion-sensing technologies

Have you ever thought that your body movements could be transformed into learning stimuli and help you deal with abstract concepts? Natural science subjects contain many abstract concepts that are difficult to understand through reading-based materials, particularly for younger learners who are still developing their cognitive abilities. For example, elementary school students may find it hard to distinguish between similar concepts in fundamental optics, such as concave versus convex lens imaging. By performing a simulated exercise in person, learners can comprehend concepts more easily because of the content-related actions involved in the process of learning natural science.

Commonly adopted virtual simulations of natural science experiments, operated with a keyboard and mouse, lack a comprehensive learning design. To make the design more comprehensive, we suggest providing learners with a holistic learning context based on embodied cognition, which views mental simulations in the brain, bodily states, the environment, and situated actions as integral parts of cognition. In light of recent developments in learning technologies, motion-sensing devices have the potential to be incorporated into learning-by-doing activities that enhance the learning of abstract concepts.

When younger learners study natural science, their body movements, together with external perceptions, can positively contribute to knowledge construction while performing simulated exercises. Using a keyboard and mouse for simulated exercises can convey procedural information, but it merely reproduces physical experimental procedures on a computer. For example, when younger learners use conventional controllers to perform fundamental optics simulation exercises, they might not benefit from such controller-based interaction because of its routine-like operations. If environmental factors, namely bodily states and situated actions, were well designed as external information, this additional input could further help learners grasp the concepts through meaningful and educational body participation.

Photo by Nian-Shing Chen. Used with permission.

Based on the aforementioned idea, we designed an embodiment-based learning strategy to help younger learners perform optics simulation exercises and learn fundamental optics better. With this learning strategy enabled by the motion-sensing technologies, younger learners can interact with digital learning content directly through their gestures. Instead of routine-like operations, the gestures are designed as content-related actions for performing optics simulation exercises. Younger learners can then construct fundamental optics knowledge in a holistic learning context.

One of the learning goals is knowledge acquisition. We therefore conducted a quasi-experiment to evaluate the embodiment-based learning strategy by comparing the learning performance of an embodiment-based learning group with that of a keyboard-mouse learning group. The results show that the embodiment-based group significantly outperformed the keyboard-mouse group. Further analysis found no significant difference in cognitive load between the two groups, even though applying new technologies in learning could increase the consumption of learners’ cognitive resources. As it turned out, the embodiment-based learning strategy is an effective design for helping younger learners comprehend abstract concepts of fundamental optics.

For natural science learning, the learning content and the process of physically experimenting are both important for learners’ cognition and thinking. The operational process conveys implicit knowledge to learners about how something works. In lens-imaging experiments, the position of the virtual light source and the type of virtual lens help learners determine the attributes of the virtual image. By synchronizing gestures with the virtual light source, a learner not only concentrates on the simulated experimental process but also attends to the details of the external perception. Accordingly, learners can better understand how movements of the virtual light source and the type of virtual lens change the virtual image, and so learn fundamental optics better.

Our body movements have the potential to improve our learning if adequate learning strategies and designs are applied. Although motion-sensing technologies are now available to the general public, widespread educational adoption will depend on affordable prices and evidence-based approaches. The embodiment-based design opens a new direction and, we hope, will continue to shed light on improving future learning.

The post Learning with body participation through motion-sensing technologies appeared first on OUPblog.

20. The Hunger Games and a dystopian Eurozone economy

The following is an extract from ‘Europe’s Hunger Games: Income Distribution, Cost Competitiveness and Crisis‘, published in the Cambridge Journal of Economics. In this section, Servaas Storm and C.W.M. Naastepad are comparing The Hunger Games to Eurozone economies:

Dystopias are trending in contemporary popular culture. Novels and movies abound that deal with fictional societies within which humans, individually and collectively, have to cope with repressive, technologically powerful states that do not usually care for the well-being or safety of their citizens, but instead focus on their control and extortion. The latest resounding dystopian success is The Hunger Games—a box-office hit set in a nation known as Panem, which consists of 12 poor districts, starved for resources, under the absolute control of a wealthy centre called the Capitol. In the story, competitive struggle is carried to its brutal extreme, as poor young adults in a reality TV show must fight to the death in an outdoor arena controlled by an authoritarian Gamemaker, until only one individual remains. The poverty and starvation, combined with terror, create an atmosphere of fear and helplessness that pre-empts any resistance based on hope for a better world.

We fear that part of the popularity of this science fiction action-drama, in Europe at least, lies in the fact that it has a real-life analogue: the Spectacle—in Debord’s (1967) meaning of the term—of the current ‘competitiveness game’ in which the Eurozone economies are fighting for their survival. Its Gamemaker is the European Central Bank (ECB), which—completely stuck to Berlin’s hard line that fiscal profligacy in combination with rigid, over-regulated labour markets has created a deep crisis of labour cost competitiveness—has been keeping the pressure on Eurozone countries so as to let them pay for their alleged fiscal sins. The ECB insists that there will be ‘no gain without pain’ and that the more one is prepared to suffer, the more one is expected to prosper later on.

The contestants in the game are the Eurozone members—each one trying to bootstrap its economy out of the throes of the most severe crisis in living memory. The audience judging each country’s performance is not made up of reality TV watchers but of financial (bond) markets and credit rating agencies, whose supposedly rational views can make or break any economy. The name of the game is boosting cost-competitiveness and exports—and its rules were carved in stone in March 2011 in a Euro Plus ‘Competitiveness Pact’ (Gros, 2011).

The Hunger Games, by Kendra Miller. CC-BY-2.0 via Flickr.

Raising competitiveness here means reducing costs, and more specifically cutting labour costs, which means lowering the wage share by means of reducing employment protection, lowering minimum wages, raising retirement ages, lowering pensions and, last but not least, cutting real wages. Economic inequality, poverty and social exclusion will all initially increase, but don’t worry: structural reforms hurt in the beginning, but their negative effects will be offset over time by changes in ‘confidence,’ boosting spending and exports. But it will not work, and the damage done by austerity and structural reforms is enormous; sadly, most of it was and is avoidable. The wrong policies follow from ‘design faults’ built into the Euro project right from the start—the creation of an ‘independent’ European Central Bank being the biggest ‘fault’, as it precluded the necessary co-ordination of fiscal and monetary policy and disabled the central banking system from providing support to national governments (Arestis and Sawyer, 2011). But as Palma (2009) reminds us, it is wrong to think about these ‘faults’ as being caused by perpetual incompetence—the monetarist Euro project should instead be read as a purposeful ‘technology of power’ to transform capitalism into a rentiers’ paradise. This way, one can understand why policy makers persist in abandoning the unemployed.

The post The Hunger Games and a dystopian Eurozone economy appeared first on OUPblog.

21. Q&A with Jake Bowers, co-author of 2014 Miller Prize Paper

Despite what many of my colleagues think, being a journal editor is usually a pretty interesting job. The best part about being a journal editor is working with authors to help frame, shape, and improve their research. We also have many chances to honor specific authors and their work for being of particular importance. One of those honors is the Miller Prize, awarded annually by the Society for Political Methodology for the best paper published in Political Analysis the preceding year.

The 2014 Miller Prize was awarded to Jake Bowers, Mark M. Fredrickson, and Costas Panagopoulos for their paper, “Reasoning about Interference Between Units: A General Framework.” To recognize the significance of this paper, it is available for free online access for the next year. The award committee summarized the contribution of the paper:

“…the article tackles a difficult and pervasive problem—interference among units—in a novel and compelling way. Rather than treating spillover effects as a nuisance to be marginalized over or, worse, ignored, Bowers et al. use them as an opportunity to test substantive questions regarding interference … Their work also brings together causal inference and network analysis in an innovative and compelling way, pointing the way to future convergence between these domains.”

In other words, this is an important contribution to political methodology.

I recently posed a number of questions to one of the authors of the Miller Prize paper, Jake Bowers, asking him to talk more about this paper and its origins.

R. Michael Alvarez: Your paper, “Reasoning about Interference Between Units: A General Framework” recently won the Miller Prize for the best paper published in Political Analysis in the past year. What motivated you to write this paper?

Jake Bowers: Let me provide a little background for readers not already familiar with randomization-based statistical inference.

Randomized designs provide clear answers to two of the most common questions that we ask about empirical research: The Interpretation Question: “What does it mean that people in group A act differently from people in group B?” and The Information Question: “How precise is our summary of A-vs-B?” (Or, more defensively, “Do we really have enough information to distinguish A from B?”).

If we have randomly assigned some A-vs-B intervention, then we can answer the interpretation question very simply: “If group A differs from group B, it is only because of the A-vs-B intervention. Randomization ought to erase any other pre-existing differences between groups A and B.”

In answering the information question, randomization alone also allows us to characterize other ways that the experiment might have turned out: “Here are all of the possible ways that groups A and B could differ if we re-randomized the A-vs-B intervention to the experimental pool while entertaining the idea that A and B do not differ. If few (or none) of these differences are as large as the one we observe, we have a lot of information against the idea that A and B do not differ. If many of these differences are as large as the one we see, we don’t have much information to counter the argument that A and B do not differ.”

Of course, these are not the only questions one should ask about research, and interpretation should not end with knowing that an input created an output. Yet, these concerns about meaning and information are fundamental and the answers allowed by randomization offer a particularly clear starting place for learning from observation. In fact, many randomization-based methods for summarizing answers to the information question tend to have validity guarantees even with small samples. If we really did repeat the experiment all the possible ways that it could have been done, and repeated a common hypothesis test many times, we would reject a true null hypothesis no more than α% of the time even if we had observed only eight people (Rosenbaum 2002, Chap 2).
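The enumeration Bowers describes is easy to carry out directly. The sketch below uses made-up outcomes for eight units (purely illustrative, not data from any study mentioned here) and computes an exact randomization p-value for a difference-in-means statistic under the sharp null of no effects:

```python
from itertools import combinations

# Hypothetical outcomes for eight units; the first four happened to be treated.
y = [12, 15, 14, 16, 9, 10, 11, 8]
units = range(len(y))
treated = {0, 1, 2, 3}

def diff_means(assign):
    t = [y[i] for i in units if i in assign]
    c = [y[i] for i in units if i not in assign]
    return sum(t) / len(t) - sum(c) / len(c)

observed = diff_means(treated)

# Under the sharp null of no effects, outcomes are fixed; only the assignment
# varies. Enumerate all C(8, 4) = 70 ways to treat 4 of 8 units.
all_assignments = [set(a) for a in combinations(units, 4)]
stats = [diff_means(a) for a in all_assignments]

# Two-sided p-value: share of assignments at least as extreme as observed.
p = sum(abs(s) >= abs(observed) for s in stats) / len(stats)
```

Because all 70 assignments are enumerated, the p-value is exact rather than simulated; with larger experimental pools one would sample assignments instead.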

In fact, a project with only eight cities impelled this paper. Costas Panagopoulos had administered a field experiment on newspaper advertising and turnout in eight US cities, and he and I began to discuss how to produce substantively meaningful, easy-to-interpret, and statistically valid answers to the question of the effect of advertising on turnout. Could we hypothesize that, for example, the effect was zero for three of the treated cities, and more than zero for one of the treated cities? The answer was yes.

I realized that hypotheses about causal effects do not need to be simple, and, furthermore, they could represent substantive, theoretical models very directly. Soon, Mark Fredrickson and I started thinking about substantive models in which treatment given to one city might have an effect on another city. It seemed straightforward to write down these models. We had read Peter Aronow’s and Paul Rosenbaum’s papers on the sharp null model of no effects and interference, and so we didn’t think we were completely off base to imagine that, if we side-stepped estimation of average treatment effects and focused on testing hypotheses, we could learn something about what we called “models of interference”. But, we had not seen this done before. So, in part because we worried about whether we were right about how simple it was to write down and test hypotheses generated from models of spillover or interference between units, we wrote the “Reasoning about Interference” paper to see if what we were doing with Panagopoulos’ eight cities would scale, and whether it would perform as randomization-based tests should. The paper shows that we were right.

R. Michael Alvarez: In your paper, you focus on the “no interference” assumption that is widely discussed in the contemporary literature on causal models. What is this assumption and why is it important?

Jake Bowers: When we say that some intervention, Z_i, caused some outcome for some person, i, we often mean that the outcome we would have seen for person i when the intervention is not active (Z_i = 0), written y_{i,Z_i=0}, would have been different from the outcome we would have seen if the intervention were active for that same person at that same moment in time (Z_i = 1), written y_{i,Z_i=1}. Most people would say that the treatment had an effect on person i when i would have acted differently under the intervention than under the control condition, such that y_{i,Z_i=1} ≠ y_{i,Z_i=0}. David Cox (1958) noticed that this definition of causal effects involves an assumption that an intervention assigned to one person does not influence the potential outcomes for another person. (Henry Brady’s piece, “Causation and Explanation in Social Science,” in the Oxford Handbook of Political Methodology provides an excellent discussion of the no-interference assumption and Don Rubin’s formalization and generalization of Cox’s no-interference assumption.)

As an illustration of the confusion that interference can cause, imagine we had four people in our study, i ∈ {1, 2, 3, 4}. When we write that the intervention had an effect for person i = 1, y_{1,Z_1=1} ≠ y_{1,Z_1=0}, we are saying that person 1 would act the same when Z_1 = 1 regardless of how the intervention was assigned to the other people, such that

y_{1,(Z_1=1, Z_2=1, Z_3=0, Z_4=0)} = y_{1,(Z_1=1, Z_2=0, Z_3=1, Z_4=0)} = y_{1,(Z_1=1, …)}

If we do not make this assumption then we cannot write down a treatment effect in terms of a simple comparison of two groups. Even if we randomly assigned the intervention to two of the four people in this little study, we would have six potential outcomes per person rather than only two (two of the six potential outcomes for person 1 appear above). Randomization does not help us decide what a “treatment effect” means, and six counterfactuals per person poses a challenge for the conceptualization of causal effects.
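The counting argument in this example can be made concrete. This short sketch (with hypothetical units, not tied to any study here) enumerates the assignments of two treatments among four people and confirms that, under interference, person 1 carries one potential outcome per assignment, six in all, rather than two:

```python
from itertools import combinations

# Four units, exactly two of which receive the intervention.
units = [1, 2, 3, 4]
assignments = [set(a) for a in combinations(units, 2)]

# Without interference, person 1's outcome depends only on their own
# treatment status, so these assignments collapse to two potential outcomes
# (treated vs. not). With interference, each full assignment vector may
# yield a distinct outcome for person 1: one per assignment.
n_potential_with_interference = len(assignments)
n_with_person1_treated = sum(1 in a for a in assignments)
```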

So, interference is a problem with the definition of causal effects. It is also a problem with estimation. Many folks know about what Paul Holland (1986) calls the “Fundamental Problem of Causal Inference” that the potential outcomes heuristic for thinking about causality reveals: we cannot ever know the causal effect for person i directly because we can never observe both potential outcomes. I know of three main solutions for this problem, each of which has to deal with problems of interference:

  • Jerzy Neyman (1923) showed that if we change our substantive focus from individual level to group level comparisons, and to averages in particular, then randomization would allow us to learn about the true, underlying, average treatment effect using the difference of means observed in the actual study (where we only see responses to intervention for some but not all of the experimental subjects).
  • Don Rubin (1978) showed a Bayesian predictive approach — a probability model of the outcomes of your study and a probability model for the treatment effect itself allows you to predict the unobserved potential outcomes for each person in your study and then take averages of those predictions to produce an estimate of the average treatment effect.
  • Ronald Fisher (1935) suggested another approach, which maintained attention on the individual-level potential outcomes but did not use models to predict them. He showed that randomization alone allows you to test the hypothesis of “no effects” at the individual level.

Interference makes it difficult to interpret Neyman’s comparisons of observed averages and Rubin’s comparisons of predicted averages as telling us about causal effects, because we have too many possible averages. It turns out that Fisher’s sharp null hypothesis test of no effects is simple to interpret even when we have unknown interference between units. Our paper starts from that idea and shows that, in fact, one can test sharp hypotheses about effects rather than only about no effects.

Note that there has been a lot of great recent work trying to define and estimate average treatment effects by folks like Cyrus Samii and Peter Aronow, Neelan Sircar and Alex Coppock, Panos Toulis and Edward Kao, Tyler Vanderweele, Eric Tchetgen Tchetgen and Betsy Ogburn, Michael Sobel, and Michael Hudgens, among others. I also think that interference poses a smaller problem for Rubin’s approach in principle — one would add a model of interference to the list of models (of outcomes, of intervention, of effects) used to predict the unobserved outcomes. (This approach has been used, without formalization in terms of counterfactuals, in both the spatial and network modelling worlds.) One might then focus on posterior distributions of quantities other than simple differences of averages, or interpret such differences as reflecting the kinds of weightings used in the work I gestured to at the start of this paragraph.

R. Michael Alvarez: How do you relax the “no interference” assumption in your paper?

Jake Bowers: I would say that we did not really relax an assumption, but rather side-stepped the need to think of interference as an assumption. Since we did not use the average causal effect, we were not facing the same problems of requiring that all potential outcomes collapse down to two averages. However, what we had to do instead was use what Paul Rosenbaum might call Fisher’s solution to the fundamental problem of causal inference. Fisher noticed that, even if you couldn’t say that a treatment had an effect on person (i), you could ask whether we had enough information (in our design and data) to shed light on a question about whether or not the treatment had an effect on person (i). In our paper, Fisher’s approach meant that we did not need to define our scientifically interesting quantity in terms of averages. Instead, we had to write down hypotheses about no interference. That is, we did not really relax an assumption, but instead we directly modelled a process.

Rosenbaum (2007) and Aronow (2011), among others, had noticed that the hypothesis that Fisher is most famous for, the sharp null hypothesis of no effects, in fact does not assume no interference, but rather implies no interference (i.e., if the treatment has no effect for any person, then it does not matter how treatment has been assigned). So, in fact, the assumption of no interference is not really a fundamental piece of how we talk about counterfactual causality, but a by-product of a commitment to the use of a particular technology (simple comparisons of averages). We took a next step in our paper and realized that Fisher’s sharp null hypothesis implied a particular, and very simple, model of interference (a model of no interference). We then set out to see if we could write other, more substantively interesting models of interference. So, that is what we show in the paper: one can write down a substantive theoretical model of interference (and of the mechanism for an experimental effect to come to matter for the units in the study) and then this model can be understood as a generator of sharp null hypotheses, each of which could be tested using the same randomization inference tools that we have been studying for their clarity and validity previously.
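In spirit, that procedure can be sketched in a few lines. The numbers, the two-parameter effect model (a hypothesized direct effect tau plus a spillover delta received by untreated units), and the test statistic below are all invented for illustration; they are not the models or data from the paper:

```python
from itertools import combinations

# Made-up outcomes for four units, the first two treated.
y_obs = [14.0, 15.0, 11.5, 10.5]
treated = {0, 1}
units = range(4)

def adjusted(y, z, tau, delta):
    # Outcomes implied had no one been treated, under the hypothesized model.
    return [yi - (tau if i in z else delta) for i, yi in enumerate(y)]

def stat(y0, z):
    # Absolute difference in means of the adjusted outcomes.
    t = [y0[i] for i in units if i in z]
    c = [y0[i] for i in units if i not in z]
    return abs(sum(t) / len(t) - sum(c) / len(c))

def p_value(tau, delta):
    # A sharp hypothesis (tau, delta) fixes all uniformity-trial outcomes,
    # so randomization inference applies exactly as in the no-effects case.
    y0 = adjusted(y_obs, treated, tau, delta)
    obs = stat(y0, treated)
    draws = [stat(y0, set(a)) for a in combinations(units, 2)]
    return sum(d >= obs for d in draws) / len(draws)
```

Each (tau, delta) pair is a sharp hypothesis: it implies a complete set of uniformity-trial outcomes, which can then be tested with the same randomization machinery as Fisher's no-effects null.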

R. Michael Alvarez: What are the applications for the approach you develop in your paper?

Jake Bowers: We are working on a couple of applications. In general, our approach is useful as a way to learn about substantive models of the mechanisms for the effects of experimental treatments.

For example, Bruce Desmarais, Mark Fredrickson, and I are working with Nahomi Ichino, Wayne Lee, and Simi Wang on how to design randomized experiments to learn about models of the propagation of treatments across a social network. If we think that an experimental intervention on some subset of Facebook users should spread in some certain manner, then we are hoping to have a general way to think about how to design that experiment (using our approach to learn about that propagation model, but also using some of the new developments in network-weighted average treatment effects that I referenced above). Our very early work suggests that, if treatment does propagate across a social network following a common infectious disease model, you might prefer to assign relatively few units to direct intervention.

In another application, Nahomi Ichino, Mark Fredrickson, and I are using this approach to learn about agent-based models of the interaction of ethnicity and party strategies of voter registration fraud using a field experiment in Ghana. To improve our formal models, another collaborator, Chris Grady, is going to Ghana to do in-depth interviews with local party activists this fall.

R. Michael Alvarez: Political methodologists have made many contributions to the area of causal inference. If you had to recommend to a graduate student two or three things in this area that they might consider working on in the next year, what would they be?

Jake Bowers: About advice for graduate students: Here are some of the questions I would love to learn about.

  • How should we move from formal, equilibrium-oriented, theories of behavior to models of mechanisms of treatment effects that would allow us to test hypotheses and learn about theory from data?
  • How can we take advantage of estimation-based procedures or procedures developed without specific focus on counterfactual causal inference if we want to make counterfactual causal inferences about models of interference? How should we reinterpret or use tools from spatial analysis like those developed by Rob Franzese and Jude Hayes or tools from network analysis like those developed by Mark Handcock to answer causal inference questions?
  • How can we provide general advice about how to choose test-statistics to summarize the observable implications of these theoretical models? We know that the KS-test used in our article is pretty low-powered. And we know from Rosenbaum (Chap 2, 2002) that certain classes of test statistics have excellent properties in one-dimension, but I wonder about general properties of multi-parameter models and test statistics that can be sensitive to multi-way differences in distribution between experimental groups.
  • How should we apply ideas from randomized studies to the observational world? What does adjustment for confounding/omitted variable bias (by matching, “controlling for”, or weighting) mean in the context of social networks or spatial relations? How should we do and judge such adjustment? What might Rosenbaum-inspired sensitivity analysis or Manski-inspired bounds analysis mean when we move away from testing one parameter or estimating one quantity?

R. Michael Alvarez: You do a lot of work with software tool development and statistical computing. What are you working on now that you are most excited about?

Jake Bowers: I am working on two computationally oriented projects that I find very exciting. The first involves using machine learning/statistical learning for optimal covariance adjustment in experiments (with Mark Fredrickson and Ben Hansen). The second involves collecting thousands of hand-drawn maps on Google maps as GIS objects to learn about how people define and understand the places where they live in Canada, the United Kingdom, and the United States (with Cara Wong, Daniel Rubenson, Mark Fredrickson, Ashlea Rundlett, Jane Green, and Edward Fieldhouse).

When an experimental intervention has produced a difference in outcomes, comparisons of treated to control outcomes can sometimes fail to detect this effect, in part, because the outcomes themselves are naturally noisy in comparison to the strength of the treatment effect. We would like to reduce the noise that is unrelated to treatment (say, remove the noise related to background covariates, like education) without ever estimating a treatment effect (or testing a hypothesis about a treatment effect). So far, people shy away from using covariates for precision enhancement of this type because every model in which they soak up noise with covariates is also a model in which they look at the p-value for their treatment effect. This project learns from the growing literature in machine learning (aka statistical learning) to turn the specification of the covariance-adjustment part of a statistical model over to an automated system focused on the control group only, which thus bypasses concerns about data snooping and multiple p-values.
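A minimal sketch of that idea, with made-up data and a simple one-covariate least-squares fit standing in for the machine-learning step: the adjustment model is fit on the control group only, every unit is residualized against it, and the randomization test is then run on the residuals.

```python
from itertools import combinations

# Hypothetical data: a covariate (say, years of education) and a noisy outcome.
x = [8, 12, 10, 16, 9, 13, 11, 15]
y = [20, 29, 24, 38, 21, 30, 26, 35]
treated = {0, 1, 2, 3}
units = range(len(y))

# Least-squares slope/intercept fit on controls only; treatment assignment
# never enters the fitting step.
cx = [x[i] for i in units if i not in treated]
cy = [y[i] for i in units if i not in treated]
mx, my = sum(cx) / len(cx), sum(cy) / len(cy)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(cx, cy)) / \
    sum((xi - mx) ** 2 for xi in cx)
a = my - b * mx

# Residualized outcomes for everyone: covariate-related noise is removed.
e = [y[i] - (a + b * x[i]) for i in units]

def diff_means(assign):
    t = [e[i] for i in units if i in assign]
    c = [e[i] for i in units if i not in assign]
    return sum(t) / len(t) - sum(c) / len(c)

# Exact randomization test on the residuals, as in a plain experiment.
obs = diff_means(treated)
p = sum(abs(diff_means(set(s))) >= abs(obs)
        for s in combinations(units, 4)) / 70
```

Because the adjustment is learned from controls alone, searching over candidate adjustment models cannot tune the treatment-effect p-value, which is the data-snooping concern the project addresses.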

The second project involves using Google maps embedded in online surveys to capture hand-drawn maps representing how people respond when asked to draw the boundaries of their “local communities.” So far we have over 7000 such maps from a large survey of Canadians, and we plan to have data from this module carried on the British Election Study and the US Cooperative Congressional Election Study within the next year. We are using these maps and associated data to add to the “context/neighborhood effects” literature to learn how individuals’ psychological understandings of place relate to Census measurements and also to individual-level attitudes about inter-group relations and public goods provision.

Headline image credit: Abstract city and statistics. CC0 via Pixabay.

The post Q&A with Jake Bowers, co-author of 2014 Miller Prize Paper appeared first on OUPblog.

22. Do children make you happier?

A new study shows that women who have difficulty accepting that they can’t have children following unsuccessful fertility treatment have worse long-term mental health than women who are able to let go of their desire for children. It is the first study to look at a large group of women (over 7,000) to try to disentangle the different factors that may affect women’s mental health over a decade after unsuccessful fertility treatment. These factors include whether or not they have children, whether they still want children, their diagnosis, and their medical treatment.

It was already known that people who have infertility treatment and remain childless have worse mental health than those who do manage to conceive with treatment. However, most previous research assumed that this was due exclusively to having children or not, and did not consider the role of other factors. Alongside my research colleagues from the Netherlands, where the study took place, we found only that there is a link between an unfulfilled wish for children and worse mental health, not that the unfulfilled wish causes the mental health problems. This is due to the nature of the study, in which the women’s mental health was measured at only one point in time rather than continuously since the end of fertility treatment.

We analysed answers to questionnaires completed by 7,148 women who started fertility treatment at any of 12 IVF hospitals in the Netherlands between 1995 and 2000. The questionnaires were sent out to the women between January 2011 and 2012, meaning that for most women their last fertility treatment would have been between 11 and 17 years earlier. The women were asked about their age, marital status, education, and menopausal status; whether the infertility was due to them, their partner, both, or an unknown cause; and what treatment they had received, including ovarian stimulation, intrauterine insemination, and in vitro fertilisation / intra-cytoplasmic sperm injection (IVF/ICSI). In addition, they completed a mental health questionnaire, which asked them how they felt during the past four weeks. The women were asked whether or not they had children, and, if they did, whether they were their biological children or adopted (or both). They were also asked whether they still wished for children.

The majority of women in the study had come to terms with the failure of their fertility treatment. However, 6% (419) still wanted children at the time of answering the study’s questionnaire and this was connected with worse mental health. We found that women who still wished to have children were up to 2.8 times more likely to develop clinically significant mental health problems than women who did not sustain a child-wish. The strength of this association varied according to whether women had children or not. For women with no children, those with a child-wish were 2.8 times more likely to have worse mental health than women without a child-wish. For women with children, those who sustained a child-wish were 1.5 times more likely to have worse mental health than those without a child-wish. This link between a sustained wish for children and worse mental health was irrespective of the women’s fertility diagnosis and treatment history.
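For readers unfamiliar with how a figure like “2.8 times more likely” is derived, such figures are odds ratios. A toy calculation with hypothetical counts (not the study’s data) shows the arithmetic:

```python
# Hypothetical 2x2 table, chosen only to illustrate the calculation:
# rows = sustained child-wish vs. not; columns = clinically significant
# mental health problems vs. not.
with_wish_cases, with_wish_noncases = 120, 300
without_wish_cases, without_wish_noncases = 100, 700

# Odds of a poor-mental-health outcome within each group.
odds_with = with_wish_cases / with_wish_noncases          # 0.4
odds_without = without_wish_cases / without_wish_noncases  # ~0.143

# The odds ratio compares the two groups' odds.
odds_ratio = odds_with / odds_without
print(round(odds_ratio, 1))  # prints 2.8
```

An odds ratio of 2.8 means the odds of the outcome in one group are 2.8 times the odds in the other, which is what the study reports for women with versus without a sustained child-wish.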

Happy family photo by Vera Kratochvil. Public domain via Wikimedia Commons.

Our research found that women had better mental health if the infertility was due to male factors or had an unknown cause. Women who started fertility treatment at an older age had better mental health than women who started younger, and those who were married or cohabiting with their partner reported better mental health than women who were single, divorced, or widowed. Better educated women also had better mental health than the less well educated.

This study improves our understanding of why childless people have poorer adjustment. It shows that poorer adjustment is more strongly associated with an inability to let go of the desire to have children than with childlessness itself. It is quite striking to see that women who do have children but still wish for more children report poorer mental health than those who have no children but have come to accept it. The findings underline the importance of psychological care for infertility patients; in particular, more attention should be paid to their long-term adjustment, whatever the outcome of the fertility treatment.

The possibility of treatment failure should not be avoided during treatment, and a consultation at the end of treatment should always happen, whether the treatment is successful or unsuccessful, to discuss future implications. This would enable fertility staff to identify patients more likely to have difficulty adjusting in the long term by assessing each woman’s capacity to come to terms with an unfulfilled child-wish. These patients could be advised to seek additional support from mental health professionals and patient support networks.

It is not known why some women may find it more difficult to let go of their child-wish than others. Psychological theories would claim that how important the goal is for the person would be a relevant factor. The availability of other meaningful life goals is another relevant factor. It is easier to let go of a child-wish if women find other things in life that are fulfilling, like a career.

We live in societies that embrace determination and persistence. However, there is a moment when letting go of unachievable goals (be it parenthood or other important life goals) is a necessary and adaptive process for well-being. We need to consider if societies nowadays actually allow people to let go of their goals and provide them with the necessary mechanisms to realistically assess when is the right moment to let go.

Featured image: Baby feet by Nina-81. Public Domain via Pixabay.

The post Do children make you happier? appeared first on OUPblog.

23. Do health apps really matter?

Apps are all the rage nowadays, including apps to help fight rage. That’s right, the iTunes app store contains several dozen apps designed to manage anger or reduce stress. Smartphones have become such a prevalent component of everyday life that it’s no surprise a demand has risen for phone programs (also known as apps) that help us manage some of life’s most important elements, including personal health. But do these programs improve our ability to manage our health? Do health apps really matter?

Early apps for patients with diabetes demonstrate how a proposed app idea can sound useful in theory but provide limited tangible health benefits in practice. First-generation diabetes apps worked like a digital notebook: they linked with blood glucose monitors to record and catalog measured glucose levels. Although doctors and patients were initially charmed by the high-tech appeal and convenience, the charm wore off as app use failed to improve patients’ glucose-monitoring habits or medication compliance.

Fitness apps are another example of rough starts among early health app attempts. Initial running apps served as electronic pedometers, recording the number of steps taken and/or the total distance run. These apps again provided a useful convenience over a conventional pedometer, but were unlikely to increase exercise levels or appeal to individuals who didn’t already run. Apps for other health-related topics such as nutrition, diet, and air pollution ran into similar limitations in improving healthy habits. For a while, it seemed as if the initial excitement among the life sciences community for e-health simply couldn’t be translated to tangible health benefits among target populations.

Image credit: Personal Health Apps for Smartphones.jpg, by Intel Free Press. CC-BY-2.0 via Wikimedia Commons.

Luckily, recent changes in app development ideology have led to noticeable increases in health app impacts. Health app developers are now focused on providing useful tools, rather than collections of information, to app users. The diabetes app ManageBGL.com, for example, predicts when a patient may develop hypoglycemia (low blood sugar levels) before the visual/physical signs and adverse effects of hypoglycemia occur. The running app RunKeeper connects to friends’ running profiles to share information, provide suggested running routes, and encourage runners to speed up or slow down to reach a target pace. Air pollution apps let users set customized warning levels, and then predict and warn users when they’re heading towards an area with air pollution that exceeds those levels. Health apps are progressing beyond providing mere convenience towards a state where they can help the user make informed decisions or perform actions that positively affect and/or protect personal health.

So, do health apps really matter? It’s unlikely that the next generation of health apps will achieve the popularity of Facebook or the widespread utility of Google Maps. The impact, utility, and popularity of health apps, however, are increasing at a noticeable rate. As health app developers continue to better understand health apps’ strengths and limitations, and as upcoming technologies that can improve health apps, such as miniaturized sensors and smart glasses, become available, the importance of health-related apps and the proportion of the general public interested in them are only going to grow.

The post Do health apps really matter? appeared first on OUPblog.

24. Cinematic tragedies for the intractable issues of our times

Tragedies certainly aren’t the most popular types of performances these days. When you hear a film is a tragedy, you might think “outdated Ancient Greek genre, no thanks!” Back in those times, Athenians thought it their civic duty to attend tragic performances of dramas like Antigone or Agamemnon. Were they on to something that we have lost in contemporary Western society? Is there something specifically valuable in a tragic performance that a spectator doesn’t get from other types of performances, such as those of our modern genres of comedy, farce, and melodrama?

Since films reach a greater audience in our culture than plays, after updating Aristotle’s Poetics for the twenty-first century, we analyzed what we call “cinematic tragedies”: films that demonstrate the key components of Aristotelian tragedy. We conclude that a tragedy must consist in the representation of an action that is: (1) complete; (2) serious; (3) probable; (4) has universal significance; (5) involves a reversal of fortune (from good to bad); (6) includes recognition (a change in epistemic state from ignorance to knowledge); (7) includes a specific kind of irrevocable suffering (in the form of death, agony or a terrible wound); (8) has a protagonist who is capable of arousing compassion; and (9) is performed by actors. The effects of the tragedy must include: (10) the arousal in the spectator of pity and fear; and (11) a resolution of pity and fear that is internal to the experience of the drama.

Unlike melodrama (which we hold is the most common film genre), tragedy calls on spectators to ponder thorny moral issues and to navigate them with their own moral compass. One such cinematic tragedy — Into The Wild, 2007, directed by Sean Penn — thematizes the preciousness and precariousness of human life alongside environmental problems, raising questions about human beings’ apparent inability to live on earth without despoiling the beauty and integrity of the biosphere. Other cinematic tragedies deal with a variety of problems with which our modern societies must grapple.

One such topic is illegal immigration, a highly politicized issue that is far more complex than national governments seem equipped to handle, especially beyond the powers of the two parties in the American system. Cinematic tragedies that deal with this issue have been produced over several decades involving immigration into various Western countries, especially the United States; these include Black Girl (France, 1966), El norte (US/UK, 1983), and Sin nombre (Mexico, 2009), the last of which we will expand on here.

Paulina Gaitan (left) and Edgar Flores (right) star in writer/director Cary Joji Fukunaga’s epic dramatic thriller Sin Nombre, a Focus Features release. Photo credit: Cary Joji Fukunaga via Focus Features

In US director Cary Fukunaga’s Sin nombre (which means “Nameless,” but which was released in the United States under the Spanish title), Hondurans escaping their harsh political and economic realities risk their lives in order to make it to the United States, through Mexico, on the tops of rail cars. They travel in this manner because, for most of these foreign citizens, there is no legal way to come to the United States. Over the course of the journey, the immigrants endure terrible suffering or die at the hands of gang members who rob, rape, and even kill some of them.

The film focuses on just a few of the multitudes atop the trains: on a teenage Honduran girl, Sayra, migrating with her father and uncle, and on a few of the gang members. One of them, Casper, has had a change of heart and is no longer loyal to the gang after its leader killed Casper’s girlfriend while trying to rape her. Casper and other gang members are atop the train robbing the migrants, but he defends Sayra by killing the leader when he tries to rape her. Ultimately, Sayra will arrive in the United States. However, she realizes that the cost has been too great: her father has died falling off the train, and she has lost Casper, who is, ironically, shot to death by the pre-pubescent boy whom he himself had trained in the ways of the gang in the opening scenes of the film.

The tremendous losses, and the scenes of suffering, rape, and murder, make unlikely the possibility that the spectator will feel that Sayra’s arrival constitutes a happy ending. In some other aesthetic treatment, Casper’s ultimate death might have been melodramatized as redemptive selflessness for the sake of his new girlfriend. But in Fukunaga’s film, the juxtaposed images imply a continuing cycle of despair and death: Casper’s young killer in Mexico is promoted up the ranks of the gang with a new tattoo, while Sayra’s uncle, back in Honduras after being deported from Mexico, starts the voyage to the United States all over again. Sayra too may face deportation in the future. Following the scene of the reinvigoration of the criminal gang system, as its new young leader gets his first tattoo, the viewer sees Sayra outside a shopping mall in the American southwest. The teenage girl has arrived in the United States and may aspire to participate in advanced consumer capitalism, yet she has lost so much and suffered so undeservingly.

This aesthetic juxtaposition prompts the spectator to attend to the failure of Western political leaders to create a humane system of immigration for the twenty-first century, one which cannot be reached with the entrenched politicized views of the “two sides of the aisle,” which miss the human story of immigrants’ plight. This film—like all tragedies—promotes the spectator’s active pondering; that is, it challenges the spectator to respond in some way.

In the tradition of philosophers as various as Aristotle, Seneca, Schopenhauer, Nietzsche, Martha Nussbaum, and Bernard Williams, we find that tragedies bring to conscious awareness the most significant moral, social, political, and existential problems of the human condition. A film such as Sin nombre, through its tragic performance, points to one of these terrible necessities with which our contemporary Western culture must grapple. While it doesn’t offer an answer, this cinematic tragedy prompts us to recognize and deal with a seemingly intractable problem that needs to move beyond the current impasse of political debate, as we in the industrialized nations continue to shop for and watch movies in the comfort of our malls.

The post Cinematic tragedies for the intractable issues of our times appeared first on OUPblog.

25. The pros and cons of research preregistration

Research transparency is a hot topic these days in academia, especially with respect to the replication or reproduction of published results.

There are many initiatives that have recently sprung into operation to help improve transparency, and in this regard political scientists are taking the lead. Research transparency has long been a focus of effort of The Society for Political Methodology, and of the journal that I co-edit for the Society, Political Analysis. More recently the American Political Science Association (APSA) has launched an important initiative in Data Access and Research Transparency. It’s likely that other social sciences will be following closely what APSA produces in terms of guidelines and standards.

One way to increase transparency is for scholars to “preregister” their research. That is, they can write up their research plan and publish that prior to the actual implementation of their research plan. A number of social scientists have advocated research preregistration, and Political Analysis will soon release new author guidelines that will encourage scholars who are interested in preregistering their research plans to do so.

However, concerns have been raised about research preregistration. In the Winter 2013 issue of Political Analysis, we published a Symposium on Research Registration. This symposium included two longer papers outlining the rationale for registration: one by Macartan Humphreys, Raul Sanchez de la Sierra, and Peter van der Windt; the other by Jamie Monogan. The symposium included comments from Richard Anderson, Andrew Gelman, and David Laitin.

In order to facilitate further discussion of the pros and cons of research preregistration, I recently asked Jamie Monogan to write a brief essay that outlines the case for preregistration, and I also asked Joshua Tucker to write about some of the concerns that have been raised about how journals may deal with research preregistration.

*   *   *   *   *

The pros of preregistration for political science

By Jamie Monogan, Department of Political Science, University of Georgia

 

Howard Tilton Library Computers, Tulane University by Tulane Public Relations. CC-BY-2.0 via Wikimedia Commons.

Study registration is the idea that a researcher can publicly release a data analysis plan prior to observing a project’s outcome variable. In a Political Analysis symposium on this topic, two articles make the case that this practice can raise research transparency and the overall quality of research in the discipline (Humphreys, de la Sierra, and van der Windt 2013; Monogan 2013).

Together, these two articles describe seven reasons that study registration benefits our discipline. To start, preregistration can curb four causes of publication bias, or the disproportionate publishing of positive, rather than null, findings:

  1. Preregistration would make evaluating the research design more central to the review process, reducing the importance of significance tests in publication decisions. Whether the decision is made before or after observing results, releasing a design early would highlight study quality for reviewers and editors.
  2. Preregistration would help the problem of null findings that stay in the author’s file drawer because the discipline would at least have a record of the registered study, even if no publication emerged. This will convey where past research was conducted that may not have been fruitful.
  3. Preregistration would reduce the ability to add observations to achieve significance because the registered design would signal the appropriate sample size in advance. This would prevent the practice of monitoring the analysis and stopping data collection only once a positive result emerges.
  4. Preregistration can prevent fishing, or manipulating the model to achieve a desired result, because the researcher must describe the model specification ahead of time. By sorting out the best specification of a model using theory and past work ahead of time, a researcher can commit to the results of a well-reasoned model.

Additionally, there are three advantages of study registration beyond the issue of publication bias:

  1. Preregistration prevents inductive studies from being written up as deductive studies. Inductive research is valuable, but the discipline is being misled if findings that were observed inductively are reported as if they were hypothesis tests of a theory.
  2. Preregistration allows researchers to signal that they did not fish for results, thereby showing that their research design was not driven by an ideological or funding-based desire to produce a result.
  3. Preregistration provides leverage for scholars who face result-oriented pressure from financial benefactors or policy makers. If the scholar has committed to a design beforehand, the lack of flexibility at the final stage can prevent others from influencing the results.

Overall, there is an array of reasons why the added transparency of study registration can serve the discipline, chiefly the opportunity to reduce publication bias. Whatever you think of this case, though, the best way to form an opinion about study registration is to try it by preregistering one of your own studies. Online study registries are available, so you are encouraged to try the process yourself and then weigh in on the preregistration debate with your own firsthand experience.

*   *   *   *   *

Experiments, preregistration, and journals

By Joshua Tucker, Professor of Politics (NYU) and Co-Editor, Journal of Experimental Political Science

 
I want to make one simple point in this blog post: I think it would be a mistake for journals to come up with any set of standards that involves publicly recognizing some publications as having “successfully” followed their pre-registration design while identifying other publications as not having done so. This could include a special section for articles that matched their pre-registration design, an A, B, C type rating system for how faithfully articles had stuck with the pre-registration design, or even an asterisk for articles that passed a pre-registration faithfulness bar.

Let me be equally clear that I have no problem with the use of registries for recording experimental designs before those experiments are implemented. Nor do I believe that these registries should not be referenced in published works featuring the results of those experiments. On the contrary, I think authors who have pre-registered designs ought to be free to reference what they registered, as well as to discuss in their publications how much the eventual implementation of the experiment might have differed from what was originally proposed in the registry and why.

My concern is much narrower: I want to prevent some arbitrary third party from being given the authority to “grade” researchers on how well they stuck to their original design and then to be able to report that grade publicly, as opposed to simply allowing readers to make up their own minds in this regard. My concerns are three-fold.

First, I have absolutely no idea how such a standard would actually be applied. Would it count as violating a pre-design registry if you changed the number of subjects enrolled in a study? What if the original subject pool was unwilling to participate for the planned monetary incentive, and the incentive had to be increased, or the subject pool had to be changed? What if the pre-registry called for using one statistical model to analyze the data, but the author eventually realized that another model was more appropriate? What if a survey question that was registered on a 1-4 scale was changed to a 1-5 scale? Which, if any, of these would invalidate the faithful application of the registry? Would all of them together? It seems that the only truly objective way to rate compliance is an all-or-nothing approach: either you did exactly what you said you would do, or you didn’t follow the registry. Of course, then we are lumping “p-value fishing” in the same category as applying a better statistical model or changing the wording of a survey question.

This brings me to my second point, which is a concern that giving people a grade for faithfully sticking to a registry could lead to people conducting sub-optimal research — and stifle creativity — out of fear that it will cost them their “A” registry-faithfulness grade. To take but one example, those of us who use survey experiments have long been taught to pre-test questions precisely because sometimes the ideas we have when sitting at our desks don’t work in practice. So if someone registers a particular technique for inducing an emotional response and then runs a pre-test and figures out their technique is not working, do we really want the researcher to use the sub-optimal design in order to preserve their faithfulness to the registered design? Or consider a student who plans to run a field experiment in a foreign country that is based on the idea that certain last names convey ethnic identity. What happens if the student arrives in the field and learns that this assumption was incorrect? Should the student stick with the bad research design to preserve the ability to publish in the “registry faithful” section of JEPS? Moreover, research sometimes proceeds in fits and starts. If as a graduate student I am able to secure funds to conduct experiments in country A but later as a faculty member can secure funds to replicate these experiments in countries B and C as well, should I fear including the results from country A in a comparative analysis because my original registry was for a single-country study? Overall, I think we have to be careful about assuming that we can have everything about a study figured out at the time we submit a registry design, and that there will be nothing left for us to learn about how to improve the research — or that there won’t be new questions that can be explored with previously collected data — once we start implementing an experiment.

At this point a fair critique to raise is that the points in the preceding paragraph could be taken as an indictment of registries generally. Here we venture more into a matter of point of view, but I believe that there is a difference between asking people to document what their original plans were and giving them a chance in their own words — if they choose to do so — to explain how their research project evolved, as opposed to having to deal with a public “grade” of whatever form that might take. In my mind, the former is part of producing transparent research, while the latter — however well intentioned — could prove paralyzing in terms of making adjustments during the research process or following new lines of interesting research.

This brings me to my final concern, which is that untenured faculty would end up feeling the most pressure in this regard. For tenured faculty, a publication without the requisite asterisks noting registry compliance might not end up being too big a concern — although I’m not even sure of that — but I could easily imagine junior faculty being especially worried that publications without registry asterisks could be held against them during tenure considerations.

The bottom line is that registries bring with them a host of benefits — as Jamie has nicely laid out above — but we should think carefully about how to best maximize those benefits in order to minimize new costs. Even if we could agree on how to rate a proposal in terms of faithfulness to registry design, I would suggest caution in trying to integrate ratings into the publication process.

The views expressed here are mine alone and do not represent either the Journal of Experimental Political Science or the APSA Organized Section on Experimental Research Methods.

Heading image: Interior of Rijksmuseum research library. Rijksdienst voor het Cultureel Erfgoed. CC-BY-SA-3.0-nl via Wikimedia Commons.

The post The pros and cons of research preregistration appeared first on OUPblog.


View Next 25 Posts