The 2014 Oral History Association Annual Meeting featured an exciting musical plenary session led by Michael Honey and Pat Krueger. They presented the songs and stories of John Handcox, the “poet laureate” of the interracial Southern Tenant Farmers Union, linking generations of struggle in the South through African American song and oral poetry traditions. The presentation built on Dr. Honey’s article in Oral History Review 41.2, “‘Sharecroppers’ Troubadour’: Can We Use Songs and Oral Poetry as Oral History?,” as well as his recent book.
Imagine you are in class and your friend has just made a fool of the teacher. How do you feel? Although this will depend on the personalities of those involved, you might well find yourself laughing along with your classmates at the teacher’s expense. The experience of sharing an emotion with your friends (in this case the fun of getting one over on the teacher) will probably strengthen your friendship further. But in a class of one hundred students, there are likely to be one or two who have trouble understanding the joke.
The ability to infer and understand other people’s emotions and beliefs plays an important role in human social relationships. However, for individuals with autism spectrum disorder (ASD) — a developmental disorder that affects approximately 1% of the population and for which there is no established treatment — this can be challenging. While high-functioning individuals with ASD may be able to compensate for difficulties in inferring others’ beliefs, they often continue to have trouble understanding others’ emotions, which leads to impaired social functioning.
Increasing evidence suggests that oxytocin — a neuropeptide that promotes social behavior and bonding in humans and in animals — can improve emotion recognition in ‘typically developing’ individuals, i.e. those without ASD. Notably, oxytocin improves the ability to infer others’ emotions more than the ability to identify their beliefs. Oxytocin has also been shown to improve social behavior in individuals with autism and to partially reverse patterns of brain dysfunction thought to be responsible for the deficits. This has led to the suggestion that oxytocin could be used to develop medications for currently untreatable psychiatric conditions characterized by social impairments.
However, studies to date have only investigated the ability of oxytocin to improve recognition of basic emotions such as fear or happiness. These differ from “social” emotions such as embarrassment and shame, which require us to represent the mental state of another. Moreover, most existing studies have provided participants with so-called “direct cues” to others’ emotions, such as their facial expressions or tone of voice. However, these cues are not always available in real life, and the ability to identify others’ emotions using only indirect cues is itself important for social functioning. We therefore decided to investigate whether oxytocin would also improve the ability of individuals with ASD to recognize social emotions, even in the absence of direct cues.
To do so, we modified a cartoon-based task called the “Sally-Anne task,” which is commonly used to test for understanding of other people’s false beliefs, and used MRI scans to measure brain activity in subjects with and without ASD as they performed the task. In the standard version, participants are shown a cartoon in which one protagonist (Sally) places a ball in a box and then leaves the room. In her absence, another protagonist (Anne) moves the ball to a second box to the right of the first, and Sally then returns. At the end of the story, participants are asked the following questions: “Is the ball in the left-hand box?” to test comprehension of the story, and “Does Sally look for her ball in the left-hand box?” to test for understanding of Sally’s false belief about the location of the ball. To examine participants’ ability to infer others’ emotions, we introduced a third question: “How does Anne feel when Sally opens the left-hand box?” Given that Anne’s gain effectively depends on Sally’s loss, the emotions involved will be complex social emotions: Anne, for example, might gloat upon realizing that she has fooled Sally by moving the ball.
We discovered that individuals with ASD are less accurate than IQ-matched controls in inferring social emotions in the absence of direct cues such as facial expressions. Moreover, individuals with ASD showed lower activity than controls in two brain regions that contribute to this ability, namely the right anterior insula and superior temporal sulcus. Individuals with ASD who had a normal IQ were not significantly impaired in inferring others’ beliefs; however, they did show lower brain activity than controls in a region implicated in this process, the dorsomedial prefrontal cortex.
In order to determine whether oxytocin could improve the ability of individuals with ASD to identify others’ social emotions, we conducted a double-blind trial. We administered a single dose of either oxytocin or placebo in the form of an intranasal spray to subjects with ASD and to matched controls. As predicted, oxytocin increased the accuracy with which individuals with ASD were able to identify others’ social emotions in the absence of direct cues, and also enhanced their previously diminished brain activity in the right anterior insula. This increase in activity was not observed in other brain regions or during attempts to understand others’ beliefs, suggesting that oxytocin acts specifically on the ability to infer social emotions.
Ultimately, therefore, the results of our behavioral experiments and brain activity studies lend support to the idea that intranasal oxytocin could potentially form the basis of a treatment for at least some of the social impairments in ASD.
Stress seems to be everywhere we turn. Much of the daily news is stressful, whether it pertains to the recent Ebola outbreak in western Africa (and its subsequent entry into the United States), beheadings by the radical Islamic group called ISIS, or the economic doldrums that continue to plague much of the developed world. Moreover, we all experience frequent stress in our daily lives. Stress can come from your job, your family, a romantic relationship, personal attacks by way of social media, or, if you’re a student, your school performance. Counselors, psychotherapists, even self-help books and other materials may help us cope with stress, but these sources don’t usually give us very much information about what is actually happening to our brain and our body when we’re stressed.
If we think about it for a moment, it becomes clear that stress is not a recent phenomenon brought about by the features of contemporary western societies. Our hominid ancestors who evolved on the African savanna were surely stressed in the course of meeting their basic biological needs of finding food and water, acquiring shelter, and keeping safe from predators. Moreover, the principal brain and endocrine (i.e. hormonal) systems that underlie the cognitive, behavioral, and physiological responses to stress are found throughout the animal kingdom, indicating that these systems arose much earlier in evolutionary history than the appearance of the first hominids. So just what are these systems and how do they work?
A lot of research has focused on the hormonal systems that are turned on during stress. These responses are easier to access than brain responses, since researchers usually need only to obtain samples of the person’s blood, saliva, or urine to determine whether her endocrine system is showing a normal stress response or perhaps is functioning abnormally due to the effects of previous stress exposure. There are two parts to the endocrine stress response, both involving the adrenal glands. The inner part of the adrenal gland, called the adrenal medulla, rapidly secretes the hormones epinephrine and norepinephrine (also called adrenalin and noradrenalin) in response to a stressor. These hormones help prepare the person for rapid physical action by elevating heart rate and blood pressure, mobilizing sugar from the liver for instant energy, and increasing blood flow to the skeletal muscles. The outer part of the adrenal gland, called the adrenal cortex, is also activated by stressors but a bit more slowly. This part of the gland secretes glucocorticoids such as cortisol, which not only works in conjunction with epinephrine and norepinephrine but also affects inflammation, immune function, and brain activity.
For many years, researchers focused on how stress, especially chronic stress, can damage the adult brain and body. More recently, however, it has become clear that stress may be particularly destructive during development. We now know, for example, that repeated childhood maltreatment and abuse increase the child’s vulnerability to a later onset of clinical depression or post-traumatic stress disorder. But stress can exert deleterious effects even earlier in development, namely during the prenatal period. Although the fetal adrenal glands begin to function before birth, it seems likely that stress is transmitted to the fetus mainly through maternal hormones such as cortisol. The placenta breaks down much of the mother’s cortisol before it reaches the fetus, but some of the hormone manages to get through. One example of how prenatal stress can adversely affect offspring development stems from a terrible ice storm that hit Québec Province in Canada in January of 1998. Three million people lost electrical power for up to 40 days, resulting in significant privation. David Laplante and colleagues at the Douglas Hospital of McGill University later studied 89 five-and-a-half-year-old children whose mothers had been pregnant with them during the power outage. Children whose mothers endured the greatest hardship as a result of the storm scored noticeably lower on verbal IQ and vocabulary tests than children whose mothers experienced low or moderate hardship.
While natural disasters like the Québec ice storm afford researchers the opportunity to investigate some of the deleterious effects of prenatal stress exposure, such studies have many limitations, because the stress cannot be controlled experimentally and there are additional confounding variables, such as differing postnatal experiences among the participants. To overcome some of these limitations, and to permit a more detailed examination of behavioral, endocrine, and brain function than is normally available with human participants, models of stress (including prenatal stress) have been developed for studying nonhuman primates such as rhesus monkeys. Offspring of rhesus monkeys exposed during mid-to-late pregnancy either to repeated mild stress or to pharmacological stimulation of cortisol release show behavioral and brain abnormalities that are still present at least several years later.
The implication of both the human and primate research is clear. We must pay closer attention to the well-being of pregnant women in order to minimize whatever life stresses can be controlled. By so doing, we can help newborn children begin life with better prospects for their future mental and physical health.
While food insecurity in America is by no means a new problem, it has been made worse by the Great Recession. And, despite the end of the Great Recession, food insecurity rates remain high. Currently, about 49 million people in the U.S. are living in food-insecure households. In a recently released article in Applied Economic Perspectives and Policy, my co-authors, Elaine Waxman and Emily Engelhard, and I provide an overview of Map the Meal Gap, a tool used to establish food insecurity rates at the local level for Feeding America (the umbrella organization for food banks in the United States).
For 35 years, Feeding America has responded to the hunger crisis in America by providing food to people in need through a nationwide network of food banks. Today, Feeding America is the nation’s largest domestic hunger-relief organization—a powerful and efficient network of 200 food banks across the country. You can learn more about food insecurity rates in America by listening to the podcast that accompanied the original post.
What are the state-level determinants of food insecurity? What is the distribution of food insecurity across counties in the United States? How do the county-level food insecurity estimates generated in Map the Meal Gap compare with other sources? Along with reviewing Map the Meal Gap and answering these questions, we discuss ways that policies can be, and are being, used to reduce food insecurity in the United States.
Headline image credit: Supermarket trolleys, by Rd. Vortex. CC-BY-2.0 via Flickr.
What is jihad? What do fundamentalists want? How will moderate Islamists react? These are questions that should be discussed. We may not have easy answers, but if we don’t start a dialogue, we may miss an opportunity to curtail horror.
The film Timbuktu from African director Abderrahmane Sissako about his native country serves as a needed point of departure for discussion — in government, in schools, in boardrooms, and in families.
Jihadism and terrorism are the 21st century’s “-isms,” following the horrors of fascism and communism. In hindsight, we wonder if we could have prevented the horrors of the 20th century. The devastating results have taught us that people do not want war; they want to live and work in peace. Should we not learn from history’s mistakes and prevent future genocides?
In the name of jihad, innocent victims are beheaded, kidnapped, raped, tortured, terrorized, left without families, and without homes. Extremist Muslims wage war against Christians and Jews, and against other Muslims (Sunnis vs. Shiites). Havoc is occurring in Syria, Iraq, Lebanon, Gaza, West Bank, Mali, Sudan, etc. It may soon take hold of our cities where jihadists threaten to set up terrorist cells.
Powerful and courageous, Timbuktu mesmerizes us with its blend of colors and music amidst a gentle background of sand dunes. Yet, juxtaposed to the serene beauty of Mali’s nature is the ferocious narrative of men turned into animals, forcing their machine guns on the quiet people of Timbuktu. We bear witness to the atrocious acts of barbarism.
Basing his film on a true story from the jihadist takeover of northern Mali in 2012, Sissako gives us a mosaic of characters who represent multicultural Africa. The camera takes us directly into their tragedies using a cause-and-effect structure:
We see a fisherwoman who refuses to wear a veil and gloves, for how would she be able to see or pick up the fish she must sell? Her rebellion, despite her mother’s pleas and the jihadist threats, is frightening.
Several friends play the guitar and sing together in the quiet of their home. The result? They are arrested and stoned to death.
A boy’s soccer ball accidentally rolls down steps and through sand dunes, coming to rest in front of several jihadists. The punishment? 40 lashes.
A caring man defends his young shepherd when their cow is killed. The outcome? A fight and the destruction of a family.
The leader of the community, the imam, tells several jihadists to leave the mosque with their guns and boots. People are praying. He warns them that Allah does not want destruction or terror. We fear the imam’s end.
These characters are not abstract; they are real victims. We follow their story, care for them, empathize with their pride, and suffer with their courage.
The contrasts between good and evil, and between beauty and terror, are presented in alternating scenes and play havoc with our emotions. Sometimes we want to close our eyes as the evil becomes unbearable; we fear what horror will follow.
Sissako is a master storyteller and painter of landscape. His color palette holds our eyes as our hearts cringe at the story. Beautiful moments linger amidst savage reality. We see ballet in the scene when a dozen young men play soccer without a soccer ball. How graceful are their athletic movements, and how deep their pleasure. We are mesmerized, and at the same time, we are panicked to think what the next scene will bring. The film’s power comes from its majestic beauty – a beauty that we fear cannot exist with the evil we are watching.
Sissako parallels the opening scene with the final scene. The film begins by showing an elegant deer running through the soft dunes. It ends with the same scene, but the animal is replaced by the twelve-year-old heroine, who runs desperately through the same dunes as she tries to escape her tragic reality. Sissako’s circle is a vicious cycle with no end to crimes against humanity.
Timbuktu is a difficult film to watch because it depicts a possible future that no one wants to see: genocide. All the more reason to see this film now.
The fatal shooting of African-American teenager Michael Brown in Ferguson, Missouri, during a police altercation in August 2014 resulted in massive civil unrest and protests that received considerable attention in the United States and abroad. To gain further perspective on the situation in Ferguson and its implications for race relations in America, I spoke with Wayne A. Santoro and Lisa Broidy, authors of the article “Gendered Rioting: A General Strain Theoretical Approach,” published in Social Forces. The article is freely available for a limited time.
Why do you think there has been so much media attention on the situation in Ferguson following the Michael Brown shooting?
Police shootings and mistreatment of black citizens is not, unfortunately, an uncommon experience in the United States. Protests like street marches have become so routinized that at best they get covered in the back pages of the local newspaper. But what no one can ignore are protests that turn violent. Whether we call them riots or rebellions, they are front page news. They are dramatic and unpredictable, threaten life and property, and capture the media’s attention. Policymakers cannot ignore them. After all, it is not every day that a state governor calls out the National Guard to maintain law and order. And whether the public views the protestors in a sympathetic or unsympathetic manner, we are mesmerized by the ongoing drama. How long will the rioting last? How will law enforcement respond? What will be the cost in lives lost and property destroyed?
Why do you think that the shooting of Michael Brown sparked protest by citizens? What was unique about the circumstances in Ferguson, or the Michael Brown case?
Four factors stand out, some unique to the incident and to Ferguson, while others are more typical. First, the single best predictor of black riots is police shootings or abuse of blacks by police. Indeed, in our research we find that a particularly strong predictor of joining a riot is having personally experienced police mistreatment. Police harassment is the spark that ignites protests that turn violent. This was a central conclusion of the famous 1968 Kerner Commission, which studied black rioting in the late sixties.
Second, blacks in Ferguson have long complained about police harassment. Numerous blacks in Ferguson have recounted to the media past experiences with police mistreatment. One resident recalled how he was roughed up by the police during a minor traffic stop. Another spoke of how she called the police for assistance only to have the police arrest her upon arrival. In a 2009 incident, a black man who accused officers of beating him was subsequently charged with damaging government property for getting his blood on their uniforms. Some of this mistreatment is suggested by data in Ferguson on race, traffic stops, and arrests.
Blacks comprise 67% of Ferguson’s population (as of 2010) but account for 86% of all traffic stops by the police and 93% of all arrests resulting from these stops. Black drivers are also twice as likely as white drivers to have the police search their car, despite the fact that whites are more likely to have contraband found in their cars. These data point to racially biased police practices. This is not unique to Ferguson; in fact, national survey data tell us that it is common knowledge among blacks that the police often act as agents of repression. For instance, in a New York Times/CBS News national survey conducted 10 days after the shooting, 45% of blacks reported that they had personally experienced police discrimination because of their race (7% of whites reported this experience). Similarly, 71% of blacks believed that local police are more likely to use deadly force against a black person (only 31% of whites agreed). Thus, it is a racially charged shooting of a black man, within the context of widespread experiences of police racial abuse, that fuels motivations for protest and the belief that the use of violence against the state is legitimate.
Third, the circumstances of the shooting matter. Was the shooting a legitimate or an excessive use of police force? It is relevant that so many local blacks believe not only that Michael Brown was unarmed (which is undisputed) but also that he had his hands raised and was surrendering at the time of the shooting. What matters is not so much whether the “hands raised and surrendering” scenario is accurate (this will likely remain in dispute) but that so many local residents found it believable that a white police officer would shoot an unarmed black man six times as he tried to surrender. People believe narratives that resonate with their personal experiences, and this again tells us something about what those experiences with the police have been.
Fourth, blacks in Ferguson have been excluded almost completely from positions of power. People protest when their voices are not being heard, and in Ferguson it appears that those who make policy decisions and influence police behavior are particularly deaf to the concerns of the black community. Referring to an incident where Ferguson officials were unresponsive to a relatively minor request, one black resident remarked, “You get tired. You keep asking, you keep asking. Nothing gets done.” One arena where this exclusion is evident is the police department: only 3 (some report 4) of Ferguson’s 53 commissioned officers — about 6% — are black. Recall that Ferguson is 67% black. Police departments are seldom responsive to minority communities when policy and street-level enforcement decisions are made solely by whites. Moreover, minority distrust of the police is likely when few police officers are minorities. The racial power disparity is evident in elected positions as well. As Jeff Smith (2014) wrote in the New York Times, “Ferguson has a virtually all-white power structure: a white mayor; a school board with six white members and one Hispanic, which recently suspended a highly regarded young black superintendent who then resigned; a City Council with just one black member.” Access to political positions and direct influence over policymaking tend to channel discontent into institutional arenas. Protest is a marker that a population is politically marginalized; it is inherently a response to blocked access to, and influence over, the political system.
To what degree is Ferguson unique as opposed to being emblematic of race relations in America?
Ferguson is more typical than atypical. There remain in the United States deep and enduring racial disparities in socioeconomic status, wealth, and well-being. No other population in the United States has experienced the degree of residential segregation from whites that blacks have. We imprison black men at a staggering rate. What the Kerner Commission stated nearly 50 years ago remains true today: we are a “nation of two societies, one black, one white – separate and unequal.” This inequality has been noted repeatedly by black residents in Ferguson, who see the local governing regime as unresponsive, the police force as hostile, and the school system as abysmal. Ferguson also is typical in that it reveals how views of racial progress, and of incidents like the shooting of Michael Brown, are racially polarized. In the New York Times/CBS News survey noted above, 49% of blacks thought that the protests in Ferguson were about right or did not go far enough; only 19% of whites held such views.
In two ways, however, Ferguson seems atypical. First, in Ferguson the growth of the black population relative to whites is a recent occurrence. In 1990, blacks comprised 25% of the city’s population, but that percentage grew to 52% in 2000 and 67% in 2010. This demographic transition was not followed by a corresponding transition in black access to political positions, the police force, union representation, and the like. Sociologists speak of the “backlash hypothesis”: when whites feel threatened, such as by increases in the minority population, they respond with greater hostility toward the “threatening” population. The recency of the demographic transition has likely altered the social and political dynamics of the city in ways that do not characterize other contemporary major cities in the United States, especially those, like Detroit or Atlanta, that are majority black.
Second, Ferguson is unusual in the degree to which the city uses the municipal court system, and the revenue it generates, as a way to raise city funds. Court fines make up the second-highest source of revenue for the city. This created a financial incentive to issue tickets and then impose excessive fees on people who did not pay. The data bear this out. Ferguson issued more than 1,500 warrants per 1,000 people in 2013, a rate that exceeds that of every other Missouri city with a population larger than 10,000 people. To put this another way, Ferguson has a population of just over 21,000 people but issued more than 24,000 warrants, which adds up to three warrants per Ferguson household. Writes Frances Robles (2014) in the New York Times: “Young black men in Ferguson and surrounding cities routinely find themselves passed from jail to jail as they are picked up on warrants for unpaid fines.” Thus, in Ferguson the primary interaction between many black residents and the police takes place because of these warrants. Recent work on social movements has argued that such daily insults and humiliations can play a strong role in motivating people to protest, and they certainly serve to undermine trust in the local police and city policymakers.
What will be the likely short- and longer-term consequences of the Ferguson protests?
Understanding how policymakers and others respond to a protest — especially one that turns violent — is complex. There is no typical response, and historically one could cite examples of elites either trying to ameliorate the conditions that gave rise to the protest or responding in a more punitive manner. Nonetheless, in the short term there are reasons to think that policymakers will respond in ways favorable to the local black community by addressing some of their grievances. As political scientist James Button has written, policymakers tend to respond more favorably to riots when the riots are large enough to garner public and media attention but not so severe and widespread as to cause major societal disruption. This describes the Ferguson riots, unlike, for instance, the riots of the late 1960s in the United States. Moreover, policymakers who are sympathetic to minorities tend to respond in ways more favorable to minorities than less receptive policymakers do. Social movement scholars refer to this as a favorable “political opportunity structure.” In the United States, the former tend to come from the ranks of the Democratic Party and the latter from the ranks of the Republican Party. Thus the fact that the Ferguson protests occurred during the Obama administration suggests a more ameliorative than punitive response, at least at the national level. It is not surprising that three times as many blacks, 60% versus 20%, report being satisfied rather than dissatisfied with how President Obama has responded to the situation in Ferguson.
There is some evidence that policymakers are indeed responding in ways favorable to the local black community and their grievances. For instance, Attorney General Eric H. Holder Jr. announced an independent investigation of the shooting and traveled to Ferguson to meet with investigators. Moreover, his office has started a civil rights investigation into whether the police have repeatedly violated the civil rights of residents. At the local level, some changes also are evident. The Ferguson City Council on 8 September agreed to establish a citizen review board to monitor the local police department. The city also has pledged that it would revamp its policy of using court fines to fund such a large share of its city budget. For instance, the city council has eliminated a $50 warrant recall fee and a $15 notification fee.
It is more of a leap of faith, however, to expect major long-term changes in Ferguson because of the insurgency. There remains, for instance, an ongoing debate among scholars of the modern civil rights movement (circa 1955-1968) as to whether the more than decade-long movement produced meaningful change in the lives of most blacks. If a decade of protests produced less than satisfactory change in the opinion of some, what chance do the Ferguson protests have? In particular, there is little reason to think that levels of black poverty, unemployment, underemployment, and educational disparity will improve noticeably in Ferguson unless other social forces are brought into play. These more substantive changes are more likely to be produced by years of community organizing, securing elected positions, joining governing political coalitions with sympathetic allies, and favorable economic conditions like the growth of blue-collar employment opportunities.
Have white police shootings of minorities (or African-Americans) become more or less common in recent years?
This is an empirical question, and the relevant data are limited. There are no national data on police shootings that do not result in death. National data on police shootings that result in death come from three sources: the Federal Bureau of Investigation (FBI), the Bureau of Justice Statistics (BJS), and the Centers for Disease Control and Prevention (CDC). However, data from each of these sources are limited. The FBI collects data on “justifiable homicide” by police as a voluntary component of the Supplemental Homicide Report data collected from police departments nationwide. Unfortunately, few departments (fewer than 5%) voluntarily provide these data, leaving obvious questions about their representativeness and utility. Moreover, even if they were complete, these data would tell us little beyond the demographics of those killed. In particular, we cannot discern the degree to which these incidents represent excessive use of force by police. BJS collects similar data on deaths that occur during an arrest. These data are collected at the state level and then reported to BJS. Compliance is better, with 48 states reporting. But it is not clear how complete or comparable the data from each state are.
Is there anything else you think we can learn about race relations or racially motivated social movements in the United States from the case of Ferguson?
A few lessons emerge. First, we often talk about the civil rights movement in the past tense. We think of it as something that happened; we might even debate why it “ended” and what it accomplished. But Ferguson reminds us that the struggle for racial justice continues. It is not always so newsworthy, but every day many blacks and black advocacy organizations struggle to overcome racial barriers. Second, Ferguson underscores the deep racial divide in the United States. White and black views, especially concerning racial matters, are often polar opposites. Where whites see progress, blacks see setbacks. Where whites see black advancement, blacks see persistent racial disparities. Especially polarized are views on the criminal justice system and police. Third, there are costs to a society when a population is politically and economically marginalized. These costs may not always be apparent to outsiders or make national headlines. But the price we pay for racial disparities is that violent protests will continue to be an enduring feature of the US landscape. The national memory of the Ferguson riots will fade, only to be replaced by the next Ferguson-style protest. The question becomes: what are we as individuals and as a collective willing to do to eradicate the racial inequality that motivates such protest?
Heading image: Ferguson, Day 4, Photo 26 by Loavesofbread. CC-BY-SA-4.0 via Wikimedia Commons.
Beginning in the early 1920s, and continuing through the mid-1940s, record companies separated vernacular music of the American South into two categories, divided along racial lines: the “race” series, aimed at a black audience, and the “hillbilly” series, aimed at a white audience. These series were the precursors to the also racially separated Rhythm & Blues and Country & Western charts, and arguably the source of the frequent racial divisions of today’s recording industry. But a closer examination reveals that the two populations rely heavily on many of the same musical resources, and that early blues and country music exhibit thorough interpenetration.
Many admirers of early blues and country music observe that black and white musicians from the 1920s to the 1940s share much with respect to repertoire and genre, and that the separation of the two on commercial recordings grew out of the prejudices of record companies. It becomes even more apparent how deeply intertwined the two traditions are when we examine blues and country musicians’ shared stock of schemes. Schemes are preexisting harmonic grounds and melodic structures that are common resources for the creation of songs. A scheme generates multiple distinct songs, with different lyrics and titles. Many schemes generated songs in both blues and country music.
There are several different types of blues and country schemes. One type is a harmonic progression that combines with one particular tune. The “Trouble In Mind” scheme, for example, generates both Bertha “Chippie” Hill’s “Trouble in Mind” (1) and the Hackberry Ramblers’ “Fais Pas Ça” (2). Both use the same harmonic progression, and the two melodies differ only slightly. Hill recorded for the “race” series, and the Hackberry Ramblers for the “hillbilly” series.
1. Bertha “Chippie” Hill, “Trouble in Mind” (Bertha “Chippie” Hill—Document Records)
2. Hackberry Ramblers, “Fais Pas Ça” (Jolie Blonde—Arhoolie Productions)
A second type of scheme is a preexisting harmonic progression that musicians associate primarily with a specific tune, which they set to lyrics about various subjects, but which they also use to support original melodies. In the “Frankie and Johnny” scheme, the same melody combines with lyrics about Frankie’s shooting of Johnny (or Albert) (3), the Boll Weevil infestation at the turn of the twentieth century (4), and the gambler Stack O’Lee, who shot and killed fellow gambler Billy Lyons (5). Singers also use the harmonic progression to support original melodies, with lyrics about Frankie (6), Stack O’Lee (7), or another subject (8).
In all of the examples, the same correspondence between lyrics and harmony is evident in the harmonic shift that accompanies the completion of the opening rhyming couplet, on the words “above” (3), “your home” (4), “road” (5), “beer” (6), the first “Stack O’Lee” (7), and “that line” (8), and in the harmonic shifts that accompany emphasized words in the refrain, on the words “man” and “wrong” (3, 5, and 6), “no home” and “no home” (4), “bad man” and “Stack O’Lee” (7), and “bad” and “bad” (8). Four of the recordings given here are from the “race” labels, and two are from the “hillbilly” labels, but the same scheme generates all of them.
3. Jimmie Rodgers, “Frankie and Johnny” (The Essential Jimmie Rodgers—Sony)
4. W. A. Lindsey, “Boll Weevil” (People Take Warning—Tomkins Square)
5. Ma Rainey, “Stack O’Lee Blues” (Ma Rainey’s Black Bottom—Yazoo)
7. Mississippi John Hurt, “Stack O’Lee” (Before the Blues—Yazoo)
8. Henry Thomas, “Bob McKinney” (Texas Worried Blues—Document Records)
A third type of scheme is a preexisting harmonic progression that musicians use primarily to support original melodies. This type of scheme is the most productive, often supporting countless melodies. The best-known and most productive scheme of this type is the standard twelve-bar blues (a schematic of the progression follows the list of recordings below). All seven of the following recordings (9–15)—four from the “race” series and three from the “hillbilly” series—contain original melodies combined with the standard twelve-bar blues harmonic progression, and all demonstrate the AAB poetic form that typically combines with the scheme, in which singers state the opening A line of a couplet twice and follow it with one statement of the rhyming B line.
9. Ida Cox, “Lonesome Blues” (Ida Cox Complete Recorded Works—Document Records)
10. Charley Patton, “Moon Going Down” (Charlie Patton Founder of the Delta Blues—Mastercopy Pty Ltd)
11. Jesse “Babyface” Thomas, “Down in Texas Blues” (The Stuff that Dreams are Made Of)
12. Lonnie Johnson, “Mr. Johnson’s Blues No. 2” (A Smithsonian Collection of Classic Blues Singers—Sony/Smithsonian)
13. W. Lee O’Daniel & His Hillbilly Boys, “Dirty Hangover Blues” (White Country Blues—Sony)
14. Jesse “Babyface” Thomas, “Down in Texas Blues” (White Country Blues—Sony)
15. Carlisle & Ball, “Guitar Blues” (White Country Blues—Sony)
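For reference, here is one common form of the standard twelve-bar progression with its AAB lyric placement. This is a schematic rather than a transcription of any of the recordings above, and early performances often vary the changes (in bars 2, 10, and 12 especially):

Bars 1–4: I – I – I – I (A: opening line of the couplet)
Bars 5–8: IV – IV – I – I (A: opening line repeated)
Bars 9–12: V – IV – I – I (B: rhyming line)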
A fourth type of scheme is a preexisting melodic structure whose harmonizations display considerable variance yet also meet certain requirements. The following four examples—two by black musicians and two by white musicians—are all realizations of the “Sitting on Top of the World” scheme, and use the same melodic structure. Their harmonizations are in some ways quite similar—for example, all four harmonize the beginning of the second, rhyming line with the same harmony, and accelerate the rate of harmonic change going into the cadence—but the harmonizations vary more than the melodic structure does.
16. Tampa Red, “Things ‘Bout Coming My Way No. 2” (Tampa Red the Guitar Wizard—Sony)
17. Bill Broonzy, “Worrying You Off My Mind” (Big Bill Broonzy Good Time Tonight—Sony)
18. Bob Wills & His Texas Playboys, “Sittin’ on Top of the World” (Bob Wills & His Texas Playboys Anthology—Puzzle Productions)
19. The Carter Family, “I’m Sitting on Top of the World” (On Border Radio—Arhoolie)
Finally, a fifth type of scheme is a preexisting melodic structure for which performers have little shared conception of the harmonic progression. The last four examples—one by a black musician and three by white musicians—are all realizations of the “John Henry” scheme, and use the same melodic structure, but very different harmonic progressions. Riley Puckett, in his instrumental version, uses only one harmony throughout (20). Woody Guthrie uses two harmonies (21). The Williamson Brothers & Curry also use two harmonies, but arrive at a much different harmonization than Guthrie (22). Leadbelly uses three harmonies (23).
20. Riley Puckett, “A Darkey’s Wail” (White Country Blues—Sony)
Record companies presented American vernacular music in the context of a racial divide, but examining the common stock of schemes helps to reveal how extensively black and white musical traditions are intertwined. There are stylistic differences between blues and country music, but many differences lie on the surface, while on a deeper level the two populations frequently rely on the same musical foundations.
The business press and general media often lament that firm executives are exhibiting “short-termism”, succumbing to pressure from stock market investors to maximize quarterly earnings while sacrificing long-term investments and innovation. In our new article in the Socio-Economic Review, we suggest that this complaint is partly accurate, but partly not.
What seems accurate is that the maximization of short-term earnings by firms and their executives has become somewhat more prevalent in recent years, and that some of the roots of this phenomenon trace back to stock market investors. What is inaccurate, though, is the assumption that investors – even if they were “short-term traders” – would inherently attend to short-term quarterly earnings when making trading decisions. Even “short-term trading” (i.e., buying stocks with the aim of selling them after a few minutes, days, or months) does not equal or necessitate a “short-term earnings focus”, i.e., making trading decisions based on short-term earnings (let alone based on short-term earnings only). This means that when the media observe – or executives perceive – that firms are pressured by stock market investors to focus on short-term earnings, such pressure is, in part, illusory.
The illusion, in turn, is based on the phenomenon of a “vociferous minority”: a minority of stock investors may focus on short-term earnings, causing a weak correlation between short-term earnings and stock price jumps or drops. The illusion is born when this gets interpreted as if most or all investors (i.e., the majority) were focusing on short-term earnings only. Alas, in dynamic markets such an interpretation may lead to a self-fulfilling prophecy, whereby an increasing number of investors join the vociferous minority and focus increasingly on short-term earnings (even if investors focusing only on short-term earnings still do not constitute a majority). And more importantly – or more unfortunately – firm executives may start to increasingly maximize short-term earnings too, under the (inaccurate) illusion that the majority of investors prefer it.
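To make the feedback loop concrete, here is a toy simulation of the dynamic sketched above. It is purely illustrative and not a model from our article: the investor count, the adoption rate, and the assumption that the visible price impact of an earnings surprise is proportional to the current short-term fraction are all invented for the sketch.

```python
# Toy illustration (not a model from the article): how a "vociferous
# minority" of short-term-focused investors can snowball into a majority
# once others misread the minority's price impact as market-wide sentiment.
import random

random.seed(0)

N = 1000              # total number of investors (assumed)
short_term = 0.10     # initial fraction focused on quarterly earnings (assumed)
adoption_rate = 0.3   # propensity to join the minority after misreading (assumed)

for period in range(20):
    # Stylized assumption: the visible price reaction to an earnings
    # surprise is proportional to the fraction trading on short-term earnings.
    price_impact = short_term
    # Each long-term investor may misattribute that price impact to
    # "the whole market" and switch to a short-term earnings focus.
    long_term_investors = int(N * (1 - short_term))
    switchers = sum(
        1 for _ in range(long_term_investors)
        if random.random() < adoption_rate * price_impact
    )
    short_term = min(1.0, short_term + switchers / N)
    print(f"period {period:2d}: short-term fraction = {short_term:.2f}")
```

Even a small initial minority snowballs in this sketch, because each round of misreading enlarges the minority, and hence the price impact that the next round misreads.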
A final paradox is the role of the media. Of course, the media have good intentions in lamenting short-termism in the markets, trying to draw attention to an unsatisfactory state of affairs. However, such lamenting stories may actually contribute to the emergence of the self-fulfilling prophecy: despite their lamenting tone, the articles still emphasize that market participants focus just on short-term earnings. This contributes to the illusion that all investors focus on short-term earnings only – which in turn may lead a bigger majority of investors and firms to actually join the minority’s bandwagon, in the illusion that everyone else is doing so too.
Should the media do something different, then? Well, we suggest that in this case the media should report more “positive stories” – cases in which firms have managed to create great innovations with a patient, longer-term focus. The media could also report on the increasing number of investors looking at alternative, long-term measures (such as patents or innovation rates) instead of short-term earnings.
So, more stories like this one about Rolls-Royce – but without claiming or lamenting that most investors just want “quick results” (i.e., without portraying cases like Rolls-Royce as rare exceptions). Such positive stories could, in the best scenario, contribute to a reverse self-fulfilling prophecy, whereby more and more investors, and thereafter firm executives, would shed some of the excessive focus on short-term earnings that they might currently have.
Open access (OA) publishing stands at something of a crossroads. OA is now part of the mainstream. But with increasing success and increasing volume come increasing complexity, scrutiny, and demand. There are many facets of OA which will prove to be significant challenges for publishers over the next few years. Here I’m going to focus on one — licensing — and discuss how the arguments seen over licensing in recent months shine a light on the difference between OA as a movement, and OA as a reality.
Today’s authors face a number of conflicting pressures. Publish in a high impact journal. Publish in a journal with the correct OA options as mandated by your funder. Publish in a journal with the correct OA options as mandated by your institution. Publish your article in a way which complies with government requirements on research excellence. They are then met by a wide array of options, and it’s no wonder we at OUP sometimes receive queries from authors confused as to which OA option they should choose.
One of the most interesting aspects of the various surveys Taylor & Francis (T&F) have conducted on open access over the past year or two has been the divergence between what authors say they want and what their funders/governments mandate. The T&F findings imply that, whilst there is generally a shared consensus as to what is meant by accessible, there are divergent positions and preferences between funders and researchers as to what constitutes reasonable reuse. T&F’s surveys always reveal the most restrictive licences in the Creative Commons (CC) suite, such as Creative Commons Attribution Non-Commercial No-Derivs (CC BY-NC-ND), to be the most popular, with the liberal Creative Commons Attribution (CC BY) licence coming in last. This squares neither with the mandates of funders, which are usually (but not always) pro-CC BY, nor with author behaviour at OUP, where CC BY-NC-ND usually comes in a resounding third behind CC BY and CC BY-NC where it’s available. It’s not a dramatic logical step to think that proliferation may lead to confusion, but given the conflicting evidence and demand, and the potential for change, it’s logical for publishers to offer myriad options. At the same time, elsewhere in the OA space we have a recent example of pressure to remove choice.
In July 2014, the International Association of Scientific, Technical and Medical Publishers (STM) released their ‘model licences’ for open access. These were, at their core, a series of alternatives for, and extensions to, the terms of the established CC licences. STM’s new addition did not go down well in OA circles, and a ‘Global Coalition’ subsequently called for their withdrawal. One of the interesting elements of the Coalition’s call was that, in amongst some very valid points about interoperability, etc., it fell back on the kind of language more commonly associated with a sermon to make the STM actions seem incompatible with some fundamental precepts about the practice of science: “let us work together in a world where the whole sum of human knowledge… is accessible, usable, reusable, and interoperable.” At root, it could be interpreted that the Coalition was using positive terminology to frame an essentially negative action – barring a new entry to the market. Personally, I don’t have a strong opinion on the new STM licences. We don’t have any plans to adapt them at OUP (we use CC). But it was odd and striking that rather than letting a competitor to the CC status quo exist and in all likelihood fail, some serious OA players felt the need to call for that competitor’s withdrawal.
This illustrates one of the central challenges of the dichotomy of OA. On one hand you have OA as a political movement seeking to replace commercial interests with self-organized and self-governed communities of interest – a bottom-up aspiration for the common good, often suggested to be applied in quite restricted ways, usually adhering to the Berlin, Budapest, and Bethesda declarations. On the other you have OA as a top-down pragmatic means to an end, aiming to improve the flow of research and, by extension, economic performance. The OA pragmatist might suggest that it’s fine for an author to be given the choice of liberal or less liberal OA licences, as long as they meet the basic criteria of being free to read and easy to re-use. The OA dogmatist might only be satisfied with the most liberal licence, and with OA on the terms they’ve come to believe are the correct interpretation of its core precepts. The danger of this approach is that there is a ‘right’ and a ‘wrong’ and, as can be seen from the language of the Global Coalition in responding to the STM licences, that can very easily translate into: “If you’re not with us, you’re against us.”
Against this backdrop, publishers find themselves in a thorny position. Do you (a) respect author choice, but possibly at some expense of simplicity, or do you (b) offer fewer options, but potentially leave members of the scholarly community feeling dissatisfied or disenfranchised by your standard option?
Oxford University Press at the moment chooses option (a), as we feel this is the more inclusive way to proceed. To me at least it feels right to give your customers choice. But there is an argument for streamlining processes, avoiding confusion, and giving users consistent knowledge of what to expect. Nature Publishing Group (NPG), for example, recently announced that as part of their move to full OA for Nature Communications they would be making CC BY their default, and only allowing other options on request. This is notable inasmuch as it’s a very strong steer in a particular direction, while not ruling out everything else. NPG has done more than most to examine the choice issue – changing the order of their licences to see what authors select, sometimes varying charges, and so on. Empirical evidence such as this is essential for a viable and credible resolution to the future of OA licensing. Perhaps the Global Coalition should have given a more considered and less emotional response to the STM licences. Was repudiation necessary in a broad OA community which should be able to recognise and accept different variants of OA? It would be a shame if all the positive impacts of open access for the consumer came hand in hand with a diminution of scholarly freedom for the producer.
The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George Yates, Reggie’s great grandfather, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short-lived, it appears that Dorothy’s family and the wider community in which she lived regarded it as a respectable union, and no social stigma was attached to her or to Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection and of categorical uncertainty is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love.” By “endeavor[ing] to trace its imperfections, its perversions,” was the answer of Martinican philosopher and anticolonial revolutionary Frantz Fanon. While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine: epilepsy, for example, “the sacred disease” of Hippocrates, or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law, and medicine, conveyed in the cuneiform script, impressed into clay tablets with reeds. This was the earliest form of writing, which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire, cuneiform was replaced by Aramaic and simpler alphabetic scripts, and was only deciphered by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts held in museums in London, Paris, Berlin, and Istanbul, we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression, and anxiety. For example, they described most of the common seizure types we know today (tonic-clonic, absence, focal motor, etc.), as well as auras, post-ictal phenomena, and provocative factors (such as sleep or emotion), and even gave a comprehensive account of the schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord, or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders, such as stroke or facial palsy, had a recognised physical basis requiring the attention of the physician, or asû, using a plant- and mineral-based pharmacology. Most disorders, such as epilepsy, psychoses, and depression, were regarded as supernatural, due to evil demons and spirits or the anger of personal gods, and thus required the intervention of the priest, or ašipu. Other disorders, such as OCD, phobias, and psychopathic behaviour, were viewed as a mystery yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these otherwise objective Babylonian descriptions of neuropsychiatric disorders is any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts and sadness in depression. Such subjective phenomena only became a field of description and enquiry in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness”, a concept that only became the province of a professional medical discipline, psychiatry, in the last 200 years.
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always chime exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc., and not necessarily the one with the best open access options. So it was that the NIH found it was getting a very low rate of compliance with its initially voluntary OA policy for authors. What happened next was hugely significant for the progress of open access. As part of an Act passed through the US legislature in 2008, it was made mandatory for all NIH-funded authors to make their works available within 12 months of publication. This was transformative in two ways: it meant thousands of articles from NIH-funded research became available through PubMed Central (PMC), and, perhaps just as importantly, it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher, and since its inception in 2000 it was closely watched in the industry to see whether it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was proving to be more than just an interesting sideline. It also reflected the big players in the industry starting to take OA very seriously, and it has been followed by other deals – for example, Nature Publishing Group’s investment in Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has been accompanied by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC it has also massively increased its fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Top journals have typically had their niche and been selective: a journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if the work was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review is methodological (i.e. asking whether the basic science is sound) rather than judging novelty or importance. This enabled PLOS ONE to rapidly become the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA that enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal; the price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals, and more than any one title it has energised the development of OA, dispelling previously held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and to the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt: through the 2014 Higher Education Funding Council for England (HEFCE) policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers, as it directly ties the assessment of a university’s funding to its success in ensuring its authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started up a new type of research publication at Political Analysis, Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, and comments on or extensions of previous work. Letters are peer reviewed and subject to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and be brief: only 2-4 pages, or roughly 1,500-3,000 words.
Political Analysis is taking this new direction to publish important results that do not traditionally fit in the longer article format that is currently the standard in the social sciences, but that fit well with the shorter format often used in the natural sciences to convey important new findings. In this regard, the role models for Political Analysis Letters are the similar formats used in top general-interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers will also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles, and it will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provide us with an opportunity to create an outlet for important work in Political Methodology. They also give us the opportunity to develop a new format that we hope will enhance the quality and speed of academic debates in the social sciences.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature, what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh-cut grass, or force ourselves to be still to listen for the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress and improving mood to increasing attentional capacity and fostering social bonds. A variety of work supports nature being healing and health-promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic, or perhaps the hum of your computer? Are you smelling circulated air? As I write I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell of my morning coffee. There is no nature here except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing it; it is experiencing it fully and being immersed in a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and for turning built environments into therapeutic spaces. The more nature-like an indoor space is, the more apt it is to elicit the same psychological and physical benefits as being outdoors. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and helps me feel the same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up indoors, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyles, we cannot always afford the time and money to step away. Integrating nature into our indoor environments is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health-promoting and healing properties of nature, we can impress upon individuals the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
World Anaesthesia Day commemorates the first successful demonstration of ether anaesthesia at the Massachusetts General Hospital on 16 October 1846. This was one of the most significant events in medical history, enabling patients to undergo surgical treatments without the associated pain of an operation. To celebrate this important day, we are highlighting a selection of British Journal of Anaesthesia podcasts so you can learn more about anaesthesia practices today.
Fifth National Audit Project on Accidental Awareness during General Anaesthesia
Accidental awareness during general anaesthesia (AAGA) is a rare but feared complication of anaesthesia. Studying such rare occurrences is technically challenging, but following in the tradition of previous national audit projects, the results of the fifth national audit project (NAP5) have now been published, receiving attention from both the academic and national press. In this BJA podcast Professor Jaideep Pandit (NAP5 Lead) summarises the results and main findings of another impressive and potentially practice-changing national anaesthetic audit. Professor Pandit highlights areas of AAGA risk in anaesthetic practice, discusses some of the factors (both technical and human) that lead to accidental awareness, and describes the review panel’s findings and recommendations to minimise the chances of AAGA.
October 2014 || Volume 113 – Issue 4 || 36 Minutes
Emergency airway management in trauma patients is a complex and somewhat contentious issue, with opinions varying on both the timing and delivery of interventions. London’s Air Ambulance is a service specialising in the care of severely injured trauma patients at the scene of an accident, and has produced one of the largest data sets focusing on pre-hospital rapid sequence induction (RSI). Professor David Lockey, a consultant with London’s Air Ambulance, talks to the BJA about LAA’s approach to advanced airway management, which patients benefit from pre-hospital anaesthesia, and the evolution of RSI algorithms. Professor Lockey goes on to discuss induction agents, describes how to achieve a 100% success rate for surgical airways, and explains why too much choice can be a bad thing, as he gives us an insight into the exciting world of pre-hospital emergency care.
August 2014 || Volume 113 – Issue 2 || 35 Minutes
Fluid responsiveness: an evolution in our understanding
Fluid therapy is a central tenet of both anaesthetic and intensive care practice, and has been a solid performer in the medical armamentarium for over 150 years. However, mounting evidence from both surgical and medical populations is starting to demonstrate that we may be doing more harm than good by infusing solutions of varying tonicity and pH into the arms of our patients. As anaesthetists we arguably monitor our patients’ responses to fluid-based interventions more closely than most, but in emergency departments and on intensive care units this monitoring may be unavailable or misleading. For this podcast Dr Paul Marik, Professor and Division Chief of Pulmonary Critical Care at Eastern Virginia Medical School, delivers a masterclass on the physiology of fluid optimisation, tells us which monitors to believe and, importantly, under which circumstances, and reviews some of the current literature and thinking on fluid responsiveness.
April 2014 || Volume 112 – Issue 4 || 43 Minutes
Post-operative Cognitive Decline
Post-operative cognitive decline (POCD) has been detected in some studies in up to 50% of patients undergoing major surgery. With an ageing population and an increasing number of elective surgeries, POCD may represent a major public health problem. However, POCD research is complex and difficult to perform, and the current literature may not tell the full story. Dr Rob Sanders from the Wellcome Department of Imaging Neuroscience at UCL talks to us about the methodological limitations of previous studies and the important concept of a cognitive trajectory. In addition, Dr Sanders discusses the risk factors for, and the role of inflammation in, brain injury, and raises the possibility that certain patients may in fact undergo post-operative cognitive improvement (POCI).
March 2014 || Volume 112 – Issue 3 || 20 Minutes
Needle Phobia – A Psychological Perspective
For anaesthetists, intravenous cannulation is the gateway procedure to an increasingly complex and risky array of manoeuvres, and as such becomes more a reflex arc than a planned motor act. For some patients, however, that initial feeling of a needle penetrating epidermis, dermis, and then vessel wall is a dreaded event, and the cause of more anxiety than the surgery itself. Needle phobia can be a deeply debilitating condition, causing patients not to seek help even in the most dire circumstances. Dr Kate Jenkins, a hospital clinical psychologist, describes both the psychology and physiology of needle phobia, what we as anaesthetists need to be aware of, and how we can better serve our patients, for whom ‘just a small scratch’ may be their biggest fear.
July 2014 || Volume 113 – Issue 1 || 32 Minutes
They might be short-lived — but between the time a bubble is born (Fig 1 and Fig 2a) and pops (Fig 2d-f), the bubble can interact with surrounding particles and microorganisms. These interactions not only influence the performance of bioreactors, but can also disseminate particles, minerals, and microorganisms throughout the atmosphere. The interaction between microorganisms and bubbles has been appreciated in our civilizations for millennia, most notably in fermentation. During some of these metabolic processes, microorganisms create gas bubbles as a byproduct. Indeed, the interplay of bubbles and microorganisms is captured in the origin of the word fermentation, which is derived from the Latin word ‘fervere’, or to boil. More recently, the importance of bubbles in the transfer of microorganisms has been appreciated. In the 1940s, scientists linked red tide syndrome to toxins aerosolized by bursting bubbles in the ocean. Other, more deadly illnesses, such as Legionnaires’ disease, have been linked since.
Bubbles are formed whenever gas is completely surrounded by an immiscible liquid. This encapsulation can occur when gas boils out of a liquid or when gas is injected or entrained from an external source, such as a breaking wave. The liquid molecules are attracted to each other more than they are to the gas molecules, and this difference in attraction leads to a surface tension at the gas-liquid interface. This surface tension minimizes surface area so that bubbles tend to be spherical when they rise and rapidly retract when they pop.
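As a back-of-the-envelope illustration (my own, not from the original post), the Young–Laplace relation quantifies this squeeze of surface tension: the pressure inside a gas bubble of radius \(R\), sitting in a liquid with surface tension \(\gamma\), exceeds the outside pressure by

\[
\Delta P \;=\; \frac{2\gamma}{R} \;\approx\; \frac{2 \times 0.072\ \mathrm{N/m}}{10^{-3}\ \mathrm{m}} \;\approx\; 144\ \mathrm{Pa}
\]

for a millimetre-scale air bubble in clean water (taking \(\gamma \approx 0.072\ \mathrm{N/m}\)). The smaller the bubble, the larger this excess pressure, which is one reason small rising bubbles hold their spherical shape so stubbornly.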
When microorganisms are near a bubble, they can interact in several ways. First, a rising bubble can create a flow that can move, mix, and stress the surrounding cells. Second, some of the gas inside the bubble can dissolve into the surrounding fluid, which can be important for respiration and gas exchange. Microorganisms can likewise influence a bubble by modifying its surface properties. Certain microorganisms secrete surfactant molecules, which like soap move to the liquid-gas interface and can locally lower the surface tension. Microorganisms can also adhere and stick on this interface. Thus, a submerged bubble travelling through the bulk can scavenge surrounding particulates during its journey, and lift them to the surface.
When a bubble reaches a surface (Figure 2c), such as the air-sea interface, it creates a thin, curved film that drains and eventually pops. In Figure 3, a sequence of images shows a bubble before (Fig 3a), during, and after rupture (Fig 3b). The schematic diagrams displayed in Fig 2c-f complement this sequence. Once a hole nucleates in the bubble film (Fig 2d), surface tension causes the film to rapidly retract and centripetal acceleration acts to destabilize the rim so that it forms ligaments and droplets. For the bubble shown, this retraction process occurs over a time of 150 microseconds, where each microsecond is a millionth of a second. The last image of the time series shows film drops launching into the surrounding air. Any particulates that became encapsulated into these film droplets, including all those encountered by the bubble on its journey through the water column, can be transported throughout the atmosphere by air currents.
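To get a feel for the speed of this retraction, a standard estimate (my own, with an assumed film thickness rather than a figure from the study) is the Taylor–Culick velocity for a thin film of thickness \(h\) and density \(\rho\):

\[
v \;=\; \sqrt{\frac{2\gamma}{\rho h}} \;=\; \sqrt{\frac{2 \times 0.072\ \mathrm{N/m}}{1000\ \mathrm{kg/m^3} \times 10^{-6}\ \mathrm{m}}} \;\approx\; 12\ \mathrm{m/s}.
\]

At roughly 12 m/s, the rim of a micrometre-thick water film would sweep about 1.8 mm in the 150 microseconds quoted above, consistent with the scale of a small bubble cap.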
Another source of droplets arises after the bubble has ruptured (Fig 3b). The events occurring after rupture are presented in the second time series of photographs. Here the time between photographs is one millisecond, or 1/1000th of a second. After the film covering the bubble has popped, the resulting cavity rapidly closes to minimize surface area. The liquid filling the cavity overshoots, creating an upward jet that can break up into vertically propelled droplets. These jet drops can also transport nearby particulates, including those scavenged by the bubble on its journey to the surface. Although both film and jet drops vary in size, jet drops tend to be bigger.
Whether for better or for worse, bubbles are ubiquitous in our everyday life. They can expose us to diseases and harmful chemicals, or tickle our palate with fresh scents and yeast aromas, such as those distinctly characterizing a glass of champagne. Bubbles are messengers that connect the depths of the waters to the air we breathe, illustrating the inherent interdependence and connectivity that we have with our surrounding environment.
The spectacular arrival of thousands of unaccompanied Central American children at the southern frontier of the United States over the last three years has provoked a frenzied response. President Obama calls the situation a “humanitarian crisis” on the United States’ borders. News interviews with these vulnerable children appear almost daily in the global news media alongside official pronouncements by the US government on how it intends to stem this flow of migrants.
But what is not yet recognised is that these children represent only the tip of the iceberg of a deeper new humanitarian crisis in the region. Of course, recent figures for unaccompanied children (UAC) arriving in the US from the three countries of the Northern Triangle of Central America (El Salvador, Guatemala, and Honduras) are alarming.
But it’s important to pull back and look at the bigger picture, which is that there has been a steep increase in border guard apprehensions of nationals from the three Northern Triangle countries – not just unaccompanied children, but adults and families as well.
The unaccompanied children we’ve been hearing so much about are not exceptional but represent just one strand (albeit a more photogenic and newsworthy strand) of a broader – and massive – increase in irregular migration to the US from El Salvador, Guatemala and Honduras.
It may be tempting to dismiss this fear of returning – “they would say that, wouldn’t they?” – but this increase is particular to the Northern Triangle, has increasingly been judged credible by US officials, and is not generally found among other asylum-seekers in the United States.
Fleeing gang violence
This official data correlates with my ESRC-funded research in El Salvador, Guatemala and Honduras last year, which identified a dramatic increase in forced displacement generated by organised crime in these countries from around 2011.
As such, the timing of the increased numbers of UACs (and adults) arriving in the US corresponds closely to the explosion of people being forced from their homes by criminal violence in the Northern Triangle. The changing violent tactics of organised criminal groups are thus the principal motor driving the increased irregular migration to the US from these countries.
In all three countries, street gangs of Californian origin such as the Mara Salvatrucha and Barrio 18 have consolidated their presence in urban locations, particularly in the poorer parts of bigger cities.
In recent years these gangs have become more organised, criminal and brutal. Thus, for instance, whereas the gangs used to primarily extort only businesses, in the last few years they have begun to demand extortion monies from many local householders as well. This shift in tactics has fuelled a surge of people fleeing their homes in zones where the gangs are present.
What is not yet properly appreciated in the current debate is that these violent criminal dynamics are generating startling levels of internal displacement within these countries. If we take El Salvador as an example, we see that in 2012 some 3,300 Salvadorian children arrived in the US and 4,000 Salvadorians claimed to fear returning home.
The number of people seeking refuge in the United States fades in significance against this new reality in the region.
Proportionally, 2.1% of El Salvadorians were forced to flee their homes in 2012 as a result of criminal violence and intimidation. Almost one-third of these people were displaced twice or more within the same period. Compare this to even the worst years of gang-related violence in Colombia, where the annual rate of internal displacement barely reached 1% of the population. Incredibly, rates of forced displacement in countries such as El Salvador thus seem to surpass those of active war zones like Colombia.
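To put that percentage in human terms (using a rough population figure of about 6.3 million for El Salvador in 2012, which is my own assumption rather than a figure from the article):

\[
0.021 \times 6{,}300{,}000 \;\approx\; 132{,}000\ \text{people displaced in a single year.}
\]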
The explosion of forced displacement caused by organised criminal groups in El Salvador, Guatemala and Honduras (not to mention Mexico) is the region’s true “humanitarian crisis”, of which the unaccompanied children are but one symptom.
Knee-jerk efforts by the US government to stop children arriving at its border miss this bigger picture and are doomed to failure. It would almost certainly be a better use of funds to help Central American governments to provide humanitarian support to the many uprooted families for whom survival in the resource-poor economies of the Northern Triangle is now an everyday struggle.
One way to increase transparency is for scholars to “preregister” their research. That is, they can write up their research plan and publish that prior to the actual implementation of their research plan. A number of social scientists have advocated research preregistration, and Political Analysis will soon release new author guidelines that will encourage scholars who are interested in preregistering their research plans to do so.
In order to facilitate further discussion of the pros and cons of research preregistration, I recently asked Jamie Monogan to write a brief essay that outlines the case for preregistration, and I also asked Joshua Tucker to write about some of the concerns that have been raised about how journals may deal with research preregistration.
* * * * *
The pros of preregistration for political science
By Jamie Monogan, Department of Political Science, University of Georgia
Study registration is the idea that a researcher can publicly release a data analysis plan prior to observing a project’s outcome variable. In a Political Analysis symposium on this topic, two articles make the case that this practice can raise research transparency and the overall quality of research in the discipline (Humphreys, de la Sierra, and van der Windt 2013; Monogan 2013).
Together, these two articles describe seven reasons that study registration benefits our discipline. To start, preregistration can curb four causes of publication bias, or the disproportionate publishing of positive, rather than null, findings:
Preregistration would make evaluating the research design more central to the review process, reducing the importance of significance tests in publication decisions. Whether the decision is made before or after observing results, releasing a design early would highlight study quality for reviewers and editors.
Preregistration would mitigate the problem of null findings that stay in the author’s file drawer, because the discipline would at least have a record of the registered study even if no publication emerged. This would convey where past research was conducted that may not have been fruitful.
Preregistration would reduce the ability to add observations until significance is achieved, because the registered design would signal the appropriate sample size in advance. Without it, a researcher can monitor the analysis and stop data collection only once a positive result emerges (see the simulation sketch after this list); a registered design would prevent that.
Preregistration can prevent fishing, or manipulating the model to achieve a desired result, because the researcher must describe the model specification ahead of time. By sorting out the best specification of a model using theory and past work ahead of time, a researcher can commit to the results of a well-reasoned model.
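To make the optional-stopping point above concrete, here is a minimal simulation sketch (my own illustration, not from the symposium articles; the sample sizes, batch size, and threshold are arbitrary assumptions). It repeatedly analyses data with no true effect, adding observations and stopping at the first p < 0.05, and shows the false-positive rate climbing well above the nominal 5% that a fixed, preregistered sample size would deliver:

```python
# Minimal sketch of why "adding observations until significance" inflates
# false positives. All sample sizes and thresholds are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ever_significant(n_start=20, n_max=200, step=10, alpha=0.05):
    """Test a true null effect, peeking at the p-value after each batch."""
    x = rng.normal(0.0, 1.0, n_start)    # true mean is 0: any "effect" is noise
    while True:
        if stats.ttest_1samp(x, 0.0).pvalue < alpha:
            return True                   # researcher stops as soon as p < .05
        if len(x) >= n_max:
            return False                  # never reached significance; give up
        x = np.concatenate([x, rng.normal(0.0, 1.0, step)])

trials = 2000
rate = sum(ever_significant() for _ in range(trials)) / trials
print(f"False-positive rate with optional stopping: {rate:.1%}")
# Typically far above 5%, versus ~5% for a single test at a preregistered n.
```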
Additionally, there are three advantages of study registration beyond the issue of publication bias:
Preregistration prevents inductive studies from being written up as deductive studies. Inductive research is valuable, but the discipline is being misled if findings that were observed inductively are reported as if they were hypothesis tests of a theory.
Preregistration allows researchers to signal that they did not fish for results, thereby showing that their research design was not driven by an ideological or funding-based desire to produce a result.
Preregistration provides leverage for scholars who face result-oriented pressure from financial benefactors or policy makers. If the scholar has committed to a design beforehand, the lack of flexibility at the final stage can prevent others from influencing the results.
Overall, there is an array of reasons why the added transparency of study registration can serve the discipline, chiefly the opportunity to reduce publication bias. Whatever you think of this case, though, the best way to form an opinion about study registration is to try it by preregistering one of your own studies. Online study registries are available, so you are encouraged to try the process yourself and then weigh in on the preregistration debate with your own firsthand experience.
* * * * *
Experiments, preregistration, and journals
By Joshua Tucker, Professor of Politics (NYU) and Co-Editor, Journal of Experimental Political Science
I want to make one simple point in this blog post: I think it would be a mistake for journals to come up with any set of standards that involves publicly recognizing some publications as having “successfully” followed their pre-registration design while identifying other publications as not having done so. This could include a special section for articles that matched their pre-registration design, an A, B, C rating system for how faithfully articles stuck to the pre-registration design, or even an asterisk for articles that passed a pre-registration faithfulness bar.
Let me be equally clear that I have no problem with the use of registries for recording experimental designs before those experiments are implemented. Nor do I believe that these registries should not be referenced in published works featuring the results of those experiments. On the contrary, I think authors who have pre-registered designs ought to be free to reference what they registered, as well as to discuss in their publications how much the eventual implementation of the experiment might have differed from what was originally proposed in the registry and why.
My concern is much narrower: I want to prevent some arbitrary third party from being given the authority to “grade” researchers on how well they stuck to their original design and then to report that grade publicly, as opposed to simply allowing readers to make up their own minds in this regard. My concerns are three-fold.
First, I have absolutely no idea how such a standard would actually be applied. Would it count as violating a pre-registered design if you changed the number of subjects enrolled in a study? What if the original subject pool was unwilling to participate for the planned monetary incentive, and the incentive had to be increased, or the subject pool had to be changed? What if the pre-registry called for using one statistical model to analyze the data, but the author eventually realized that another model was more appropriate? What if a survey question that was registered on a 1-4 scale was changed to a 1-5 scale? Which, if any, of these would invalidate the faithful application of the registry? Would all of them together? It seems to me the only truly objective way to rate compliance is an all-or-nothing approach: either you did exactly what you said you would do, or you didn’t follow the registry. Of course, then we are lumping “p-value fishing” in the same category as applying a better statistical model or changing the wording of a survey question.
This brings me to my second point, which is a concern that giving people a grade for faithfully sticking to a registry could lead to people conducting sub-optimal research — and stifle creativity — out of fear that it will cost them their “A” registry-faithfulness grade. To take but one example, those of us who use survey experiments have long been taught to pre-test questions precisely because sometimes the ideas we have when sitting at our desks don’t work in practice. So if someone registers a particular technique for inducing an emotional response, then runs a pre-test and figures out the technique is not working, do we really want the researcher to use the sub-optimal design in order to preserve their faithfulness to the registered design? Or consider a student who plans to run a field experiment in a foreign country that is based on the idea that certain last names convey ethnic identity. What happens if the student arrives in the field and learns that this assumption was incorrect? Should the student stick with the bad research design to preserve the ability to publish in the “registry faithful” section of JEPS? Moreover, research sometimes proceeds in fits and starts. If as a graduate student I am able to secure funds to conduct experiments in country A, but later as a faculty member can secure funds to replicate these experiments in countries B and C as well, should I fear including the results from country A in a comparative analysis because my original registry was for a single-country study? Overall, I think we have to be careful about assuming that we can have everything about a study figured out at the time we submit a registry design, and that there will be nothing left for us to learn about how to improve the research — or that there won’t be new questions that can be explored with previously collected data — once we start implementing an experiment.
At this point a fair critique to raise is that the points in the preceding paragraph could be taken as an indictment of registries generally. Here we venture more into a matter of point of view, but I believe there is a difference between asking people to document what their original plans were, and giving them a chance in their own words — if they choose to do so — to explain how their research project evolved, as opposed to having to deal with a public “grade” of whatever form that might take. In my mind, the former is part of producing transparent research, while the latter — however well intentioned — could prove paralyzing in terms of making adjustments during the research process or following new lines of interesting research.
This brings me to my final concern, which is that untenured faculty would end up feeling the most pressure in this regard. For tenured faculty, a publication without the requisite asterisks noting registry compliance might not end up being too big a concern — although I’m not even sure of that — but I could easily imagine junior faculty being especially worried that publications without registry asterisks could be held against them during tenure considerations.
The bottom line is that registries bring with them a host of benefits — as Jamie has nicely laid out above — but we should think carefully about how best to maximize those benefits while minimizing new costs. Even if we could agree on how to rate a proposal in terms of faithfulness to the registered design, I would suggest caution in trying to integrate such ratings into the publication process.
The views expressed here are mine alone and do not represent either the Journal of Experimental Political Science or the APSA Organized Section on Experimental Research Methods.
Heading image: Interior of Rijksmuseum research library. Rijksdienst voor het Cultureel Erfgoed. CC-BY-SA-3.0-nl via Wikimedia Commons.
After the Scottish Independence Referendum, the journalist Cathy Newman wrote of the irony that Cameron – the man with the much-reported ‘problem’ with women – in part owes his job to the female electorate in Scotland. As John Curtice’s post-referendum analysis points out, women seemed more reluctant than men to vote ‘yes’ due to relatively greater pessimism about the economic consequences of a yes vote.
The Scottish vote should remind Cameron and the Conservative strategists who advise him of a very clear message: ignore women voters at your peril.
For several decades after UK women won the right to vote, Conservatives could rely on women’s votes, and the gender gap in voting was consistently in double figures. However, in recent decades this gap has diminished, particularly amongst younger women, and party competition to mobilize female voters has become more important. Of course, women voters have many diverse interests, but understanding the concerns of different groups of women voters is crucial, as female voters often make their voting decisions closer to the election.
So what does Cameron need to do to firmly secure women’s votes at the general election? We argue the Conservative Party needs to make sure it represents women descriptively, substantively, and symbolically. On all three counts we see problems with Cameron’s strategy to win women’s votes.
Pre-election rhetoric and pledges to feminise the party through women’s descriptive representation have not been matched with clear and tangible outcomes. Cameron tried to increase the number of women MPs, but the share of women among Conservative MPs in the House of Commons is still just 16%. As the latest Sex and Power Report highlights, this looks unlikely to increase significantly at GE2015, as so few women have been selected to stand in safe Conservative seats despite the campaigning and support work undertaken by Women2Win.
Even where Cameron has strong power and autonomy to improve women’s presence – by fulfilling his pledge that one-third of his government would be women by the end of parliament – he has managed just 22%. Last July’s reshuffle did not erase the impression that women are not included at Cameron’s top table.
Cameron’s Conservatives in government also lack the institutional capacity to get policies right for women. There are still not enough women in strategically significant places. For example, the Coalition ‘Quad’ of Cameron, Osborne, Clegg, and Alexander controls policy making. The gender equality machinery set up by the last government to monitor and address gender inequality in a strategic and long-term way has been stripped out. Even at the emergency post-referendum meeting at Chequers to discuss the UK’s constitutional future, there was just one woman at the table.
Although the gender gap in voting, which currently favours Labour, is likely to narrow as the election approaches, the Conservatives have, we argue, inflicted significant psephological damage on themselves in their strategies to attract women’s votes: by not promoting women into politics, by not protecting women from austerity, and by stripping out the governmental institutions which give voice to women and promote gender equality.
Cameron’s political face may have been saved by Scottish women last month, but for the reasons outlined in this blog post we suggest that, in the critical contest for women’s votes at the 2015 general election, there are long-standing weaknesses in the Conservative Party’s strategy for mobilising women voters and restoring the Party’s historical dominance among them.
It is well known that obesity rates have been increasing around the Western world.
In the United States, obesity prevalence was below 20% in every state in 1994. By 2010, it was greater than 20% in all states, and 12 states had a prevalence of 30% or more. Among American children aged 2-19, approximately 17% were obese in 2011-2012. In the UK, the prevalence of obesity has been similar to the US numbers. Between 1993 and 2012, the prevalence increased from 13.2% to 24.4% for men and from 16.4% to 25.1% for women. Obesity prevalence is around 18% for children aged 11-15 and 11% for children aged 2-10.
Policy makers, researchers, and the general public are concerned about this trend because obesity is linked to an increased likelihood of health conditions such as diabetes and heart disease, among others. The increase in obesity prevalence among children is of particular concern because obesity during childhood may increase the likelihood of being obese as an adult, thereby leading to even higher rates of these health conditions in the future.
Researchers have investigated many possible causes for this trend including lower rates of participation in physical activity and easier access to fast food. Anderson, Butcher, and Levine (2003) identified maternal employment as a possible culprit when they noticed that in the US the timing of these two trends was similar. While the prevalence of obesity was increasing for children so was the employment rate of mothers. Other researchers have found similar results for other countries – more hours of maternal employment is related to a higher likelihood of children being obese.
What could be the relationship between a mother’s hours of work and childhood obesity? When mothers work they have less time to devote to activities around the home, which may mean less concern about nutrition, more meals eaten outside of the home or less time devoted to physical activities. On the other hand, more maternal employment could mean more income and an ability to purchase more nutritious food or encourage healthy activities for children.
We looked at this relationship for Canadian children aged 12-17 years – an older group than studied in earlier papers. For youths aged 12 to 17 in Canada, the obesity prevalence was 7.8% in 2008. We analysed not only the relationship between maternal employment and child obesity, but also the possible reasons why maternal employment may affect child obesity.
We find that the effect of hours of work differs from the effect of weeks of work. More hours of maternal work are related to behaviours we expect to be associated with higher rates of obesity – more television viewing, a lower likelihood of eating breakfast daily, and a higher allowance. On the other hand, more weeks of maternal employment are related to behaviour expected to lower obesity – less television viewing and more physical activity. This difference between hours and weeks of work raises some interesting questions. How do families adapt to different aspects of the labour market? When mothers work for more weeks, does this indicate a more regular attachment to the labour force? Do these families have schedules and routines that allow them to manage their child’s weight?
Unlike other studies that focus on younger children, we do not find a relationship between maternal employment and likelihood of obesity for adolescents. Does the impact of maternal employment at younger ages not last into adolescence? Is adolescence a stage during which obesity status is difficult to predict?
The debate over appropriate policy remedies should not focus on whether mothers should work, but rather on what children are doing while mothers are working. What can be done to reduce the obesity prevalence in adolescents? Some ideas include working with the education system and local communities to create an environment for adolescents that fosters healthy weight status, supporting families with quality childcare, providing viable and high-quality alternative activities, and offering flexible work hours. Programs or policies that help families establish a healthy routine are important. It may not simply be a case of providing activities for adolescents, but of making those activities easy for families to attend on a regular basis.
Traditionally framed by the news media as a debate pitting moral and religious objections against equal rights, marriage equality is just one of a range of civil rights issues that remain important to members of the LGBT community. Many of these issues, including employment nondiscrimination, second-parent adoption, and open service in the military, have been eclipsed by the almost singular focus on marriage equality by interest groups, the media, and public opinion pollsters.
As the chart below shows, 51.5% of Americans expressed support for firing known homosexual teachers when Pew first started collecting data in 1987. By 2012, only 21% of Americans still expressed support for the practice.
Who are the 21%?
These individuals — the 21% — are what researchers call the hard core, those who retain minority political viewpoints in the face of majority opposition. As the results of the data analysis show, this 21% or the hard core tend to be older males who are less educated, more religious, more conservative in their politics, and more likely to have old-fashioned values when it comes to marriage and family.
The analyses look at what factors explain variation in support for employment discrimination over time. Not surprisingly, the influence of religious and ideological value predispositions matters most. Demographics (e.g., gender, age, and level of education) are also important, as are key cultural values like holding old-fashioned views on marriage and family. Much like in the same-sex marriage debate, partisanship (e.g., being a Democrat vs. a Republican) has waned in importance over time and is no longer a significant factor driving opinions after 2002.
When it comes to change over time, the results show that the influence of year or time matters more between 2002 and 2012 than between 1987 and 2002, indicating that, much like in the same-sex marriage debate, the pace of change on this issue has accelerated in recent years.
Thus while we’ve been primarily focusing our attention on marriage equality, opinions have shifted on other LGBT civil rights issues as well.
So while opinions may have shifted, just as they have on marriage equality, and while President Obama has continued his “evolution” on issues of gay rights, federal legislation on employment nondiscrimination still lags behind.
Headline image credit: Two women at sunset. CC0 Public Domain via Pixabay.
Since World War II, homeownership has developed into the major tenure in almost all European countries. This democratization of homeownership has turned owned homes from luxury items available to a lucky few into widely shared and attainable life goals. In the general perception, owning is often associated with better homes with larger gardens, in better neighbourhoods with better schools. To rent, in contrast, is considered pouring money down the drain. Therefore, especially as people marry and plan children, homeownership becomes the preferred choice of tenure. This choice has been strongly subsidized by governments and has become the norm in countries such as Australia, Britain, Belgium, and the United States. Once people have better jobs or more children, they move to ever bigger and better homes. This has been described as moving up the housing ladder.
However, the underlying idea of a stable, married family – which has been the standard convention for most of the twentieth century – is outdated. Many (though declining numbers of) marriages end in separation today. Besides the emotional turmoil that marital separation causes, this event has profound effects on both ex-partners’ chances of remaining in homeownership. Generally, at least one, if not both, partners will leave the previously shared dwelling. As separation often involves a loss of financial resources, people may have a hard time re-entering homeownership. After falling out of love and separating, a fall down the housing ladder may follow, as we show in a study recently published in European Sociological Review.
How drastic this fall will be depends very much on the housing market environment (see Figures 1 and 2). In the past in Britain, easy access to housing finance and high supply facilitated (re-)entry into homeownership for ex-partners, even under the house price inflation of the 1990s and early 2000s. In tight housing markets ex-partners will face more difficulties, and once access to mortgages becomes restricted, as happened in Britain after the recent crash in the housing market, problems may arise. So in the past British ex-partners could return to homeownership at some point in their lives because access to mortgages was easy – and they needed to return because alternatives in the private and social rental sector were and are unattractive. This may no longer work in future. Ex-partners may increasingly face the same problems that new market entrants currently encounter, for whom the term generation rent has already been coined.
To better understand what may happen to British ex-partners, we can consider the example of Germany. The German housing market is in many ways different from the British one, not least because private rental accommodation is an attractive alternative to homeownership. Access to mortgages is also more restricted than in Britain, even after the recent tightening of regulations there. High down payments are the rule in Germany. In this market environment, homeownership is a once-in-a-lifetime opportunity for many, while a considerable share of people will never enter homeownership. After separation, very few Germans will be able to return to homeownership (see Figure 2). Ex-partners will be less likely to be homeowners throughout their post-separation lives. This scenario may foreshadow the British situation in the near future.
Being excluded from homeownership in the German context is not as consequential as it may turn out to be in Britain, however. First, more Germans than Britons will accept renting after separation, because attractive and, above all, secure accommodation is available at costs that are reasonable by international standards. Second, the German public pension system is relatively generous for those who have worked continuously throughout their lives, so building up private wealth as a cushion for old age is not as necessary as in Britain. In Britain, where individuals are expected to invest privately in financial products and property to build an individual safety net – an idea called asset-based welfare – people who experience a separation may lose this safety net. This may result in stark disparities in old age between the separated and those who remain married.
Homeownership may offer many advantages for families. At the same time, it is a long-term investment that does not necessarily fit well with the dynamics of modern partnership and family life. Everybody needs suitable and secure accommodation. Such accommodation may sometimes be better provided in the private and social rental sector, which, as the German case shows, need not mean less security or quality than homeownership. To make this work, people need decent options outside of the housing market to build up a safety net for rainy days. However, people should also have a reasonable choice of tenure, which is not currently the case for many ex-partners in Germany.
As the domestic violence controversy in the NFL has captured the attention of fans and global media, it has become the league’s No. 1 off-field issue. To gain further perspective on domestic violence and the current NFL situation, I spoke with Greta Friedemann-Sánchez, PhD, and Rodrigo Lovatón, authors of the article “Intimate Partner Violence in Colombia: Who Is at Risk?,” published in Social Forces, which explores the prevalence of intimate partner violence and the risk factors that increase its likelihood.
What do you think of the recent media coverage of domestic violence in the NFL?
In 2010, the Centers for Disease Control and Prevention (CDC) estimated that in the United States 24% of women and 13% of men have experienced severe physical violence by an intimate partner at some point during their lives. Furthermore, the Bureau of Justice Statistics (Department of Justice) calculates that domestic violence accounted for 21% of all violent victimizations between 2003 and 2012, and about 1.5 million cases in 2013. If emotional abuse and stalking are taken into account, the prevalence rates increase. In some countries the prevalence is even higher. In Colombia, for example, 39% of women have experienced physical violence in their lifetimes. The recent media coverage of domestic violence shows that this is an important policy issue that has not received adequate attention in the United States or internationally. Unfortunately, this is a missed opportunity to educate the public on the high prevalence rates and the negative effects domestic violence has, not only for the victim but for all the members of a family. Equally invisible in the coverage is the fact that domestic violence is an “equal opportunity” event, meaning that it is present in families regardless of socioeconomic status, race, ethnic affiliation, and so on. Domestic violence, and more specifically intimate partner violence, can be just as present in the families of NFL players, who are in the public eye, as it can be in any other family. The issue, however, remains hidden for the most part. It takes a celebrity to be involved for the issue to gain visibility. In that sense, we are glad the media covered it. This is a policy issue that needs to be appropriately analyzed and addressed.
What do you think is an appropriate punishment for an NFL player who is convicted of domestic violence?
We agree that a professional sports organization that has extensive media coverage and a large audience, including children and adolescents, should not allow a player who is convicted of domestic violence to participate. Organized sports organizations sell more than just games; they sell the personalities and lives of their players. Players are often held up as role models, and their careers and lives are admired. To allow a player to continue playing would endorse and normalize violent behavior. Intimate partner violence has long-term negative physical, emotional, and economic consequences for the victims, which are often overlooked. In fact, children who witness violence at home have negative emotional and educational outcomes too. Witnessing violence as a child and being a victim of violence as a child are some of the strongest predictors of becoming a victim or a perpetrator of violence later in life. Therefore, the NFL or any sports organization should reject this kind of behavior by disallowing domestic violence offenders from participating in any of their activities.
Do you think that giving a person who commits domestic violence a more severe punishment will decrease the chances that the person will commit violence again?
Types and intensity of violence are varied, and so are the legal mechanisms in place to protect victims and punish batterers. Victims do not always get the support they need from law enforcement. Furthermore, protective and punitive laws are not always enforced adequately; consequently, victims risk being re-victimized and re-traumatized as perpetrators become even more violent in response to the victims’ reporting. According to the Bureau of Justice Statistics (Department of Justice), only about 50% of all identified domestic violence crimes in the United States between 2003 and 2012 were reported to the police. These problems feed into one another. The experience for victims outside of the United States can be even more dire, as domestic violence legislation may be in its infancy.
Do you think that the recent media attention surrounding domestic violence will increase or decrease the likelihood of other victims coming forward to report abuse?
Neither. Resolving intimate partner violence requires a multi-pronged approach. Increased visibility of the problem afforded by the recent media coverage might propel better law enforcement, increased funding for research, and implementation of prevention pilot programs that engage men and boys, just to name a few. We need better and more preventive, protective, and punitive mechanisms in place. In addition, the mechanisms in place need to be evaluated for effectiveness in responding to the issue. Until some of these steps happen, simply having more media attention will not have an effect on reporting.
What are some of the reasons women tend to stay in domestic violence situations?
Why do perpetrators exercise violence against their intimate partners? These questions go hand in hand, yet it is usually the first that is asked, although both are increasingly in the scope of research given the increase in violence against women worldwide. Women’s economic dependence on their partners, which is amplified when children are present, contributes to women being locked into violent situations. Lack of employment options, being unemployed, or having low-wage employment makes women financially dependent on their partners. Lack of affordable day care, day care with limited hours, and school schedules without after-school programs limit women’s participation in employment. Even women who are employed and earn livable wages might find it hard to leave if temporary shelters and affordable housing are not available.

But the barriers to exiting a violent relationship are not only material. Being abused is a stigmatizing experience. Victims are reluctant to be shamed by their family, friends, and society at large. In addition, the controlling and humiliating behaviors of batterers lower the victims’ self-esteem and self-efficacy. Victims may doubt their capacity to survive on their own and with their children. Controlling behaviors also include batterers effectively sabotaging the victims’ efforts to access their social support networks, to gain employment, or to arrange an alternative living place. In many instances, episodes of abuse are interspersed with weeks or months of relative calm, and victims may believe their partners have changed, only to find themselves in the same or a worse situation.

In addition, societies have cultural scripts of what is included in the marital contract, which may justify violence under certain circumstances. Gender norms give men the right to control their intimate partner’s behavior, to exert influence, and to resolve disputes with violence. Furthermore, women are socialized to prioritize the children and family “unity” over their own welfare; women may perceive that the children will be negatively affected by a separation, not knowing the negative effects the children may already be experiencing.
Who are most at risk for being a victim of domestic violence?
Several factors contribute to the risk of being a victim of intimate partner violence. While there are general patterns, the specifics may vary by country. In our recent study using data from Colombia’s Demographic and Health Surveys, we found that the strongest risk factors were the maltreatment of a woman’s partner when he was a child and current child maltreatment by the woman’s partner. Higher risk is also associated with lower educational status of both partners, lower socioeconomic status (for physical violence only), younger women, and women working outside of the home. This last factor is especially interesting given the role that income plays in household negotiation dynamics. Gender differences in power among family members affect each member’s economic choices and behavior, including individuals’ bargaining over the allocation of material and time resources within the household, over gender norms, and even over how much abuse to exert or resist. It has long been hypothesized that income provides women with strong leverage in family negotiations. But our results, and those of studies in other countries, are revealing that the dynamics of negotiation and violence may be heavily mediated by gender norms. In effect, gender norms about women’s socially acceptable behavior, including working for pay, might trump the leverage women can gain from income. In addition, we do not know the effect of the partners’ relative wages on violence. What is known for the United States is that economic stress in a family increases the risk of violence. Gender norms of masculinity that prescribe men as the breadwinners have an effect: men who are unemployed are at greater risk of being perpetrators of violence. The same is true for men who endorse rigid views of masculinity, including the norm that men should dominate women.
How can we best help those most at risk of domestic violence?
Interventions at the individual and community level that address gender-equitable norms and the construction of gender relations via socialization are simultaneously protective (batterer intervention programs) and preventive. In the same vein, promoting boys’ and men’s participation in activities considered feminine under rigid norms of masculinity, such as caring for children, the sick, and the disabled, and doing domestic work, can help. Another line of response is to work on those risk factors that can be shaped by public policies, such as promoting equitable access to employment for women and extending access to education for the population in general. In addition, special care is required for the groups at greater risk of suffering violence, such as households with lower socioeconomic status, with younger women, with more children, and where the partners have a previous history of maltreatment; workshops on parenting skills and non-violent forms of disciplining children would help here. Last, a policy response should also include better mechanisms for victims to come forward and report the problem, support systems to help them escape from abusive domestic environments, and psychological services for trauma recovery.
Is there anything else you think we can learn about domestic violence in the United States from the recent NFL cases?
From the way the media covered it, it is clear that the general public is not well informed about intimate partner violence. More education will help de-stigmatize the issue.