Every April, when the robins sing and the trees erupt in leaves, I think of Brad — of the curtain wafting through his open window, of the sounds of his iron lung from within, of the heartache of his family. Brad and I grew up at a time when worried mothers barred their children from swimming pools, the circus, and the Fourth of July parade for fear of paralysis. Polio was constantly on everyone’s minds and cast a shadow over all summertime activities. In spite of the caution, Brad got polio — bad polio, which further terrorized our mothers. It still haunts me. If, somehow, he had managed to avoid the virus for a couple of years until the Salk vaccine arrived, none of that — the iron lung, the shriveled limbs, the sling to hold up his head — would have happened.
In 1954, many children in my town, myself included, became “Polio Pioneers” because our parents made us participate in the massive clinical trial of the Salk vaccine. Some of us received the shot of killed virus, others received a placebo. We were proud, albeit scared, to get those jabs, to be part of a big, important experiment. Our moms and dads would have done anything to rid the country of that dreaded disease.
Because the vaccine is so effective, mothers today aren’t terrified of polio. Children in our neighborhoods aren’t growing up in iron lungs or shuffling to school in leg braces. We seem so safe. But our world is smaller than it used to be. The oceans along our coasts can’t stop a pestilence from reaching us from abroad. A polio virus infecting a child in Pakistan, Nigeria, or Afghanistan can hop a plane to New York or Los Angeles or Frankfurt or London, find an unimmunized child, and spread to other unimmunized people. Our earth is not yet free of polio.
Germs are like things that go bump in the night. They can’t be seen, they lurk in familiar places, they are sometimes very harmful, and they instill great fear—some justified, some not.
Fear of measles, like fear of polio, is justified. In the old days, one in twenty children with measles developed pneumonia, and one or two in a thousand died. The vaccine changed all that in the developed world. But measles continues to rage in underdeveloped countries. In the race for sheer contagiousness, the measles virus ties the chickenpox virus (which causes another vaccine-preventable childhood infection). Both viruses can catch a breeze and fly. Or they may linger in still air for over an hour. They, too, ride airplanes. This year alone, outbreaks of measles started by imported cases have occurred in New York, California, Massachusetts, Washington, Texas, British Columbia, Italy, Germany, and the Netherlands.
Fear of whooping cough (aka pertussis) is also justified. In the pediatric hospital where I work, two young children have died of this infection in the past several years and many others have suffered from the disease, which used to be called “the one-hundred day cough.” It lasts a long time and antibiotic treatment does nothing to shorten the course. Young children with pertussis may quit breathing, have seizures, or bleed into their eyes. It spreads like invisible smoke around high schools and places where people gather … and cough on each other.
On the other hand, fear of vaccines — immunizations against measles, polio, chickenpox, or whooping cough — is hard to understand. In the grand scheme of things, any of these serious infections is a much greater threat than the minimal side effects of the vaccine that prevents it. Just ask the mothers of the children who died of pertussis in my hospital. It’s true that the absolute risk of these infections in resource-rich areas is small. But even for rare infections, a 0.01% risk of disease translates into hundreds of healthy children who don’t have to get sick, or, worse yet, die of a preventable infection.
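The arithmetic behind that claim is worth making concrete. A minimal sketch, assuming a cohort of roughly four million children (about one year of US births, a figure not from the article), shows how a "rare" 0.01% risk scales:

```python
# Back-of-the-envelope sketch of how a "rare" absolute risk scales across
# a large cohort. The cohort size is an assumption for illustration
# (roughly one year of US births), not a figure from the article.
cohort_size = 4_000_000   # assumed number of children in one birth cohort
absolute_risk = 0.0001    # 0.01% risk of a preventable infection

expected_cases = cohort_size * absolute_risk
print(f"Expected preventable cases: {expected_cases:.0f}")  # prints 400
```

Even at one hundredth of one percent, the absolute count of preventable illnesses lands in the hundreds, which is the point the passage above is making.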
In spite of the great success of vaccines, they aren’t perfect. Perfection is a tall order. Still, we can do better. Fortunately, because of the work of my medical and scientific colleagues, new vaccines under development hold promise to be more effective with fewer doses, to provide more durable vaccine-induced immunity, and to be even freer of their already rare side effects. And we’re creating vaccines against respiratory syncytial virus, Staphylococcus aureus, group A Streptococcus, herpes virus, and HIV, to name a few.
Brad would be proud of how far we have come in protecting our children from the horrible affliction that crippled him. He’d also be furious at our failure to vaccinate all our children. Every single one of them. He’d tell us that no child should ever be sacrificed to the ravages of polio or measles or chicken pox or whooping cough.
Janet R. Gilsdorf, MD is the Robert P. Kelch Research Professor of Pediatrics at the University of Michigan Medical School and pediatric infectious diseases physician at C. S. Mott Children’s Hospital, Ann Arbor. She is also professor of epidemiology at the University of Michigan and President-elect of the Pediatric Infectious Diseases Society. Her research focuses on developing new vaccines against Haemophilus influenzae, a bacterium that causes ear infections in children and bronchitis in older adults. She is the author of Inside/Outside: A Physician’s Journey with Breast Cancer and the novel Ten Days.
To raise awareness of World Immunization Week, the editors of Clinical Infectious Diseases, The Journal of Infectious Diseases, Open Forum Infectious Diseases, and Journal of the Pediatric Infectious Diseases Society have highlighted recent, topical articles, which have been made freely available throughout the observance week in a World Immunization Week Virtual Issue. Oxford University Press publishes The Journal of Infectious Diseases, Clinical Infectious Diseases, and Open Forum Infectious Diseases on behalf of the HIV Medicine Association and the Infectious Diseases Society of America (IDSA), and Journal of the Pediatric Infectious Diseases Society on behalf of the Pediatric Infectious Diseases Society (PIDS).
The Journal of the Pediatric Infectious Diseases Society (JPIDS), the official journal of the Pediatric Infectious Diseases Society, is dedicated to perinatal, childhood, and adolescent infectious diseases. The journal is a high-quality source of original research articles, clinical trial reports, guidelines, and topical reviews, with particular attention to the interests and needs of the global pediatric infectious diseases communities.
In the last few months, international media have reported extensively on the latest developments in the online economy. These reports have focused mostly on the rise of so-called cryptocurrencies, with bitcoin being the best-known example. Such cryptocurrencies are characterized by their decentralized nature, meaning that they aren’t controlled by a central government. As such, their issuance is governed by an algorithm rather than by economic imperatives. While originally used by only a small group of enthusiasts, bitcoin has now captured the attention of the global economy and lawmakers. It has spurred the creation of several other cryptocurrencies, some of which are known for their community efforts — such as dogecoin — and others for being named after a celebrity — which isn’t always appreciated.
While this rise of cryptocurrencies certainly has economic potential — with bitcoin alone having a current market capitalization estimated at over USD 5 billion — the last few months have also exposed a number of less pleasant issues. These issues range from its role in, for instance, online black markets, to theft and money laundering. One important contributing factor is that cryptocurrencies have long operated within a legal gray zone. From the beginning, it was unclear whether — and if so, which — existing regulation could be applied to this phenomenon. But as long as this development played only a marginal societal role, specific legislative initiatives attempting to regulate the matter weren’t seriously considered.
This position of laissez-faire seems to be changing now. Just last week, for instance, the US Internal Revenue Service (IRS) released a notice in which it holds that cryptocurrencies should be considered as property for taxation purposes. Other governments have also tried their hand at finding a legal basis for the use of cryptocurrencies. However, in most such discussions, lawmakers’ and governments’ efforts remain mostly limited to providing for the taxation of gains made from transactions involving cryptocurrencies.
This is why the European Commission’s proposal for a review of the Payment Services Directive (Directive 2007/64/EC) could have been a great opportunity to explore a more fundamental overhaul of the legal framework regulating most of today’s online payments. Online payments increasingly rely on mobile technologies, and now also include the use of non-governmental currencies. However, the existing framework set by the Payment Services Directive and its close companion, the E-money Directive (Directive 2009/110/EC), isn’t suited to cover the complete field of online payments.
A new proposal in this matter could have merged the closely related matters of payment services and e-money. It could have expanded the scope to emerging technologies. It could have taken a more direct approach in addressing non-governmental currencies. All of such approaches could have provided the user of mobile payment technologies and cryptocurrencies with more legal certainty regarding the protection that is — or isn’t — offered to him by the law.
The European Commission’s proposal, however, takes a different approach. First, it was decided that due to the late implementation of the E-money Directive, there wasn’t sufficient practical experience with this framework to allow for a review. As a result, payment services and e-money will for the time being remain subject to separate legal instruments. Second, the proposed new Payment Services Directive doesn’t deviate much from its predecessor. It still aims to cover a broad definition of ‘payment services’, with a wide range of exemptions from that scope. The scope has been somewhat enlarged, now also covering ‘payment information services’ and ‘payment initiation services’. Both are services that provide access to a user’s payment account held at another service provider, without themselves taking possession of the funds moved on that account.
Under the original Payment Services Directive, the unclear formulation of the scope exemptions resulted in different interpretations across EU Member States. Moreover, this uncertainty allowed market players to adapt their business models so as to fall within the negative scope of the directive, thus exempting themselves from having to comply with this legal framework. The new proposal aims to tighten the exemptions mostly by introducing new terminology. That terminology — including formulations such as ‘precise needs’, ‘specific instruments’, and ‘used in a limited way’ — hasn’t itself been properly defined, leaving room for even broader and more divergent interpretations. Only the so-called ‘added value’ exemption has been made clearer, through the addition of a value limitation. As a result, this scope exemption will only apply to single transactions of at most EUR 50 and cumulative transactions of at most EUR 200 per billing month.
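The value limitation just described is one of the few parts of the proposal that can be expressed mechanically. The sketch below is a hypothetical checker, not anything from the directive’s text, testing whether a billing month of transactions stays within the proposed ‘added value’ exemption limits:

```python
# Hypothetical check against the proposed 'added value' exemption limits:
# single transactions of at most EUR 50 and at most EUR 200 cumulatively
# per billing month. The transaction amounts are invented for illustration.
SINGLE_LIMIT = 50.0    # EUR, per single transaction
MONTHLY_LIMIT = 200.0  # EUR, cumulative per billing month

def within_exemption(transactions):
    """Return True if every transaction and the monthly total stay
    within the proposed limits."""
    return (all(t <= SINGLE_LIMIT for t in transactions)
            and sum(transactions) <= MONTHLY_LIMIT)

print(within_exemption([20.0, 45.0, 30.0]))  # total EUR 95  -> True
print(within_exemption([60.0]))              # single > 50   -> False
print(within_exemption([50.0] * 5))          # total EUR 250 -> False
```

The two limits are conjunctive: a month of individually small transactions can still exceed the cumulative cap, which is precisely what the value limitation was added to catch.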
While the proposed review of the Payment Services Directive does include a few good points — such as the inclusion of additional service providers, new measures aimed at raising security and transparency, and the inclusion of a value limitation in one of its scope exemptions — the overall feeling remains that good opportunities have been left unused. As the online economy continues to grow in the direction of mobile payment solutions and the use of cryptocurrencies, the legal questions underlying these matters are becoming increasingly urgent. For the time being, it is clear that the answer won’t be found in the EU’s regulation of payment services and e-money.
The International Journal of Law and Information Technology provides cutting edge and comprehensive analysis of Information Technology, communications and cyberspace law as well as the issues arising from applying Information and Communications Technologies (ICT) to legal practice.
Image credit: Physical Bitcoins by CASASCIUS. Work released into public domain via Wikimedia Commons.
The American Journal of Hypertension (AJH) recently published the findings of a comprehensive meta-analysis monitoring health outcomes for individuals based on their daily sodium intake. The results were controversial, seemingly confirming what many notable hypertension experts have begun to suspect in recent years: that levels of daily sodium intake recommended by governmental agencies like the CDC are far too low, perhaps dangerously so.
Media outlets were quick to broadcast the findings, and the response from the CDC and organizations like the American Heart Association was much the same as in the past: dismissing the analysis, without pointing to specifics, as relying on “faulty methodology” and “flawed data.”
We recently spoke to Dr. Niels Graudal, lead author of the meta-analysis published in AJH, to understand, among other things, the details of his research and his opinion on the reaction of governmental health agencies to new findings on sodium intake.
Could you start by talking to us about the nature of a meta-analysis? What about your meta-analysis makes its findings more valid than, say, a single, localized study?
Population studies are accepted in health science as a means to define associations between health-factors. For instance, the associations between blood pressure, cholesterol, and mortality have been defined by such studies. Meta-analyses integrate the results from many individual studies to provide an average of the association of the “risk factor” to outcome. Such analyses help to reach a consensus, and constitute the core of the Cochrane Collaboration, which systematically organizes medical research information on the basis of scientific evidence.
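As a rough illustration of what “integrating the results from many individual studies” can look like in practice, here is a generic fixed-effect, inverse-variance pooling of study-level estimates. The effect sizes and standard errors are invented, and the method shown is a textbook fixed-effect pooling, not Dr. Graudal’s actual protocol:

```python
import math

# Generic fixed-effect inverse-variance pooling of study-level estimates.
# Each study contributes an (effect estimate, standard error) pair; more
# precise studies (smaller SE) receive more weight. All numbers invented.
studies = [
    (0.30, 0.10),
    (0.10, 0.20),
    (0.25, 0.15),
]

weights = [1 / se**2 for _, se in studies]                    # inverse variance
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))                       # SE of the pooled estimate

print(f"pooled estimate = {pooled:.3f} (SE {pooled_se:.3f})")
```

The pooled estimate lands between the individual results but closest to the most precise study, which is the sense in which a meta-analysis provides a weighted “average” of the association across studies.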
Was there a common methodology among the studies included in your meta-analysis?
In population studies on sodium intake, each participant’s measured sodium intake is used to categorize the participants into groups of low, intermediate, and high sodium intake. The groups are followed for years, while mortality (death rate) and morbidity (disease rate) in the different groups are recorded. Subsequently, the association between sodium intake and mortality/morbidity is calculated.
What are some possible obstacles encountered in population studies like this?
There are factors which could bias the result in a wrong direction, so-called “confounders.” For instance, sodium intake typically increases with energy intake. Sick participants with a low energy intake may therefore eat less sodium than healthy people, and overweight participants predisposed to diabetes and cardiovascular disease may eat more sodium than healthy people. Therefore, the energy intake is a confounder, which could explain a potential increased mortality in participants with a low and a high sodium intake. However, there are statistical methods that allow us to correct for such confounders in order to ensure accurate findings; such methods are used in almost all such studies, and have been for many years.
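To show the mechanics of such an adjustment (not the study’s actual method), the toy sketch below uses invented counts in which energy intake fully explains an apparent sodium-mortality association: the crude rates differ between sodium groups, but within each energy-intake stratum the rates are identical.

```python
# Toy illustration of confounder adjustment by stratification.
# All counts are invented. Sicker, low-energy-intake people both eat less
# sodium and die more often, so the crude comparison falsely suggests that
# low sodium intake is harmful; stratifying on energy intake removes this.
# Each stratum: (deaths_low_na, n_low_na, deaths_high_na, n_high_na)
strata = {
    "low energy intake":  (100, 1000, 20, 200),   # sicker, eat less sodium
    "high energy intake": (4, 200, 20, 1000),     # healthier, eat more sodium
}

# Crude comparison: ignores energy intake entirely.
d_lo = sum(s[0] for s in strata.values())
n_lo = sum(s[1] for s in strata.values())
d_hi = sum(s[2] for s in strata.values())
n_hi = sum(s[3] for s in strata.values())
print(f"crude mortality: low sodium {d_lo/n_lo:.1%}, high sodium {d_hi/n_hi:.1%}")

# Stratified comparison: within each stratum the rates are equal, so the
# entire crude difference was driven by the confounder.
for name, (dl, nl, dh, nh) in strata.items():
    print(f"{name}: low sodium {dl/nl:.1%}, high sodium {dh/nh:.1%}")
```

Real studies use regression adjustment rather than two hand-built strata, but the principle is the same: compare like with like so that the confounder cannot masquerade as an effect of sodium.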
What were the specific findings of your meta-analysis on sodium intake?
Our present analysis showed that both high sodium intake and low sodium intake were associated with increased mortality when compared with the present usual sodium intake of most individuals worldwide, which is between 2,645 and 4,945 mg per day. In spite of the fact that sodium intake is somewhat difficult to measure precisely, the signal from the nearly 275,000 participants we looked at was abundantly clear.
Perhaps most importantly, the implications of these findings are that the present recommendation from the CDC that individuals should reduce sodium intake to below 2300 mg/day is too restrictive, and that the majority (about 95%) of the global population presently eat sodium within the safest range (2,645-4,945 mg/day) and therefore have no need to alter their intake.
How did you account for those participants in your analysis that were already suffering from, say, hypertension or obesity? Might they have affected the findings in some way?
When we excluded populations with diseases from our analysis and included only healthy populations (random samples of the general population, within which multiple statistical adjustments for confounders had been performed), the results concerning low sodium intake were even more significant, indicating that confounders could not have affected the outcome of our analysis.
Is your meta-analysis the first scientific research to suggest that extremely low levels of sodium intake like those promoted by the CDC may actually be associated with negative health outcomes?
Actually, a 1984 paper published in the journal Science questioned the wisdom of population-wide sodium intake reduction on the basis of an investigation of about 10,000 participants. The FDA immediately published a high-profile response in The New York Times, claiming that the findings were likely the result of a statistical fluke or “something wrong with the analysis.” This immediate move to quell any dissenting evidence seems to have governed the debate ever since.
More recently, though, a population study published while our meta-analysis was under review showed results very similar to ours. Two of the individual studies (1, 2) included in our analysis also concluded that there was a “U” shaped correlation between sodium intake and mortality (increased risks at very low and high doses). For the record, excluding these two studies from our analysis did not change our results. In the past year, then, four recent studies have independently confirmed increased risks associated with both high and low sodium intakes, and suggest that the present recommendation of less than 2,300 mg/day is in conflict with available science.
What are the arguments against research like this?
Often, health organizations will attempt to call into question researchers’ objectivity by labeling them biased agents of the food industry. In a recent response to a paper showing that the majority of the world’s populations had a salt intake significantly above the recommended 2,300 mg/day, representatives of the World Health Organization (WHO) and World Action on Salt and Health (WASH) accusatorily asked, “why has the food and beverage industry mounted yet another campaign to try to resist beneficial changes, either directly or indirectly through their academic voices?”
Sometimes agencies will willfully misinterpret findings. In a short commentary to the recent Institute of Medicine (IOM) report on sodium intake in populations, nine CDC employees quoted the IOM report as follows: “When it comes to sodium intake levels <2,300 mg per day… the committee found insufficient and inconsistent evidence regarding the benefit or harm in certain population subgroups (e.g., individuals with diabetes, chronic kidney disease, or preexisting cardiovascular disease)”. However, the actual quote from the study was this: “science was insufficient and inadequate to establish whether reducing sodium intake below 2,300 mg/d either decreases or increases CVD risk in the general population.”
Often, they claim that our data and methods are somehow flawed, though they rarely cite specific instances. In a response to our present meta-analysis, the American Heart Association (AHA) stated that the analysis relied on “flawed data and should not change the way anyone looks at sodium.” They went on to say that:
“…those studies were poorly designed to examine the relationship between sodium intake and mortality, and the findings fail to take into account well-established evidence about sodium intake. Other problems with the new study included unreliable measurements of sodium intake and an overemphasis on studying sick people rather than the general population.”
This is a small selection of the arguments which have been raised for years by representatives of public institutions (WHO, FDA, NIH, CDC, AHA) in response to scientific investigations that don’t agree with their population-wide sodium reduction agenda.
So that we can avoid generalizations, what are the specific studies these organizations usually cite in support of population-wide sodium reduction, and what do you think are their flaws?
The rebuttals of health organizations are almost invariably propped up by vague references to an “immense” – usually unspecified – body of research which “proves” the beneficial effects of sodium reduction for the general population. If they do cite specific studies, it’s usually either
A meta-analysis (which, ironically, would have the same hypothetical methodological weaknesses the AHA and CDC supposedly see in ours) that finds increased risk of stroke in individuals consuming more than 4,945 mg/day. The results don’t conflict with our findings (our healthy range is 2,645 – 4,945 mg/day), but they also don’t examine negative outcomes for low sodium intake, so they are irrelevant to the debate around determining an intake range.
Follow-ups (1, 2) of two older studies, pooling and analyzing their data with cardiovascular disease (CVD) mortality and all-cause mortality (ACM) as outcomes. These showed no significant difference between the low sodium group (ACM = 2.3%) and the normal sodium group (ACM = 2.6 %) (p = 0.58), thus confirming that sodium reduction may have no effect. The authors did, however, on several occasions, dissect the results by means of multiple adjustments, and did succeed in finding a few marginal or borderline significant results in favor of sodium reduction. The analyses behind these results, though, were not predefined in a protocol and should therefore be considered as having an extremely high risk of bias.
These flawed or irrelevant studies tell us that, concerning the general population, the blood pressure surrogate link between sodium intake and mortality is unreliable. As a matter of fact, the blood pressure surrogate link has been opposed by a meta-analysis that accounted for the full range of global population blood pressure and showed negative side effects for sodium reduction.
Where does this leave us?
As blood pressure is obviously not a reliable link between sodium intake and mortality, the conclusion of the aforementioned 2013 IOM report based on evidence from population studies is the best we have. This report was conducted by independent researchers and sponsored by the CDC, who, for less-than-transparent reasons, chose to follow the example of the AHA by rejecting their own report. As previously mentioned, this report found no evidence to establish whether reducing sodium intake below 2,300 mg/day either decreases or increases CVD risk in the general population. It also found no evidence in support of recommending different sodium intakes to diseased and normal groups, and did find evidence for potential harm in a sodium intake below 1,500 mg. However, the report failed to specify the dimensions of a safe sodium intake zone, but, by implication, indicated that such a safe zone does exist, consistent with the experience of all other essential nutrients.
In your opinion, what should organizations like the CDC and AHA, who control the development and implementation of public health policy, be doing now, in light of this new research?
I think that, instead of immediately moving to accuse dissenting scientists of economic and intellectual corruption, it may be more appropriate for powerful health organizations to ask what scientific mind would buy a theory as simplistic as the one currently governing sodium intake policy (sodium intake leads to high blood pressure, which leads to death) without a modicum of skepticism? Would it be so unreasonable for these groups to at least take our skepticism seriously, instead of reflexively attempting to explain away the results?
Our study provides evidence that a U/J shaped curve exists for the association between sodium intake and health outcome, as it does with all other nutrients. I will be the first to admit that this evidence is based on observational population studies, which are inevitably subject to flaws caused by imprecise measurements and confounders. These flaws, though, are greatly mitigated by the inclusion of a large number of participants, by statistical adjustments, by sensitivity analyses of subgroups, and by consistency in results between several independent studies.
There can never be any scientific guarantee that these safeguards eliminate all flaws; on the other hand, though, in the absence of a conflicting body of data, the IOM report and our analysis should be included in the determination of public policy, not ignored. Any policy like the current one that would aim to have 95% of the world’s population drastically alter their diet ought to be based upon strong, irrefutable scientific evidence.
The American Journal of Hypertension is a monthly, peer-reviewed journal that provides a forum for scientific inquiry of the highest standards in the field of hypertension and related cardiovascular disease. The journal publishes high-quality original research and review articles on basic sciences, molecular biology, clinical and experimental hypertension, cardiology, epidemiology, pediatric hypertension, endocrinology, neurophysiology, and nephrology.
As we celebrate Earth Day this year, it is timely to reflect on the international community’s commitment to halting serious environmental harm. The idea that all States have a ‘common interest’ in promoting global environmental responsibility — as evidenced most clearly through their active participation in multilateral environmental agreements — has been a cornerstone of international environmental policy for the last few decades. At the heart of this responsibility is the recognition that sovereign self-interest is enhanced, rather than compromised, through collective responses to matters of global concern. And that universal participation of diverse states in pursuit of common objectives is best secured through differential treatment of states tailored to their responsibilities and capabilities.
But this ideal of responsibility — common but differentiated responsibility — is facing serious challenge. In particular, the increasing questioning of differential treatment as a valuable tool in achieving common objectives has highlighted — if nothing else — a breakdown of previous certainties, however fragile the consensus ultimately was.
Since the 1990 London Amendments to the 1987 Montreal Ozone Protocol, differentiation in commitments and obligations of financial and technological support towards developing countries has characterised, and partially defined, international environmental law. Even among multilateral environmental agreements, the climate change regime is distinctive for the nature and extent of differential treatment it contains in favour of developing countries. The extent of this differential treatment, however, has proven deeply contentious over the years.
Indeed, the 1997 Kyoto Protocol, while representing the high-water mark of differential treatment in international environmental law, is set to come to an end in 2020. This is partly due to the deep divisions concerning the differential treatment it contains. The latest round of negotiations, under the auspices of the Ad Hoc Group on the Durban Platform (ADP) and due to conclude in 2015, has been mandated to produce an outcome that is ‘applicable to all’. Although ‘applicable to all’ implies universality rather than uniformity of application, the use of the term, given the political context of the negotiations, is suggestive of a shift towards greater symmetry and more nuanced differentiation between Parties. The battle over differentiation — its existence, nature, and extent — is raging in the ongoing climate negotiations, and will no doubt prove to be one of the final issues to be resolved in Paris in 2015.
It could of course be argued that such a shift in the climate regime is but a function of larger geo-political shifts in international relations over the past two decades, differentiation as originally conceived being an artifact of the period in which it was negotiated. Traditional North-South dichotomies have since disintegrated in the face of economic growth in some developing countries and the shrinking of some first-world economies. On this view, differential treatment in favour of developing countries, especially those now in the middle- or higher-income brackets, is an anachronism, and the move towards greater symmetry is both natural and politically necessary. Moreover, some developed countries have become disenchanted with differentiation over time, particularly when it is rigidly structured and increasingly viewed as artificial, especially since the global economic collapse of 2007 and the challenges faced by many ‘developed’ economies.
Whether this move in international environmental law towards greater symmetry and more nuanced differentiation of Parties’ obligations, albeit with greater deference to national circumstances, will result in a more efficient approach that promotes more ambitious legal outcomes, or will be sufficient to appease the majority of the global South, remains uncertain. Undoubtedly, there appears to be a systematic dismantling of the pervasive architecture of differentiation that had taken hold in international environmental law over the past three decades.
Does this matter? Should not international environmental law reflect changing political and economic realities? Should not a regime be negotiated in a manner that seeks to include as many countries as possible? And where a regime such as Kyoto has become so contested, is it not better to seek an alternative that more States can endorse?
While these are valid arguments, they do not account for the fact that differentiation was not adopted merely to improve treaty compliance or prevent a zero-sum outcome in participation. Differentiation reflected a broader ambition: that environmental obligations should be fair and equitable as well as effective, and that the international community should not be allowed to neglect historic injustices, enduring differences, and considerable disparities in wealth when responding to environmental challenges. Of course, differentiation can be achieved in many different ways, with greater or more limited financial and technological assistance attached. But the bedrock of differentiation, even in its stringent versions, had seemingly been accepted. Should it now be disregarded?
There are perhaps equally valid perspectives on this matter that transcend both politics and law. But some queries do arise in this context. If the international community is a community of law, bound within a legal framework, should such a framework be with or without a moral core? And if it is to have one, what role does equity play, and what weight should be placed on it as a characteristic of any legal system? The parties to the climate regime might be right to reconsider their approach to differentiation; but in doing so, equity must still be a valid consideration.
Duncan French is Professor of International Law and Head of the University of Lincoln Law School, UK. He was previously co-rapporteur of the International Law Association’s Committee of International Law on Sustainable Development and is presently Chair of its Study Group on Due Diligence in International Law. Lavanya Rajamani is a Professor at the Centre for Policy Research, New Delhi. She writes, teaches and consults on international environmental law, in particular international climate change law. Her current work is in the field of treaty law (negotiation, design, architecture and interpretation), legal principles and models of differentiation in international agreements.
In recognition of Earth Day this year, we have looked across Law, History, Economics, Literature, Life Science, and Social Sciences to identify key articles in environmental studies, all made freely available.
The Journal of Environmental Law has established an international reputation as a lively and authoritative source of informed analysis for all those active in examining evolving legal responses to environmental problems in national and international jurisdictions.
Oxford University Press is a leading publisher in international law, including the Max Planck Encyclopedia of Public International Law, latest titles from thought leaders in the field, and a wide range of law journals and online products. We publish original works across key areas of study, from humanitarian to international economic to environmental law, developing outstanding resources to support students, scholars, and practitioners worldwide. For the latest news, commentary, and insights follow the International Law team on Twitter @OUPIntLaw.
Subscribe to the OUPblog via email or RSS.
Subscribe to only law articles on the OUPblog via email or RSS.
Image credit: Wild Nature via iStockphoto
Fifteen years ago, on 20 April 1999, it happened in my community… at my son’s school. Two heavily armed seniors launched a deadly attack on fellow students, teachers, and staff at Columbine High School in Jefferson County, Colorado.
As the event played out live on broadcast TV, millions around the globe watched in horror as emergency responders evacuated survivors and transported the wounded. At first, a quiet sort of disbelief mixed with shock and anguish descended upon us. Hours later, when the final tally was released – 15 dead, 26 injured – the reality of the tragedy brought the entire community to its knees.
The Columbine shootings became a benchmark event for school violence in the United States. I thought surely this was the turning point; nothing like this would ever happen again. Yet, barely a month later, Conyers, Georgia was added to the list of communities devastated by a shooting. At an alarming rate more towns and neighborhoods join the list, which now includes shootings in theaters, youth camps, shopping malls, and churches.
In 1999, trauma counseling primarily addressed PTSD (post-traumatic stress disorder) among veterans and victims of domestic violence, abuse, or sexual assault. Few strategies addressed wholesale community trauma. Even less was available to help parents manage the day-to-day challenges of parenting traumatized teens or to advise traumatized educators on teaching students who had witnessed murder in their own school. My response to the situation was to learn as much as I could about what helps people recover from the crushing shock and grief that follows catastrophe, which led me to doctoral research and a continuing focus on trauma as a human experience.
Mass shootings like at Columbine, Sandy Hook, and Utøya, Norway are only one type of trauma we may face. Life has risk, and even the best planning doesn’t ensure invulnerability. Random events happen… accidents, sudden death of a loved one, natural disaster, assault; the list seems endless. Thankfully, effective approaches for promoting recovery are becoming more widely known.
Whenever a tragic event grabs headlines and non-stop media coverage, generous offers of support and resources start flooding in. For personal traumas, the situation is different; survivors often suffer in silence as they try to find a way to a livable future alone.
Research that offers insight into trauma’s effects can help us better understand the challenges people face. Efforts to promote public awareness of trauma and recovery offer a genuine benefit. Many are unaware that trauma is a natural human condition, a biologic response to an experience in which the victim feels powerless and overwhelmed in the face of life-threatening or life-changing circumstance.
The human brain is charged with survival, and traumatic response is its attempt to learn from a threatening situation in order to survive threat in the future. Humans try to make sense of their world, and when everything turns to chaos, the brain struggles to learn to identify future risks and to regain a feeling of competence and comfort in the everyday. Behaviors associated with traumatic stress include hypervigilance; extreme sensitivity to smells, sights, and sounds connected to the event; flashbacks; anxiety; anger; depression; and memory problems.
The good news is that even in the face of such challenge, people can successfully integrate their trauma-experience into their own personal history and reclaim their life with a renewed sense of purpose. Victims and their families find that this process takes time and sensitivity. For some, caring friends, family, clergy, and social resources are enough. Others, though not all, may develop clinical PTSD, which responds best to professional counseling. Unfortunately, some may try to “just forget about it” and “get back to the way things used to be,” thereby short-circuiting the process of real recovery. Unresolved trauma can take a high toll on relationships and quality of life.
Trauma’s effect on our lives, as individuals and as communities, may be more widespread than commonly realized. It isn’t a problem faced only by the military; it is not uncommon among civilians. Estimates are that in the United States about 6 out of every 10 men (60%) and 5 of every 10 women (50%) experience at least one traumatic event in their life. For men, it is likely an accident, physical assault, combat, disaster, or witnessing death or injury. For women, the risk is more likely domestic violence, sexual assault, or abuse. A 2004 study reported by the National Child Traumatic Stress Network found that over 50% of children had experienced a traumatic event.
A sense of shame and perceived stigma from needing psychological counseling may keep people from seeking help. Perhaps with education to increase understanding of trauma, more will realize that traumatic response is not a sign of weakness or defect. Instead, it can be a sign of a healthy, normal attempt to reclaim a sense of well-being and safety.
I used to think I was a totally different person after Columbine. That there is no way I could have emerged without being radically altered. And trust me, I was. But what I realize now is that at my core, at my very center, there continues the essence of who I was before, and maybe more importantly, who I was meant to be.
Outcomes such as this are possible. People are slowly recognizing trauma as a critical health issue, not only in the United States but worldwide. Public dialogue can reduce the stigma and isolation felt in trauma’s aftermath. Increased recognition of the occurrence of trauma among civilians and the military, combined with greater awareness of trauma as a natural response, can make a profound difference in the lives of millions. That’s a goal that deserves attention.
Carolyn Lunsford Mears, Ph.D., is a founder of Sandy Hook-Columbine Cooperative, a non-profit foundation dedicated to trauma recovery and resilient communities. She is an award-winning author, speaker, and researcher. She is the author of “A Columbine Study: Giving Voice, Hearing Meaning” (available to read for free for a limited time) in the Oral History Review. Her 2012 anthology, Reclaiming School in the Aftermath of Trauma, won a prestigious Colorado Book of the Year Award, given by the Colorado affiliate of the National Endowment for the Humanities. She is a Fellow of the Royal Society of the Arts, an alliance member of the National Centre for Therapeutic Care, a Fellow of the Planned Environment Therapy Trust, a member of the Board of Directors of the I Love You Guys Foundation, and adjunct faculty at the University of Denver.
Image credit: All images of the Columbine Memorial courtesy of Carolyn Lunsford Mears. Do not reproduce without permission.
A woman who gives birth to six children each with a 75% chance of survival has the same expected number of surviving offspring as a woman who gives birth to five children each with a 90% chance of survival. In both cases, 4.5 offspring are expected to survive. Because the large fitness gain from an additional child can compensate for a substantially increased risk of childhood mortality, women’s bodies will have evolved to produce children closer together than is best for child fitness.
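The expected-value arithmetic in the paragraph above can be checked directly; a minimal sketch (the birth counts and survival probabilities are exactly those quoted in the text):

```python
def expected_survivors(births: int, survival_prob: float) -> float:
    """Expected number of surviving offspring: births times per-child survival probability."""
    return births * survival_prob

# Six children at 75% survival versus five at 90%: the same 4.5 expected survivors,
# so the extra birth fully compensates for the higher per-child mortality risk.
print(round(expected_survivors(6, 0.75), 2))  # -> 4.5
print(round(expected_survivors(5, 0.90), 2))  # -> 4.5
```

This equality is the fulcrum of the argument: because the two strategies tie on expected fitness, selection on mothers can tolerate closer spacing than would be best for any individual child.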
Sleeping baby by Minoru Nitta. CC BY 2.0 via Flickr.
Offspring will benefit from greater birth-spacing than maximizes maternal fitness. Therefore, infants would benefit from adaptations for delaying the birth of a younger sib. The increased risk of mortality from close spacing of births is experienced by both the older and younger child whose births bracket the interbirth interval. Although a younger sib can do nothing to cause the earlier birth of an older sib, an older sib could potentially enhance its own survival by delaying the birth of a younger brother or sister.
The major determinant of birth-spacing, in the absence of contraception, is the duration of post-partum infertility (i.e., how long after a birth before a woman resumes ovulation). A woman’s return to fertility appears to be determined by her energy status. Lactation is energetically demanding and more intense suckling by an infant is one way that an infant could potentially influence the timing of its mother’s return to fertility. In 1987, Blurton Jones and da Costa proposed that night-waking by infants enhanced child survival not only because of the nutritional benefits of suckling but also because of suckling’s contraceptive effects of delaying the birth of a younger sib.
Blurton Jones and da Costa’s hypothesis receives unanticipated support from the behavior of infants with deletions of a cluster of imprinted genes on human chromosome 15. The deletion occurs on the paternally-derived chromosome in Prader-Willi syndrome (PWS). Infants with PWS have weak cries, a weak or absent suckling reflex, and sleep a lot. The deletion occurs on the maternally-derived chromosome in Angelman syndrome (AS). Infants with AS wake frequently during the night.
The contrasting behaviors of infants with PWS and AS suggest that maternal and paternal genes from this chromosome region have antagonistic effects on infant sleep with genes of paternal origin (absent in PWS) promoting suckling and night waking whereas genes of maternal origin (absent in AS) promote infant sleep. Antagonistic effects of imprinted genes are expected when a behavior benefits the infant’s fitness at a cost to its mother’s fitness with genes of paternal origin favoring greater benefits to infants than genes of maternal origin. Thus, the phenotypes of PWS and AS suggest that night waking enhances infant fitness at a cost to maternal fitness. The most plausible interpretation is that these costs and benefits are mediated by effects on the interbirth interval.
Postnatal conflict between mothers and offspring has been traditionally assumed to involve behavioral interactions such as weaning conflicts. However, we now know that a mother’s body is colonized by fetal cells during pregnancy and that these cells can persist for the remainder of the mother’s life. These cells could potentially influence interbirth intervals in more direct ways. Two possibilities suggest themselves. First, offspring cells could directly influence the supply of milk to their child, perhaps by promoting greater differentiation of milk-producing cells (mammary epithelium). Second, offspring cells could interfere with the implantation of subsequent embryos. Both of these possibilities remain hypothetical but cells containing Y chromosomes (presumably derived from male fetuses) have been found in breast tissue and in the uterine lining of non-pregnant women.
David Haig is Professor of Biology at Harvard University. He is the author of “Troubled sleep: Night waking, breastfeeding and parent–offspring conflict” (available to read for free for a limited time) in Evolution, Medicine, and Public Health. The arguments summarized above are presented in greater detail in two papers that recently appeared in Evolution, Medicine, and Public Health.
Evolution, Medicine, and Public Health is an open access journal, published by Oxford University Press, which publishes original, rigorous applications of evolutionary thought to issues in medicine and public health. It aims to connect evolutionary biology with the health sciences to produce insights that may reduce suffering and save lives. Because evolutionary biology is a basic science that reaches across many disciplines, this journal is open to contributions on a broad range of topics, including relevant work on non-model organisms and insights that arise from both research and practice.
Today is 15 April or Tax Day in the United States. In recognition of this day we compiled a free virtual issue on taxation bringing together content from books, online products, and journals. The material covers a wide range of specific tax-related topics including income tax, austerity, tax structure, tax reform, and more. The collection is not US-centered, but includes information on economies across the globe. Be sure to take a moment to view this useful online resource today.
Intellectual property rights (IPRs) and the regimes of protection and enforcement surrounding them have often been the subject of debate, a debate fuelled in the past year by the increased emphasis on free-trade negotiations and multi-lateral treaties including the now-rejected Anti-Counterfeiting Trade Agreement (ACTA) and its Goliath cousin, the Trans-Pacific Partnership Agreement (TPPA). The significant media coverage afforded to these treaties, however, risks thrusting certain perspectives of IPR protection and enforcement into the spotlight, while eclipsing alternative, but equally crucial voices that are perhaps in greater need of legitimate dialogue to safeguard their own collection of intangible rights. Caught in the vortex of inadequate recognition and ineffective protection are the communal intellectual property rights of indigenous communities, centred on traditional knowledge (TK), traditional cultural expressions (TCE), expressions of folklore (EoF), and genetic resources (GR).
The fundamental incompatibility between current intellectual property rights regimes and the rights of indigenous peoples stems largely from the lack of understanding of the driving forces that have led to the development of traditional knowledge, traditional cultural expressions, expressions of folklore, and genetic resources – that of the protection of whole indigenous cultures through the preservation of the traditional knowledge acquired by these communities as a whole.
The issues are complex. Professor James Anaya’s 2014 keynote speech at the 26th Session of the Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore at WIPO highlighted the differences governing the intangible rights of indigenous peoples generally, and why these world views have so often been left out of the current framework of intellectual property rights. Whereas the majority view of IPRs tends to focus on the rights of the individual and their protection as such, indigenous cultures are inherently built over centuries and across generations on communal understandings and organic exchanges of knowledge, making it practically impossible to ascribe the ownership of a certain set of IPRs to one or a few individuals.
Apache Dancers at the exhibit ‘Dignity – Tribes in Transition’. United States Mission Geneva Photo: Eric Bridiers. CC-BY-ND-2.0 via US Mission Geneva Flickr.
As Professor Anaya articulates and other contributors contemplate, the similarities between the inadequacies of the protection of the tangible rights of indigenous peoples (e.g. indigenous land rights) and those of their intangible rights (including intellectual property rights) tend to stem from a common source: the failure to acknowledge the “inherent logic of indigenous peoples’ world views”.
Perhaps the solutions lie not just in finding ways to include indigenous intellectual property rights in current IPR regimes, but in facilitating an entire paradigm shift to capture the nuances of these issues both effectively and precisely. How, for instance, can indigenous IPRs be valued commercially, and how may adequate compensation models be developed in exchange for the commercial use of these rights? A key to increasing the recognition of the inherent value of indigenous IPRs within their traditional cultural settings may lie in developing methods to properly value this worth in tangible terms. What seems necessary is a model to adequately measure the significance of indigenous IPRs, starting at the source (the indigenous community), and finding ways of translating this value into benefit systems that can be returned to the communities from which the IPRs were sourced. In this way, recognition is given to the crucial part these IPRs play within the cultures from which they are derived.
The strength of intellectual property law lies in its ability to meet the demands of a frenetically changing world, thus affording it vast amounts of power in shaping the law of the future; but this brings with it the challenge – can that power be harnessed to adequately protect rights of the past? Even if the answer is in the affirmative, it does not necessarily follow that the purpose of intellectual property rights protection should be to reduce IPRs to protectable commodities solely for the purpose of commercial exploitation. Protection of IPRs might be secured for any number of reasons, including the recognition of the right for ownership of those rights to be retained within the community. IPRs thus have the capacity to function both as shields and swords. Such weaponry however brings with it obligations: “With great power, comes great responsibility.”
Keri Johnston and Marion Heathcote are the guest editors of the Journal of Intellectual Property Law & Practice special issue on “The Quest for ‘Real’ Protection for Indigenous Intangible Property Rights”. The authors would like to thank Mekhala Chaubal, student-at-law, for her assistance. It is reassuring to know that a new generation of lawyers is willing and able. Keri AF Johnston is managing partner of Johnston Law in Toronto and Marion Heathcote is a partner with Davies Collison Cave in Sydney.
The Journal of Intellectual Property Law & Practice (JIPLP) is a peer-reviewed journal dedicated to intellectual property law and practice. Published monthly, coverage includes the full range of substantive IP topics, practice-related matters such as litigation, enforcement, drafting and transactions, plus relevant aspects of related subjects such as competition and world trade law.
Over the past few months, the Oral History Review has become rather demanding. In February, we asked readers to experiment with the short form article. A few weeks ago, our upcoming interim editor Dr. Stephanie Gilmore sent out a call for papers for our special Winter/Spring 2016 issue, “Listening to and for LGBTQ Lives.” Now, we’d like you to also take over our OUPBlog posting duties.
Well, “take over” might be hyperbole. However, we have always hoped to use this and our other social media platforms to encourage discussion within the oral history discipline, and to spark exchanges with those working with oral histories outside the field. We like to imagine that through our podcasts, interviews and book reviews, we have brought about some conversations or inspired new ways to approach oral history. However, we can do better.
Towards that end, we are putting out a “call for blog posts” for this summer. These posts should fall in line with the aforementioned goal to promote the engagement between and beyond those in oral history field. Like our hardcopy counterpart, we are especially interested in posts that explore oral history in the digital age. As you might have gathered, we thrive on puns and the occasional, outdated pop culture reference. These are even more appreciated when coupled with clean and thoughtful insights into oral history work.
We are currently looking for posts of 500-800 words, or 15-20 minutes of audio or video. However, because we operate on the wonderful worldwide web, we are open to negotiation in terms of media and format. We should also stress that while we welcome posts that showcase a particular project, we do not want to serve as a landing page for anyone’s Kickstarter.
Caitlin Tyler-Richards is the editorial/media assistant at the Oral History Review. When not sharing profound witticisms at @OralHistReview, Caitlin pursues a PhD in African History at the University of Wisconsin-Madison. Her research revolves around the intersection of West African history, literature and identity construction, as well as a fledgling interest in digital humanities. Before coming to Madison, Caitlin worked for the Lannan Center for Poetics and Social Practice at Georgetown University.
The cosmology community is abuzz with news from the BICEP2 experiment of the discovery of primordial gravitational waves, through their signature in the cosmic microwave background. If verified, this will be a clear indication that the very young universe underwent a period of acceleration, known as cosmic inflation. During this period, it is thought that the seeds were laid down for all the structures to form later in the universe, including galaxies, stars, and indeed ourselves.
The cosmic microwave background (CMB) is radiation left over from the Hot Big Bang, first discovered in 1965 and corresponding to a temperature only about 2.7 degrees above absolute zero. In 1992 the COBE satellite made the first detection of temperature variations in the CMB, and successive experiments, including satellite missions WMAP and Planck, have been accurately measuring these variations which have become the key tool to understanding our universe.
In addition to its brightness, radiation can have a polarisation, meaning that the electromagnetic oscillations that make up the light have a preferred orientation, e.g. horizontal or vertical. This same effect is used in 3D cinemas, where light of different polarisations reaches your left or right eye, the lenses in the glasses blocking out one or other from each eye. In the CMB the polarisation signal is very small, and moreover comes in two types, known as E-mode and B-mode polarisation. The second of these, corresponding to a twisting pattern of polarisation on the sky, is what BICEP2 has discovered for the first time. This twisting pattern is the signature of gravitational waves, created in the early universe and whose presence causes space-time itself to ‘wobble’ as the light from the CMB crosses the Universe.
The Dark Sector Laboratory at Amundsen-Scott South Pole Station. At left is the South Pole Telescope. At right is the BICEP2 telescope. Photo by Amble, 2009. CC-BY-SA-3.0 via Wikimedia Commons.
The BICEP2 team have been working for several years with the single aim of measuring this signal; inflation predicted it to be there but said nothing about its strength. Based at the South Pole, where the unusually clear and dry air creates an ideal viewpoint for accurate measurement, three years of observations were carried out from 2010 to 2012. Their experiment differs from others measuring the CMB polarisation because they focussed on covering as large an area of the sky as possible, at relatively moderate angular resolution, in order to specifically target the B-mode signal.
While the discovery of gravitational waves had been widely rumoured in the days leading up to the announcement, including even the size of the measured signal, what took everyone’s breath away was the significance of the signal. At 6 to 7-sigma, it exceeds even the gold-standard 5-sigma used at CERN for the Higgs particle detection. Most would have expected something tentative, 2 or 3-sigma perhaps. We will want verification, of course, especially because the use of just a single wavelength of observation (the microwave equivalent of using just one colour of the rainbow) means the experiment is a little vulnerable to radiation from sources other than the CMB, such as intervening galaxies or emission caused by particles spiralling around our own Milky Way’s magnetic fields. The strength of the detection suggests that will not be an issue, but for sure we want to see independent confirmation by other experiments and at other wavelengths. Some may have announcements even before the end of the year, including the Planck satellite mission.
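For readers who want to translate the sigma levels quoted above into probabilities: an n-sigma result corresponds to the chance that pure noise would produce a signal at least that large, computed from the tail of a normal distribution. A quick sketch, using the one-tailed convention common in particle physics (the function name is ours, not from any of the papers):

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """One-tailed p-value for an n-sigma excess under a Gaussian null hypothesis."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

# The 'gold standard' 5-sigma is roughly a 1-in-3.5-million chance of a fluke;
# the 6- to 7-sigma BICEP2 significance corresponds to far smaller odds still.
for s in (3, 5, 6, 7):
    print(f"{s} sigma -> p ~ {sigma_to_p(s):.1e}")
```

This is why a 6 to 7-sigma detection, rather than a tentative 2 or 3-sigma hint, was so striking: each additional sigma shrinks the fluke probability by orders of magnitude.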
The response of the cosmology community to BICEP2 has been staggeringly swift. Early communication and discussion was already underway during the web-streamed BICEP2 press conference, via a Facebook discussion group set up by Scott Dodelson at Fermilab. The first science papers using the results were already appearing in the arXiv.org database within the next couple of days (including these ones by me!). By the end of March, only two weeks after the announcement, there were already almost 50 available papers with ‘BICEP’ in the title, written by researchers all around the world. Papers on BICEP2 are clearly going to be a main theme for astronomy journals, including MNRAS, for the remainder of the year as we all try to figure out what, in detail, it all means.
Andrew Liddle is Professor of Theoretical Astrophysics at the Institute for Astronomy, University of Edinburgh. He is an editor of the OUP astronomy journal Monthly Notices of the Royal Astronomical Society.
Monthly Notices of the Royal Astronomical Society (MNRAS) is one of the world’s leading primary research journals in astronomy and astrophysics, as well as one of the longest established. It publishes the results of original research in astronomy and astrophysics, both observational and theoretical.
Inequality has been on the rise in all the advanced democracies in the past three or four decades; in some cases dramatically. Economists already know a great deal about the proximate causes. In the influential work by Goldin and Katz on “The Race between Education and Technology”, for example, the authors demonstrate that the rate of “skill-biased technological change” — which is economist speak for changes that disproportionately increase the demand for skilled labor — has far outpaced the supply of skilled workers in the US since the 1980s. This rising gap, however, is not due to an acceleration of technological change, but rather to a slowdown in the supply of skilled workers. Most importantly, a cross-national comparison reveals that other countries have continued to expand the supply of skills, i.e. the trend towards rising inequality is less pronounced in these cases.
The narrow focus of economists on the proximate causes is not sufficient, however, to fully understand the dynamic of rising inequality and its political and institutional foundations. In particular, skill formation regimes and cross-country differences in collective wage bargaining influence the quantity and quality of skills and hence also differences in inequality. Generally speaking, countries with coordinated wage-setting and highly developed vocational education and training (VET) systems respond more effectively to technology-induced changes in demand than systems without such training systems.
Yet, there is a great deal of variance in the extent to which this is true, and one needs to be attentive to the broader organization of political institutions and social relations to explain this variance. One of the recurrent themes is the growing socioeconomic differentiation of educational opportunity. Countries with significant private financing of education, for example, induce high-income groups to opt out of the public system and into high-quality but exclusive private education. As they do, some public institutions try to compete by raising tuition and fees, and with middle- and upper-middle classes footing more of the bill for their own children’s education, support for tax-financed public education declines.
This does not happen everywhere. In countries that inherited an overwhelmingly publicly-financed system only the very rich can opt out, and the return on private education is lower because of a flatter wage structure. In this setting the middle and upper-middle classes, deeply concerned with the quality of education, tend to throw their support behind improving the public system. Yet, they will do so in ways that may reproduce class-based differentiation within the public system. Based on an analysis of the British system, one striking finding is that a great deal of differentiation happens because high-educated, high-income parents, who are most concerned with the quality of the education of their children, move into good school districts and bid up housing prices in the process. As property prices increase, those from lower socio-economic strata are increasingly shut out from the best schools.
Even in countries with less spatial inequality, in part because of a more centralized provision of public goods, socioeconomic inequality may be reproduced through early tracking of students into vocational and academic lines. This is because the choice of track is known to be heavily dependent on the social class of parents. This is reinforced by the decisions of firms to offer additional training to their best workers, which disadvantages those who start at the bottom. There is also evidence that such training decisions discriminate against women, because firm-based training requires long tenures and women are less likely to have uninterrupted careers. So strong VET systems, although they tend to produce less wage inequality, can undermine intergenerational class mobility and gender equality.
The rise of economic inequality also has consequences for politics. While democratic politics is usually seen as compensating for market inequality, economic and political inequality in fact tend to reinforce each other. Economic and educational inequality destroy social networks and undermine political participation in the lower half of the distribution of incomes and skills, and this undercuts the incentives of politicians to be attentive to their needs. Highly segmented labor markets with low mobility also undermine support for redistribution because pivotal “insiders” are not at risk. Labor market “dualism” therefore delimits welfare state responsiveness to unemployment and rising inequality. In a related finding, the winners of globalization often oppose redistribution, in part because they are more concerned with competitiveness and how bloated welfare states may undermine it.
Economic, educational, and political inequalities thus also tend to reinforce each other. But the extent and form of such inequality vary a great deal across countries. This special issue helps explain why, and suggests the need for an interdisciplinary approach that is attentive to national institutional and political context.
Socio-Economic Review aims to encourage work on the relationship between society, economy, institutions and markets, moral commitments and the rational pursuit of self-interest. The journal seeks articles that focus on economic action in its social and historical context. In broad disciplinary terms, papers are drawn from sociology, political science, economics and management, and policy sciences.
Subscribe to the OUPblog via email or RSS.
Subscribe to only business and economics articles on the OUPblog via email or RSS.
Image credit: Laptop in classic library. By photogl, via iStockphoto.
Scientists, using epidural stimulation over the lumbar spinal cord, have enabled four completely paralyzed men to voluntarily move their legs.
Kent Stephenson is one of the four. This stimulation experiment wasn’t supposed to work for him; he is what clinicians call an AIS A. This is a measure of disability, formally the American Spinal Injury Association Impairment Scale (AIS), that rates impairment from A (no motor or sensory function) to D (ability to walk). Kent, a mid-thoracic paraplegic, has what is considered a “complete” injury. Kent’s doctors told him it was a waste of time to pursue any therapy; per the dogma, A’s don’t get better. Well, the young Texan, who was hurt five years ago on a dirt bike, didn’t get the message. He likes to cite a fortune cookie he got shortly after his injury. It said, “Everything’s impossible until somebody does it.”
Kent had the stimulator implanted. A few days later they turned it on. No one expected it to do anything. Researchers were only looking for a baseline measurement to compare Kent’s function later, after several weeks of intense Locomotor Training (guided weight supported stepping on a treadmill).
Kent tells the story: “The first time they turned the stim on I felt a charge in my back. I was told to try to pull my left leg back, something I had tried without success many times before. So I called it out loud, ‘left leg up.’ This time it worked! My leg pulled back toward me. I was in shock; my mom was in the room and was in tears. Words can’t describe the feeling – it was an overwhelming happiness.”
Kent was the second of the four. Rob Summers, three years ago, was the first to pioneer the concept that complete doesn’t mean what it used to; epidural stimulation could make the spinal cord more receptive to nerve signals coming from the senses or the brain. Seven months after he was implanted with a stimulator unit, he initiated voluntary movements of his legs. The other two subjects, Andrew Meas and Dustin Shillcox, also started moving within days of the implant. Summers probably could have initiated movement early on too, but the research team didn’t test for it – they had no reason to believe he could do it.
Here’s lead author of the Brain paper, Claudia Angeli, Ph.D., to explain. She is a senior researcher at the Human Locomotor Research Center at Frazier Rehab Institute, and an assistant professor at the University of Louisville’s Kentucky Spinal Cord Injury Research Center (KSCIRC).
“First, in the Lancet paper [regarding the first stimulation subject] it was just Rob, just one person. Yes, it was proof of concept, yes it went great. But now we are talking about four subjects. That’s four out of four showing functional recovery. What’s more, two of the four are categorized as AIS A – no motor or sensory function below the lesion level, with no chance for any recovery.”
The other two patients are classified AIS B: no motor function below the lesion but with some sensory function.
Left to right are Andrew Meas, Dustin Shillcox, Kent Stephenson, and Rob Summers, the first four to undergo task-specific training with epidural stimulation at the Human Locomotion Research Center laboratory, Frazier Rehab Institute, as part of the University of Louisville’s Kentucky Spinal Cord Injury Research Center, Louisville, Kentucky.
How does this work? The epidural stimulation supplies a continuous electrical current, at varying frequencies and intensities, to specific locations on the lower part of the spinal cord. A 16-electrode spinal cord stimulator, commonly used to treat pain, is implanted over the spinal cord at T11-L1, a location that corresponds to the complex neural networks that control movement of the hips, knees, ankles and feet.
The leg muscles are not stimulated directly. The epidural stimulation apparently awakens circuitry in the spinal cord. “In simple terms,” says Dr. Angeli, “we are raising the excitability or gain of the spinal cord. Let’s say you have an intent to move. That signal originates in the brain and gets through to the spinal cord but the cord is not aware enough or excited enough to do anything with that intent. When we add the stimulation, the spinal cord networks are made a little more aware, so when the intent comes through, the cord is able to interpret it and movement becomes voluntary.”
The theory behind spinal cord stimulation is that these spinal cord networks are smart: they can remember and they can learn. The current work builds on decades of research. Susan Harkema, Ph.D. (University of Louisville) and V. Reggie Edgerton, Ph.D. (University of California Los Angeles) have led the effort. Dr. Harkema is Principal Investigator for the epidural stimulation projects and Director of the Christopher & Dana Reeve Foundation’s NeuroRecovery Network. Dr. Edgerton, a member of the Reeve Foundation’s International Research Consortium on Spinal Cord Injury, is a basic scientist whose work attempts to understand human locomotion and how the brain and spinal cord adapt and change in response to various interventions, including activity, training and stimulation.
Dr. Harkema says plans are in place to implant eight more patients in the next year. Four will mirror the first group, matched by age, level of injury, time since injury, etc. (Gender, by the way, is not a factor; men with spinal cord injury happen to outnumber women four to one.) Another four patients will be stimulated specifically to control heart rate and blood pressure. Dr. Harkema said one of the first four had issues with low blood pressure. When the stimulator was on, though, the pressure was raised, even without contracting any muscles. They want to assess that sort of autonomic recovery in greater detail.
The research team is aware that epidural stimulation can enhance autonomic function in paralyzed subjects; indeed, the first four subjects report improved temperature control, plus better bowel, bladder, and sexual function. Data is being collected to present that part of the stimulation story in another paper.
Does this mean anyone with a spinal cord injury with an implanted stimulator can move? Not necessarily, says Dr. Harkema. “But what I want people to know about this study is that we need to change our attitude about what a complete injury is, challenge the dogma that in AIS A patients there is no possibility of recovery. The view is that it is not a worthwhile investment to offer even intense rehabilitation to people with complete injuries. They’re not going to recover. But the message now is that there is a tremendous amount available. These individuals have potential for recoveries that will improve their health and quality of life. Now we have a fundamentally new strategy that can dramatically affect recovery of voluntary movement in those with complete paralysis, even years after injury.”
Brain provides researchers and clinicians with the finest original contributions in neurology. Leading studies in neurological science are balanced with practical clinical articles. Its citation rating is one of the highest for neurology journals, and it consistently publishes papers that become classics in the field.
We are now entering the month of April 2014—a time for reflection, empathy, and understanding for anyone in or involved with Rwanda. Twenty years ago, Rwandan political and military leaders initiated a series of actions that quickly turned into one of the 20th century’s greatest mass violations of human rights.
As we commemorate the genocide, our empathy needs to extend first to survivors and victims. Many families were destroyed in the genocide. Many survivors suffered enormous hardships to survive. Whatever our stand on the current state of affairs in Rwanda, we have to be enormously cognizant of the pain many endured.
In this brief post, I address three issues that speak to Rwanda today. I do so with trepidation, as discussions about contemporary Rwanda are often polarized and emotionally charged. Even though I am critical, I shall try to raise concerns with respect and recognition that there are few easy solutions.
My overall message is one of concern. At one level, Rwanda is doing remarkably and surprisingly well—in terms of security, the economy, and non-political aspects of governance. However, deep resentments and ethnic attachments persist, hardships and significant inequality remain. While it is difficult to know what people really feel, my general conclusion is that the social fabric remains tense beneath a veneer of good will. A crucial issue is that the political system is authoritarian and designed for control rather than dialogue. It is also a political system that many Rwandans believe is structured to favor particular groups over others. Fostering trust in such a political context is highly unlikely.
I also conclude that a “genocide lens” has limits for the objective of social repair. The genocide lens has been invaluable for achieving international recognition of what happened in 1994. But that lens leads to certain biases about Rwanda’s history and society that limit long-term social repair in Rwanda.
Rwandan Genocide Memorial. 7 April 2011. El Fasher: The Rwandan community in UNAMID organized the 17th Commemoration of the 1994 Genocide against Tutsi hold in Super Camp – RWANBATT 25 Military Camp (El Fasher). Photo by Albert Gonzalez Farran / UNAMID. CC-BY-NC-ND-2.0 via UNAMID Flickr.
During the past 20 years, a sea change in international recognition has occurred. Fifteen years ago, very few people knew globally that genocide took place in Rwanda. Today, the “Rwandan Genocide” is widely recognized as a world historical event. That global recognition is an achievement. We also know a great deal more about the causes and dynamics of the genocide itself.
However, several important controversies and unanswered questions remain. One is who killed President Habyarimana on 6 April 1994. Another is how to conceptualize when the plan for genocide began. Some date the plan for genocide to the late 1950s; others to the 1990s; still others to April 1994. A third question is how one should conceptualize RPF responsibility. Some depict the former rebels as saviors who stopped the genocide. Others argue that their actions were integral to the dynamics that led to genocide. And there are other issues as well, including how many were killed. Each of these issues remains intensely debated and hopefully will be the subject of open-minded inquiry in the years to come.
Contemporary Rwanda is at one level inspiring. The government is visionary, ambitious, and accomplished. The plan is to transform the society, economy, and culture—and to wean the state from foreign aid. The government has successfully introduced major reforms. The tax system is much improved. Public corruption is virtually absent. Remarkable results in public health and the economy have been achieved. Public security is also dramatically improved.
But there is a dark side. Most importantly, the government is repressive. The government seeks to exercise control over public space, especially around sensitive topics—in politics, in the media, in the NGO sector, among ordinary citizens, and even among donors. The net impact is the experience of intimidation and, as a friend aptly put it, many silences.
That brings me to the delicate question of reconciliation. Reconciliation is an imprecise concept for what I mean. What matters is the quality of the social fabric in Rwanda—the trust between people—and the quality of state-society relations.
A central pillar in Rwanda’s social reconstruction process has been justice. Much is written on gacaca, the government’s extraordinary program to transform a traditional dispute settlement process into a country-wide, decade-long process to account for genocide crimes. Gacaca brought some survivors satisfaction at finally seeing the guilty punished. Gacaca spawned some important conversations, led to important revelations, and prompted some sincere apologies.
But there were also a lot of problems. There were lies on all sides. There were manipulations of the system. Some apologies were pro forma. And there were weak protections for witnesses and defendants alike. In many cases, justice was not done. But to my mind the bigger issue is that gacaca reinforced the idea that post-genocide Rwanda is an environment of winners and losers.
The entire justice process excluded non-genocide crimes, in particular atrocities that the RPF committed as it took power, in the northwest in the late 1990s, and in Congo, where a lot of violence occurred. This meant that whole categories of suffering in the long arc of the 1990s and 2000s were neither recognized nor accounted for. Justice was one-sided. Many Rwandans therefore experience it as political justice that serves the RPF’s goal of retaining power.
The second issue is the scale. A million citizens, primarily Hutu, were accused. The net effect is that the legal process served to politically demobilize many Hutus, as Anu Chakravarty has written. Having watched the process of rebuilding social cohesion and state-society relations after atrocity in several places, I have come to the conclusion that inclusion is vitally important.
If states privilege justice as a mechanism for social healing, judicial processes should recognize the multi-sided nature of atrocity. All groups that suffered from atrocity should be able to give voice to their experiences and, if punitive measures are on the table, seek accountability. Otherwise, in the long run, justice looks like a charade, one that ultimately may undermine the memories it is designed to preserve.
Here is where the “genocide lens” did not serve Rwanda well. A genocide lens narrates history as a story between perpetrators and victims. Yet the Rwandan reality is much more complicated.
Scott Straus is Professor of Political Science and International Studies at UW-Madison. Scott specializes in the study of genocide, political violence, human rights, and African politics. His published work includes several books on Rwanda and articles in African Affairs. A longer version of this article was presented at the “Rwanda Today: Twenty Years after the Genocide” event at Humanity House in The Hague on 3 April 2014. The author wishes to thank the organizers of that event.
To mark the 20th anniversary of the genocide, African Affairs is making some of their best articles on Rwanda freely available. Don’t miss this opportunity to read about the legacy of genocide and Rwandan politics under the RPF.
African Affairs is published on behalf of the Royal African Society and is the top ranked journal in African Studies. It is an inter-disciplinary journal, with a focus on the politics and international relations of sub-Saharan Africa. It also includes sociology, anthropology, economics, and to the extent that articles inform debates on contemporary Africa, history, literature, art, music and more.
We invite you to celebrate with us by submitting your own art to our Street Photography Contest. According to Grove Art Online, street photography is:
Genre of photography that can be understood as the product of an artistic interaction between a photographer and an urban public space. It is distinguished from documentary photography in that the photographer is not necessarily motivated by the evidentiary value or socio-political function of the resulting photographs. Unlike photojournalism, a street photographer’s images are not intended to illustrate a news story or other narrative. Instead, their primary goal is expressive and communicates a subjective impression of the experience of everyday life in a city. Thus neither the locale nor the subject-matter defines street photography; it is the photographer’s approach to the medium and movement through public space that differentiate street photography from related forms of photography.
Eugène Atget. The steeple of the church before the restoration in 1913. Collections Department of the Ecole Nationale Supérieure des Beaux-Arts. Public domain via Wikimedia Commons.
Live in a city? Have a camera? Send us your best shots.
To submit, please email groveartmarketing[at]oup[dot]com, with “photography competition” in the subject line. Please include a caption describing your work in the body of the email, and attach your image (maximum of 3MB). Competition will close on 28 April 2014. Please read our terms and conditions before entering the competition.
Victoria Davis works in marketing for Oxford University Press, including Grove Art and Oxford Art Online.
Oxford Art Online offers access to the most authoritative, inclusive, and easily searchable online art resources available today. Through a single, elegant gateway users can access — and simultaneously cross-search — an expanding range of Oxford’s acclaimed art reference works: Grove Art Online, the Benezit Dictionary of Artists, the Encyclopedia of Aesthetics, The Oxford Companion to Western Art, and The Concise Oxford Dictionary of Art Terms, as well as many specially commissioned articles and bibliographies available exclusively online.
One of the most prominent features of jurisdictional rules is a focus on the location of actions. For example, the extraterritorial reach of data privacy law may be decided by reference to whether goods or services were offered to individuals in the EU.
Already in the earliest discussions of international law and the Internet it was recognised that this type of focus on the location of actions clashes with the nature of the Internet – in many cases, locating an action online is a clumsy legal fiction burdened by a great degree of subjectivity.
I propose an alternative: a doctrine of ‘market sovereignty’ determined by reference to the effective reach of ‘market destroying measures’. Such a doctrine can both delineate, and justify, jurisdictional claims in relation to the Internet.
It is commonly noted that the real impact of jurisdictional claims in relation to the Internet is severely limited by the intrinsic difficulty of enforcing such claims. For example, Goldsmith and Wu note that:
“[w]ith few exceptions governments can use their coercive powers only within their borders and control offshore Internet communications only by controlling local intermediaries, local assets, and local persons” (emphasis added)
However, I would advocate the removal of the word ‘only’. What can unflatteringly be called a cliché is in fact a highly useful description of a principle well established at least 400 years ago.
The word ‘only’ gives the impression that such powers are of limited significance for the overall question, which is misleading. The power governments have within their territorial borders can be put to great effect against offshore Internet communications. A government determined to have an impact on foreign Internet actors that are beyond its directly effective jurisdictional reach may introduce what we can call ‘market destroying measures’ to penalise the foreign party. For example, it may introduce substantive law allowing its courts to, due to the foreign party’s actions and subsequent refusal to appear before the court, make a finding that:
that party is not allowed to trade within the jurisdiction in question;
debts owed to that party are unenforceable within the jurisdiction in question; and/or
parties within the control of that government (e.g. residents or citizens) are not allowed to trade with the foreign party.
In light of such market destroying measures, the enforceability of jurisdictional claims in relation to the Internet may not be as limited as it seems at first glance.
In this context, it is also interesting to connect to the thinking of 17th century legal scholars, exemplified by Hugo de Groot (better known as Hugo Grotius). Grotius stated that:
“It seems clear, moreover, that sovereignty over a part of the sea is acquired in the same way as sovereignty elsewhere, that is, [...] through the instrumentality of persons and territory. It is gained through the instrumentality of persons if, for example, a fleet, which is an army afloat, is stationed at some point of the sea; by means of territory, in so far as those who sail over the part of the sea along the coast may be constrained from the land no less than if they should be upon the land itself.”
A similar reasoning can usefully be applied in relation to sovereignty in the context of the Internet. Instead of focusing on the location of persons, acts or physical things – as is traditionally done for jurisdictional purposes – we ought to focus on marketplace control – on what we can call ‘market sovereignty’. A state has market sovereignty, and therefore justifiable jurisdiction, over Internet conduct where it can effectively exercise ‘market destroying measures’ over the market that the conduct relates to. Importantly, in this sense, market sovereignty both delineates, and justifies, jurisdictional claims in relation to the Internet.
The advantage that market destroying measures have over traditional enforcement attempts should escape no one. Rather than interfering with a business’s operations worldwide in case of a dispute, market destroying measures affect only the offender’s business on the market in question. It is thus a much more sophisticated and targeted approach. Where a foreign business finds compliance with a court order untenable, it will simply have to be prepared to abandon the market in question, but remains free to pursue business elsewhere. Thus, an international agreement under which states undertake to apply only market destroying measures, and not seek further enforcement, would address the often excessive threat of arrests of key figures, such as CEOs, of offending globally active Internet businesses.
Professor Dan Jerker B. Svantesson is Managing Editor of the journal International Data Privacy Law. He is author of Internet and E-Commerce Law, Private International Law and the Internet, and Extraterritoriality in Data Privacy Law. Professor Svantesson is a Co-Director of the Centre for Commercial Law at the Faculty of Law (Bond University) and a Researcher at the Swedish Law & Informatics Research Institute, Stockholm University.
Combining thoughtful, high level analysis with a practical approach, International Data Privacy Law has a global focus on all aspects of privacy and data protection, including data processing at a company level, international data transfers, civil liberties issues (e.g., government surveillance), technology issues relating to privacy, international security breaches, and conflicts between US privacy rules and European data protection law.
Untangling recent and still-unfolding events in Ukraine is not a simple task. The western news media has been reasonably successful in acquainting its consumers with events, from the fall of Yanukovich on the back of intensive protests in Kiev by those angry at his venality and at his signing a pact with Russia over one with the EU, to the very recent moves by Russia to annex Crimea.
However, as is perhaps inevitable where space is compressed, messages brief and time short, a habit of talking about Ukraine in binaries seems to be prevalent. Superficially helpful, it actually hinders a deeper understanding of the issues at hand – and any potential resolution. Those binaries, encouraged to some extent by the nature of the protests themselves (‘pro-Russian’ or ‘pro-EU/Western’), belie complex and important heterogeneities.
Ironically, the country’s name, taken by many to mean ‘borderland’, is one such index of underlying complexity. Commentators outside the mainstream news, including specialists like Andrew Wilson, have long been vocal in pointing out that the East-West divide is by no means a straightforward geographic or linguistic diglossia, drawn with a compass or ruler down the map somewhere east of Kiev, with pro-Western versus pro-Russian sentiment ‘mapped’ accordingly. Being a Russian-speaker is not automatically coterminous with following a pro-Russian course for Ukraine; and the reverse is also sometimes true. In a country with complex legacies of ethnic composition and ruling regime (western regions, before incorporation into the USSR, were ruled at different times in the modern period by Poland, Romania and Austria-Hungary), local vectors of identity also matter, beyond (or indeed, within) the binary ethnolinguistic definition of nationality.
The Bridge to the European Union from Ukraine to Romania. Photo by Madellina Bird. CC BY-NC-SA 2.0 via madellinabird Flickr.
Just as slippery is the binary used in Russian media, which portrays the old regime as legitimately elected and the new one as basically fascist, owing to its incorporation of Ukrainian nationalists of different stripes. First, this narrative supposes that being legitimately elected negates Yanukovich’s anti-democratic behaviours since that election, including the imprisonment of his main political opponent, Yulia Tymoshenko (whatever the ambivalence of her own standing in the politics of Ukraine). Second, the warnings about Ukrainian fascism call to mind George Bernard Shaw’s comment about half-truths as being especially dangerous. As well-informed Ukraine watchers like Andreas Umland and others have noted, overstating the presence of more extreme elements sets up another false binary as a way of delegitimising the new regime in toto. This is certainly not to say that Ukraine’s nationalist elements should escape scrutiny, and here we have yet another warning against false binaries: EU countries may themselves be far from immune to voting in the far right at the fringes, but they still may want to keep eyes and ears open as to exactly what some of Ukraine’s coalition partners think and say about its history and heroes, the Jews, and much more.
So much for seeing the bigger picture, but events may well still take turns that few historians could predict with detailed accuracy. What we can see, at least, from the perspective of a maturing historiographic canon in the west, is that Ukraine is a country that demands a more sophisticated take on identity politics than the standard nationalist discourse allows – a discourse that has been in existence since at least the late nineteenth century, and yet one which the now precarious-seeming European idea itself was set up to moderate.
First published in January 1886, The English Historical Review (EHR) is the oldest journal of historical scholarship in the English-speaking world. It deals not only with British history, but also with almost all aspects of European and world history since the classical era.
When Elinor Ostrom visited Lafayette College in 2010, the number of my non-political science colleagues who announced familiarity with her work astonished me. Anthropologists, biologists, economists, engineers, environmentalists, historians, philosophers, sociologists, and others flocked to see her.
Elinor’s work cut across disciplines and fields of governance because she deftly employed and developed interrelated concepts having applications in multiple settings. A key foundation of these concepts is federalism—an idea central also to the work of her mentor and husband, Vincent Ostrom.
Vincent understood federalism to be a covenantal relationship that establishes unity for collective action while preserving diversity for local self-governance by constitutionally uniting separate political communities into a limited but encompassing political community. Power is divided and shared between concurrent jurisdictions—a general government having certain nationwide duties and multiple constituent governments having broad local responsibilities. These jurisdictions both cooperate and compete. The arrangement is non-hierarchical and animated by multiple centers of power, which, often competing, exhibit flexibility and responsiveness.
From this foundation, one can understand why the Ostroms embraced the concept of polycentricity advanced in Michael Polanyi’s The Logic of Liberty (1951), namely, a political or social system consisting of many decision-making centers possessing autonomous, but limited, powers that operate within an encompassing framework of constitutional rules.
This general principle can be applied to the global arena where, like true federalists, the Ostroms rejected the need for a single global institution to solve collective action problems such as environmental protection and common-pool resource management. They advocated polycentric arrangements that enable local actors to make important decisions as close to the affected situation as possible. Hence, the Ostroms also anticipated the revival of the notion of subsidiarity in European federal theory.
But polycentricity also applies to small arenas, such as irrigation districts and metropolitan areas. Elinor and Vincent worked on water governance early in their careers, and both argued that metropolitan areas are best organized polycentrically because urban services have different economies of scale, large bureaucracies have inherent pathologies, and citizens are often crucial in co-producing public services, especially policing (the subject of empirical studies by Elinor and colleagues).
The Ostroms valued largely self-organizing social systems that border on but do not topple into sheer anarchy. Anarchy is a great bugaboo of centralists, who de-value the capacity of citizens to organize for self-governance. On the centralist view, citizens without expert instruction from above are headless chickens. But this notion exposes citizens to the depredations of vanguard parties and budget-maximizing bureaucrats.
This is why Vincent placed Hamilton’s famous statement in Federalist No. 1 at the heart of his work, namely, “whether societies of men are really capable or not, of establishing good government from reflection and choice” rather than “accident and force.” The Ostroms expressed abiding confidence in the ability of citizens to organize for self-governance in multi-sized arenas if given opportunities to reflect on their common dilemmas, make reasoned constitutional choices, and acquire resources to follow through with joint action.
Making such arrangements work also requires what Vincent especially emphasized as covenantal values, such as open communication, mutual trust, and reciprocity among the covenanted partners. Thus, polycentric governance, like federal governance, requires both good institutions and healthy processes.
As such, the Ostroms also placed great value on Alexis de Tocqueville’s notion of self-interest rightly understood. Indeed, it is the process of self-organizing and engaging one’s fellow citizens that helps participants to understand their self-interest rightly so as to act in collectively beneficial ways without central dictates.
Consequently, another major contribution of the Ostroms was to point out that governance choices are not limited to potentially gargantuan government regulation or potentially selfish privatization. There is a third way grounded in federalism.
John Kincaid is the Robert B. and Helen S. Meyner Professor of Government and Public Service at Lafayette College and Director of the Meyner Center for the Study of State and Local Government. He served as Associate Editor and Editor of Publius: The Journal of Federalism, and has written and lectured extensively on federalism and state and local government.
More on the applications and reflections on the work of Elinor and Vincent Ostrom can be found in this recently released special issue from Publius: The Journal of Federalism. In addition, Publius has also just released a free virtual collection of the most influential articles written by the Ostroms and published in Publius over the past 23 years.
Publius: The Journal of Federalism is the world’s leading journal devoted to federalism. It is required reading for scholars of many disciplines who want the latest developments, trends, and empirical and theoretical work on federalism and intergovernmental relations.
This week, managing editor Troy Reeves speaks with scholar and artist Abbie Reese about her recently published book, Dedicated to God: An Oral History of Cloistered Nuns. Through an exquisite blend of oral and visual narratives, Reese shares the stories of the Poor Clare Colettine Order, a multigenerational group of cloistered contemplative nuns living in Rockford, Illinois. Among other issues, Reese’s photographs and interviews raise valuable questions about collective memory formation and community building in a space marked by anonymity and silence.
A metal grille is the literal and symbolic marker of the nuns’ vow of enclosure. The Poor Clare Colettine nuns film Abbie Reese for a collaborative ethnographic documentary. Courtesy of Abbie Reese.
In her interview with Troy, Reese talks about how popular culture sparked her interest in nuns and what it was like to work with the real women of the Poor Clare Colettine Order. Reese also discusses how she came to incorporate oral history into her work as a visual artist, as well as her next project.
Reese was also kind enough to share an excerpt from an interview with Sister Mary Nicolette. When sending the clip, Reese noted, “Her voice is hoarse from the interview because the nuns observe monastic silence, speaking only what is necessary to complete a task.”
Poor Clare Colettine nuns return to the monastery after a funeral service on the premises, in 2010, for a cloistered nun who served in WWII; Sister Ann Frances joined an active order of nuns before she transferred to the cloistered contemplative order at the Corpus Christi Monastery. Courtesy of Abbie Reese.
In keeping with the nuns' vow of enclosure and to limit the need for workers to enter the cloistered monastery, Poor Clare Colettine nuns undertake repairs and maintenance themselves, including cleaning the boiler while wearing the full habit. Courtesy of Abbie Reese.
The standard arguments against monetary policy responding to asset prices are the claims that it is not feasible to identify asset price bubbles in real time, and that the use of interest rates to restrain asset prices would have big adverse effects on real economic activity. So what happened with central banks and house prices prior to the financial crisis of 2007-2008?
Looking in detail at what the Federal Reserve Board (Fed), the European Central Bank (ECB) and the Bank of England (BoE) thought and said about house prices from the beginning of the 2000s, it appears that the Fed was so convinced of the standard line (monetary policy should not respond to asset prices but just stand ready to mop up if a bubble bursts) that it did not allocate much time or resources to discussing what was happening.
The BoE, on the other hand, while equally committed to that orthodoxy, felt the need to argue it out, at least up till 2005, and a number of speeches by Steve Nickell and others explained why they believed that the rises in house prices were a response to changes in the fundamentals (notably, the much lower levels of inflation and interest rates from the mid-1990s) and were therefore not a cause for concern. But after 2005 the BoE seems to have lost interest in the issue even to that extent.
Bank of England headquarters, London
The ECB was in principle more willing to consider the issue and to think about a response, but developments were very different between euro area countries (with Spain and Ireland experiencing strong house price booms but Germany and Austria seeing almost no change in house prices), and this would seem to be the main reason why the ECB never raised interest rates to restrain the house price booms in the former (which it correctly identified).
Since the crisis the Fed and the BoE have produced analyses suggesting that monetary policy bore almost no responsibility for the house price rises, on the one hand, and that using interest rates to restrain them would have caused sharp downward pressures on income and employment, on the other. The trouble with these analyses is that they consider only the effect of interest rates being a little higher before the crisis, with everything else equal. But of course the advocates of ‘leaning against the wind’ (the minority view which has favoured using interest rates to head off large asset price booms) have always emphasised that the existence of such a policy needs to be known in advance, so that it feeds into the public’s expectations of asset prices and helps to stabilise them. The absence of any such expectations effect in these analyses means that they are wide open to the Lucas Critique, and their results cannot be taken as an argument against leaning against the wind in this case.
What this all amounts to is our conclusion that, because they failed to monitor developments in the housing markets adequately, the central banks of the United States and the United Kingdom, in particular, cannot reasonably claim to have done all they could to mitigate the house price movements that were crucial to the incidence and depth of the financial crisis.
The main outcome of the crisis for the operations and strategy of monetary policy so far has been the creation of instruments and arrangements for ‘macro-prudential’ policies, which will indeed offer central banks some additional ways of addressing problems in asset markets. However, central banks need to take some responsibility for the debacle of 2007-2008 and its effects. And they need to find some way in the future to incorporate an element of leaning against the wind into their inflation targeting strategies, in case macro-prudential policies turn out to be inadequate.
It is not beyond the wit of man or woman to establish a central bank remit which has a primary focus on price stability but allows the central bank to react to other developments in extreme situations, as long as it makes clear publicly that this is what it is doing, and why, and for how long it expects to be doing it.
Such a revised remit would and should incorporate useful expectations-stabilising effects for asset markets. The transparency and accountability involved would also help to shore up the independence of the central banks (particularly the BoE) at a time when there is so much pressure on them from the political authorities to ensure economic recovery.
Oxford Journals has published a special issue on the topic of Monetary Policy, with free papers until the end of March 2014.
Subscribe to the OUPblog via email or RSS.
Subscribe to only business and economics articles on the OUPblog via email or RSS.
Image credit: Bank of England, Threadneedle Street, London. By Eluveitie. CC-BY-SA-3.0 via Wikimedia Commons
Urban gardens are increasingly recognised for their potential to maintain or even enhance biodiversity. In particular, the presence of large densities and varieties of flowering plants is thought to support a number of pollinating insects whose range and abundance have declined as a consequence of agricultural intensification and habitat loss. However, many of our garden plants are not native to Britain or even Europe, and the value of non-native flowers to local pollinators is widely disputed.
We tested the hypothesis that bumblebees foraging in urban gardens preferentially visit plant species with which they share a common biogeography (i.e. the plants evolved in the same regions as the bees that visit them). We did this by conducting summer-long surveys of bumblebee visitation to flowers seen in front gardens along a typical Plymouth street, dividing plants into species that naturally co-occur with British bees (a range extending across Europe, north Africa, and northern Asia – collectively called the Palaearctic by biologists), those that co-occur with bumblebees in other regions such as southern Asia and North and South America (Sympatric), and plants from regions (Southern Africa and Australasia) where bumblebees are not naturally found (Allopatric).
Taken as a whole, bees did not discriminate between Palaearctic-native and non-native garden plants; they simply visited in proportion to flower availability. Indeed, of the six most commonly visited garden plants, only one, the foxglove (Digitalis purpurea; 6% of all bee visits), was a British native, and only three were of Palaearctic origin (including the most frequently visited species, Campanula poscharskyana (20.6% of visits), which comes from the Balkans). The remaining ‘most visited’ garden plants were from North America (Ceanothus, 11% of visits) and Asia (Deutzia spp., 7% of visits), while the second most visited plant, Hebe × francisciana (18% of visits), is a hybrid variety with parents from New Zealand (H. speciosa) and South America (H. elliptica).
However, a slightly different pattern emerges when we consider the behaviour of individual bumblebee species. This is important because we know from work done in natural grassland ecosystems that different bumblebees vary greatly in their preference for native plant species. Some bumblebees visit almost any flower, while others seem to have strict preferences for certain plants. The latter group (‘dietary specialists’) include bees with long tongues that allow them to access the deep flowers of plants belonging to the pea and mint families that short-tongued bees cannot. One of these dietary specialists, the aptly named ‘garden bumblebee’ (Bombus hortorum), showed a strong preference for Palaearctic-origin garden plant species (78% of flower visits by this species), although we also saw this species feeding on the New Zealand native Cordyline australis. Even more interesting was the fact that our most common species, the ‘buff-tailed bumblebee’ (B. terrestris), appeared to favour non-Palaearctic garden plants (70% of all visits) over garden plants with which it shares a common evolutionary heritage (i.e. Palaearctic plants). So it seems that any preference for plants from ‘home turf’ varies between different bumblebees; just as in natural grasslands, some bees are fussy about where they forage, and others are not.
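The comparison behind "visited in proportion to flower availability" can be sketched as a simple goodness-of-fit test: observed visit counts for each plant group are compared against the counts expected if bees foraged indiscriminately. The counts and availability shares below are hypothetical illustrations, not the study's data.

```python
# Hypothetical sketch of a preference test: do bees visit plant groups
# in proportion to flower availability? We compare observed visit
# counts against counts expected under proportional (no-preference)
# visitation, using a chi-square goodness-of-fit statistic.
# All numbers are illustrative, not taken from the study.

flower_availability = {"Palaearctic": 0.45, "Sympatric": 0.35, "Allopatric": 0.20}
observed_visits = {"Palaearctic": 52, "Sympatric": 38, "Allopatric": 20}

total = sum(observed_visits.values())
chi_sq = 0.0
for group, share in flower_availability.items():
    expected = share * total  # visits expected if bees show no preference
    obs = observed_visits[group]
    chi_sq += (obs - expected) ** 2 / expected

print(f"chi-square statistic: {chi_sq:.3f}")
# A statistic well below the critical value for 2 degrees of freedom
# (about 5.99 at p = 0.05) is consistent with bees visiting simply in
# proportion to availability, as reported for the bees taken as a whole.
```

Running the same comparison separately for each bumblebee species is what reveals the species-level preferences described above.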
So what should gardeners do to encourage pollinators? Our results suggest that it is not simply a question of growing native species, even if this is desirable for other reasons, but that any ‘showily-flowered’ plant is likely to offer some forage reward. There are caveats, however. Garden plants that have been modified to produce ‘double’ flowers that replace or obscure the anthers and carpels yielding pollen and nectar (e.g. Petunias, Begonias, and Hybrid Tea roses) are known to offer little or no pollinator reward. A spring-to-autumn supply of flowers of different corolla lengths is important to provide both long- and short-tongued bumblebees with nectar. A reliable pollen supply is particularly important from nest founding through to the release of queen and male bees at the end of the nest cycle. Roses and poppies are obvious choices, but early season willows also offer pollen for nest-founding queens. Potentially most crucial of all, however, is the pea family, as its members offer the higher-quality pollen vital for the success of short nest-cycle dietary specialists such as B. hortorum. It is also important that access to what gardeners refer to as ‘weeds’ is available. Where possible, gardeners can set aside a small area to allow native brambles, vetches, dead nettles, and clovers to grow; but as long as some native weed species are available in nearby allotments, parks, or other green spaces, we suggest that a combination of commonly-grown garden plants will help support our urban bumblebees for future generations.
Annals of Botany is an international plant science journal that publishes novel and substantial research papers in all areas of plant science, along with reviews and shorter Botanical Briefings about topical issues. Each issue also features a round-up of plant-based items from the world’s media – ‘Plant Cuttings’.
Image credit: Bumblebee on apple tree. By Victorllee [CC-BY-SA-3.0], via Wikimedia Commons
The idea of extending life expectancy by modifying diet originated in the mid-20th century when the effects of caloric restriction were found. It was first demonstrated on rats and then confirmed on other model organisms. Fasting activists like Paul Bragg or Roy Walford attempted to show in practice that caloric restriction also helps to prolong life in humans.
For a long time the crucial question in this research concerned finding a molecular mechanism that demonstrated how caloric restriction might promote longevity. The discovery of such a mechanism is possible with very simple organisms whose genetics are well understood and whose genes can be switched on or off. For example, the budding yeast, nematodes, and fruit flies are windows into the complicated genetics of longevity. Several discoveries have been made in recent years, including resveratrol, sirtuins, insulin-like growth factor, the methuselah gene, and the Indy mutation.
Capillary feeding assay, developed in the laboratory of Seymour Benzer at Caltech, which allows tracking of consumed food
The effects of caloric restriction may be more complex than anticipated. Protein-to-carbohydrate ratio has been shown to play a large role in diet response. Additionally, medical concerns about danger of refined sugar and fructose for health have gained recognition, typically relating to high-mortality diseases and disorders, such as diabetes, diabetic complications, and obesity.
Following an initial study of the antioxidant system of the budding yeast, we turned our sights to biogerontological studies after the discovery of a possible molecular mechanism of resveratrol action in the yeast model. However, we quickly realized that the fruit fly (specifically Drosophila) is likely a better model because we could then also investigate behavioural outcomes and food intake. How would caloric restriction and the amount of carbohydrates in the diet affect the longevity of fruit flies?
Food with a dye enables measurement of food intake
Analysis of faecal spots left by fruit flies allows life-long measurement of medium ingestion
We posed the question of whether the type of carbohydrate fed (fructose, glucose, a plain mixture of the two, or sucrose, a disaccharide composed of fructose and glucose monomers) would affect mortality in fruit flies. We wanted to see whether fructose is a “poison” or “toxicant”, as claimed in publications and popular lectures by Professor Robert Lustig.
We found, surprisingly, that flies fed on sucrose ceased to lay eggs after several weeks of adult life, and sucrose shortened their mean life span at all concentrations above 0.5% total carbohydrate. On the other hand, we found that fruit flies were quite well adapted for living on fructose. Furthermore, this effect was not observed for a plain mixture of fructose and glucose.
Dietary response surface, where the concentrations of protein and carbohydrate ingested are plotted on the X and Y axes, while Z is any physiological parameter that may depend on the protein-to-carbohydrate ratio
The results were surprising because sucrose is routinely used in laboratory recipes for fly food. Lower fecundity on sucrose was also unexpected. However, we realized that the effects observed in the study should not lead us to immediate sugar denialism. The fly food used in the study was quite different from usual fly food, and in a human context it more closely resembled a diet of spiced marmalade.
Nonetheless, it is known that egg laying in Drosophila is promoted by dietary proteins (taken up mostly from yeasts). The diets of our flies contained a very small amount of protein, and yet this deficiency did not interfere with egg laying on monosaccharides while disaccharide sucrose caused dramatic loss in fecundity.
Is it possible to apply our current data to human physiology? It seems rather difficult to draw conclusions about a healthy human diet from data obtained for insects. Insect physiology, with its specific developmental hormones and probably different metabolism and metabolic demands, is far removed from that of humans. Nevertheless, the general message is that the influence of diet on ageing cannot be reduced simply to the amount of calories or the macronutrient balance. The quality of nutrients, the micronutrients, and the peculiarities of digestion, including the gut microbiota, should also be taken into account. While scientists are often forced to simplify models to gain a better understanding of the molecular, biochemical, genetic, and physiological grounds of ageing, our understanding would likely benefit from bringing a variety of researchers, from ecologists to mathematicians, into the discussion.
The Journals of Gerontology® were the first journals on aging published in the United States. The tradition of excellence in these peer-reviewed scientific journals, established in 1946, continues today. The Journals of Gerontology, Series A® publishes within its covers The Journal of Gerontology: Biological Sciences and The Journal of Gerontology: Medical Sciences.
Image credit: All images courtesy of the authors. Do not use without permission.
In the 1960s, Coca-Cola had a cocaine problem. This might seem odd, since the company removed cocaine from its formula around 1903, bowing to Jim Crow fears that the drug was contributing to black crime in the South. But even though Coke went cocaine-free in the Progressive Era, it continued to purchase coca leaves from Peru, removing the cocaine from the leaves but keeping what was left over as a flavoring extract. By the end of the twentieth century it was the single largest purchaser of legally imported coca leaves in the United States.
Yet, in the 1960s, Coke feared that an international counternarcotics crackdown on cocaine would jeopardize their secret trade with Peruvian cocaleros, so they did a smart thing: they began growing coca in the United States. With the help of the US government, a New Jersey chemical firm, and the University of Hawaii, Coca-Cola launched a covert coca operation on the island of Kauai. In 1965, growers in the Pacific paradise reported over 100 shrubs in cultivation.
How did this bizarre Hawaiian coca operation come to be? How, in short, did Coca-Cola become the only legal buyer of coca produced on US soil? The answer, I discovered, had to do with the company’s secret formula: not its unique recipe, but its peculiar business strategy for making money—what I call Coca-Cola capitalism.
What made Coke one of the most profitable firms of the twentieth century was its deftness in forming partnerships with private and public sector partners that helped the company acquire raw materials it needed at low cost. Coca-Cola was never really in the business of making stuff; it simply positioned itself as a kind of commodity broker, channeling ecological capital between producers and distributors, generating profits off the transaction. It thrived by making friends, both in government and in the private sector, friends that built the physical infrastructure and technological systems that produced and transported the cheap commodities needed for mass-marketing growth.
In the case of coca leaf, Coca-Cola had the Stepan chemical company of Maywood, New Jersey, which was responsible for handling Coke’s coca trade and “decocainizing” leaves used for flavoring extract (the leftover cocaine was ultimately sold to pharmaceutical firms for medicinal purposes). What Coke liked about its relationship with Stepan was that it kept the soft drink firm out of the limelight, obfuscating its connection to a pesky and tabooed narcotics trade.
But Stepan was just part of the procurement puzzle. The Federal Bureau of Narcotics (FBN) also played a pivotal role in this trade. Besides helping to pilot a Hawaiian coca farm, the US counternarcotics agency negotiated deals with the Peruvian government to ensure that Coke maintained access to coca supplies. The FBN and its successor agencies did this even while initiating coca eradication programs, tearing up shrubs in certain parts of the Andes in an attempt to cut off cocaine supply channels. By the 1960s, coca was becoming an enemy of the state, but only if it was not destined for Coke.
In short, Coca-Cola—a company many today consider a paragon of free-market capitalism—relied on the federal government to get what it wanted.
An old Coca-Cola bottling plant showing some of the municipal pipes that these bottlers tapped into. Courtesy of Bart Elmore.
Coke’s public partnerships extended to other ingredients. Take water, for example. For decades, the Coca-Cola Company relied on hundreds of independently owned bottlers (over 1,000 in 1920 alone) to market its products to consumers. Most of these bottlers simply tapped into the tap to satiate Coke’s corporate thirst, connecting company piping to established public water systems that were in large part built and maintained by municipal governments.
The story was much the same for packaging materials. Beginning in the 1980s, Coca-Cola benefited substantially from the development of curbside recycling systems paid for by taxpayers. Corporations welcomed the government handout, because it allowed them to expand their packaging production without taking on more costs. For years, environmental activists had called on beverage companies to clean up their waste. In fact, in 1970, 22 US congressmen supported a bill that would have banned the sale of nonreturnable beverage containers in the United States. But Congress, urged on by corporate lobbyists, abandoned the plan in favor of recycling programs paid for by the public. In the end, Coke and its industry partners were direct beneficiaries of the intervention, utilizing scrap metal and recycled plastic that was conveniently brought to them courtesy of municipal reclamation programs.
In all these interwoven ingredient stories there was one common thread: Coke’s commitment to outsourcing and franchising. The company consistently sought a lean corporate structure, eschewing vertical integration whenever possible. All it did was sell a concentrated syrup of repackaged cheap commodities. It did not own sugar plantations in Cuba (as the Hershey Chocolate Company did), coca farms in Peru, or caffeine processing plants in New Jersey, and by not owning these assets, the company remained nimble throughout its corporate life. It found creative ways to tap into pipes, plantations, and plants managed by governments and other businesses.
In the end, Coca-Cola realized that it could do more by doing less, extending its corporate reach, both on the frontend and backend of its business, by letting other firms and independent bottlers take on the risky and sometimes unprofitable tasks of producing cheap commodities and transporting them to consumers.
This strategy for doing business I have called Coca-Cola capitalism, so-named because Coke modeled it particularly well, but there were many other businesses, in fact some of the most profitable of our time, that followed similar paths to big profits. Software firms, for example, which sell a kind of information concentrate, have made big bucks by outsourcing raw material procurement responsibilities. Fast food chains, internet businesses, and securities firms—titans of twenty-first century business—have all demonstrated similar proclivities towards the Coke model of doing business.
Thus, as we look to the future, we would do well to examine why Coca-Cola capitalism has become so popular in the past several decades. Scholars have begun to debate the causes of a recent trend toward vertical disintegration, and while there are undoubtedly many causes for this shift, it seems ecological realities need to be further investigated. After all, one of the reasons Coke chose not to own commodity production businesses was because they were both economically and ecologically unsustainable over the long term. Might other firms’ divestment from productive industries tied to the land be symptomatic of larger environmental problems associated with extending already stressed commodity networks? This is a question we must answer as we consider the prudence of expanding our current brand of corporate capitalism in the years ahead.
Enterprise & Society offers a forum for research on the historical relations between businesses and their larger political, cultural, institutional, social, and economic contexts. The journal aims to be truly international in scope. Studies focused on individual firms and industries and grounded in a broad historical framework are welcome, as are innovative applications of economic or management theories to business and its context.
Tuberculosis (TB) is a disease of poverty and social exclusion with a global impact. It is these underlying truths that are captured in the theme of World TB Day 2014 ‘Reach the three million: a TB test, treatment and cure for all’. Of the nine million cases of tuberculosis each year, one-third does not have access to the necessary TB services to treat them and prevent dissemination of the disease in their communities. The StopTB Partnership is calling for ‘a global effort to find, treat and cure the three million’ and thus eliminate TB as a public health problem. So is the scientific community making sufficient progress to realise this target?
Early diagnosis is a cornerstone of management of the individual, and we know that as the disease progresses and the bacterial load and severity of disease increase, the likelihood of a poor outcome grows. It is important to distinguish between diagnosis of tuberculosis and detection, which is confirmation of the presence of mycobacteria. Diagnosis for the three million (and many more) is largely dependent on the clinical expertise of the healthcare worker, with minimal input from technology. Detection, by contrast, requires input from microbiological services, and the principal tool in this area is sputum smear microscopy. A sputum sample with no evidence of acid-fast bacilli is the accepted predictor of low risk of transmission, and so early application is critical in the management pathway. With improvements such as the auramine stain and LED fluorescence microscopy, the smear remains a cost-effective component of TB screening programmes. The emergence of multi-drug resistant tuberculosis has accentuated the need for prompt confirmation of drug susceptibility, and this is where molecular tools have potential impact. The WHO-supported roll-out of GeneXpert in resource-poor settings is going ahead and we are seeing change in practice, but it is too soon to determine the public health impact of this innovation. The challenge for microbiology is not to get drawn into a ‘one size fits all’ solution. In many settings, the low-technology, low-cost, and rapid screening of smears serves to break the chain of transmission of drug-sensitive tuberculosis. In areas of high endemicity of drug-resistant TB, such as South Africa, by contrast, an equally fast indication of drug resistance is essential.
Photo by WHO/Jean Cheung
Diagnosis leads to treatment. TB is curable, but treatment regimens are long, toxic, and complex to deliver. Following the stakeholders’ meeting in Cape Town in 2000 there has been a major effort to open up the drug development pipeline. There are two aspects to this: first, new agents, and second, clinical trials. There is a new enthusiasm for exploring compounds with action against TB, and the publication of the whole genome of Mycobacterium tuberculosis allowed the interrogation of its biochemistry, opening the door for medicinal chemists to contribute their expertise. The development of MDR-TB has led us to reconsider compounds previously excluded as too toxic or too difficult to administer; these drugs, such as PAS and thioridazine, are now being re-visited or forming the basis of fresh iterations of chemical screening programmes. After 30 years of no new drugs for TB treatment, two phase 3 trials (RIFAQUIN and OFLATUB) were reported in 2013 and a third (REMoxTB) is expected to report shortly. These studies have shaken things up. Each has the potential to improve TB treatment. However, it could be argued that their real benefit lies in the development of a network of facilities capable of undertaking TB clinical trials, as exemplified by the Global Alliance for TB Drug Development and the EDCTP-funded PanACEA consortium, and in their contribution to the active debate about how to efficiently deliver clinical trials that have a real impact on individuals and populations. We are now looking outside the world of TB, for example to cancer trial methodology, for innovations such as the multi-arm multi-stage (MAMS) approach. A significant challenge here is to convert the results of studies undertaken with the aim of full regulatory approval into the rather more complex environment of programmatic delivery.
The host-pathogen interaction for M. tuberculosis is manifest in the pathology of tuberculosis and has proven to be a fruitful area of immunological research. This, together with the (variable) success of BCG vaccination, has led us to the reasonable expectation of a vaccine for control of tuberculosis. There has been much innovation in this area and new studies are in the pipeline. The quest for immunological markers of disease continues. Useful diagnostic tools for latency have been developed in the shape of IGRA tests (Tuberculosis: Diagnosis and Treatment), but, more importantly, recent advances lead us to the idea that we may be able to define a host response signature to tuberculosis. If successful, this approach may allow us to select those patients for whom a shorter course of therapy is adequate. From the UK MRC studies it was clear that as many as 80% of patients would be cured with a four-month regimen; the difficulty was that they could not be identified in advance or during treatment. A host response biomarker may well enable us to address this issue.
M. tuberculosis is a fascinating organism, with many features of its biology that are distinct from other bacteria. For this reason the TB research community has become rather insular, not necessarily drawing on the experience of the wider bacteriology community. This was further exacerbated by the apparent fall in incidence of TB through the 1960s and 70s. Complacency is the term that comes to mind. Despite the commitment of groups such as those led by Mitchison and Grosset, there was very little innovation in detection and diagnosis, and no new drug was introduced to first-line treatment after the 1960s. The declaration by WHO of TB as a global health emergency alerted us to the need for new ideas and new tools to meet this challenge. Twenty years down the line, we have rolled out new diagnostics and a new drug pipeline that flows, with the first phase 3 trials reporting shortly. Similarly, innovation in vaccine design and application moves forward, and, importantly, our understanding of operational and behavioural aspects of controlling TB increases. However, we must not become complacent again. M. tuberculosis is not just an academic challenge, and as long as the three million exist, we need to focus all our knowledge to achieve a TB test, treatment, and cure for all.
Timothy D. McHugh is Professor of Medical Microbiology at the Centre for Clinical Microbiology, University College London. This is an adapted version of Professor McHugh’s commentary for the Transactions of the Royal Society of Tropical Medicine and Hygiene.
When a religious believer wears a religious symbol to work, can their employer object? The question brings corporate dress codes and expressions of religious belief into sharp conflict. The employee can marshal discrimination and human rights law on the one side, whereas the employer may argue that conspicuous religion makes for bad business.
The issue reached the European Court of Human Rights in 2013 in a group of cases (Eweida and Others v. United Kingdom) brought, following a lengthy and unsuccessful domestic legal campaign, by a group of employees who argued that their right to freedom of religion and belief (under Article 9 of the Convention) had not been protected when the UK courts favoured their employers’ interests.
Nadia Eweida, an airline check-in clerk, and Shirley Chaplin, a nurse, had been refused permission by their respective employers, British Airways and an NHS trust, to wear a small cross on a necklace so that it was visible to other people. The employer’s rationale in each case was rather different. British Airways wanted to maintain a consistent corporate image so that no ‘customer-facing staff’ should be permitted to wear jewellery for any reason. The NHS trust argued that there was a potential health and safety risk if jewellery were worn by nursing staff – in Ms Chaplin’s case a disturbed patient might ‘seize the cross’ and harm either themselves or indeed Ms Chaplin.
Both applicants argued that their sense of religious obligation to wear a cross outweighed the employer’s normal discretion in setting a uniform policy. They also argued that their respective employers had been inconsistent, because their uniform policies made a number of specific accommodations for members of minority faiths, such as Muslims and Sikhs.
A major difficulty for both Eweida and Chaplin was the risk that their cross-wearing could be dismissed as a personal preference rather than a protected manifestation of their beliefs. After all, many – probably most – Christians do not choose to wear the cross. The UK domestic courts found that the practice was not a mandatory religious practice (applying a so-called ‘necessity’ test) but rather one merely ‘motivated’ by religion, and therefore not eligible for protection. This did not help either Eweida or Chaplin, as both believed passionately that they had an obligation to wear the cross to attest to their faith (in Chaplin’s case this was in response to a personal vow to God). The other major difficulty for both applicants was that the Court had also historically accepted a rather strange argument that people voluntarily surrender their right to freedom of religion and belief in the workplace when they enter into an employment contract, so that the employer has discretion to set its policies without regard to interfering with its employees’ religious practices. If an employee found this too burdensome, then he or she could protect their rights by resigning and finding another job. This argument, ignoring the realities of the labour market and imposing a very heavy burden on religious employees, has been a key reason why so few ‘workplace’ claims have been successful before the European Court.
Arguably the most significant aspect of the judgment was that the religious liberty questions were in fact considered by the Court rather than being dismissed as inapplicable in the workplace (as the government and the National Secular Society had both argued). The Court specifically repudiated both the necessity test and the doctrine of ‘voluntary surrender’ of Article 9 rights at work. As a result, it has opened the door both to applications for protection for a much wider group of religious practices in the future and to claims relating to employment. From a religious liberty perspective this is surely something to welcome.
Nadia Eweida’s application was successful on its merits. It is now clear therefore that an employer cannot over-ride the religious conscience of its staff due to the mere desire for uniformity. However, Chaplin was unsuccessful, the Court essentially finding that ‘health and safety’ concerns provided a legitimate interest allowing the employer to over-ride religious manifestation. This is disappointing, particularly since evidence was presented that the health and safety risks of a nurse wearing a cross were minimal and that, in this case, Chaplin was prepared to compromise to reduce them still further. Hopefully this aspect of the judgment (unnecessary deference to national authorities in the realm of health and safety) will be revisited in future.
Whether that happens or not, it is clear that religious expressions in the workplace now need to be approached differently after the European Court’s ruling. The idea that employees must leave their religion at the door has been dealt a decisive blow. From now on, if corporate policy over-rides employees’ religious beliefs, then employers will be under a much greater obligation to demonstrate why, if at all, this is necessary.
Andrew Hambler and Ian Leigh are the authors of “Religious Symbols, Conscience, and the Rights of Others” (available to read for free for a limited time) in the Oxford Journal of Law and Religion. Dr Andrew Hambler is senior lecturer in human resources and employment law at the University of Wolverhampton. His research focusses on how the manifestation of religion in the workplace is regulated both at an organisational and at a legal level. Andrew is the author of Religious Expression in the Workplace and the Contested Role of Law, a monograph due for publication in November 2014. Ian Leigh is a Professor of Law at Durham University. He has written extensively on legal and human rights questions concerning religious liberty. He is co-author of Rex Ahdar and Ian Leigh, Religious Freedom in the Liberal State (2nd edition, OUP, 2013).
The Oxford Journal of Law and Religion is hosting its second annual Summer Academy in Law and Religion this coming June. The title of this year’s academy is “Versions of Secularism – Comparative and International Legal and Foreign Policy Perspectives on International Religious Freedom.” The meeting will take place June 23 – 27 at St. Hugh’s College, Oxford. Click for more details about the conference, confirmed speakers, and registration.
The Oxford Journal of Law and Religion publishes a range of articles drawn from various sectors of the law and religion field, including: social, legal and political issues involving the relationship between law and religion in society; comparative law perspectives on the relationship between religion and state institutions; developments regarding human and constitutional rights to freedom of religion or belief; considerations of the relationship between religious and secular legal systems; empirical work on the place of religion in society; and other salient areas where law and religion interact (e.g., theology, legal and political theory, legal history, philosophy, etc.).
With the WHO Executive Board recently adopting the resolution ‘Strengthening of palliative care as a component of integrated treatment within the continuum of care’, which is to be referred to the World Health Assembly for ratification in May, Nathan Cherny puts the current global situation in perspective and lays out the next steps needed in this crucial campaign to end the suffering of millions.
By Nathan Cherny
In the curious trail that has been my life thus far, some would say that there was a certain inevitability that I would end up working for cancer patients’ right to access medication for adequate relief of their suffering. As a medical student I suffered terrible cancer-related pain from a thoracotomy to remove lung metastases for testicular cancer. As an oncologist and palliative care physician in the Middle East, my current work allows me to look after both Israeli and Palestinian patients. My profession has also taken me to caring for many “medical tourists” from Eastern Europe as well as foreign workers from Thailand, India, Nepal, and the Philippines. Oh, and I was born on Human Rights Day, 10 December 1958!
I hate pain. I am appalled by the global scope of untreated and unrelieved cancer pain. At the initiative of its Palliative Care Working Group, the European Society for Medical Oncology (ESMO) has taken this on board as a global priority issue. ESMO facilitated the first comprehensive study to evaluate the barriers to pain relief in Europe, which highlighted the distressing situation in many Eastern European countries.
The Global Opioid Policy Initiative (GOPI) studied opioid availability and accessibility for cancer patients in Africa, Asia, the Middle East, Latin America, and the Caribbean. The results were published in a special supplement of the Annals of Oncology in December 2013. The seven manuscripts in the special issue highlighted the global problem of excessively restrictive regulations regarding the prescribing and dispensing of opioids — a ‘catastrophe born out of good intentions’.
Because of regulations intended to prevent abuse and diversion, most patients with a genuine need for relief of severe cancer pain cannot access the appropriate medication. Millions of people around the world end their lives racked in pain — a tragedy that harms not only the patients but also the families who bear witness to it.
On 23 January 2014, the WHO Executive Board adopted a stand-alone resolution on palliative care, which will be referred to the World Health Assembly for ratification in May 2014. This is great news for all those campaigning to improve access to medication to end the suffering of millions. There is still much to be done on this long, winding road, yet we can still be proud. Thanks to our united efforts and the evidence provided by the GOPI data, our voices are being heard.
Overregulation of opioids is not the only problem impeding global relief of cancer pain. In many places around the world there is major need to: educate clinicians in the assessment and management of pain; educate the public regarding the effectiveness and safety of opioid analgesia in the management of cancer pain; and secure supplies of affordable medications.
The next steps: The GOPI Collaborative Group is now writing to Ministers of Health in the many countries where we have identified major over-regulation, presenting a 10-point plan to help redress the problem, covering education, restrictions, limits, professional standards, monitoring, and prescription.
Tell us what actions you can take to incorporate these next steps in your country. Can you contact your Ministry of Health? What could be inspirational for others to know? We make more noise if we all shout together.
Annals of Oncology is a multidisciplinary journal that publishes articles addressing medical oncology, surgery, radiotherapy, paediatric oncology, basic research and the comprehensive management of patients with malignant diseases. Follow them on Twitter at @Annals_Oncology.
Image credits: (1) Photo of Nathan Cherny, via ESMO; (2) GOPI banner, via Global Opioid Policy Initiative/ESMO