A woman who gives birth to six children each with a 75% chance of survival has the same expected number of surviving offspring as a woman who gives birth to five children each with a 90% chance of survival. In both cases, 4.5 offspring are expected to survive. Because the large fitness gain from an additional child can compensate for a substantially increased risk of childhood mortality, women’s bodies will have evolved to produce children closer together than is best for child fitness.
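The expected-value arithmetic above can be checked in a few lines (a minimal illustration of the calculation in the text; the function name is ours):

```python
# Expected number of surviving offspring = births x per-child survival probability.
def expected_survivors(births, survival_prob):
    return births * survival_prob

# Six children at 75% survival vs. five children at 90% survival, as in the text:
print(expected_survivors(6, 0.75))  # 4.5
print(expected_survivors(5, 0.90))  # 4.5
```

Both strategies yield the same expected number of survivors, which is why the fitness gain from an extra birth can offset a considerable rise in per-child mortality risk.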
Sleeping baby by Minoru Nitta. CC BY 2.0 via Flickr.
Offspring will benefit from greater birth-spacing than maximizes maternal fitness. Therefore, infants would benefit from adaptations for delaying the birth of a younger sib. The increased risk of mortality from close spacing of births is experienced by both the older and younger child whose births bracket the interbirth interval. Although a younger sib can do nothing to cause the earlier birth of an older sib, an older sib could potentially enhance its own survival by delaying the birth of a younger brother or sister.
The major determinant of birth-spacing, in the absence of contraception, is the duration of post-partum infertility (i.e., how long after a birth before a woman resumes ovulation). A woman’s return to fertility appears to be determined by her energy status. Lactation is energetically demanding and more intense suckling by an infant is one way that an infant could potentially influence the timing of its mother’s return to fertility. In 1987, Blurton Jones and da Costa proposed that night-waking by infants enhanced child survival not only because of the nutritional benefits of suckling but also because of suckling’s contraceptive effects of delaying the birth of a younger sib.
Blurton Jones and da Costa’s hypothesis receives unanticipated support from the behavior of infants with deletions of a cluster of imprinted genes on human chromosome 15. The deletion occurs on the paternally-derived chromosome in Prader-Willi syndrome (PWS). Infants with PWS have weak cries, a weak or absent suckling reflex, and sleep a lot. The deletion occurs on the maternally-derived chromosome in Angelman syndrome (AS). Infants with AS wake frequently during the night.
The contrasting behaviors of infants with PWS and AS suggest that maternal and paternal genes from this chromosome region have antagonistic effects on infant sleep with genes of paternal origin (absent in PWS) promoting suckling and night waking whereas genes of maternal origin (absent in AS) promote infant sleep. Antagonistic effects of imprinted genes are expected when a behavior benefits the infant’s fitness at a cost to its mother’s fitness with genes of paternal origin favoring greater benefits to infants than genes of maternal origin. Thus, the phenotypes of PWS and AS suggest that night waking enhances infant fitness at a cost to maternal fitness. The most plausible interpretation is that these costs and benefits are mediated by effects on the interbirth interval.
Postnatal conflict between mothers and offspring has been traditionally assumed to involve behavioral interactions such as weaning conflicts. However, we now know that a mother’s body is colonized by fetal cells during pregnancy and that these cells can persist for the remainder of the mother’s life. These cells could potentially influence interbirth intervals in more direct ways. Two possibilities suggest themselves. First, offspring cells could directly influence the supply of milk to their child, perhaps by promoting greater differentiation of milk-producing cells (mammary epithelium). Second, offspring cells could interfere with the implantation of subsequent embryos. Both of these possibilities remain hypothetical but cells containing Y chromosomes (presumably derived from male fetuses) have been found in breast tissue and in the uterine lining of non-pregnant women.
David Haig is Professor of Biology at Harvard University. He is the author of “Troubled sleep: Night waking, breastfeeding and parent–offspring conflict” (available to read for free for a limited time) in Evolution, Medicine, and Public Health. The arguments summarized above are presented in greater detail in two papers that recently appeared in Evolution, Medicine, and Public Health.
Evolution, Medicine, and Public Health is an open access journal, published by Oxford University Press, which publishes original, rigorous applications of evolutionary thought to issues in medicine and public health. It aims to connect evolutionary biology with the health sciences to produce insights that may reduce suffering and save lives. Because evolutionary biology is a basic science that reaches across many disciplines, this journal is open to contributions on a broad range of topics, including relevant work on non-model organisms and insights that arise from both research and practice.
Subscribe to the OUPblog via email or RSS.
Subscribe to only science and medicine articles on the OUPblog via email or RSS.
Today is 15 April, or Tax Day, in the United States. In recognition of this day, we have compiled a free virtual issue on taxation, bringing together content from books, online products, and journals. The material covers a wide range of specific tax-related topics, including income tax, austerity, tax structure, tax reform, and more. The collection is not US-centered, but includes information on economies across the globe. Be sure to take a moment to view this useful online resource today.
Intellectual property rights (IPRs) and the regimes of protection and enforcement surrounding them have often been the subject of debate, a debate fuelled in the past year by the increased emphasis on free-trade negotiations and multi-lateral treaties including the now-rejected Anti-Counterfeiting Trade Agreement (ACTA) and its Goliath cousin, the Trans-Pacific Partnership Agreement (TPPA). The significant media coverage afforded to these treaties, however, risks thrusting certain perspectives of IPR protection and enforcement into the spotlight, while eclipsing alternative, but equally crucial, voices that are perhaps in greater need of legitimate dialogue to safeguard their own collection of intangible rights. Caught in the vortex of inadequate recognition and ineffective protection are the communal intellectual property rights of indigenous communities, centred on traditional knowledge (TK), traditional cultural expressions (TCE), expressions of folklore (EoF), and genetic resources (GR).
The fundamental incompatibility between current intellectual property rights regimes and the rights of indigenous peoples stems largely from a lack of understanding of the driving force behind the development of traditional knowledge, traditional cultural expressions, expressions of folklore, and genetic resources: the protection of whole indigenous cultures through the preservation of the knowledge these communities have acquired collectively.
The issues are complex. Professor James Anaya’s 2014 keynote speech at the 26th Session of the Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore at WIPO highlighted the differences governing the intangible rights of indigenous peoples generally, and why these world views have so often been left out of the current mainstream of intellectual property rights. Whereas the majority view of IPRs tends to focus on the rights of the individual and their protection as such, indigenous cultures are inherently built over centuries and across generations on communal understandings and organic exchanges of knowledge, making it practically impossible to ascribe the ownership of a certain set of IPRs to one or a few individuals.
Apache Dancers at the exhibit ‘Dignity – Tribes in Transition’. United States Mission Geneva Photo: Eric Bridiers. CC-BY-ND-2.0 via US Mission Geneva Flickr.
As Professor Anaya articulates, and as the other contributors contemplate, the similarities between the inadequate protection of the tangible rights of indigenous peoples (e.g. indigenous land rights) and that of their intangible rights (including intellectual property rights) tend to stem from a common source – the failure to acknowledge the “inherent logic of indigenous peoples’ world views”.
Perhaps the solutions lie not just in finding ways to include indigenous intellectual property rights in current IPR regimes, but through the facilitation of an entire paradigm shift to capture the nuances of these issues both effectively and precisely. How, for instance, can indigenous IPRs be valued commercially, and how may adequate compensation models be developed in exchange for the commercial use of these rights? A key to increasing the recognition of the inherent value of indigenous IPRs within their traditional cultural settings may lie in developing methods to properly value this worth in tangible terms. What seems necessary is a model to adequately measure the significance of indigenous IPRs, starting at the source (the indigenous community), and finding ways of translating this value into benefit systems that can be returned to the communities from which the IPRs were sourced. In this way, recognition is given to the crucial part these IPRs play within the cultures from which they are derived.
The strength of intellectual property law lies in its ability to meet the demands of a frenetically changing world, thus affording it vast amounts of power in shaping the law of the future; but this brings with it the challenge – can that power be harnessed to adequately protect rights of the past? Even if the answer is in the affirmative, it does not necessarily follow that the purpose of intellectual property rights protection should be to reduce IPRs to protectable commodities solely for the purpose of commercial exploitation. Protection of IPRs might be secured for any number of reasons, including the recognition of the right for ownership of those rights to be retained within the community. IPRs thus have the capacity to function both as shields and swords. Such weaponry however brings with it obligations: “With great power, comes great responsibility.”
Keri Johnston and Marion Heathcote are the guest editors of the Journal of Intellectual Property Law & Practice special issue on “The Quest for ‘Real’ Protection for Indigenous Intangible Property Rights”. The authors would like to thank Mekhala Chaubal, student-at-law, for her assistance. It is reassuring to know that a new generation of lawyers is willing and able. Keri AF Johnston is managing partner of Johnston Law in Toronto and Marion Heathcote is a partner with Davies Collison Cave in Sydney.
The Journal of Intellectual Property Law & Practice (JIPLP) is a peer-reviewed journal dedicated to intellectual property law and practice. Published monthly, coverage includes the full range of substantive IP topics, practice-related matters such as litigation, enforcement, drafting and transactions, plus relevant aspects of related subjects such as competition and world trade law.
Over the past few months, the Oral History Review has become rather demanding. In February, we asked readers to experiment with the short form article. A few weeks ago, our upcoming interim editor Dr. Stephanie Gilmore sent out a call for papers for our special Winter/Spring 2016 issue, “Listening to and for LGBTQ Lives.” Now, we’d like you to also take over our OUPBlog posting duties.
Well, “take over” might be hyperbole. However, we have always hoped to use this and our other social media platforms to encourage discussion within the oral history discipline, and to spark exchanges with those working with oral histories outside the field. We like to imagine that through our podcasts, interviews and book reviews, we have brought about some conversations or inspired new ways to approach oral history. However, we can do better.
Towards that end, we are putting out a “call for blog posts” for this summer. These posts should fall in line with the aforementioned goal of promoting engagement among those in the oral history field and beyond it. Like our hardcopy counterpart, we are especially interested in posts that explore oral history in the digital age. As you might have gathered, we thrive on puns and the occasional, outdated pop culture reference. These are even more appreciated when coupled with clean and thoughtful insights into oral history work.
We are currently looking for posts of between 500 and 800 words, or 15-20 minutes of audio or video. That said, because we operate on the wonderful worldwide web, we are open to negotiation in terms of media and format. We should also stress that while we welcome posts that showcase a particular project, we do not want to serve as a landing page for anyone’s Kickstarter.
Caitlin Tyler-Richards is the editorial/media assistant at the Oral History Review. When not sharing profound witticisms at @OralHistReview, Caitlin pursues a PhD in African History at the University of Wisconsin-Madison. Her research revolves around the intersection of West African history, literature and identity construction, as well as a fledgling interest in digital humanities. Before coming to Madison, Caitlin worked for the Lannan Center for Poetics and Social Practice at Georgetown University.
The cosmology community is abuzz with news from the BICEP2 experiment of the discovery of primordial gravitational waves, through their signature in the cosmic microwave background. If verified, this will be a clear indication that the very young universe underwent a period of acceleration, known as cosmic inflation. During this period, it is thought that the seeds were laid down for all the structures to form later in the universe, including galaxies, stars, and indeed ourselves.
The cosmic microwave background (CMB) is radiation left over from the Hot Big Bang, first discovered in 1965 and corresponding to a temperature only about 2.7 degrees above absolute zero. In 1992 the COBE satellite made the first detection of temperature variations in the CMB, and successive experiments, including satellite missions WMAP and Planck, have been accurately measuring these variations which have become the key tool to understanding our universe.
In addition to its brightness, radiation can have a polarisation, meaning that the electromagnetic oscillations that make up the light have a preferred orientation, e.g. horizontal or vertical. This same effect is used in 3D cinemas, where light of different polarisations reaches your left or right eye, the lenses in the glasses blocking out one or other from each eye. In the CMB the polarisation signal is very small, and moreover comes in two types, known as E-mode and B-mode polarisation. The second of these, corresponding to a twisting pattern of polarisation on the sky, is what BICEP2 has discovered for the first time. This twisting pattern is the signature of gravitational waves, created in the early universe and whose presence causes space-time itself to ‘wobble’ as the light from the CMB crosses the Universe.
The Dark Sector Laboratory at Amundsen-Scott South Pole Station. At left is the South Pole Telescope. At right is the BICEP2 telescope. Photo by Amble, 2009. CC-BY-SA-3.0 via Wikimedia Commons.
The BICEP2 team have been working for several years with the single aim of measuring this signal; inflation predicted it to be there but said nothing about its strength. Based at the South Pole, where the unusually clear and dry air creates an ideal viewpoint for accurate measurement, three years of observations were carried out from 2010 to 2012. Their experiment differs from others measuring the CMB polarisation because they focussed on covering as large an area of the sky as possible, at relatively moderate angular resolution, in order to specifically target the B-mode signal.
While the discovery of gravitational waves had been widely rumoured in the days leading up to the announcement, including even the size of the measured signal, what took everyone’s breath away was the significance of the signal. At 6 to 7-sigma, it exceeds even the gold-standard 5-sigma used at CERN for the Higgs particle detection. Most would have expected something tentative, 2 or 3-sigma perhaps. We will want verification, of course, especially because the use of just a single wavelength of observation (the microwave equivalent of using just one colour of the rainbow) means the experiment is a little vulnerable to radiation from sources other than the CMB, such as intervening galaxies or emission caused by particles spiralling around our own Milky Way’s magnetic fields. The strength of the detection suggests that will not be an issue, but for sure we want to see independent confirmation by other experiments and at other wavelengths. Some may have announcements even before the end of the year, including the Planck satellite mission.
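As a rough guide to what those sigma levels mean, the tail probability of a Gaussian distribution can be computed directly. This is an illustrative sketch only; the function name is ours, we assume a Gaussian null distribution and a two-sided convention, and note that particle physics results are often quoted one-sided:

```python
import math

def gaussian_tail_p(sigma, two_sided=True):
    """Probability of a statistical fluctuation at least `sigma`
    standard deviations from the mean, under a Gaussian null."""
    p = math.erfc(sigma / math.sqrt(2))  # two-sided tail probability
    return p if two_sided else p / 2

# Compare the 5-sigma "gold standard" with the 6-7 sigma BICEP2 range:
for s in (3, 5, 6, 7):
    print(f"{s}-sigma: p = {gaussian_tail_p(s):.2e}")
```

At 5 sigma the chance of a pure fluctuation is already below one in a million, which is why a 6-to-7-sigma result, rather than a tentative 2-or-3-sigma one, took everyone’s breath away.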
The response of the cosmology community to BICEP2 has been staggeringly swift. Early communication and discussion was already underway during the web-streamed BICEP2 press conference, via a Facebook discussion group set up by Scott Dodelson at Fermilab. The first science papers using the results were already appearing in the arXiv.org database within the next couple of days (including these ones by me!). By the end of March, only two weeks after the announcement, there were already almost 50 available papers with ‘BICEP’ in the title, written by researchers all around the world. Papers on BICEP2 are clearly going to be a main theme for astronomy journals, including MNRAS, for the remainder of the year as we all try to figure out what, in detail, it all means.
Andrew Liddle is Professor of Theoretical Astrophysics at the Institute for Astronomy, University of Edinburgh. He is an editor of the OUP astronomy journal Monthly Notices of the Royal Astronomical Society.
Monthly Notices of the Royal Astronomical Society (MNRAS) is one of the world’s leading primary research journals in astronomy and astrophysics, as well as one of the longest established. It publishes the results of original research in astronomy and astrophysics, both observational and theoretical.
Inequality has been on the rise in all the advanced democracies in the past three or four decades; in some cases dramatically. Economists already know a great deal about the proximate causes. In the influential work by Goldin and Katz on “The Race between Education and Technology”, for example, the authors demonstrate that the rate of “skill-biased technological change” — which is economist speak for changes that disproportionately increase the demand for skilled labor — has far outpaced the supply of skilled workers in the US since the 1980s. This rising gap, however, is not due to an acceleration of technological change, but rather to a slowdown in the supply of skilled workers. Most importantly, a cross-national comparison reveals that other countries have continued to expand the supply of skills, i.e. the trend towards rising inequality is less pronounced in these cases.
The narrow focus of economists on the proximate causes is not sufficient, however, to fully understand the dynamic of rising inequality and its political and institutional foundations. In particular, skill formation regimes and cross-country differences in collective wage bargaining influence the quantity and quality of skills and hence also differences in inequality. Generally speaking, countries with coordinated wage-setting and highly developed vocational education and training (VET) systems respond more effectively to technology-induced changes in demand than systems without such training systems.
Yet, there is a great deal of variance in the extent to which this is true, and one needs to be attentive to the broader organization of political institutions and social relations to explain this variance. One of the recurrent themes is the growing socioeconomic differentiation of educational opportunity. Countries with a significant private financing of education, for example, induce high-income groups to opt out of the public system and into high-quality but exclusive private education. As they do, some public institutions try to compete by raising tuition and fees, and with middle- and upper-middle classes footing more of the bill for their own children’s education, support for tax-financed public education declines.
This does not happen everywhere. In countries that inherited an overwhelmingly publicly-financed system only the very rich can opt out, and the return on private education is lower because of a flatter wage structure. In this setting the middle and upper-middle classes, deeply concerned with the quality of education, tend to throw their support behind improving the public system. Yet, they will do so in ways that may reproduce class-based differentiation within the public system. Based on an analysis of the British system, one striking finding is that a great deal of differentiation happens because highly educated, high-income parents, who are most concerned with the quality of the education of their children, move into good school districts and bid up housing prices in the process. As property prices increase, those from lower socio-economic strata are increasingly shut out from the best schools.
Even in countries with less spatial inequality, in part because of a more centralized provision of public goods, socioeconomic inequality may be reproduced through early tracking of students into vocational and academic lines. This is because the choice of track is known to be heavily dependent on the social class of parents. This is reinforced by the decisions of firms to offer additional training to their best workers, which disadvantages those who start at the bottom. There is also evidence that such training decisions discriminate against women because firm-based training require long tenures and women are less likely to have uninterrupted careers. So strong VET systems, although they tend to produce less wage inequality, can undermine intergenerational class mobility and gender equality.
The rise of economic inequality also has consequences for politics. While democratic politics is usually seen as compensating for market inequality, economic and political inequality in fact tend to reinforce each other. Economic and educational inequality destroy social networks and undermine political participation in the lower half of the distribution of incomes and skills, and this undercuts the incentives of politicians to be attentive to their needs. Highly segmented labor markets with low mobility also undermine support for redistribution because pivotal “insiders” are not at risk. Labor market “dualism” therefore delimits welfare state responsiveness to unemployment and rising inequality. In a related finding, the winners of globalization often oppose redistribution, in part because they are more concerned with competitiveness and how bloated welfare states may undermine it.
Economic, educational, and political inequalities thus also tend to reinforce each other. But the extent and form of such inequality vary a great deal across countries. This special issue helps explain why, and suggests the need for an interdisciplinary approach that is attentive to national institutional and political context.
Socio-Economic Review aims to encourage work on the relationship between society, economy, institutions and markets, moral commitments and the rational pursuit of self-interest. The journal seeks articles that focus on economic action in its social and historical context. In broad disciplinary terms, papers are drawn from sociology, political science, economics and management, and policy sciences.
Image credit: Laptop in classic library. By photogl, via iStockphoto.
Scientists, using epidural stimulation over the lumbar spinal cord, have enabled four completely paralyzed men to voluntarily move their legs.
Kent Stephenson is one of the four. This stimulation experiment wasn’t supposed to work for him; he is what clinicians call an AIS A. This is a measure of disability, formally the American Spinal Injury Association Impairment Scale (AIS), that rates impairment from A (no motor or sensory function) to D (ability to walk). Kent, a mid-thoracic paraplegic, has what is considered a “complete” injury. Kent’s doctors told him it was a waste of time to pursue any therapy; per the dogma, A’s don’t get better. Well, the young Texan, who was hurt five years ago on a dirt bike, didn’t get the message. He likes to cite a fortune cookie he got shortly after his injury. It said, “Everything’s impossible until somebody does it.”
Kent had the stimulator implanted. A few days later they turned it on. No one expected it to do anything. Researchers were only looking for a baseline measurement to compare Kent’s function later, after several weeks of intense Locomotor Training (guided weight supported stepping on a treadmill).
Kent tells the story: “The first time they turned the stim on I felt a charge in my back. I was told to try to pull my left leg back, something I had tried without success many times before. So I called it out loud, ‘left leg up.’ This time it worked! My leg pulled back toward me. I was in shock; my mom was in the room and was in tears. Words can’t describe the feeling – it was an overwhelming happiness.”
Kent was the second of the four. Rob Summers, three years ago, was the first to pioneer the concept that complete doesn’t mean what it used to; epidural stimulation could make the spinal cord more receptive to nerve signals coming from the senses or the brain. Seven months after he was implanted with a stimulator unit, he initiated voluntary movements of his legs. The other two subjects, Andrew Meas and Dustin Shillcox, also started moving within days of the implant. Summers probably could have initiated movement early on too, but the research team didn’t test for it – they had no reason to believe he could do it.
Here’s lead author of the Brain paper, Claudia Angeli, Ph.D., to explain. She is a senior researcher at the Human Locomotor Research Center at Frazier Rehab Institute, and an assistant professor at the University of Louisville’s Kentucky Spinal Cord Injury Research Center (KSCIRC).
“First, in the Lancet paper [regarding the first stimulation subject] it was just Rob, just one person. Yes, it was proof of concept, yes it went great. But now we are talking about four subjects. That’s four out of four showing functional recovery. What’s more, two of the four are categorized as AIS A – no motor or sensory function below the lesion level, with no chance for any recovery.”
The other two patients are classified AIS B: no motor function below the lesion but with some sensory function.
Left to right: Andrew Meas, Dustin Shillcox, Kent Stephenson, and Rob Summers, the first four to undergo task-specific training with epidural stimulation at the Human Locomotion Research Center laboratory, Frazier Rehab Institute, as part of the University of Louisville’s Kentucky Spinal Cord Injury Research Center, Louisville, Kentucky.
How does this work? The epidural stimulation supplies a continuous electrical current, at varying frequencies and intensities, to specific locations on the lower part of the spinal cord. A 16-electrode spinal cord stimulator, commonly used to treat pain, is implanted over the spinal cord at T11-L1, a location that corresponds to the complex neural networks that control movement of the hips, knees, ankles and feet.
The leg muscles are not stimulated directly. The epidural stimulation apparently awakens circuitry in the spinal cord. “In simple terms,” says Dr. Angeli, “we are raising the excitability or gain of the spinal cord. Let’s say you have an intent to move. That signal originates in the brain and gets through to the spinal cord but the cord is not aware enough or excited enough to do anything with that intent. When we add the stimulation, the spinal cord networks are made a little more aware, so when the intent comes through, the cord is able to interpret it and movement becomes voluntary.”
The theory behind spinal cord stimulation is that these spinal cord networks are smart: they can remember and they can learn. The current work builds on decades of research. Susan Harkema, Ph.D. (University of Louisville) and V. Reggie Edgerton, Ph.D. (University of California Los Angeles) have led the effort. Dr. Harkema is Principal Investigator for the epidural stimulation projects and Director of the Christopher & Dana Reeve Foundation’s NeuroRecovery Network. Dr. Edgerton, a member of the Reeve Foundation’s International Research Consortium on Spinal Cord Injury, is a basic scientist whose work attempts to understand human locomotion and how the brain and spinal cord adapt and change in response to various interventions, including activity, training and stimulation.
Dr. Harkema says plans are in place to implant eight more patients in the next year. Four will mirror the first group, matched by age, level of injury, time since injury, etc. (Gender, by the way, is not a factor; men with spinal cord injury happen to outnumber women four to one.) Another four patients will be stimulated specifically to control heart rate and blood pressure. Dr. Harkema said one of the first four had issues with low blood pressure. When the stimulator was on, though, the pressure was raised, even without contracting any muscles. They want to assess that sort of autonomic recovery in greater detail.
The research team is aware that epidural stimulation can enhance autonomic function in paralyzed subjects; indeed, the first four subjects report improved temperature control, plus better bowel, bladder, and sexual function. Data is being collected to present that part of the stimulation story in another paper.
Does this mean anyone with a spinal cord injury with an implanted stimulator can move? Not necessarily, says Dr. Harkema. “But what I want people to know about this study is that we need to change our attitude about what a complete injury is, challenge the dogma that in AIS A patients there is no possibility of recovery. The view is that it is not a worthwhile investment to offer even intense rehabilitation to people with complete injuries. They’re not going to recover. But the message now is that there is a tremendous amount available. These individuals have potential for recoveries that will improve their health and quality of life. Now we have a fundamentally new strategy that can dramatically affect recovery of voluntary movement in those with complete paralysis, even years after injury.”
Brain provides researchers and clinicians with the finest original contributions in neurology. Leading studies in neurological science are balanced with practical clinical articles. Its citation rating is one of the highest for neurology journals, and it consistently publishes papers that become classics in the field.
We are now entering the month of April 2014—a time for reflection, empathy, and understanding for anyone in or involved with Rwanda. Twenty years ago, Rwandan political and military leaders initiated a series of actions that quickly turned into one of the 20th century’s greatest mass violations of human rights.
As we commemorate the genocide, our empathy needs to extend first to survivors and victims. Many families were destroyed in the genocide. Many survivors endured enormous hardships to survive. Whatever our stand on the current state of affairs in Rwanda, we have to be deeply cognizant of the pain many endured.
In this brief post, I address three issues that speak to Rwanda today. I do so with trepidation, as discussions about contemporary Rwanda are often polarized and emotionally charged. Even though I am critical, I shall try to raise concerns with respect and recognition that there are few easy solutions.
My overall message is one of concern. At one level, Rwanda is doing remarkably and surprisingly well—in terms of security, the economy, and non-political aspects of governance. However, deep resentments and ethnic attachments persist, and hardships and significant inequality remain. While it is difficult to know what people really feel, my general conclusion is that the social fabric remains tense beneath a veneer of good will. A crucial issue is that the political system is authoritarian and designed for control rather than dialogue. It is also a political system that many Rwandans believe is structured to favor particular groups over others. Fostering trust in such a political context is highly unlikely.
I also conclude that a “genocide lens” has limits for the objective of social repair. The genocide lens has been invaluable for achieving international recognition of what happened in 1994. But that lens leads to certain biases about Rwanda’s history and society that limit long-term social repair in Rwanda.
Rwandan Genocide Memorial, 7 April 2011, El Fasher: the Rwandan community in UNAMID organized the 17th Commemoration of the 1994 Genocide against the Tutsi, held in Super Camp – RWANBATT 25 Military Camp (El Fasher). Photo by Albert Gonzalez Farran / UNAMID. CC-BY-NC-ND-2.0 via UNAMID Flickr.
During the past 20 years, a sea change in international recognition has occurred. Fifteen years ago, few people around the world knew that genocide had taken place in Rwanda. Today, the “Rwandan Genocide” is widely recognized as a world historical event. That global recognition is an achievement. We also know a great deal more about the causes and dynamics of the genocide itself.
However, several important controversies and unanswered questions remain. One is who killed President Habyarimana on 6 April 1994. Another is how to conceptualize when the plan for genocide began. Some date the plan for genocide to the late 1950s; others to the 1990s; still others to April 1994. A third question is how one should conceptualize RPF responsibility. Some depict the former rebels as saviors who stopped the genocide. Others argue that their actions were integral to the dynamics that led to genocide. And there are other issues as well, including how many were killed. Each of these issues remains intensely debated and hopefully will be the subject of open-minded inquiry in the years to come.
Contemporary Rwanda is at one level inspiring. The government is visionary, ambitious, and accomplished. The plan is to transform the society, economy, and culture—and to wean the state from foreign aid. The government has successfully introduced major reforms. The tax system is much improved. Public corruption is virtually absent. Remarkable results in public health and the economy have been achieved. Public security is also dramatically improved.
But there is a dark side. Most importantly, the government is repressive. The government seeks to exercise control over public space, especially around sensitive topics—in politics, in the media, in the NGO sector, among ordinary citizens, and even among donors. The net impact is the experience of intimidation and, as a friend aptly put it, many silences.
That brings me to the delicate question of reconciliation. Reconciliation is an imprecise concept for what I mean. What matters is the quality of the social fabric in Rwanda—the trust between people—and the quality of state-society relations.
A central pillar in Rwanda’s social reconstruction process has been justice. Much is written on gacaca, the government’s extraordinary program to transform a traditional dispute settlement process into a country-wide, decade-long process to account for genocide crimes. Gacaca brought some survivors satisfaction at finally seeing the guilty punished. Gacaca spawned some important conversations, led to important revelations, and prompted some sincere apologies.
But there were also a lot of problems. There were lies on all sides. There were manipulations of the system. Some apologies were pro-forma. And there were weak protections for witnesses and defendants alike. In many cases, justice was not done. But to my mind the bigger issue is that gacaca reinforced the idea that post-genocide Rwanda is an environment of winners and losers.
The entire justice process excluded non-genocide crimes, in particular atrocities that the RPF committed as it took power, in the northwest in the late 1990s, and in Congo, where a lot of violence occurred. This meant that whole categories of suffering in the long arc of the 1990s and 2000s were neither recognized nor accounted for. Justice was one-sided. Many Rwandans therefore experience it as political justice that serves the RPF’s goal of retaining power.
The second issue is the scale. A million citizens, primarily Hutu, were accused. The net effect is that the legal process served to politically demobilize many Hutus, as Anu Chakravarty has written. Having watched the process of rebuilding social cohesion and state-society relations after atrocity in several places, I have come to the conclusion that inclusion is vitally important.
If states privilege justice as a mechanism for social healing, judicial processes should recognize the multi-sided nature of atrocity. All groups that suffered from atrocity should be able to give voice to their experiences and, if punitive measures are on the table, seek accountability. Otherwise, in the long run, justice looks like a charade, one that ultimately may undermine the memories it is designed to preserve.
Here is where the “genocide lens” did not serve Rwanda well. A genocide lens narrates history as a story between perpetrators and victims. Yet the Rwandan reality is much more complicated.
Scott Straus is Professor of Political Science and International Studies at UW-Madison. Scott specializes in the study of genocide, political violence, human rights, and African politics. His published work includes several books on Rwanda and articles in African Affairs. A longer version of this article was presented at the “Rwanda Today: Twenty Years after the Genocide” event at Humanity House in The Hague on 3 April 2014. The author wishes to thank the organizers of that event.
To mark the 20th anniversary of the genocide, African Affairs is making some of their best articles on Rwanda freely available. Don’t miss this opportunity to read about the legacy of genocide and Rwandan politics under the RPF.
African Affairs is published on behalf of the Royal African Society and is the top ranked journal in African Studies. It is an inter-disciplinary journal, with a focus on the politics and international relations of sub-Saharan Africa. It also includes sociology, anthropology, economics, and to the extent that articles inform debates on contemporary Africa, history, literature, art, music and more.
Subscribe to the OUPblog via email or RSS.
Subscribe to only history articles on the OUPblog via email or RSS.
We invite you to celebrate with us by submitting your own art to our Street Photography Contest. According to Grove Art Online, street photography is:
Genre of photography that can be understood as the product of an artistic interaction between a photographer and an urban public space. It is distinguished from documentary photography in that the photographer is not necessarily motivated by the evidentiary value or socio-political function of the resulting photographs. Unlike photojournalism, a street photographer’s images are not intended to illustrate a news story or other narrative. Instead, their primary goal is expressive and communicates a subjective impression of the experience of everyday life in a city. Thus neither the locale nor the subject-matter defines street photography; it is the photographer’s approach to the medium and movement through public space that differentiate street photography from related forms of photography.
Eugène Atget. The steeple of the church before the restoration in 1913. Collections Department of the Ecole Nationale Supérieure des Beaux-Arts. Public domain via Wikimedia Commons.
Live in a city? Have a camera? Send us your best shots.
To submit, please email groveartmarketing[at]oup[dot]com, with “photography competition” in the subject line. Please include a caption describing your work in the body of the email, and attach your image (maximum of 3MB). Competition will close on 28 April 2014. Please read our terms and conditions before entering the competition.
Victoria Davis works in marketing for Oxford University Press, including Grove Art and Oxford Art Online.
Oxford Art Online offers access to the most authoritative, inclusive, and easily searchable online art resources available today. Through a single, elegant gateway users can access — and simultaneously cross-search — an expanding range of Oxford’s acclaimed art reference works: Grove Art Online, the Benezit Dictionary of Artists, the Encyclopedia of Aesthetics, The Oxford Companion to Western Art, and The Concise Oxford Dictionary of Art Terms, as well as many specially commissioned articles and bibliographies available exclusively online.
One of the most prominent features of jurisdictional rules is a focus on the location of actions. For example, the extraterritorial reach of data privacy law may be decided by reference to whether there was the offering of goods or services to EU residents, in the EU.
Already in the earliest discussions of international law and the Internet it was recognised that this type of focus on the location of actions clashes with the nature of the Internet – in many cases, locating an action online is a clumsy legal fiction burdened by a great degree of subjectivity.
I propose an alternative: a doctrine of ‘market sovereignty’ determined by reference to the effective reach of ‘market destroying measures’. Such a doctrine can both delineate, and justify, jurisdictional claims in relation to the Internet.
It is commonly noted that the real impact of jurisdictional claims in relation to the Internet is severely limited by the intrinsic difficulty of enforcing such claims. For example, Goldsmith and Wu note that:
“[w]ith few exceptions governments can use their coercive powers only within their borders and control offshore Internet communications only by controlling local intermediaries, local assets, and local persons” (emphasis added)
However, I would advocate the removal of the word ‘only’. Without it, what unflatteringly could be called a cliché becomes a highly useful description of a principle established at least 400 years ago.
The word ‘only’ gives the impression that such powers are of limited significance for the overall question, which is misleading. The power governments have within their territorial borders can be put to great effect against offshore Internet communications. A government determined to have an impact on foreign Internet actors that are beyond its directly effective jurisdictional reach may introduce what we can call ‘market destroying measures’ to penalise the foreign party. For example, it may introduce substantive law allowing its courts to, due to the foreign party’s actions and subsequent refusal to appear before the court, make a finding that:
that party is not allowed to trade within the jurisdiction in question;
debts owed to that party are unenforceable within the jurisdiction in question; and/or
parties within the control of that government (e.g. residents or citizens) are not allowed to trade with the foreign party.
In light of this type of market destroying measure, the enforceability of jurisdictional claims in relation to the Internet may not be as limited as it seems at first glance.
In this context, it is also interesting to connect to the thinking of 17th century legal scholars, exemplified by Hugo de Groot (better known as Hugo Grotius). Grotius stated that:
“It seems clear, moreover, that sovereignty over a part of the sea is acquired in the same way as sovereignty elsewhere, that is, [...] through the instrumentality of persons and territory. It is gained through the instrumentality of persons if, for example, a fleet, which is an army afloat, is stationed at some point of the sea; by means of territory, in so far as those who sail over the part of the sea along the coast may be constrained from the land no less than if they should be upon the land itself.”
A similar reasoning can usefully be applied in relation to sovereignty in the context of the Internet. Instead of focusing on the location of persons, acts or physical things – as is traditionally done for jurisdictional purposes – we ought to focus on marketplace control – on what we can call ‘market sovereignty’. A state has market sovereignty, and therefore justifiable jurisdiction, over Internet conduct where it can effectively exercise ‘market destroying measures’ over the market that the conduct relates to. Importantly, in this sense, market sovereignty both delineates, and justifies, jurisdictional claims in relation to the Internet.
The advantage that market destroying measures have over traditional enforcement attempts should escape no one. Rather than interfering with a business’s operations worldwide in the case of a dispute, market destroying measures affect only the offender’s business on the market in question. It is thus a much more sophisticated and targeted approach. Where a foreign business finds compliance with a court order untenable, it simply has to be prepared to abandon the market in question, but remains free to pursue business elsewhere. Thus, an international agreement under which states undertake to apply only market destroying measures, and not to seek further enforcement, would address the often excessive threat of arrests of key figures, such as CEOs, of offending globally active Internet businesses.
Professor Dan Jerker B. Svantesson is Managing Editor of the journal International Data Privacy Law. He is author of Internet and E-Commerce Law, Private International Law and the Internet, and Extraterritoriality in Data Privacy Law. Professor Svantesson is a Co-Director of the Centre for Commercial Law at the Faculty of Law (Bond University) and a Researcher at the Swedish Law & Informatics Research Institute, Stockholm University.
Combining thoughtful, high level analysis with a practical approach, International Data Privacy Law has a global focus on all aspects of privacy and data protection, including data processing at a company level, international data transfers, civil liberties issues (e.g., government surveillance), technology issues relating to privacy, international security breaches, and conflicts between US privacy rules and European data protection law.
Untangling recent and still-unfolding events in Ukraine is not a simple task. The western news media has been reasonably successful in acquainting its consumers with events, from the fall of Yanukovich on the back of intensive protests in Kiev by those angry at his venality and at his signing a pact with Russia rather than one with the EU, to the very recent moves by Russia to annex Crimea.
However, as is perhaps inevitable where space is compressed, messages brief and time short, a habit of talking about Ukraine in binaries seems to be prevalent. Superficially helpful, it actually hinders a deeper understanding of the issues at hand – and any potential resolution. Those binaries, encouraged to some extent by the nature of the protests themselves (‘pro-Russian’ or ‘pro-EU/Western’), belie complex and important heterogeneities.
Ironically, the country’s name, taken by many to mean ‘borderland’, is one such index of underlying complexity. Commentators outside the mainstream news, including specialists like Andrew Wilson, have long been vocal in pointing out that the East-West divide is by no means a straightforward geographic or linguistic diglossia, drawn with a compass or ruler down the map somewhere east of Kiev, with pro-Western versus pro-Russian sentiment ‘mapped’ accordingly. Being a Russian-speaker is not automatically coterminous with following a pro-Russian course for Ukraine; and the reverse is also sometimes true. In a country with complex legacies of ethnic composition and ruling regime (western regions, before incorporation into the USSR, were ruled at different times in the modern period by Poland, Romania and Austria-Hungary), local vectors of identity also matter, beyond (or indeed, within) the binary ethnolinguistic definition of nationality.
The Bridge to the European Union from Ukraine to Romania. Photo by Madellina Bird. CC BY-NC-SA 2.0 via madellinabird Flickr.
Just as slippery is the binary used in Russian media, which portrays the old regime as legitimately elected and the new one as basically fascist, owing to its incorporation of Ukrainian nationalists of different stripes. First, this narrative supposes that being legitimately elected negates Yanukovich’s anti-democratic behaviours since that election, including the imprisonment of his main political opponent, Yulia Tymoshenko (whatever the ambivalence of her own standing in the politics of Ukraine). Second, the warnings about Ukrainian fascism call to mind George Bernard Shaw’s comment about half-truths as being especially dangerous. As well-informed Ukraine watchers like Andreas Umland and others have noted, overstating the presence of more extreme elements sets up another false binary as a way of delegitimising the new regime in toto. This is certainly not to say that Ukraine’s nationalist elements should escape scrutiny, and here we have yet another warning against false binaries: EU countries themselves are hardly immune to voting in the far right at the fringes, but they may still want to keep eyes and ears open as to exactly what some of Ukraine’s coalition partners think and say about its history and heroes, the Jews, and much more.
So much for seeing the bigger picture, but events may well still take turns that few historians could predict with detailed accuracy. What we can see, at least, from the perspective of a maturing historiographic canon in the west, is that Ukraine is a country that demands a more sophisticated take on identity politics than the standard nationalist discourse allows – a discourse that has been in existence since at least the late nineteenth century, and yet one which the now precarious-seeming European idea itself was set up to moderate.
First published in January 1886, The English Historical Review (EHR) is the oldest journal of historical scholarship in the English-speaking world. It deals not only with British history, but also with almost all aspects of European and world history since the classical era.
When Elinor Ostrom visited Lafayette College in 2010, the number of my non-political science colleagues who announced familiarity with her work astonished me. Anthropologists, biologists, economists, engineers, environmentalists, historians, philosophers, sociologists, and others flocked to see her.
Elinor’s work cut across disciplines and fields of governance because she deftly employed and developed interrelated concepts having applications in multiple settings. A key foundation of these concepts is federalism—an idea central also to the work of her mentor and husband, Vincent Ostrom.
Vincent understood federalism to be a covenantal relationship that establishes unity for collective action while preserving diversity for local self-governance by constitutionally uniting separate political communities into a limited but encompassing political community. Power is divided and shared between concurrent jurisdictions—a general government having certain nationwide duties and multiple constituent governments having broad local responsibilities. These jurisdictions both cooperate and compete. The arrangement is non-hierarchical and animated by multiple centers of power, which, often competing, exhibit flexibility and responsiveness.
From this foundation, one can understand why the Ostroms embraced the concept of polycentricity advanced in Michael Polanyi’s The Logic of Liberty (1951), namely, a political or social system consisting of many decision-making centers possessing autonomous, but limited, powers that operate within an encompassing framework of constitutional rules.
This general principle can be applied to the global arena where, like true federalists, the Ostroms rejected the need for a single global institution to solve collective action problems such as environmental protection and common-pool resource management. They advocated polycentric arrangements that enable local actors to make important decisions as close to the affected situation as possible. Hence, the Ostroms also anticipated the revival of the notion of subsidiarity in European federal theory.
But polycentricity also applies to small arenas, such as irrigation districts and metropolitan areas. Elinor and Vincent worked on water governance early in their careers, and both argued that metropolitan areas are best organized polycentrically because urban services have different economies of scale, large bureaucracies have inherent pathologies, and citizens are often crucial in co-producing public services, especially policing (the subject of empirical studies by Elinor and colleagues).
The Ostroms valued largely self-organizing social systems that border on but do not topple into sheer anarchy. Anarchy is a great bugaboo of centralists, who de-value the capacity of citizens to organize for self-governance. On this view, without expert instructions from above, citizens are headless chickens. But this centralist notion exposes citizens to the depredations of vanguard parties and budget-maximizing bureaucrats.
This is why Vincent placed Hamilton’s famous statement in Federalist No. 1 at the heart of his work, namely, “whether societies of men are really capable or not, of establishing good government from reflection and choice” rather than “accident and force.” The Ostroms expressed abiding confidence in the ability of citizens to organize for self-governance in multi-sized arenas if given opportunities to reflect on their common dilemmas, make reasoned constitutional choices, and acquire resources to follow through with joint action.
Making such arrangements work also requires what Vincent especially emphasized as covenantal values, such as open communication, mutual trust, and reciprocity among the covenanted partners. Thus, polycentric governance, like federal governance, requires both good institutions and healthy processes.
As such, the Ostroms also placed great value on Alexis de Tocqueville’s notion of self-interest rightly understood. Indeed, it is the process of self-organizing and engaging one’s fellow citizens that helps participants to understand their self-interest rightly so as to act in collectively beneficial ways without central dictates.
Consequently, another major contribution of the Ostroms was to point out that governance choices are not limited to potentially gargantuan government regulation or potentially selfish privatization. There is a third way grounded in federalism.
John Kincaid is the Robert B. and Helen S. Meyner Professor of Government and Public Service at Lafayette College and Director of the Meyner Center for the Study of State and Local Government. He served as Associate Editor and Editor of Publius: The Journal of Federalism, and has written and lectured extensively on federalism and state and local government.
More on the applications of, and reflections on, the work of Elinor and Vincent Ostrom can be found in this recently released special issue from Publius: The Journal of Federalism. In addition, Publius has also just released a free virtual collection of the most influential articles written by the Ostroms and published in Publius over the past 23 years.
Publius: The Journal of Federalism is the world’s leading journal devoted to federalism. It is required reading for scholars of many disciplines who want the latest developments, trends, and empirical and theoretical work on federalism and intergovernmental relations.
This week, managing editor Troy Reeves speaks with scholar and artist Abbie Reese about her recently published book, Dedicated to God: An Oral History of Cloistered Nuns. Through an exquisite blend of oral and visual narratives, Reese shares the stories of the Poor Clare Colettine Order, a multigenerational group of cloistered contemplative nuns living in Rockford, Illinois. Among other issues, Reese’s photographs and interviews raise valuable questions about collective memory formation and community building in a space marked by anonymity and silence.
A metal grille is the literal and symbolic separation and reflection of the nuns’ vow of enclosure. The Poor Clare Colettine nuns film Abbie Reese for a collaborative ethnographic documentary. Courtesy of Abbie Reese.
In her interview with Troy, Reese talks about how popular culture sparked her interest in nuns and what it was like to work with the real women of the Poor Clare Colettine Order. Reese also discusses how she came to incorporate oral history into her work as a visual artist and her next, upcoming project.
Reese was also kind enough to share an excerpt from an interview with Sister Mary Nicolette. When sending the clip, Reese noted, “Her voice is hoarse from the interview because the nuns observe monastic silence, speaking only what is necessary to complete a task.”
Poor Clare Colettine nuns return to the monastery after a funeral service on the premises, in 2010, for a cloistered nun who served in WWII; Sister Ann Frances joined an active order of nuns before she transferred to the cloistered contemplative order at the Corpus Christi Monastery. Courtesy of Abbie Reese.
In keeping with the nuns' vow of enclosure and to limit the need for workers to enter the cloistered monastery, Poor Clare Colettine nuns undertake repairs and maintenance themselves, including cleaning the boiler while wearing the full habit. Courtesy of Abbie Reese.
With the WHO Executive Board recently adopting the resolution ‘Strengthening of palliative care as a component of integrated treatment within the continuum of care’, which is to be referred to the World Health Assembly for ratification in May, Nathan Cherny puts the current global situation in perspective and lays out the next steps needed in this crucial campaign to end the suffering of millions.
By Nathan Cherny
In the curious trail that has been my life thus far, some would say that there was a certain inevitability that I would end up working for cancer patients’ right to access medication for adequate relief of their suffering. As a medical student I suffered terrible cancer-related pain from a thoracotomy to remove lung metastases for testicular cancer. As an oncologist and palliative care physician in the Middle East, my current work allows me to look after both Israeli and Palestinian patients. My profession has also taken me to caring for many “medical tourists” from Eastern Europe as well as foreign workers from Thailand, India, Nepal, and the Philippines. Oh, and I was born on Human Rights Day, 10 December 1958!
I hate pain. I am appalled by the global scope of untreated and unrelieved cancer pain. At the initiative of its Palliative Care Working Group, the European Society for Medical Oncology (ESMO) has taken this on board as a global priority issue. ESMO facilitated the first comprehensive study to evaluate the barriers to pain relief in Europe, which highlighted the distressing situation in many Eastern European countries.
The Global Opioid Policy Initiative (GOPI) studied opioid availability and accessibility for cancer patients in Africa, Asia, the Middle East, Latin America, and the Caribbean. The results were published in a special supplement of the Annals of Oncology in December 2013. The seven manuscripts in the special issue highlighted the global problem of excessively restrictive regulations regarding the prescribing and dispensing of opioids — a ‘catastrophe born out of good intentions’.
Regulations intended to prevent abuse and diversion mean that most patients with a genuine need to relieve severe cancer pain cannot access the appropriate medication. Millions of people around the world end their lives racked with pain, harming not only the patients themselves but also the families who bear witness to this torturous tragedy.
On 23 January 2014, the WHO Executive Board adopted a stand-alone resolution on palliative care which will be referred to the World Health Assembly for ratification in May 2014. This is great news for all those campaigning to improve access to medication to end the suffering of millions. There is still much to be done on this long, winding road, yet we can still be proud. Thanks to our united efforts and the evidence provided by the GOPI data, our voices are being heard.
Overregulation of opioids is not the only problem impeding global relief of cancer pain. In many places around the world there is major need to: educate clinicians in the assessment and management of pain; educate the public regarding the effectiveness and safety of opioid analgesia in the management of cancer pain; and secure supplies of affordable medications.
The next steps: The GOPI Collaborative Group is now writing to Ministers of Health in the many countries where we have identified major over-regulation with a 10 point plan to help redress the problem, covering education, restrictions, limits, professional standards, monitoring, and prescription.
Tell us what actions you can take to incorporate these next steps in your country. Can you contact your Ministry of Health? What could be inspirational for others to know? We make more noise if we all shout together.
Annals of Oncology is a multidisciplinary journal that publishes articles addressing medical oncology, surgery, radiotherapy, paediatric oncology, basic research and the comprehensive management of patients with malignant diseases. Follow them on Twitter at @Annals_Oncology.
Image credits: (1) Photo of Nathan Cherny, via ESMO; (2) GOPI banner, via Global Opioid Policy Initiative/ESMO
When a religious believer wears a religious symbol to work, can their employer object? The question brings corporate dress codes and expressions of religious belief into sharp conflict. The employee can marshal discrimination and human rights law on the one side, whereas the employer may argue that conspicuous religion makes for bad business.
The issue reached the European Court of Human Rights in 2013 in a group of cases (Eweida and Others v. United Kingdom), following a lengthy and unsuccessful domestic legal campaign, brought by a group of employees who argued that their right to freedom of religion and belief (under Article 9 of the Convention) had not been protected when the UK courts favoured their employers’ interests.
Nadia Eweida, an airline check-in clerk, and Shirley Chaplin, a nurse, had been refused permission by their respective employers, British Airways and an NHS trust, to wear a small cross on a necklace so that it was visible to other people. The employer’s rationale in each case was rather different. British Airways wanted to maintain a consistent corporate image so that no ‘customer-facing staff’ should be permitted to wear jewellery for any reason. The NHS trust argued that there was a potential health and safety risk if jewellery were worn by nursing staff – in Ms Chaplin’s case a disturbed patient might ‘seize the cross’ and harm either themselves or indeed Ms Chaplin.
Both applicants argued that their sense of religious obligation to wear a cross outweighed the employer’s normal discretion in setting a uniform policy. They also argued that their respective employers had been inconsistent, because their uniform policies made a number of specific accommodations for members of minority faiths, such as Muslims and Sikhs.
A major difficulty for both Eweida and Chaplin was the risk that their cross-wearing could be dismissed as a personal preference rather than a protected manifestation of their beliefs. After all, many – probably most – Christians do not choose to wear the cross. The UK domestic courts found that the practice was not regarded as a mandatory religious practice (applying a so-called ‘necessity’ test) but rather one merely ‘motivated’ by religion, and therefore not eligible for protection. This did not help either Eweida or Chaplin, as both believed passionately that they had an obligation to wear the cross to attest to their faith (in Chaplin’s case this was in response to a personal vow to God). The other major difficulty for both applicants was that the Court had also historically accepted a rather strange argument that people voluntarily surrender their right to freedom of religion and belief in the workplace when they enter into an employment contract, so that the employer has discretion to set its policies without regard to any interference with its employees’ religious practices. If an employee found this too burdensome, then he or she could protect those rights by resigning and finding another job. This argument, which ignores the realities of the labour market and imposes a very heavy burden on religious employees, has been a key reason why so few ‘workplace’ claims have been successful before the European Court.
Arguably the most significant aspect of the judgment was that the religious liberty questions were in fact considered by the Court rather than being dismissed as being inapplicable in the workplace (as the government and the National Secular Society had both argued). The Court specifically repudiated both the necessity test and the doctrine of ‘voluntary surrender’ of Article 9 rights at work. As a result, it has opened the door both to applications for protection for a much wider group of religious practices in the future and for claims relating to employment. From a religious liberty perspective this is surely something to welcome.
Nadia Eweida’s application was successful on its merits. It is now clear therefore that an employer cannot over-ride the religious conscience of its staff due to the mere desire for uniformity. However, Chaplin was unsuccessful, the Court essentially finding that ‘health and safety’ concerns provided a legitimate interest allowing the employer to over-ride religious manifestation. This is disappointing, particularly since evidence was presented that the health and safety risks of a nurse wearing a cross were minimal and that, in this case, Chaplin was prepared to compromise to reduce them still further. Hopefully this aspect of the judgment (unnecessary deference to national authorities in the realm of health and safety) will be revisited in future.
Whether or not that happens, it is clear that religious expression in the workplace must now be approached differently after the European Court’s ruling. The idea that employees must leave their religion at the door has been dealt a decisive blow. From now on, if corporate policy over-rides employees’ religious beliefs, employers will be under a much greater obligation to demonstrate why, if at all, this is necessary.
Andrew Hambler and Ian Leigh are the authors of “Religious Symbols, Conscience, and the Rights of Others” (available to read for free for a limited time) in the Oxford Journal of Law and Religion. Dr Andrew Hambler is senior lecturer in human resources and employment law at the University of Wolverhampton. His research focusses on how the manifestation of religion in the workplace is regulated both at an organisational and at a legal level. Andrew is the author of Religious Expression in the Workplace and the Contested Role of Law, a monograph due for publication in November 2014. Ian Leigh is a Professor of Law at Durham University. He has written extensively on legal and human rights questions concerning religious liberty. He is co-author of Rex Ahdar and Ian Leigh, Religious Freedom in the Liberal State (2nd edition, OUP, 2013).
The Oxford Journal of Law and Religion is hosting its second annual Summer Academy in Law and Religion this coming June. The title of this year’s academy is “Versions of Secularism – Comparative and International Legal and Foreign Policy Perspectives on International Religious Freedom.” The meeting will take place June 23 – 27 at St. Hugh’s College, Oxford. Click for more details about the conference, confirmed speakers, and registration.
The Oxford Journal of Law and Religion publishes a range of articles drawn from various sectors of the law and religion field, including: social, legal and political issues involving the relationship between law and religion in society; comparative law perspectives on the relationship between religion and state institutions; developments regarding human and constitutional rights to freedom of religion or belief; considerations of the relationship between religious and secular legal systems; empirical work on the place of religion in society; and other salient areas where law and religion interact (e.g., theology, legal and political theory, legal history, philosophy, etc.).
Tuberculosis (TB) is a disease of poverty and social exclusion with a global impact. It is these underlying truths that are captured in the theme of World TB Day 2014 ‘Reach the three million: a TB test, treatment and cure for all’. Of the nine million cases of tuberculosis each year, one-third does not have access to the necessary TB services to treat them and prevent dissemination of the disease in their communities. The StopTB Partnership is calling for ‘a global effort to find, treat and cure the three million’ and thus eliminate TB as a public health problem. So is the scientific community making sufficient progress to realise this target?
Early diagnosis is a cornerstone of management of the individual, and we know that as the disease progresses and the bacterial load and severity of disease increase, the likelihood of a poor outcome grows. It is important to distinguish between diagnosis of tuberculosis and detection, which is confirmation of the presence of mycobacteria. Diagnosis for the three million (and many more) is largely dependent on the clinical expertise of the healthcare worker, with minimal input from technology. Detection, by contrast, requires input from microbiological services, and the principal tool in this area is sputum smear microscopy. A sputum sample with no evidence of acid-fast bacilli is the accepted predictor of low risk of transmission, and so early application is critical in the management pathway. With improvements such as the auramine stain and LED fluorescence microscopy, the smear remains a cost-effective component of TB screening programmes. The emergence of multi-drug resistant tuberculosis has accentuated the need for prompt confirmation of drug susceptibility, and this is where molecular tools have potential impact. The WHO-supported roll-out of GeneXpert in resource-poor settings is going ahead and we are seeing change in practice, but it is too soon to determine the public health impact of this innovation. The challenge for microbiology is not to get drawn into a ‘one size fits all’ solution. In many settings, the low-technology, low-cost and rapid screening of smears serves to break the chain of transmission of drug-sensitive tuberculosis. In areas of high endemicity of drug-resistant TB, such as South Africa, however, an equally fast indication of drug resistance is essential.
Photo by WHO/Jean Cheung
Diagnosis leads to treatment. TB is curable, but treatment regimens are long, toxic and complex to deliver. Following the stakeholders’ meeting in Cape Town in 2000 there has been a major effort to open up the drug development pipeline. There are two aspects to this: first, new agents and, second, clinical trials. There is new enthusiasm for exploring compounds with activity against TB, and the publication of the whole genome of Mycobacterium tuberculosis allowed the interrogation of its biochemistry, opening the door for medicinal chemists to contribute their expertise. The development of MDRTB has led us to reconsider compounds previously excluded as too toxic or too difficult to administer; these drugs, such as PAS and thioridazine, are now being re-visited or forming the basis of fresh iterations of chemical screening programmes. After 30 years with no new drugs for TB treatment, two phase 3 trials (RIFAQUIN and OFLATUB) were reported in 2013 and a third (REMoxTB) is expected to report shortly. These studies have shaken things up. They each have the potential to improve TB treatment. However, it could be argued that their real benefit lies in the development of a network of facilities capable of undertaking TB clinical trials, as exemplified by the Global Alliance for TB Drug Development and the EDCTP-funded PanACEA consortium, and in their contribution to the active debate about how to deliver clinical trials efficiently so that they have a real impact on individuals and populations. We are now looking outside the world of TB, for example to cancer trial methodology, for innovations such as the multi-arm multi-stage (MAMS) approach. A significant challenge here is to convert the results of studies undertaken with the aim of full regulatory approval into the rather more complex environment of programmatic delivery.
The host-pathogen interaction for M. tuberculosis is manifest in the pathology of tuberculosis and has proven to be a fruitful area of immunological research. This, together with the (variable) success of BCG vaccination, has led us to the reasonable expectation of a vaccine for control of tuberculosis. There has been much innovation in this area and new studies are in the pipeline. The quest for immunological markers of disease continues. Useful diagnostic tools for latency have been developed in the shape of IGRA tests (Tuberculosis: Diagnosis and Treatment), but, more importantly, recent advances lead us to the idea that we may be able to define a host response signature to tuberculosis. If successful, this approach may allow us to select those patients for whom a shorter course of therapy is adequate. From the UK MRC studies it was clear that as many as 80% of patients would be cured with a four-month regimen; the difficulty was that they could not be identified in advance or during treatment. A host response biomarker may well enable us to address this issue.
M. tuberculosis is a fascinating organism with many features of its biology that are distinct from other bacteria. For this reason the TB research community has become rather insular, not necessarily drawing on the experience from the wider bacteriology community. This was further exacerbated by the apparent fall in incidence of TB through the 1960s and 70s. Complacency is the term that comes to mind. Despite the commitment of groups such as those led by Mitchison and Grossett, there has been very little innovation in detection and diagnosis, and no new drug introduced to first line treatment after the 1960s. The declaration by WHO of TB as a global health emergency alerted us to the need for new ideas and new tools to meet this challenge. Twenty years down the line, we have rolled out new diagnostics and a new drugs pipeline that flows with the first phase 3 trials reporting shortly. Similarly, innovation in vaccine design and application moves forward and importantly our understanding of operational and behavioural aspects of controlling TB increases. However, we must not become complacent again. M. tuberculosis is not just an academic challenge and as long as the three million exist, we need to focus all our knowledge to achieve a TB test, treatment and cure for all.
Timothy D. McHugh is Professor of Medical Microbiology at the Centre for Clinical Microbiology, University College London. This is an adapted version of Professor McHugh’s commentary for the Transactions of the Royal Society of Tropical Medicine and Hygiene.
In the 1960s, Coca-Cola had a cocaine problem. This might seem odd, since the company removed cocaine from its formula around 1903, bowing to Jim Crow fears that the drug was contributing to black crime in the South. But even though Coke went cocaine-free in the Progressive Era, it continued to purchase coca leaves from Peru, removing the cocaine from the leaves but keeping what was left over as a flavoring extract. By the end of the twentieth century it was the single largest purchaser of legally imported coca leaves in the United States.
Yet, in the 1960s, Coke feared that an international counternarcotics crackdown on cocaine would jeopardize their secret trade with Peruvian cocaleros, so they did a smart thing: they began growing coca in the United States. With the help of the US government, a New Jersey chemical firm, and the University of Hawaii, Coca-Cola launched a covert coca operation on the island of Kauai. In 1965, growers in the Pacific paradise reported over 100 shrubs in cultivation.
How did this bizarre Hawaiian coca operation come to be? How, in short, did Coca-Cola become the only legal buyer of coca produced on US soil? The answer, I discovered, had to do with the company’s secret formula: not its unique recipe, but its peculiar business strategy for making money—what I call Coca-Cola capitalism.
What made Coke one of the most profitable firms of the twentieth century was its deftness in forming public- and private-sector partnerships that helped the company acquire the raw materials it needed at low cost. Coca-Cola was never really in the business of making stuff; it simply positioned itself as a kind of commodity broker, channeling ecological capital between producers and distributors and generating profits off the transaction. It thrived by making friends, both in government and in the private sector, friends that built the physical infrastructure and technological systems that produced and transported the cheap commodities needed for mass-marketing growth.
In the case of coca leaf, Coca-Cola had the Stepan chemical company of Maywood, New Jersey, which was responsible for handling Coke’s coca trade and “decocainizing” leaves used for flavoring extract (the leftover cocaine was ultimately sold to pharmaceutical firms for medicinal purposes). What Coke liked about its relationship with Stepan was that it kept the soft drink firm out of the limelight, obfuscating its connection to a pesky and tabooed narcotics trade.
But Stepan was just part of the procurement puzzle. The Federal Bureau of Narcotics (FBN) also played a pivotal role in this trade. Besides helping to pilot a Hawaiian coca farm, the US counternarcotics agency negotiated deals with the Peruvian government to ensure that Coke maintained access to coca supplies. The FBN and its successor agencies did this even while initiating coca eradication programs, tearing up shrubs in certain parts of the Andes in an attempt to cut off cocaine supply channels. By the 1960s, coca was becoming an enemy of the state, but only if it was not destined for Coke.
In short, Coca-Cola—a company many today consider a paragon of free-market capitalism—relied on the federal government to get what it wanted.
An old Coca-Cola bottling plant showing some of the municipal pipes that these bottlers tapped into. Courtesy of Bart Elmore.
Coke’s public partnerships extended to other ingredients. Take water, for example. For decades, the Coca-Cola Company relied on hundreds of independently owned bottlers (over 1,000 in 1920 alone) to market its products to consumers. Most of these bottlers simply tapped into the tap to satiate Coke’s corporate thirst, connecting company piping to established public water systems that were in large part built and maintained by municipal governments.
The story was much the same for packaging materials. Beginning in the 1980s, Coca-Cola benefited substantially from the development of curbside recycling systems paid for by taxpayers. Corporations welcomed the government handout, because it allowed them to expand their packaging production without taking on more costs. For years, environmental activists had called on beverage companies to clean up their waste. In fact, in 1970, 22 US congressmen supported a bill that would have banned the sale of nonreturnable beverage containers in the United States. But Congress, urged on by corporate lobbyists, abandoned the plan in favor of recycling programs paid for by the public. In the end, Coke and its industry partners were direct beneficiaries of the intervention, utilizing scrap metal and recycled plastic that was conveniently brought to them courtesy of municipal reclamation programs.
In all these interwoven ingredient stories there was one common thread: Coke’s commitment to outsourcing and franchising. The company consistently sought a lean corporate structure, eschewing vertical integration whenever possible. All it did was sell a concentrated syrup of repackaged cheap commodities. It did not own sugar plantations in Cuba (as the Hershey Chocolate Company did), coca farms in Peru, or caffeine processing plants in New Jersey, and by not owning these assets, the company remained nimble throughout its corporate life. It found creative ways to tap into pipes, plantations, and plants managed by governments and other businesses.
In the end, Coca-Cola realized that it could do more by doing less, extending its corporate reach, both on the frontend and backend of its business, by letting other firms and independent bottlers take on the risky and sometimes unprofitable tasks of producing cheap commodities and transporting them to consumers.
This strategy for doing business I have called Coca-Cola capitalism, so-named because Coke modeled it particularly well, but there were many other businesses, in fact some of the most profitable of our time, that followed similar paths to big profits. Software firms, for example, which sell a kind of information concentrate, have made big bucks by outsourcing raw material procurement responsibilities. Fast food chains, internet businesses, and securities firms—titans of twenty-first century business—have all demonstrated similar proclivities towards the Coke model of doing business.
Thus, as we look to the future, we would do well to examine why Coca-Cola capitalism has become so popular in the past several decades. Scholars have begun to debate the causes of a recent trend toward vertical disintegration, and while there are undoubtedly many causes for this shift, it seems ecological realities need to be further investigated. After all, one of the reasons Coke chose not to own commodity production businesses was that they were both economically and ecologically unsustainable over the long term. Might other firms’ divestment from productive industries tied to the land be symptomatic of larger environmental problems associated with extending already stressed commodity networks? This is a question we must answer as we consider the prudence of expanding our current brand of corporate capitalism in the years ahead.
Enterprise & Society offers a forum for research on the historical relations between businesses and their larger political, cultural, institutional, social, and economic contexts. The journal aims to be truly international in scope. Studies focused on individual firms and industries and grounded in a broad historical framework are welcome, as are innovative applications of economic or management theories to business and its context.
The idea of extending life expectancy by modifying diet originated in the mid-20th century, when the effects of caloric restriction were discovered. These effects were first demonstrated in rats and then confirmed in other model organisms. Fasting activists like Paul Bragg or Roy Walford attempted to show in practice that caloric restriction also helps to prolong life in humans.
For a long time the crucial question in this research concerned finding a molecular mechanism that could explain how caloric restriction might promote longevity. The discovery of such a mechanism is possible with very simple organisms whose genetics are well understood and whose genes can be switched on or off. For example, the budding yeast, nematodes and fruit flies are windows into the complicated genetics of longevity. Several discoveries have been made in recent years, including resveratrol, the sirtuins, insulin-like growth factor signalling, the methuselah gene, and the Indy mutation.
Capillary feeding assay, developed in the laboratory of Seymour Benzer at Caltech, which allows tracking of consumed food
The effects of caloric restriction may be more complex than anticipated. The protein-to-carbohydrate ratio has been shown to play a large role in diet response. Additionally, medical concerns about the dangers of refined sugar and fructose for health have gained recognition, typically relating to high-mortality diseases and disorders such as diabetes, diabetic complications, and obesity.
Following an initial study of the antioxidant system of the budding yeast, we turned our sights to biogerontological studies after the discovery of a possible molecular mechanism of resveratrol action in the yeast model. However, we quickly realized that the fruit fly (specifically Drosophila) is likely a better model, because we could then also investigate behavioural outcomes and food intake. How would caloric restriction and the amount of carbohydrates in the diet affect the longevity of fruit flies?
Food with a dye enables measurement of food intake
Analysis of faecal spots left by fruit flies allows life-long measurement of medium ingestion
We asked whether the type of carbohydrate fed would affect mortality in fruit flies, comparing fructose, glucose, a plain mixture of the two, and sucrose (a disaccharide composed of the monomers fructose and glucose). We wanted to see whether fructose is a “poison” or “toxicant”, as claimed in some publications and in the popular lectures of Professor Robert Lustig.
We found, surprisingly, that flies fed on sucrose ceased to lay eggs after several weeks of adult life, and that sucrose shortened their mean life span at all concentrations above 0.5% total carbohydrate. On the other hand, we found that fruit flies were quite well adapted to living on fructose. Furthermore, this effect was not observed for a plain mixture of fructose and glucose.
Dietary response surface where concentrations of protein and carbohydrate ingested are put on X and Y axes, while Z is any physiological parameter which may depend on protein-to-carbohydrate ratio
The results were surprising, because sucrose is routinely used in laboratory recipes for fly food. The lower fecundity on sucrose was also unexpected. However, we realized that the effects we had observed would not justify immediate sugar denialism. The fly food used in the study was quite different from the usual fly food and, in a human context, probably more closely resembled a diet of spicy marmalade.
Nonetheless, it is known that egg laying in Drosophila is promoted by dietary proteins (taken up mostly from yeasts). The diets of our flies contained a very small amount of protein, and yet this deficiency did not interfere with egg laying on the monosaccharides, while the disaccharide sucrose caused a dramatic loss in fecundity.
Is it possible to apply our current data to human physiology? It seems rather difficult to draw conclusions about a healthy human diet from data obtained in insects. Insect physiology, with its specific developmental hormones and probably different metabolism and metabolic demands, is far removed from that of humans. Nevertheless, the general message is that the influence of diet on ageing cannot be reduced simply to the amount of calories or to macronutrient balance. The quality of nutrients, the micronutrients, and the peculiarities of digestion, including the gut microbiota, should also be taken into account. While scientists are often forced to simplify models to gain a better understanding of the molecular, biochemical, genetic, and physiological grounds of ageing, our understanding would likely benefit from bringing a wider variety of researchers, from ecologists to mathematicians, into the discussion.
The Journals of Gerontology® were the first journals on aging published in the United States. The tradition of excellence in these peer-reviewed scientific journals, established in 1946, continues today. The Journals of Gerontology, Series A® publishes within its covers The Journal of Gerontology: Biological Sciences and The Journal of Gerontology: Medical Sciences.
Image credit: All images courtesy of the authors. Do not use without permission.
Since their introduction in the United States in the 1940s, artificial fluoridation programmes have been credited with reducing tooth decay, particularly in deprived areas. They are acknowledged by the US Centers for Disease Control and Prevention as one of the ten great public health achievements of the 20th century (alongside vaccination and the recognition of tobacco use as a health hazard). Such plaudits, however, have only fuelled an extremely polarised ‘water fight’. Those opposed to artificial fluoridation continue to claim that it causes a range of health conditions and diseases, such as reduced IQ in children, reduced thyroid function, and increased risk of bone cancer. Regardless of the controversy, the one thing that everyone agrees upon is that little or no high-quality research is available to confirm or refute any public concerns. The York systematic review of water fluoridation has previously highlighted the weakness of the evidence base, acknowledging that the quality of the research included in the review was low to moderate.
Fluoride changes the structure of tooth enamel, making it more resistant to acid attack, and can reduce the incidence of tooth decay. This is why it is added to drinking water as part of artificial fluoridation programmes. The aim is to dose naturally occurring fluoride to a level that provides optimum benefit for the prevention of dental caries. The optimum level can depend on temperature but, for Great Britain, falls within the range of 0.7-1.2 parts per million (ppm). Levels lower than 0.7 ppm are considered to provide little or no benefit. Drinking water standards are set so that the level of fluoride must not exceed 1.5 ppm, in accordance with national regulations that come directly from EU law.
Severn Trent Water, Northumbrian Water, South Staffordshire Water, United Utilities, and Anglian Water are the only water companies in Great Britain that artificially fluoridate their water supply, to a target level of 1 ppm. The legal agreements to fluoridate currently sit with the Secretary of State, acting through Public Health England, although local authorities are the ultimate decision makers when it comes to establishing, maintaining, adjusting or terminating artificial fluoridation programmes. Because fluoridation is a programme dedicated to improving oral health, all of the associated costs come from the public health budget. It is therefore important to know that the money is being spent in the most effective way.
Our study has, for the first time, enabled an in-depth examination of the relationship between the incidence of two of the most common types of bone cancer that are found in children and young adults, osteosarcoma and Ewing sarcoma, and fluoride levels in drinking water across the whole of Great Britain. We have combined case data from population based cancer registries, fluoride monitoring data from water companies and census data within a computerised geographic information system, to enable us to carry out sophisticated geo-statistical analyses.
The study found no evidence of an association between fluoride in drinking water and osteosarcoma or Ewing sarcoma. The study also found no evidence that those who lived in an area of Great Britain with artificially fluoridated drinking water, or who were supplied with drinking water containing naturally occurring fluoride at a level within the optimal range, were at an increased risk of osteosarcoma or Ewing sarcoma.
It is important to note that finding no evidence of an association between the geographical occurrences of osteosarcoma or Ewing sarcoma and fluoride levels in drinking water, does not necessarily mean there is no association. Indeed, intake of fluids and food products that contain fluoride will not be the same for everyone and not taking this variation into consideration is one of the limitations of our study. Nevertheless, the methodologies we have developed could be used in the future to examine fluoride exposure over time and take other risk factors into consideration at an individual level. Such an approach could help the controversy surrounding artificial fluoridation ebb rather than flow.
Another important, although unexpected, finding arose from our use of fluoride monitoring data. We found that the fluoridation levels of approximately one third of the artificially fluoridated water supply zones were below 0.7 ppm (the lower limit of the optimum range). This finding reinforces that it is incorrect to assume an artificially fluoridated area is dosed up to 1 ppm. In reality, it may be a lot less. A number of previous studies have mistakenly made this assumption, making their conclusions unreliable. Our study shows that you cannot guarantee that fluoride levels in all artificially fluoridated water supply zones are close to the target level of 1 ppm. Assuming that water fluoridation is a safe practice and that the evidence underpinning the recommended dosage is reliable, this finding has economic implications for public health. If public money is paying for artificial fluoridation, shouldn’t the water supply zones be dosed up to a level that will provide the greatest benefit? If they aren’t, could it be that public money is merely being thrown down the drain?
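As an illustrative sketch only (this is not the study’s actual analysis, and the zone names and readings below are hypothetical), monitoring data can be checked against the ranges quoted earlier: an optimum of 0.7-1.2 ppm for Great Britain and a legal maximum of 1.5 ppm.

```python
# Classify fluoride readings against the quoted ranges. The thresholds
# come from the text; everything else here is a hypothetical example.
OPTIMUM_LOW, OPTIMUM_HIGH = 0.7, 1.2   # ppm, optimum range for GB
LEGAL_MAX = 1.5                        # ppm, regulatory limit

def classify_zone(fluoride_ppm):
    if fluoride_ppm > LEGAL_MAX:
        return "above legal limit"
    if OPTIMUM_LOW <= fluoride_ppm <= OPTIMUM_HIGH:
        return "within optimum"
    if fluoride_ppm < OPTIMUM_LOW:
        return "below optimum"
    return "above optimum, within legal limit"

# Hypothetical readings for zones nominally dosed to a 1 ppm target:
readings = {"zone A": 0.95, "zone B": 0.55, "zone C": 1.30, "zone D": 0.62}
under_dosed = [z for z, ppm in readings.items() if ppm < OPTIMUM_LOW]
print(under_dosed)  # zones providing little or no dental benefit
```

In this toy example, zones B and D would be flagged as under-dosed, the situation the study found in roughly a third of artificially fluoridated supply zones.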
The International Journal of Epidemiology is an essential requirement for anyone who needs to keep up to date with epidemiological advances and new developments throughout the world. It encourages communication among those engaged in the research, teaching, and application of epidemiology of both communicable and non-communicable disease, including research into health services and medical care.
On 11 September 2013, an unusually long and bright impact flash was observed on the Moon. Its peak luminosity was equivalent to a stellar magnitude of around 2.9.
What happened? A meteorite with a mass of around 400 kg hit the lunar surface at a speed of over 61,000 kilometres per hour.
Rocks often collide with the lunar surface at high speed (tens of thousands of kilometres per hour) and are instantaneously vaporised at the impact site. This gives rise to a thermal glow that can be detected by telescopes from Earth as short duration flashes. These flashes, in general, last just a fraction of a second.
The extraordinary flash in September was recorded from Spain by two telescopes operating in the framework of the Moon Impacts Detection and Analysis System (MIDAS). These devices were aimed at the same area on the night side of the Moon. With a duration of over eight seconds, this is the brightest and longest confirmed impact flash ever recorded on the Moon.
Our calculations show that the impact, which took place at 20:07 GMT, created a new crater with a diameter of around 40 metres in Mare Nubium. The impacting rock had a size ranging between 0.6 and 1.4 metres. The impact energy was equivalent to over 15 tons of TNT, under the assumption of a luminous efficiency of 0.002 (the fraction of kinetic energy converted into visible radiation as a consequence of the hypervelocity impact).
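The figures quoted above can be checked with a back-of-the-envelope calculation. This sketch uses the rounded values from the text (a 400 kg impactor at 61,000 km/h, and 1 ton of TNT ≈ 4.184 GJ); the rounded inputs give roughly 14 tons of TNT, close to the "over 15 tons" estimate, with the gap attributable to rounding of the mass and speed.

```python
# Back-of-the-envelope check of the lunar impact energy, using the
# rounded figures quoted in the text (assumptions, not the exact inputs
# used in the published analysis).
mass_kg = 400.0                 # estimated impactor mass
speed_ms = 61_000 / 3.6         # 61,000 km/h converted to metres per second
TON_TNT_J = 4.184e9             # energy released by one ton of TNT, in joules

# Kinetic energy delivered to the lunar surface: E = 1/2 m v^2
kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
tons_tnt = kinetic_energy_j / TON_TNT_J

# Only a small fraction of this energy is radiated as visible light,
# which is what the telescopes actually detect as the flash.
luminous_efficiency = 0.002
visible_energy_j = luminous_efficiency * kinetic_energy_j

print(f"Kinetic energy: {kinetic_energy_j:.2e} J (~{tons_tnt:.0f} tons TNT)")
print(f"Energy radiated as visible light: {visible_energy_j:.2e} J")
```

Note how the luminous efficiency works in the other direction in practice: observers measure the flash brightness, infer the radiated energy, and divide by the efficiency to recover the kinetic energy, from which the mass follows once a typical impact speed is assumed.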
The detection of impact flashes is one technique suitable for analysing the flux of bodies incoming to the Earth. One characteristic of the lunar impact monitoring technique is that it is not possible to unambiguously associate an impact flash with a given meteoroid stream. Nevertheless, our analysis shows that the most likely scenario is that the impactor had a sporadic origin (i.e., it was not associated with any known meteoroid stream). From the analysis of this event we have learnt that metre-sized objects may strike our planet about ten times as often as previously thought.
Monthly Notices of the Royal Astronomical Society is one of the world’s leading primary research journals in astronomy and astrophysics, as well as one of the longest established. It publishes the results of original research in astronomy and astrophysics, both observational and theoretical.
Subscribe to the OUPblog via email or RSS.
Subscribe to only physics and chemistry articles on the OUPblog via email or RSS.
On Saturday, 8 March, we celebrate International Women’s Day. But is there really anything to celebrate?
Last year, the United Nations declared its theme for International Women’s Day to be: “A promise is a promise: Time for action to end violence against women.” But in the United Kingdom in 2012, the government’s own figures show that around 1.2 million women suffered domestic abuse, over 400,000 women were sexually assaulted, 70,000 women were raped, and thousands more were stalked.
In a nutshell, this means that men’s violence against women is simply the most extreme manifestation of a continuum of male privilege, starting with domination of public discourse and decision-making, taking the lion’s share of global income and assets, and finally, controlling women’s actions and agency by force if necessary.
Throughout history and in most cultures, violence against women has been an accepted way in which men maintain power. In this country, the traditional right of a husband to inflict moderate corporal punishment on his wife in order to keep her “within the bounds of duty” was only removed in 1891. Our lingering ambivalence over the rights and wrongs of intervening in the face of domestic violence (“It’s just a domestic” as the police used to say) continues more than a century later. An ICM poll in 2003 found more people would call the police if someone was mistreating their dog than if someone was mistreating their partner (78% versus 53%). Women recognise this culture of condoning and excusing violence against them in their reluctance even today to exert their legal rights and make an official complaint. The most recent figures from the Ministry of Justice show that only 15% of women who have been raped report it to the police. And when they do, they’re likely to be disbelieved: the ‘no-crime’ rate (where a victim reports a crime but the police decide that no crime took place) for overall police recorded crime is 3.4%; for rape it’s 10.8%. All this adds up to a culture of impunity in which violence can continue.
And it’s exacerbated by our media. When the End Violence against Women Coalition, along with some of our members, were invited to give evidence to the Leveson Inquiry, we argued that:
“reporting on violence against women which misrepresents crimes, which is intrusive, which sensationalises and which uncritically blames ‘culture’, is not simply uninformed, trivial or in bad taste. It has real and lasting impact – it reinforces attitudes which blame women and girls for the violence that is done to them, and it allows some perpetrators to believe they will get away with committing violence. Because such news reporting are critical to establishing what behaviour is acceptable and what is regarded as ‘real’ crime, in the long term and cumulatively, this reporting affects what is perceived as crime, which victims come forward, how some perpetrators behave, and ultimately who is and is not convicted of crime.”
When do states become responsible for private acts of violence against women?
The UN Committee on the Elimination of All Forms of Discrimination against Women (CEDAW) says in its General Recommendation No. 19 that states may be responsible for private acts “if they fail to act with due diligence to prevent violations of rights or to investigate and punish acts of violence.”
Due diligence means that states must show the same level of commitment to preventing, investigating, punishing and providing remedies for violence against women as they do for other crimes of violence. Arguably, our poor rates of reporting and prosecution suggest that the UK is not fulfilling this obligation.
What are some possible policy solutions to eliminate violence against women?
The last Government developed a national strategy to tackle this problem and the current Government has followed suit, adopting a national action plan that aims to coordinate action at the highest level. This has had the single-minded backing of the Home Secretary, Theresa May — who of course happens to be a woman. Under this umbrella, steps have been taken to focus on what works — although much more needs to be done, for example on the key issue of prevention: changing the attitudes that create a conducive environment for violence. Research by the UN in a number of countries recently showed that 70-80% of men who raped said they did so because they felt entitled to; they thought they had a right to sex. Research with young people by the Children’s Commissioner has highlighted the sexual double standard that rewards young men for having sex while passing negative judgment on young women who do so. We need to rethink constructions of gender, particularly of masculinity.
What will the End Violence Against Women Campaign focus on this year?
End Violence Against Women welcomes the fact that the main political parties now recognize that this is a key public policy issue, and we’ll be using the upcoming local and national elections in 2014 and 2015 to question candidates on their practical proposals for ending violence against women and girls. We need to make sure that women’s support services are available in every area. And we’ll be working on our long-term aim of changing the way people talk and think about violence against women and girls — starting in schools, because children learn about gender roles and stereotypes much earlier than we think. We hope Michael Gove will back our Schools Safe 4 Girls campaign. We also look forward to a historic milestone in April, when the UN special rapporteur on violence against women makes a visit to the UK to assess progress.
On International Women’s Day this year, what is the most urgent issue for the world to focus on?
As Nelson Mandela said: “For every woman and girl violently attacked, we reduce our humanity. Every woman who has to sell her life for sex we condemn to a lifetime in prison. For every moment we remain silent, we conspire against our women.” While women across the world are raped and murdered, systematically beaten, trafficked, bought and sold, ending this “undeclared war on women” has to be our top priority.
Janet Veitch is a member of the board of the End Violence against Women Coalition, a coalition of activists, women’s rights and human rights organisations, survivors of violence, academics and front line service providers calling for concerted action to end violence against women. She is immediate past Chair of the UK Women’s Budget Group. She was awarded an OBE for services to women’s rights in 2011.
On 22 March 2014, the University of Nottingham Human Rights Law Centre will be hosting the 15th Annual Student Human Rights Conference ‘Mind the Gender Gap: The Rights of Women,’ and Janet Veitch will be among the experts on the rights of women who will be speaking. Full details are available on the Human Rights Law Centre webpage.
Human Rights Law Review publishes critical articles that consider human rights in their various contexts, from global to national levels, book reviews, and a section dedicated to analysis of recent jurisprudence and practice of the UN and regional human rights systems.
Oxford University Press is a leading publisher in international law, including the Max Planck Encyclopedia of Public International Law, latest titles from thought leaders in the field, and a wide range of law journals and online products. We publish original works across key areas of study, from humanitarian to international economic to environmental law, developing outstanding resources to support students, scholars, and practitioners worldwide. For the latest news, commentary, and insights follow the International Law team on Twitter @OUPIntLaw.
Stories are powerful ways to bring the voice and ideas of marginalized people into endeavors to restore justice and enact change. Beginning in the early 1990s, I started using oral history to bring the stories and experiences of abused women into efforts to make policy changes in New York City. Trained and supported by colleagues at Columbia Center for Oral History and Hunter College’s Puerto Rican Studies Department, I was able to pioneer the use of oral history to leverage social change.
In 2007, I became an Ashoka Fellow and had the space to organize my ideas and experiences about oral history, story gathering, and participatory practices into a set of teachable methods and strategies. This resulted in the creation of Threshold Collaborative, an organization that uses stories as a catalyst for change. Our methods aim to deepen empathy and ignite action in order to build more just, caring, and healthy communities. Working with justice organizations around the country, we help design and implement ways to do that through engaged story work.
This is why when a colleague who runs a youth leadership organization in Pennsylvania wanted to share the ideas and voices of the area’s marginalized youth, we helped to create a school-based story-sharing initiative called A Picture is Worth…. This project came to fruition after the New York Times gave Reading, PA the “unwelcome distinction” of having the highest poverty rate of any American city. Reading also suffered from elevated high school dropout numbers and extraordinarily low college degree rates.
Threshold went to the I-LEAD Charter High School in Reading, which offers poor and immigrant youth another chance to succeed. After spending time at the school — meeting and talking with teachers, parents and learners — we brainstormed a project that would incorporate the personal stories of 22 learners into an initiative to help them learn about themselves, their peers, and their larger community. Audio story gathering and sharing were at the core of this work. The idea was to support them in identifying their vision and values, link them with their peers, and thereby align them with positive change going on in Reading.
With the support of I-LEAD, assistance from the administrators and teachers, the talent of a fabulous photographer Janice Levy, and of course, the participation of the students, Threshold was able to launch an in-school curricular literacy class, which revolved around story gathering and sharing. The project uses writing, audio stories and photography to create powerful interactive narratives of students, highlighting their unique yet unifying experiences. A Picture is Worth… also provides an associated curriculum in literacy for high school students. The project fosters acquisition of real-world knowledge and skills, and encourages young learners to become more engaged in personal and scholastic growth, by combining personal stories with academic standards.
We also gathered and edited the stories of all 22 learners and have linked them with the wonderful photos done by Levy. You can find these powerful voices and images on our Soundcloud page. Here is one of the photos and stories:
Now, we are growing this project to be able to share it with schools and other youth leadership programs around the country. Through our book, curriculum and training program, we hope to inspire youth justice programs to see how young people can contribute to positive change through the power of their stories.
Alisa Del Tufo has worked to support justice and to strengthen empathy throughout her life. Raising over 80 million dollars, she founded three game-changing organizations: Sanctuary for Families, CONNECT, and Threshold Collaborative. In the early 1990s, Del Tufo pioneered the use of oral history and community engagement to build grassroots change around the issues of family and intimate violence. Her innovations have been recognized through Revson, Rockefeller, and Ashoka Fellowships.
There are huge changes taking place in the world of biosciences, and whether it’s new discoveries in stem cell research, new reproductive technologies, or genetics being used to make predictions about health and behavior, there are legal ramifications for everything. Journal of Law and the Biosciences is a new journal published by Oxford University Press in association with Duke University, Harvard University Law School, and Stanford University, focused on the legal implications of the scientific revolutions in the biosciences. We sat down with one of the Editors in Chief, I. Glenn Cohen, to discuss the rapidly changing field, emerging legal issues, and the new peer-reviewed and open access journal.
Why have you decided to launch Journal of Law and the Biosciences?
This is an incredibly exciting time to be working in these areas and in particular the legal aspects related to these areas. We are seeing major developments in genomics, in neuroscience, in patent law, and in health care. We want to be in the forefront of this, and we think that a peer-review journal led by the leading research institutions working in this area in the United States is the way to go.
How has this subject changed in the last 10 years?
The genomics revolution, the reality of cheap whole genome sequencing, further developments in the ability to examine neuroscience, the realization that biosciences are a crucial aspect of criminal investigations, and the importance of research ethics have all become more prominent, as have roles that law and the biosciences play in the criminal justice system, health care delivery, and our understanding of ourselves.
What are the major intersections of law and the biosciences?
Neuroscience, genetics, research ethics, human enhancement, the development of drugs, devices, and biologics, medical ethics, and many others.
What is it that makes this such a fast growing area of law?
First, we are fuelled by developments in the biosciences, which are moving at an increasingly fast pace since new technologies can be built on top of old ones. Second, there is increasing interest from jurists and lawyers in these areas. Third, there is an increase in interest in health care and the sciences more generally. From President Obama’s announcement of a major enterprise to study the human brain to the passing of the Affordable Care Act, we are seeing a golden age in this field.
What do you expect to see in the coming years from both the field and the journal?
The ethical issues that have always been in the background are going to become much more pressing, for example with cheap whole genome sequencing, fetal blood tests known as non-invasive genetic testing, and increasingly science-based attempts to restrict abortion rights. All of these developments raise questions that have always been present, but they make those questions more urgent and make it more likely that courts and legislatures will be the ones that have to wrestle with them. We are hoping that the journal plays a role in answering those questions.
Last year, with the Advanced Notice of Proposed Rulemaking (ANPRM) and revisions to the common rule in human subjects’ research, there has also been a lot more emphasis and rethinking about the rules by which science operates at the level of human subject research regulation.
What do you hope to see in the coming years from both the field and the journal?
An increasing number of law students and non-lawyers realizing the important role that law has to play in these disputes, enabling discourse at a deeper level than we have seen to date.
What does Journal of Law and the Biosciences expect to focus on within the field (trends / new approaches)?
Stem cell technology, reproductive technologies, law and genetics, law and neuroscience, human subjects’ research, human enhancement, patent law, food and drug regulation, and predictive analytics and big data… but those are just off the top of my head. We are hoping to get submissions in many more areas as well.
Nita Farahany, I. Glenn Cohen, and Henry T. (Hank) Greely are the Editors of the Journal of Law and the Biosciences. I. Glenn Cohen, JD, is Professor of Law and Co-Director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics at Harvard Law School. Cohen’s current projects relate to reproduction and reproductive technology, research ethics, rationing in law and medicine, health policy, and medical tourism. Nita Farahany, PhD, JD, is Professor of Law & Philosophy at Duke Law School and Professor of Genome Sciences and Policy at the IGSP. Since 2010, she has served on Obama’s Presidential Commission for the Study of Bioethical Issues. Henry T. (Hank) Greely, JD, is the Deane F. and Kate Edelman Johnson Professor of Law at Stanford University, where he directs the Center for Law and the Biosciences. He chairs the California Advisory Committee on Human Stem Cell Research, is a founder and director of the International Neuroethics Society, and belongs to the Advisory Council for the National Institute for General Medical Sciences and the Institute of Medicine’s Neuroscience Forum.
The Journal of Law and the Biosciences (JLB) is the first fully Open Access peer-reviewed legal journal focused on the advances at the intersection of law and the biosciences. A co-venture between Duke University, Harvard University Law School, and Stanford University, and published by Oxford University Press, this open access, online, and interdisciplinary academic journal publishes cutting-edge scholarship in this important new field. The Journal contains original and response articles, essays, and commentaries on a wide range of topics, including bioethics, neuroethics, genetics, reproductive technologies, stem cells, enhancement, patent law, and food and drug regulation.
The standard arguments against monetary policy responding to asset prices are the claims that it is not feasible to identify asset price bubbles in real time, and that the use of interest rates to restrain asset prices would have big adverse effects on real economic activity. So what happened with central banks and house prices prior to the financial crisis of 2007-2008?
Looking in detail at what the Federal Reserve Board (Fed), the European Central Bank (ECB) and the Bank of England (BoE) thought and said about house prices from the beginning of the 2000s, it appears that the Fed was so convinced of the standard line (monetary policy should not respond to asset prices but just stand ready to mop up if a bubble bursts) that it did not allocate much time or resources to discussing what was happening.
The BoE, on the other hand, while equally committed to that orthodoxy, felt the need to argue it out, at least up till 2005, and a number of speeches by Steve Nickell and others explained why they believed that the rises in house prices were a response to changes in the fundamentals (notably, the much lower levels of inflation and interest rates from the mid-1990s) and were therefore not a cause for concern. But after 2005 the BoE seems to have lost interest in the issue even to that extent.
Bank of England headquarters, London
The ECB was in principle more willing to consider the issue and to think about a response, but developments were very different between euro area countries (with Spain and Ireland experiencing strong house price booms but Germany and Austria seeing almost no change in house prices), and this would seem to be the main reason why the ECB never raised interest rates to restrain the house price booms in the former (which it correctly identified).
Since the crisis the Fed and the BoE have produced analyses suggesting that monetary policy bore almost no responsibility for the house price rises, on the one hand, and that using interest rates to restrain them would have caused sharp downward pressures on income and employment, on the other. The trouble with these analyses is that they consider only the effect of interest rates being a little higher before the crisis, with everything else equal. But of course the advocates of ‘leaning against the wind’ (the minority view which has favoured using interest rates to head off large asset price booms) have always emphasised that the existence of such a policy needs to be known in advance, so that it feeds into the public’s expectations of asset prices and helps to stabilise them. The absence of any such expectations effect in these analyses means that they are wide open to the Lucas Critique, and their results cannot be taken as an argument against leaning against the wind in this case.
What this all amounts to is our conclusion that the failure to adequately monitor developments in the housing markets means that the central banks of the United States and the United Kingdom, in particular, cannot reasonably claim to have done all they could have done to mitigate the house price movements that were crucial to the incidence and depth of the financial crisis.
The main outcome of the crisis for the operations and strategy of monetary policy so far has been the creation of instruments and arrangements for ‘macro-prudential’ policies, which will indeed offer central banks some additional ways of addressing problems in asset markets. However, central banks need to take some responsibility for the debacle of 2007-2008 and its effects. And they need to find some way in the future to incorporate an element of leaning against the wind into their inflation targeting strategies, in case macro-prudential policies turn out to be inadequate.
It is not beyond the wit of man or woman to establish a central bank remit which has a primary focus on price stability but allows the central bank to react to other developments in extreme situations, as long as it makes clear publicly that this is what it is doing, and why, and for how long it expects to be doing it.
Such a revised remit would and should incorporate useful expectations-stabilising effects for asset markets. The transparency and accountability involved would also help to shore up the independence of the central banks (particularly the BoE) at a time when there is so much pressure on them from the political authorities to ensure economic recovery.
Oxford Journals has published a special issue on the topic of Monetary Policy, with free papers until the end of March 2014.
Image credit: Bank of England, Threadneedle Street, London. By Eluveitie. CC-BY-SA-3.0 via Wikimedia Commons
Urban gardens are increasingly recognised for their potential to maintain or even enhance biodiversity. In particular the presence of large densities and varieties of flowering plants is thought to support a number of pollinating insects whose range and abundance has declined as a consequence of agricultural intensification and habitat loss. However, many of our garden plants are not native to Britain or even Europe, and the value of non-native flowers to local pollinators is widely disputed.
We tested the hypothesis that bumblebees foraging in urban gardens preferentially visit plant species with which they share a common biogeography (i.e. the plants evolved in the same regions as the bees that visit them). We did this by conducting summer-long surveys of bumblebee visitation to flowers seen in front gardens along a typical Plymouth street, dividing plants into species that naturally co-occur with British bees (a range extending across Europe, north Africa, and northern Asia – collectively called the Palaearctic by biologists), those that co-occur with bumblebees in other regions such as southern Asia, and North and South America (Sympatric), and plants from regions (Southern Africa and Australasia) where bumblebees are not naturally found (Allopatric).
When taken together, the bees did not discriminate between Palaearctic-native and non-native garden plants; they simply visited in proportion to flower availability. Indeed, of the six most commonly visited garden plants, only one, Foxglove (Digitalis purpurea – 6% of all bee visits), was a British native, and only three were of Palaearctic origin (including the most frequently visited species, Campanula poscharskyana (20.6% of visits), which comes from the Balkans). The remaining ‘most visited’ garden plants were from North America (Ceanothus – 11% of visits) and Asia (Deutzia spp. – 7% of visits), while the second most visited plant, Hebe × francisciana (18% of visits), is a hybrid variety with parents from New Zealand (H. speciosa) and South America (H. elliptica).
However, a slightly different pattern emerges when we consider the behaviour of individual bumblebee species. This is important because we know from work done in natural grassland ecosystems that different bumblebees vary greatly in their preference for native plant species. Some bumblebees visit almost any flower, while others seem to have strict preferences for certain plants. The latter group (‘dietary specialists’) include bees with long tongues that allow them to access the deep flowers of plants belonging to the pea and mint families that short-tongued bees cannot. One of these dietary specialists, the aptly named ‘garden bumblebee’ (Bombus hortorum), showed a strong preference for Palaearctic-origin garden plant species (78% of flower visits by this species), although we also saw this species feeding on the New Zealand-native, Cordyline australis. Even more interesting was the fact that our most common species, the ‘buff-tailed bumblebee’ (B. terrestris), appeared to favour non-Palaearctic garden plants (70% of all visits) over garden plants with which it shares a common evolutionary heritage (i.e. Palaearctic plants). So it seems that any preference for plants from ‘home turf’ varies between different bumblebees; just as in natural grasslands, some bees are fussy about where they forage, and others are not.
So what should gardeners do to encourage pollinators? Our results suggest that it is not simply a question of growing native species, even if this is desirable for other reasons, but that any ‘showily-flowered’ plant is likely to offer some forage reward. There are caveats, however. Garden plants that have been modified to produce ‘double’ flowers that replace or obscure the anthers and carpels that yield pollen and nectar (e.g. Petunias, Begonias, and Hybrid Tea roses) are known to offer little or no pollinator reward. A spring-to-autumn supply of flowers of different corolla lengths is important to provide both long- and short-tongued bumblebees with nectar. A reliable pollen supply is particularly important from nest founding through to the release of queen and male bees at the end of the nest cycle. Roses and poppies are obvious choices, but early season willows also offer pollen for nest-founding queens. Potentially most crucial of all, however, is the pea family, as its members offer the higher-quality pollen vital for the success of short nest-cycle, specialist bumblebees such as B. hortorum. It is also important that access to what gardeners refer to as ‘weeds’ is available. Where possible, gardeners can set aside a small area to allow native brambles, vetches, dead nettles, and clovers to grow, but as long as some native weed species are available in nearby allotments, parks, or other green spaces, we suggest that a combination of commonly-grown garden plants will help support our urban bumblebees for future generations.
Annals of Botany is an international plant science journal that publishes novel and substantial research papers in all areas of plant science, along with reviews and shorter Botanical Briefings about topical issues. Each issue also features a round-up of plant-based items from the world’s media – ‘Plant Cuttings’.
Image credit: Bumblebee on apple tree. By Victorllee [CC-BY-SA-3.0], via Wikimedia Commons