In 1985, Nobel Laureate Gary Becker observed that the gap in employment between mothers and fathers of young children had been shrinking since the 1960s in OECD countries. This led Becker to predict that such sex differences “may only be a legacy of powerful forces from the past and may disappear or be greatly attenuated in the near future.” In the 1990s, however, the shrinking of the mother-father gap stalled before Becker’s prediction could be realized. In today’s economy, how big is this mother-father employment gap, what forces underlie it, and are there any policies which could close it further?
A simple way to characterize the mother-father employment gap is to sum up how much more work is done by fathers compared to mothers of children from ages 0 to 10. In 2010, fathers in the United States worked 3.1 more years on average than mothers over this age range. In the United Kingdom, the comparable number is 3.8 years, while in Canada it is 2.9 and in Germany 4.5. The figure below traces the evolution of this mother-father employment gap for all four of these countries.
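The summation described above can be sketched in a few lines of Python. The employment rates below are illustrative placeholders, not the study's actual data; the point is only the arithmetic: each age at which a parent is employed contributes up to one year of work, so summing the father-mother difference in employment rates across ages 0 to 10 yields the gap in years.

```python
# Illustrative employment rates by child's age (0 through 10).
# These numbers are invented for demonstration, not taken from the study.
father_rates = [0.93, 0.93, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94, 0.94]
mother_rates = [0.55, 0.60, 0.62, 0.64, 0.66, 0.68, 0.70, 0.71, 0.72, 0.73, 0.74]

# Each age contributes (father rate - mother rate) of a year of work,
# so the sum over ages 0-10 is the gap measured in years.
gap_years = sum(f - m for f, m in zip(father_rates, mother_rates))
print(round(gap_years, 2))  # prints 2.97 for these illustrative rates
```

With real country-level employment rates in place of the placeholders, the same sum reproduces figures like the 3.1-year US gap quoted above.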
Becker’s theorizing about the family can help us understand the development of this mother-father employment gap. His theoretical models suggest that if there are even slight differences between the productivity of mothers and fathers in the home versus the workplace, spouses will tend to specialize completely in either in-home or out-of-home work. These kinds of productivity differences could arise from cultural conditioning, as society pushes certain roles and expectations onto women and men. Biology could also matter: women bear a heavier physical burden during pregnancy, and after the birth of a child they have an advantage in breastfeeding. It is possible that the initial impact of these unique biological roles for mothers lingers as their children age. Biology is not destiny, but it should be acknowledged as a potential barrier that contributes to the origins of the mother-father work gap.
Will today’s differences in mother-father work patterns persist into the future? To some extent that may depend on how cultural attitudes evolve. But there’s also the possibility that family-friendly policy can move things along more quickly. Both parental leave and subsidized childcare are options to consider.
Analysis of data across the four countries suggests that these kinds of policies can make some difference, but their impact is limited.
Parental leave makes a very big difference when the child is age zero and the parent is actually taking the leave, but because mothers take much more parental leave than fathers, this widens the mother-father employment gap rather than shrinking it. Evidence suggests that after age 0, when most parents return to work, having taken maternity leave has no lasting impact on mothers’ employment patterns when their children are ages 1 to 10.
Another policy that might matter is childcare. In the Canadian province of Quebec, a subsidized childcare program was put in place in 1997 that required parents to pay only $5 per day for childcare. This program not only increased mothers’ work at pre-school ages, but also seems to have had a lasting impact when their children reach older ages, as employment of women in Quebec increased at all ages from 0 to 10. When summed up over these ages, Quebec’s subsidized childcare closed the mother-father employment gap by about half a year of work.
Gary Becker’s prediction about the disappearance of mother-father work gaps hasn’t come true – yet. Evidence from Canada, Germany, the United States, and the United Kingdom suggests that policy can contribute to a shrinking of the mother-father employment gap. However, the analysis makes clear that policy alone may not be enough to overcome the combination of strong cultural attitudes and any persistence of intrinsic biological differences between mothers and fathers.
Kleptoplasty describes a special type of endosymbiosis in which a host organism retains photosynthetic organelles from its algal prey. Kleptoplasty is widespread in ciliates and foraminifera; among Metazoa (animals whose bodies are composed of cells differentiated into tissues and organs, usually with a digestive cavity lined with specialized cells), however, sacoglossan sea slugs are the only known group to harbour functional plastids. This trait is what makes these sea slugs so special.
The “stolen” chloroplasts are acquired through the ingestion of macroalgal tissue and the retention of undigested, functional chloroplasts in special cells of the slugs’ gut. These “stolen” chloroplasts (hereafter kleptoplasts) continue to photosynthesize for varying periods of time, in some cases up to one year.
In our study, we analyzed the pigment profile of Elysia viridis in order to evaluate appropriate measures of photosynthetic activity.
The pigments siphonaxanthin, trans- and cis-neoxanthin, violaxanthin, siphonaxanthin dodecenoate, chlorophyll (Chl) a and Chl b, ε,ε- and β,ε-carotenes, and an unidentified carotenoid were observed in all Elysia viridis individuals. With the exception of the unidentified carotenoid, the same pigment profile was recorded for the macroalga C. tomentosum (its algal prey).
In general, carotenoids found in animals are either directly accumulated from food or partially modified through metabolic reactions. Therefore, the unidentified carotenoid was most likely a product modified by the sea slugs since it was not present in their food source.
Pigments characteristic of other macroalgae present in the sampling locations were not detected in the sea slugs. These results suggest that these Elysia viridis retained chloroplasts exclusively from C. tomentosum.
In general, the carotenoid-to-Chl a ratios were significantly higher in Elysia viridis than in C. tomentosum. Further analysis using starved individuals suggests that carotenoids are retained over chlorophylls during the digestion of kleptoplasts. It is important to note that, despite a loss of 80% of Chl a in Elysia viridis starved for two weeks, measurements of maximum photosynthetic capacity indicated a decrease of only 5% in the kleptoplasts that remained functional.
This result clearly illustrates that measurement of photosynthetic activity using this approach can be misleading when evaluating the importance of kleptoplasts for the overall nutrition of the animal.
Finally, concentrations of violaxanthin were low in C. tomentosum and Elysia viridis and no detectable levels of antheraxanthin or zeaxanthin were observed in either organism. Therefore, the occurrence of a xanthophyll cycle as a photoregulatory mechanism, crucial for most photosynthetic organisms, seems unlikely to occur in C. tomentosum and Elysia viridis but requires further research.
As we enter the potentially crucial phase of the Scottish independence referendum campaign, it is worth remembering more broadly that political campaigns always matter, but they often matter most at referendums.
Referendums are often classified as low-information elections. Research demonstrates that it can be difficult to engage voters with the specific information and arguments involved (Lupia 1994, McDermott 1997), and consequently referendums can be decided on issues other than the matter at hand. Referendums also differ from traditional political contests in that they are usually focused on a single issue; the dynamics of political party interaction can diverge from national and local elections; non-political actors may have a prominent role in the campaign; and voters may or may not have strong, clear views on the issue being decided. Furthermore, there is great variation in the information environment at referendums. As a result, the campaign itself can be vital.
We can understand campaigns through the lens of LeDuc’s framework which seeks to capture some of the underlying elements which can lead to stability or volatility in voter behaviour at referendums. The essential proposition of this model is that referendums ask different types of questions of voters, and that the type of question posed conditions the behaviour of voters. Referendums that ask questions related to the core fundamental values and attitudes held by voters should be stable. Voters’ opinions that draw on cleavages, ideology, and central beliefs are unlikely to change in the course of a campaign. Consequently, opinion polls should show very little movement over the campaign. At the other end of the spectrum, volatile referendums are those which ask questions on which voters do not have pre-conceived fixed views or opinions. The referendum may ask questions on new areas of policy, previously un-discussed items, or items of generally low salience such as political architecture or institutions.
Another essential component determining the importance of the campaign is undecided voters. When voter political knowledge starts from a low base, the campaign contributes greatly to increasing it. This point is particularly clear in Farrell and Schmitt-Beck (2002), who demonstrate that voter ignorance is widespread and that levels of political knowledge among voters are often overestimated. As Ian McAllister argues, partisan de-alignment has created a more volatile electoral environment, and the number of voters who make their decisions during campaigns has risen. In particular, there has been a sharp rise in the number of voters who decide quite late in a campaign. In these circumstances, campaign learning is vital and the campaign may change voters’ initial disposition. Opinions may only form during the campaign as voters acquire information, and these opinions may be changeable, leading to volatility.
The experience of referendums in Ireland is worth examining as Ireland is one of a small but growing number of countries which makes frequent use of referendums. It is also worth noting that Ireland has a highly regulated campaign environment. In the Oireachtas Inquiries Referendum 2011, Irish voters were asked to decide on a parliamentary reform proposal (Oireachtas Inquiries – OI) in October 2011. The issue was of limited interest to voters and co-scheduled with a second referendum on reducing the pay of members of the judiciary along with a lively presidential election.
The OI referendum was defeated by a narrow margin and the campaign period witnessed a sharp fall in support for the proposal. Only a small number of polls were taken but the sharp decline is clear from the figure below.
Few voters had any existing opinion on the proposal, and post-referendum research indicated that voters relied significantly on heuristics or shortcuts emanating from the campaign, and to a lesser extent on either media campaigns or rational knowledge. The evidence showed that just a few weeks after the referendum, many voters were unable to recall the reasons for their voting decision. An interesting result was that the proposal failed to pass even though there was underlying support for the reform, with 74% of all voters supporting Oireachtas Inquiries in principle. There was a very high level of ignorance of the issues: some 44% of voters could not give cogent reasons for why they voted ‘no’, underlining the common practice of ‘if you don’t know, vote no’.
So are there any lessons we can draw for the Scottish independence campaign? Scottish independence would likely be placed at the stable end of the LeDuc spectrum, in that some voters could be expected to have an ideological predisposition on this question. Campaigns matter less at these types of referendums. However, they are by no means foregone conclusions. We would expect the number of undecided voters to be key, and these voters may use shortcuts to make their decision. In other words, the positions of the parties, of celebrities, of unions and businesses, and of others will likely matter. In addition, the extent to which voters feel fully informed on the issues may also be a determining factor. It may be instructive to look at another Irish referendum, on the introduction of divorce in the 1980s, during which voters’ opinions moved sharply during the campaign, even though the referendum question drew largely on the deep-rooted conservative-liberal cleavage in Irish politics (Darcy and Laver 1990). The Scottish campaign might thus still conceivably see some shifts in opinion.
Headline image: Scottish Parliament Building via iStockphoto.
In the 1990s, policing in major US cities was transformed. Some cities embraced the strategy of “community policing” under which officers developed working relationships with members of their local communities on the belief that doing so would change the neighborhood conditions that give rise to crime. Other cities pursued a strategy of “order maintenance” in which officers strictly enforced minor offenses on the theory that restoring public order would avert more serious crimes. Numerous scholars have examined and debated the efficacy of these approaches.
A companion concept, called “community prosecution,” seeks to transform the work of local district attorneys in ways analogous to how community policing changed the work of big-city cops. Prosecutors in numerous jurisdictions have embraced the strategy. Indeed, Attorney General Eric Holder was an early adopter of the strategy when he was US Attorney for the District of Columbia in the mid-1990s. Yet, community prosecution has not received the level of public attention or academic scrutiny that community policing has.
A possible reason for community prosecution’s lower profile is the difficulty of defining it. Community prosecution contrasts with the traditional model of a local prosecutor, which is sometimes called the “case processor” approach. In the traditional model, police provide a continuous flow of cases to the prosecutor, and she prioritizes some cases for prosecution and declines others. The prosecutor secures guilty pleas in most of the pursued cases, often through plea bargains, and trials are rare. The signature feature of the traditional prosecutor’s work is quickly resolving or processing a large volume of cases.
Community prosecution breaks with the traditional paradigm and changes the work of prosecutors in several ways. It removes prosecutors from the central courthouse and relocates them to a small office in a neighborhood, often in a retail storefront. This permits the prosecutor to develop relationships with community groups and individual residents, even allowing residents to walk into the prosecutor’s office and express concerns. It frees the prosecutors from responsibility for managing the flow of cases supplied by police and allows them to undertake two main tasks. The first is that prosecutors partner with community members to identify the sources of crime within the neighborhood and formulate solutions that will prevent crime before it occurs. The second is that when community prosecutors seek to impose criminal punishments, they develop their own cases rather than rely on those presented by police, and they typically focus on the cases they anticipate will have the greatest positive impact on the local community.
In the past fifteen years, Chicago, Illinois, has had a unique experience with community prosecution that allowed the first examination of its impact on crime rates. The State’s Attorney in Cook County (in which Chicago is located) opened four community prosecution offices between 1998 and 2000. Each of these offices had responsibility for applying the community prosecution approach to a target neighborhood in Chicago, and collectively, about 38% of Chicago’s population resided in a target neighborhood. Other parts of the city received no community prosecution intervention. The efforts continued until early 2007, when a budget crisis compelled the closure of these offices and the cessation of the county’s community prosecution program. For more than two years, Chicago had no community prosecution program. In 2009, a new State’s Attorney re-launched the program, and during the next three years, the four community prosecution offices were re-opened.
This sequence of events provided an opportunity to evaluate the impact of community prosecution on crime. The first adoption of community prosecution in the late 1990s lent itself to differences-in-differences estimation. The application of community prosecution to four sets of neighborhoods, each beginning at a different date, enabled comparisons of crime rates before and after the program’s implementation within those neighborhoods. The fact that other neighborhoods received no intervention permitted these comparisons to be drawn relative to the crime rates in a control group. Furthermore, Chicago’s singular experience with community prosecution – its launch, cancellation, and re-launch – furnished a sequence of three policy transitions (off to on, on to off again, and off again to on again). By contrast, the typical policy analysis observes only one policy transition (commonly from off to on). These multiple rounds of program application enhanced the opportunity to detect whether community prosecution affected public safety.
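The differences-in-differences logic can be sketched with a toy calculation. The numbers below are invented for illustration and are not the study's estimates; the point is the structure: the control neighborhood's change over the same period nets out citywide trends that would otherwise be misattributed to the program.

```python
# Toy differences-in-differences calculation with made-up crime rates
# (crimes per 10,000 residents), not the study's actual data.
treated_before, treated_after = 100.0, 88.0  # neighborhood with the program
control_before, control_after = 100.0, 95.0  # comparable neighborhood without it

# The DiD estimate is the treated group's change minus the control group's
# change; a negative value means crime fell more where the program operated.
did_estimate = (treated_after - treated_before) - (control_after - control_before)
print(did_estimate)  # prints -7.0: a 7-point drop attributable to treatment
```

In practice the paper's estimation would use regression with neighborhood and time fixed effects across many periods, but the comparison of changes is the same idea.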
The estimates from this differences-in-differences approach showed that community prosecution reduced crime in Chicago. The declines in violent crime were large and statistically significant. For example, the estimates imply that aggravated assaults fell by 7% following the activation of community prosecution in a neighborhood. The estimates for property crime also showed declines, but they were too imprecisely estimated to permit firm statistical inferences. These results are the first evidence that community prosecution can produce reductions in crime and that the reductions are sizable.
Moreover, there was no indication that community prosecution simply displaced crime, moving it from one neighborhood to another. Neighborhoods just over the border of each community prosecution target area experienced no change in their average rates of crime. The declines thus appeared to reflect a true reduction instead of a reallocation of crime. In addition, the drops in offending were immediate and sustained. One might expect responses in crime rates to arrive slowly and gain momentum over time as prosecutors’ relationships with the community grew. But the estimates instead suggest that community prosecutors were able to immediately identify and exploit opportunities to improve public safety.
This evaluation of community prosecution in Chicago offers broad lessons about the role of prosecutors. As with any empirical study, some caveats apply. The highly decentralized and flexible nature of community prosecution makes it impossible to reduce the program to a fixed set of principles and steps that can be readily implemented elsewhere. To the degree that its success depends on bonds of trust between prosecutor and community, it may hinge on the personality and talents of specific prosecutors. (Indeed, the article’s estimates show variation in the estimated impacts across offices within Chicago.) At minimum, the results demonstrate that, under circumstances that require more study, community prosecution can reduce crime.
More broadly, the estimates suggest that the role of prosecutors is more far-reaching than typically thought. Crime control is conventionally understood to be primarily the responsibility of police. It was for this very reason that in the 1990s so much attention was devoted to the cities’ choice of policing style – community policing or order maintenance. Restructuring the work of police was thought to be a key mechanism through which crime could be reduced. By contrast, a conventional view of prosecutors is that their responsibilities pertain to the selection of cases, adjudication in the courtroom, and striking plea bargains. This article’s estimates show that this view is unduly narrow. Just as altering the structure and tasks of police may affect crime, so too can changing how prosecutors perform their work.
On 28 June 1914, Archduke Franz Ferdinand and his wife Sophie, Duchess of Hohenberg, were assassinated in Sarajevo, setting off a six week diplomatic battle that resulted in the start of the First World War. The horrors of that war, from chemical weapons to civilian casualties, led to the first forays into modern international law. The League of Nations was established to prevent future international crises and a Permanent Court of International Justice created to settle disputes between nations. While these measures did not prevent the Second World War, this vision of a common law for all humanity was essential for international law today. To mark the centenary of the start of the Great War, and to better understand how international law arose from it, we’ve compiled a brief reading list.
How did international law develop from the 15th century until the end of World War II? This 2014 ASIL Certificate of Merit winner looks at the history of international law in relation to themes such as peace and war, the sovereignty of states, hegemony, and the protection of the individual person. It includes Milos Vec’s ‘From the Congress of Vienna to the Paris Peace Treaties of 1919′ and Peter Krüger’s ‘From the Paris Peace Treaties to the End of the Second World War’.
A detailed study of the 1922-34 exchange of minorities between Greece and Turkey, supported by the League of Nations, in which two million people were forcibly relocated. Check out the specific chapters on: Wilson and international law; US jurisprudence and international law in the wake of WWI; and the failed marriage of the US and the League of Nations, and America’s isolationist reaction through WWII.
How could the world repress aggressive war, war crimes, terrorism, and genocide in the wake of the First World War? Mark Lewis examines attempts to create specific criminal justice courts to address these crimes, and the competing ideologies behind them.
The Treaty of Versailles marked the first significant attempt to hold an individual — Kaiser Wilhelm — accountable for unlawful resort to major military force. Mary Ellen O’Connell and Mirakmal Niyazmatov discuss the prohibition on aggression, the Jus ad Bellum, the ICC Statute, successful prosecution, Kampala compromise, and protecting the right to life of millions of people.
Following the First World War, there was a general movement in international law towards the prohibition of aggressive war. So why is there an absence of legal milestones marking the advance towards the criminalization of aggression?
What is the bridge between the International Military Tribunal, formed following the Treaty of Versailles, and the International Criminal Tribunal for the former Yugoslavia? Mohamed Shahabuddeen examines the first traces of the development of international criminal justice before the First World War and today’s ideas of the responsibility of the State and the criminal liability of the individual.
When are sanctions doomed to failure? David J. Bederman analyzes the historical context of the demilitarization sanctions imposed against Iraq in the aftermath of the Gulf War of 1991 from the 1919 Treaty of Versailles through to the present day.
How did legal terminology and provisions concerning hostilities, prisoners of war, and other wartime-related concerns change following the introduction of modern warfare during the First World War?
“League of Nations” by Christian J Tams in the Max Planck Encyclopedia of Public International Law
What lessons does the first body of international law hold for the United Nations and individual nations today?
“Alliances” by Louise Fawcett in the Max Planck Encyclopedia of Public International Law
Peace was once ensured through a complex web of diplomatic alliances. However, those same alliances proved fatal as they ensured that various European nations and their empires were dragged into war. How did the nature of alliances between nations change following the Great War?
In the midst of tremendous suffering and loss, suffragists continued to march and protest for the rights of women. How did the First World War hinder the women’s suffrage movement, and how did it change many of the demands and priorities of the suffragists?
A brief overview of the development of international law during the interwar period: where there was promise, and where there was failure.
Headline image credit: Stanley Bruce chairing the League of Nations Council in 1936. Joachim von Ribbentrop is addressing the council. Bruce Collection, National Archives of Australia. Public domain via Wikimedia Commons.
One of the highest points of the International Congress of Mathematicians, currently underway in Seoul, Korea, is the announcement of the Fields Medal prize winners. The prize is awarded every four years to up to four mathematicians under the age of 40, and is viewed as one of the highest honours a mathematician can receive.
This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.
We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for International Mathematics Research Notices. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.
Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle PMg of geodesic measured laminations.
Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field F having a given degree n.
Several authors, some of them speakers at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II1 factor depend on the choice of the ultrafilter.
Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers concentrating on situations inaccessible to the Hardy-Littlewood method.
What is a classic album? Not a classical album – a classic album. One definition would be a recording that is both of superb quality and of enduring significance. I would suggest that Miles Davis’s 1959 recording Kind of Blue is indubitably a classic. It presents music making of the highest order, and it has influenced — and continues to influence — jazz to this day.
There were several important records released in 1959, but no event or recording matches the importance of the release of the new Miles Davis album Kind of Blue on 17 August 1959. There were people waiting in line at record stores to buy it on the day it appeared. It sold very well from its first day, and it has sold increasingly well ever since. It is the best-selling jazz album in the Columbia Records catalogue, and at the end of the twentieth century it was voted one of the ten best albums ever produced.
But neither popularity nor commercial success correlates with musical worth, and it is in the music on the recording that we find both quality and significance. From the very first notes we know we are hearing something new. Piano and bass draw the listener into a new world of sound: contemplative, dreamy, and yet intense.
The pianist here is Bill Evans, who was new to Davis’s band and a vital contributor to the whole project. Evans played spaciously and had an advanced harmonic sense. His sound was floating and open. The lighter sound and less crowded manner were more akin to the understated way in which Davis himself played. “He plays the piano the way it should be played,” said Davis about Bill Evans. And although Davis’s speech was often sprinkled with blunt Anglo-Saxon expressions, he waxed poetic about Evans’s playing: “Bill had this quiet fire. . . . [T]he sound he got was like crystal notes or sparkling water cascading down from some clear waterfall.” The admiration was mutual. Evans thought of Davis and the other musicians in his band as “superhumans.”
Evans makes his mark throughout the album, though Wynton Kelly substitutes for him on the bluesier and somewhat more traditional second track “Freddie Freeloader.”
Musicians refer to the special sound on Kind of Blue as “modal.” And the term “modal jazz” is often found in writings about jazz styles and jazz history. What exactly is modal jazz? There are two characteristic features that set this style apart. The first is the use of scales that are different from the standard major and minor ones. So the first secret of the special sound on this album is the use of unusual scales. But the second characteristic is even more noticeable, and that is the way the music is grounded on long passages of unchanging harmony. “So What” is an AABA form in which all the A sections are based on a single harmony and the B sections on a different harmony a half step higher.
A [D harmony]
A [D harmony]
B [Eb harmony]
A [D harmony]
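As a rough illustration of the first feature, here is a sketch (my own, not from the article) of how a Dorian mode, the scale widely associated with “So What”, can be built by stacking whole and half steps on a root, and how shifting the root up a half step (one semitone) yields the bridge harmony shown in the AABA outline above. The flat-based note spellings are a simplification.

```python
# The twelve pitch classes, spelled with flats for simplicity.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

# Dorian interval pattern in semitones: whole, half, whole, whole, whole, half.
DORIAN_STEPS = [2, 1, 2, 2, 2, 1]

def dorian(root):
    """Return the seven notes of the Dorian mode starting on the given root."""
    pc = NOTES.index(root)
    scale = [NOTES[pc]]
    for step in DORIAN_STEPS:
        pc = (pc + step) % 12  # wrap around the octave
        scale.append(NOTES[pc])
    return scale

print(dorian("D"))   # the A sections
print(dorian("Eb"))  # the B section, a half step higher
```

Running this prints the D Dorian scale (D E F G A B C) for the A sections and the same pattern transposed up one semitone for the bridge.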
Unusual scales are most clearly heard on “All Blues.”
And for something hypnotic and meditative, you can’t do better than “Flamenco Sketches,” the last track, which brings the modal conception to its most developed point. It is based upon five scales or modes, and each musician improvises in turn upon all five in order. A clear analysis of this track is given in Mark Gridley’s excellent jazz textbook Jazz Styles.
An aside here:
It is possible — even likely — that the titles of these two tracks are reversed. In my Musical Quarterly article (link below), I suggest that “Flamenco Sketches” is the correct title for the strumming medium-tempo music on the track that is now known as “All Blues” and that “All Blues” is the correct title for the last, very slow, track on the album. I also show how the mixup occurred in 1959, just as the album was released.
Perhaps the most beautiful piece on the album is the Evans composition “Blue in Green,” for which Coltrane fashions his greatest and most moving solo. Of the five tracks on the album, four are quite long, ranging from nine to eleven and a half minutes, and they are placed two before and two after “Blue in Green.” Regarding the program as a whole, therefore, one sees “Blue in Green” as the small capstone of a musical arch. But “Blue in Green” itself is in arch form, with a palindromic arrangement of the solos. The capstone of this arch upon an arch is the thirty seconds or so of Coltrane’s solo.
              “Blue in Green”
   “Freddie Freeloader”      “All Blues”
“So What”                      “Flamenco Sketches”
               Kind of Blue
The great strength of Kind of Blue lies in the consistency of its inspiration and the palpable excitement of its musicians. “See,” wrote Davis in his autobiography, “If you put a musician in a place where he has to do something different from what he does all the time . . . that’s where great art and music happens.”
One of the most common questions scholars confront is how to find the right journal for their research papers. When I go to conferences, I am often asked: “How do I know if Political Analysis is the right journal for my work?”
This is an important question, in particular for junior scholars who don’t have a lot of publishing experience — and for scholars who are nearing important milestones (like contract renewal, tenure, and promotion). In a publishing world where it may take months for an author to receive an initial decision from a journal, and then many additional months if they need to revise and resubmit their work to one or more subsequent journals, selecting the most appropriate journal can be critical for professional advancement.
So how can a scholar try to determine which journal is right for their work?
The first question an author needs to ask is how suitable their paper is for a particular journal. When I meet with my graduate students, and we talk about potential publication outlets for their work, my first piece of advice is that they should take a close look at the last three or four issues of the journals they are considering. I’ll recommend that they look at the subjects that each journal is focusing on, including both substantive topics and methodological approaches. I also tell them to look closely at how the papers appearing in those journals are structured and how they are written (for example, how long the papers typically are, and how many tables and figures they have). The goal is to find a journal that is currently publishing papers that are most closely related to the paper that the student is seeking to publish, as assessed by the substantive questions typically published, the methodological approaches generally used, paper framing, and manuscript structure.
Potential audience is the second consideration. Different journals have different readers, meaning that authors have some control over who might be exposed to their paper when they decide which journals to target for their work. This is particularly true for authors working on highly interdisciplinary projects, who might be able to frame their paper for publication in related but different academic fields. In my own work on voting technology, for example, some of my recent papers have appeared in journals whose primary audience is in computer science, while others have appeared in more typical political science journals. So in many cases authors need to decide which audience they want to appeal to, and make sure that when they submit their work to a journal serving that audience, the paper is written in a manner appropriate for that journal.
However, most authors will want to concentrate on journals in a single field. For those papers, a third question arises: whether to target a general interest journal or a more specialized field journal. This is often a very subjective question, as it is quite hard to know prior to submission whether a particular paper will interest the editors and reviewers of a general interest journal. As general interest journals often have higher impact factors (I’ll say more about impact factors next), many authors will be drawn to submit their papers to general interest journals even when that is not the best strategy for their work. Many authors will “start high,” that is, begin with general interest journals, and then, once the rejection letters pile up, move to the more specialized field journals. While this strategy is understandable (especially for authors who are nearing promotion or tenure deadlines), it can be counterproductive: the author will likely face a long and frustrating road to publication if they submit first to general interest journals, collect the inevitable rejections, and only then move to specialized field journals. Thus, my advice (and my own practice with my work) is to avoid that approach and to be realistic about the appeal of the particular research paper. That is, if your paper is going to appeal only to readers in a narrow segment of your discipline, send it to the appropriate specialized field journal.
A fourth consideration is the journal’s impact factor. Impact factors play an increasingly important role in many professional decisions, and they may be a consideration for many authors. Clearly, an author should generally seek to publish their work in journals with higher impact factors rather than lower ones. But again, authors should try to be realistic about their work, and make sure that, regardless of the journal’s impact factor, their submission is appropriate for the journal they are considering.
Finally, authors should always seek the input of their faculty colleagues and mentors if they have questions about selecting the right journal. And in many fields, journal editors, associate editors, and members of the journal’s editorial board will often be willing to give an author some quick and honest advice about whether a particular paper is right for their journal. While many editors shy away from advising prospective authors about a potential submission, a brief, honest response can actually save the editor and the journal a great deal of time. It may be better to save the author (and the journal) the time and effort that might be sunk into a paper with little chance of success at the journal, and to help guide the author to a more appropriate outlet.
Selecting the right journal for your work is never an easy process. All scholars would like to see their work published in the most widely read and highest impact factor journals in their field. But very few papers end up in those journals, and authors can get their work into print more quickly and with less frustration if they first make sure their paper is appropriate for a particular journal.
Biomechanics is the study of how animals move. It’s a very broad field, covering concepts such as how muscles are used and even how the timing of respiration is coordinated with movement. Biomechanics can date its beginnings back to the 1600s, when Giovanni Alfonso Borelli first began investigating animal movements. In the late 1800s, pioneers such as Etienne Jules Marey and Eadweard Muybridge carried out more detailed analyses, examining individual frames of photographic sequences of moving animals. These early efforts led to the field known as kinematics, the description of animal movement itself, but this is only one side of the coin. Kinetics, the study of motion and its causes, together with kinematics provides a very powerful tool for fully understanding the strategies animals use to move, as well as why they move the way they do.
One factor that really changes the way an animal moves is its body size. Small animals tend to have a much more z-shaped leg posture (when looking at them from a lateral view), and so are considered to be more crouched as their joints are more flexed. Larger animals on the other hand have straighter legs, and if you look at the extreme (e.g. elephant), they have very columnar legs. Just this one change in morphology has a significant effect on the way an animal can move.
We know that the environment animals live in is not uniform: it is cluttered with many different obstacles that must be overcome to move successfully and survive. One type of terrain that animals frequently encounter is slopes: inclines and declines. The two types of slope impose different mechanical challenges on the locomotor system. Inclines require much greater work from the muscles to move uphill against gravity. On declines, an animal is moving with gravity, and so the limbs need to brake to prevent a headlong rush down the slope. Theoretically, there are many ways an animal can achieve successful locomotion on slopes, but, to date, there has been no consensus as to whether species of differing body sizes use similar strategies.
From the published literature we generated an overview of how animals, ranging in size from ants to horses, move across slopes. We also investigated and analysed how strategies for moving uphill and downhill change with body size, using a traditional method for scaling analyses. What really took us by surprise was the lack of information on how animals move down slopes: there were nearly twice as many studies of inclines as of declines. This is remarkable given that, if an animal climbs up something, inevitably it has to find a way to come back down, either on its own or by having its owner call the fire department out to help!
Most animals tend to move slower up inclines and keep limbs in contact with the ground longer; this allows more time for the muscles to generate work to fight against gravity. Although larger animals have to do more absolute work than smaller animals to move up inclines, the relative stride length did not change across body size or on inclines. Even though there is much less data in the literature on how animals move downhill, we did notice that smaller animals (<~10kg) seem to use different strategies compared to large animals. Small animals use much shorter strides going downhill than on level terrain whereas large animals use longer strides. This difference may be due to stability issues that become more problematic (more likely to result in injury) as an animal’s size increases.
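As a rough illustration of the traditional scaling analysis mentioned above, scaling exponents are typically estimated by ordinary least squares in log-log space. The numbers below are invented for illustration only, not data from the study:

```python
import math

# Hypothetical (body mass in kg, stride length in m) pairs, invented for
# illustration; these are NOT data from the study described above.
data = [(0.01, 0.05), (0.1, 0.13), (1.0, 0.3), (10.0, 0.75), (100.0, 1.7), (500.0, 3.1)]

# Traditional scaling analysis: fit log(stride) = log(a) + b * log(mass),
# i.e. an ordinary least-squares line in log-log space.
xs = [math.log10(mass) for mass, stride in data]
ys = [math.log10(stride) for mass, stride in data]
n = len(data)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
log_a = mean_y - b * mean_x
print(f"scaling exponent b = {b:.2f}, coefficient a = {10 ** log_a:.2f}")
# For these invented numbers b comes out near 0.38; an exponent near 0.33
# would indicate geometric similarity (stride length ~ mass^(1/3)).
```

Comparing exponents fitted on level, incline, and decline data is one way to ask whether slope changes the scaling relationship itself rather than just shifting it.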
Our study highlights the lack of information we have about how size affects non-level locomotion and emphasises what future work should focus on. We really do not have any idea of how animals deal with stability issues going downhill, nor whether both small and large animals are capable of moving downhill without injuring themselves. It is clear that body size is important in determining the strategies an animal will use as it moves on inclines and declines. Gaining a better understanding of this relationship will be crucial for demonstrating how these mechanical challenges have affected the evolution of the locomotor system and the diversification of animals into various ecological niches.
Image credit: Mountain goat, near Masada, by mogos gazhai. CC-BY-2.5 via Wikimedia Commons.
I recently had the opportunity to talk with Lonna Atkeson, Professor of Political Science and Regents’ Lecturer at the University of New Mexico. We discussed her opinions about improving survey methodology and her thoughts about how surveys are being used to study important applied questions. Lonna has written extensively about survey methodology, and has developed innovative ways to use surveys to improve election administration (her 2012 study of election administration is a wonderful example).
In the current issue of Political Analysis is the Symposium on Advances in Survey Methodology, which Lonna and I co-edited; in addition to the five research articles in the Symposium, we wrote an introduction that puts each of the research articles in context and talks about the current state of research in survey methodology. Also, Lonna and I are co-editing the Oxford Handbook on Polling and Polling Methods, which is in initial stages of development.
It’s well-known that response rates for traditional telephone surveying have declined dramatically. What’s the solution? How can survey researchers produce quality data given low response rates with traditional telephone survey approaches?
What we’ve learned about response rates is that they are not the be-all and end-all as an evaluative tool for the quality of a survey, which is a good thing because response rates are ubiquitously low! There is mounting evidence that response rates per se are not necessarily reflective of problems of nonresponse. Nonresponse error appears to be more related to the response rate interacting with the characteristics of the nonrespondents. Thus, if survey topic salience leads to response bias, then nonresponse error becomes a problem; but in and of itself, the response rate is only indirect evidence of a potential problem. One potential solution to falling response rates is to use mixed mode surveys and find the best contact and response option for each respondent. As polling becomes more and more sophisticated, we need to consider the best contact and response methods for different types of sample members. Survey researchers need to be able to predict the most likely response option for the individual and pursue that strategy.
Mixed mode surveys use multiple methods to contact or receive information from respondents. Thus, mixed mode surveys involve both mixtures of data collection and communications with the respondent. For example, a mixed mode survey might contact sample members by phone or mail and then have them respond to a questionnaire over the Internet. Alternatively a mixed mode survey might allow for multiple forms of response. For example, sample frame members may be able to complete the interview over the phone, by mail, or on the web. Thus a respondent who does not respond over the Internet may in subsequent contact receive a phone call or a FTF visit or may be offered a choice of response mode on the initial contact.
When you see a poll or survey reported online or in the news media, how do you determine if the poll was conducted in a way that has produced reliable data? What indicates a high-quality poll?
This is a difficult question, because all polls are not created equal, and many reported polls may have problems with sampling, nonresponse bias, question wording, and so on. The point is that there are many places where error creeps into a survey, not just one. To evaluate a poll, researchers like to think in terms of total survey error, but the tools for that evaluation are still in development, and this is an area of opportunity for survey researchers and political methodologists. We also need to consider, within a total survey error approach, how survey context, which now varies tremendously, influences respondents, and what that means for our models and inferences. This is an area for continued research. Nevertheless, the first criterion for examining a poll ought to be its transparency. Polling data should include information on who funded the poll, a copy of the instrument, a description of the sampling frame, and the sampling design (e.g. probability or non-probability, the study size, estimates of sampling error for probability designs, information on any weighting of the data, and how and when the data were collected). These are the basic criteria necessary to evaluate the quality of a poll.
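For probability designs, the “estimates of sampling error” mentioned above usually take the form of a margin of error. A minimal sketch for a proportion from a simple random sample, assuming a 95% confidence level:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.

    p_hat: observed proportion (e.g. 0.52 for 52% support)
    n: number of respondents
    z: critical value (1.96 for a 95% confidence level)
    """
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

# A poll of 1,000 respondents reporting 52% support:
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

Note that this captures only sampling error; the other sources of error in the total survey error framework (nonresponse, wording, coverage) are not reflected in this number, which is exactly why transparency about the design matters.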
Survey research is a rapidly changing environment with new methods for respondent contacting and responding. Perhaps the biggest change in the most recent decade is the move away from predominantly interviewer driven data collection methods (e.g. phone, FTF) to respondent driven data collection methods (e.g. mail, Internet, CASI), the greater use of mixed mode surveys, and the introduction of professional respondents who participate over long periods of time in discontinuous panels. We are just beginning to figure out how all these pieces fit together and we need to come up with better tools to assess the quality of data we are obtaining. The future of polling and its importance in the discipline, in marketing, and in campaigns will continue, and as academics we need to be at the forefront of evaluating these changes and their impact on our data. We tend to brush over the quality of data in favor of massaging the data statistically or ignoring issues of quality and measurement altogether. I’m hoping the changing survey environment will bring more political scientists into an important interdisciplinary debate about public opinion as a methodology as opposed to the study of the frequencies of opinions. To this end, I have a new Oxford Handbook, along with my co-editor Mike Alvarez, on polling and polling methods that will take a closer look at many of these issues and be a helpful guide for current and future projects.
In your recent research on election administration, you use polling techniques as tools to evaluate elections. What have you learned from these studies, and based on your research what do you see are issues that we might want to pay close attention to in this fall’s midterm elections in the United States?
We’ve learned so much from our election administration work about designing polling places, training poll workers, mixed mode surveys, and, more generally, evaluating the election process. In New Mexico, for example, we have been interviewing both poll workers and voters since 2006, giving us five election cycles, including 2014, that provide an overall picture of the current state of election administration and how it’s doing relative to past election cycles. Our multi-method approach provides continuous evaluation, review, and improvement of New Mexico elections. This fall I think there are many interesting questions. We are interested in election reform questions about purging voter registration files, open primaries, straight party ballot options, and felon re-enfranchisement. We are also especially interested in how voters decide whether to vote early or on Election Day and, on Election Day, where they decide to vote if they are using voting convenience centers instead of precincts. This is an important policy question, because where we place vote centers might impact turnout, voter satisfaction, or confidence. We are also very interested in election lines and their impact on voters. In 2012 we found that voters on average can fairly easily tolerate lines of about half an hour, but feel there are administrative problems when lines grow longer. We want to continue to drill down on this question and examine when lines deter voters or create poor experiences that reduce the quality of their voting experience.
Lonna Rae Atkeson is Professor of Political Science and Regents’ Lecturer at the University of New Mexico. She is a nationally recognized expert in the area of campaigns, elections, election administration, survey methodology, public opinion and political behavior and has written numerous articles, book chapters, monographs and technical reports on these topics. Her work has been supported by the National Science Foundation, the Pew Charitable Trusts, the JEHT Foundation, the Galisano Foundation, the Bernalillo County Clerk, and the New Mexico Secretary of State. She holds a BA in political science from the University of California, Riverside and a Ph.D. in political science from the University of Colorado, Boulder.
R. Michael Alvarez is a professor of Political Science at Caltech. His research and teaching focuses on elections, voting behavior, and election technologies. He is editor-in-chief of Political Analysis with Jonathan N. Katz.
Political Analysis chronicles the exciting developments in the field of political methodology, with contributions to empirical and methodological scholarship outside the diffuse borders of political science. It is published on behalf of The Society for Political Methodology and the Political Methodology Section of the American Political Science Association. Political Analysis is ranked #5 out of 157 journals in Political Science by 5-year impact factor, according to the 2012 ISI Journal Citation Reports. Like Political Analysis on Facebook and follow @PolAnalysis on Twitter.
Subscribe to the OUPblog via email or RSS.
Subscribe to only politics and political science articles on the OUPblog via email or RSS.
There are many exciting things coming down the Oral History Review pipeline, including OHR volume 41, issue 2, the Oral History Association annual meeting, and a new staff member. But before we get to all of that, I want to take one last opportunity to celebrate OHR volume 41, issue 1 — specifically, Abigail Perkiss’ “Reclaiming the Past: Oral History and the Legacy of Integration in West Mount Airy, Philadelphia.” In this article, Abigail investigates an oral history project launched in her hometown in the 1990s, which sought to resolve contemporary tensions by collecting stories about the area’s experience with racial integration in the 1950s. Through this intriguing local history, Abigail digs into the connection between oral history, historical memory, and social change.
If that weren’t enough to whet your academic appetite, the article also went live the same week her first daughter, Zoe, was born.
How awesome is that?
But back to business. Earlier this month I chatted with Abigail about the article and the many other projects she has had in the works this year. So, please enjoy this quick interview and her article, which is currently available to all.
How did you become interested in oral history?
I’ve been gathering people’s stories in informal ways for as long as I can remember, and as an undergraduate sociology major at Bryn Mawr College, my interests began to coalesce around the intersection of storytelling and social change. I took classes in ethnography, worked as a PA on a few documentary projects, and interned at a documentary theater company. All throughout, I had the opportunity to develop and hone my skills as an interviewer.
I began taking history classes my junior year, and through that I started to think about the idea of oral history in a more intentional way. I focused my research around oral history, which culminated in my senior thesis, in which I interviewed several folksingers to examine the role of protest music in creating a collective memory of the Vietnam War, and how that memory was impacting the way Americans understood the war in Iraq. A flawed project, but pretty amazing to speak with people like Pete Seeger, Janis Ian, and Mary Travers!
After college, I studied at the Salt Institute for Documentary Studies in Portland, Maine, and when I began my doctoral studies at Temple University, I knew that I wanted to pursue research that would allow me to use oral history as one of the primary methodological approaches.
What sparked your interest in the Mount Airy project?
When I started my graduate work at Temple, I was pursuing a joint JD/PhD in US history. I knew I wanted to do something in the fields of urban history and racial justice, and I kept coming back to the Mount Airy integration project. I actually grew up in West Mount Airy, and even as a kid, I was very much aware of the lore of the neighborhood integration project. There was a real sense that the community was unique, special.
I knew that there had to be more to the utopian vision that was so pervasive in public conversations about the neighborhood, and I realized that by contextualizing the community’s efforts within the broader history of racial justice and urban space in the mid-twentieth century, I would be able to look critically at the concept and process of interracial living. I could also use oral history as a key piece of my research.
Your article focuses on a 1990s oral history project led by a local organization, the West Mount Airy Neighbors. Why did you choose to augment the interviews they collected with your own?
The 1993 oral history project was a wonderful resource for my book project (from which this article comes); but for my purposes, it was also incomplete. Interviewers focused largely on the early years of integration, so I wasn’t able to get much of a sense of the historical evolution of the efforts. The questions were also framed according to a very particular set of goals that project coordinators sought to achieve — as I argue, they hoped to galvanize community cohesion in the 1990s and to situate the local community organization at the center of contemporary change.
So, while the interviews were quite telling about the West Mount Airy Neighbors’ efforts to maintain institutional control in the neighborhood, they weren’t always useful for me in getting at some of the other questions I was trying to answer: about the meaning of integration for various groups in the community, about the racial politics that emerged, about the perception of Mount Airy in the city at large. To get at those questions, it was important for me to conduct additional interviews.
Is there anything you couldn’t address in the article that you’d like to share here?
As I alluded to above, it is part of a larger book project on postwar residential integration, Making Good Neighbors: Civil Rights, Liberalism, and Integration in Postwar Philadelphia (Cornell University Press, 2014). There, I look at the broader process of integrating and the challenges that emerged as the integration efforts coalesced and evolved over the decades. Much of the research for the book came from archival collections, but the oral histories from the 1990s, and the ones I collected, were instrumental in fleshing out the story and humanizing what could otherwise have been a rather institutional history of the West Mount Airy Neighbors organization.
Are you working on any projects the OHR community should know about?
I’ve spent the past 18 months directing an oral history project on Hurricane Sandy, Staring out to Sea, which came about through a collaboration with Oral History in the Mid-Atlantic Region (remember them?) and a seminar I taught in Spring 2013. That semester, I worked intensively with six undergraduates, studying the practice of oral history and setting up the project’s parameters. The students developed the themes and questions, recruited participants, and conducted and transcribed the interviews. They then processed and analyzed their findings, looking specifically at issues of race, power, and representation in the wake of the storm.
In addition to blogging about their experience, the students presented their work at the 2013 OHMAR and OHA meetings. You can read a bit more about that and the project in Perspectives on History. This fall, I’ll be working with Professor Dan Royles and his digital humanities students to index the interviews we’ve collected and develop an online digital library for the project. I’ll also be attending the OHA annual meeting this year to discuss the project’s transformative impact on the students themselves.
Excellent! I look forward to seeing you (and the rest of our readers) in Madison this October.
Caitlin Tyler-Richards is the editorial/media assistant at the Oral History Review. When not sharing profound witticisms at @OralHistReview, Caitlin pursues a PhD in African History at the University of Wisconsin-Madison. Her research revolves around the intersection of West African history, literature and identity construction, as well as a fledgling interest in digital humanities. Before coming to Madison, Caitlin worked for the Lannan Center for Poetics and Social Practice at Georgetown University.
The Oral History Review, published by the Oral History Association, is the U.S. journal of record for the theory and practice of oral history. Its primary mission is to explore the nature and significance of oral history and advance understanding of the field among scholars, educators, practitioners, and the general public. Follow them on Twitter at @oralhistreview, like them on Facebook, add them to your circles on Google Plus, follow them on Tumblr, listen to them on Soundcloud, or follow their latest OUPblog posts via email or RSS to preview, learn, connect, discover, and study oral history.
Subscribe to the OUPblog via email or RSS.
Subscribe to only history articles on the OUPblog via email or RSS.
Despite the huge body of evidence that males and females have very different immune systems and responses, few biomedical studies consider sex in their analyses. Sex refers to the intrinsic characteristics that distinguish males from females, whereas gender refers to the socially determined behaviour, roles, or activities that males and females adopt. Male and female immune systems are not the same, leading to clear sexual dimorphism in responses to infection and vaccination.
In 2010, Nature featured a series of articles aimed at raising awareness of the inherent sex bias in modern-day biomedical research, and yet little has changed since that time. They suggested that journals and funders should insist on studies being conducted in both sexes, or that authors should at least state the sex of the animals used in their studies, but, unfortunately, this was not widely adopted.
Even before birth, intrauterine differences begin to differentially shape male and female immune systems. The male intrauterine environment is more inflammatory than that of females, male fetuses produce more androgens and have higher IgE levels, all of which lead to sexual dimorphism before birth. Furthermore, male fetuses have been shown to undergo more epigenetic changes than females with decreased methylation of many immune response genes, probably due to physiological differences.
The X chromosome contains numerous immune response genes, while the Y chromosome encodes a number of inflammatory pathway genes that can only be expressed in males. Females have two X chromosomes, one of which is inactivated, usually leading to expression of the wild-type gene. However, X inactivation can be incomplete or variable, which is thought to contribute to greater inflammatory responses among females. These X and Y chromosome effects on immunity begin to manifest in the womb, producing sex differences in immunity that are present from birth and continue throughout life.
MicroRNAs (miRNAs) regulate physiological processes, including cell growth, differentiation, metabolism and apoptosis. Males and females differ in their miRNA expression, even in embryonic stem cells, which is likely to contribute to sex differences in the prevalence, pathogenesis and outcome of infections and vaccination.
Females are born with higher oestriol concentrations than males, while males have more testosterone. Shortly after birth, male infants undergo a ‘mini-puberty’, characterised by a testosterone surge, which peaks at about 3 months of age, while the female effect is variable. Once puberty begins, the ovarian hormones such as oestrogen dominate in females, while testicular-derived androgens dominate in males. Many immune cells express sex hormone receptors, allowing the sex hormones to influence immunity. Very broadly, oestrogens are Th2 biasing and pro-inflammatory, whereas testosterone is Th1 skewing and immunosuppressive. Thus, sex steroids undoubtedly play a major role in sexual dimorphism in immunity throughout life.
Sex differences have been described for almost every commercially available vaccine in use. Females have higher antibody responses to certain vaccines, such as measles, hepatitis B, influenza, and tetanus vaccines, while males have better antibody responses to yellow fever, pneumococcal polysaccharide, and meningococcal A and C vaccines. However, the data are conflicting, with some studies showing sex effects and others showing none. Post-vaccination clinical attack rates also vary by sex, with females suffering less influenza and males experiencing less pneumococcal disease after vaccination. Females suffer more adverse events from certain vaccines, such as oral polio vaccine and influenza vaccine, while males have more adverse events from others, such as yellow fever vaccine, suggesting the sex effect varies according to the vaccine given. The existing data hint at higher vaccine-related adverse events in infant males progressing to a female preponderance from adolescence, suggesting a hormonal effect, but this has not been confirmed.
If male and female immune systems behave in opposing directions then clearly analysing them together may well cause effects and responses to be cancelled out. Separate analysis by sex would detect effects that were not seen in the combined analysis. Furthermore, a dominant effect in one of the sexes might be wrongly attributed to both sexes. For drug and vaccine trials this could have serious implications.
Given the huge body of evidence that males and females are so different, why do most scientific studies fail to analyse by sex? Traditionally in science the sexes have been regarded as being equal and the main concern has been to recruit the same number of males and females into studies. Adult females are often not enrolled into drug and vaccine trials because of the potential interference of hormones of the menstrual cycle or risk of pregnancy; thus, most data come from trials conducted in males only. Similarly, the majority of animal studies are conducted in males, although many animal studies fail to disclose the sex of the animals used. Analysing data by sex adds the major disadvantage that sample sizes would need to double in order to have sufficient power to detect significant sex effects. This potentially means double the cost and double the time to conduct the study, at a time when research funding is limited and hard to obtain. Furthermore, since the funders don’t request analysis by sex, and the journals do not ask for it, it is not a major priority in today’s highly competitive research environment.
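The sample-size point can be made concrete with a standard power calculation. The sketch below uses the usual normal-approximation formula for a two-sample comparison; the effect size and error rates are illustrative assumptions, not figures from any particular trial.

```python
# Why analysing by sex roughly doubles the required sample size: to detect
# the same effect size *within each sex*, the full two-group sample is
# needed in each stratum. Effect size and error rates are illustrative.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

d = 0.3  # assumed standardised effect size (Cohen's d), purely illustrative
n = n_per_group(d)

combined_total = 2 * n        # one pooled analysis, two arms
stratified_total = 2 * 2 * n  # the same analysis run separately in each sex

print(f"Per group: {n}; pooled trial: {combined_total}; "
      f"sex-stratified trial: {stratified_total}")
```

Under these assumptions the stratified design needs exactly twice the pooled total, which is the doubling of cost and time referred to above.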
It is likely that we are missing important scientific information by not investigating more comprehensively how males and females differ in immunological and clinical trials. We are entering an era in which there is increasing discussion regarding personalised medicine. Therefore, it is quite reasonable to imagine that females and males might benefit differently from certain interventions such as vaccines, immunotherapies and drugs. The mindset of the scientific community needs to shift. I appeal to readers to take heed and start to turn the tide in the direction whereby analysis by sex becomes the norm for all immunological and clinical studies. The knowledge gained would be of huge scientific and clinical importance.
Dr Katie Flanagan leads the Infectious Diseases Service at Launceston General Hospital in Tasmania, and is an Adjunct Senior Lecturer in the Department of Immunology at Monash University in Melbourne. She obtained a degree in Physiological Sciences from Oxford University in 1988, and her MBBS from the University of London in 1992. She is a UK and Australia accredited Infectious Diseases Physician. She did a PhD in malaria immunology based at Oxford University (1997 – 2000). She was previously Head of Infant Immunology Research at the MRC Laboratories in The Gambia from 2005-11 where she conducted multiple vaccine trials in neonates and infants.
Dr Katie Flanagan’s editorial, ‘Sexual dimorphism in biomedical research: a call to analyse by sex’, is published in the July issue of Transactions of the Royal Society of Tropical Medicine and Hygiene. Transactions of the Royal Society of Tropical Medicine and Hygiene publishes authoritative and impactful original, peer-reviewed articles and reviews on all aspects of tropical medicine.
Did you know that the introduction of languages into primary schools has been dubbed the world’s biggest development in education? And, of course, overwhelmingly, the language taught is English. Already the world’s most popular second language, English continues to grow in demand, at least in the short term, and with this demand has come a rapid decrease in the age at which early language learning (ELL) starts. From the kindergartens of South Korea to classes of 70+ in Tanzania, very young children are now being taught English. So is it a good idea to learn English from an early age? Many people believe that in terms of learning language, the younger the better. However, this notion is based on children learning in bilingual environments in which they get a great deal of input in two or more languages. Adults see children seemingly soaking up language and speaking in native-like accents and think that language learning for children is easy. However, most children do not learn English in this kind of bilingual environment. Instead, they learn in formal school settings where they are lucky if they get one or two hours of English tuition a week. In these contexts, there is little or no evidence that an early start benefits language learning. Indeed, it has been argued that the time spent teaching English would be better spent on literacy, which has been shown to develop children’s language learning potential.
So why are children learning from so young an age? One answer is parent power. Parents see the value of English for getting ahead in the global world and put pressure on governments to ensure children receive language tuition from an early age. Another answer is inequality. Governments are aware that many parents pay for their children to have private tuition in English and they see this as disadvantaging children who come from poorer backgrounds. In an attempt to level the playing field, they introduce formal English language learning in primary schools. While this is admirable, research shows that school English is not generally effective, particularly in developing countries, and in fact tends to advantage those who are also having private lessons. Another argument for sticking to literacy teaching?
Of course, government policy eventually translates into classroom reality and in very many countries the introduction of English has been less than successful. One mammoth problem is the lack of qualified teachers. Contrary to popular belief, and despite representations in film and television programmes, being able to speak English does not equate to an ability to teach English, particularly to very young children. Yet in many places unqualified native English speaking teachers are drafted into schools to make good the shortfall in teacher provision. In other countries, local homeroom teachers take up the burden but may not have any English language skills or may have no training in language teaching. Other problems include a lack of resources, large classes and lack of motivation leading to poor discipline. Watch out Mr Gove — similar problems lie in store for England in September 2014! (When the new national curriculum for primary schools launches, maintained primary schools will have to teach languages to children, and yet preparation for the curriculum change has been woefully inadequate.)
Why should we be interested in this area of English language teaching when most of it happens in countries far away from our own? David Graddol, our leading expert on the economy of English language teaching, suggests that the English language teaching industry directly contributes 1.3 billion pounds annually to the British economy and up to 10 billion pounds indirectly through English language education related activities. This sector is a huge benefit to the British economy, yet its importance goes largely unacknowledged. For example, in terms of investigating English language teaching, it is extremely difficult in England to get substantial funding, particularly when the focus is on countries overseas.
From the perspective of academics interested in this topic, which we are, the general view that English language teaching is not a serious contender for research funding is galling. However, the research funding agencies are not alone. Academic journals rarely publish work on teaching English to young learners, which has become something of a Cinderella subject in research into English language teaching. There are numerous studies on adults learning English in journals of education and applied linguistics, but ELL is hardly represented. This might be because there is little empirical research or because the area is not considered important. Yet as we suggest, there are huge questions to be asked (and answered). For example, in what contexts are children advantaged and disadvantaged by learning English in primary schools? What are the most effective methods for teaching languages to children in particular contexts? What kind of training in teaching languages do primary teachers need and what should their level of English be? The list of questions, like the field, is growing and the answers would support both the UK English language industry and also our own approach to language learning in primary schools, where there is very little expertise.
ELT Journal is a quarterly publication for all those involved in English Language Teaching (ELT), whether as a second, additional, or foreign language, or as an international Lingua Franca. The journal links the everyday concerns of practitioners with insights gained from relevant academic disciplines such as applied linguistics, education, psychology, and sociology. A Special Issue of the ELT Journal, entitled “Teaching English to young learners” is available now. It showcases papers from around the world that address a number of key topics in ELL, including learning through online gaming, using heritage languages to teach English, and the metaphors children use to explain their language learning.
Fiona Copland is Senior Lecturer in TESOL in the School of Languages and Social Sciences at Aston University, Birmingham, UK, where she is Course Director of distance learning MSc programmes in TESOL. With colleagues at Aston, Sue Garton and Anne Burns, she carried out a global research project titled Investigating Global Practices in Teaching English to Young Learners which led to the production of a book of language learning activities called Crazy Animals and Other Activities for Teaching English to Young Learners. She is currently working on a project investigating native-speaker teacher projects. Sue Garton is a Senior Lecturer in TESOL and Director of Postgraduate Programmes in English at Aston University. She worked for many years as an English language teacher in Italy before joining Aston as a teacher educator on distance learning TESOL programmes. As well as leading the British Council funded project on investigating global practices in teaching English to young learners, she has also worked on two other British Council projects, one looking at the transition from primary to secondary school and the other, led by Fiona Copland, on investigating native-speaker teacher schemes. They are editors of the ELT Journal Special Issue on “Teaching English to young learners.”
Subscribe to the OUPblog via email or RSS.
Subscribe to only education articles on the OUPblog via email or RSS.
Subscribe to only language articles on the OUPblog via email or RSS.
My research has focused on the use of participatory media in conflict-affected communities. The aim has been to demonstrate that involving community members in a media production provides them with a platform to tell their story about the violence they have experienced and the causes they believe led to it. This facilitates the achievement of a shared understanding of the conflict between groups that were fighting and lays the foundations for the establishment of a new social fabric that encompasses peace.
This is, by no means, an easy process. It is also one that requires the co-implementation of different types of interventions that strive to rebuild peace in those areas. However, what is often lacking in post-conflict contexts is a communication channel that allows people to reconnect. In the aftermath of civil violence, communities are left divided and in need of information to make sense of the brutality they have undergone. Victims and perpetrators live side by side as neighbours, and dynamics based on resentment and hatred hinder the return to a peaceful environment. The mass media are often unable to address the tensions that have remained within communities as a legacy of the conflict; hence, it is crucial to provide a platform where formerly opposing groups can articulate their views.
By drawing on the experience of a participatory video project conducted in the Rift Valley of Kenya after the 2007/2008 Post-Election Violence, when the country underwent a period of intense ethnic violence, I was able to demonstrate the potential of Communication for Social Change in post-conflict settings through the use of participatory video.
Social change is a process that seeks to transform the unequal power relations that affect a community. The literature on conflict studies tells us that, in order to achieve social change, conflict interventions must first target change at both the individual and relational levels. Changing individuals requires adjusting their feelings and behaviours towards other groups, while changing relationships is about creating meaningful interaction between members of opposing groups, which results in the improvement of inter-group relations. This can be represented as follows:
I argue that, from a communication perspective, these changes can be achieved when people participate in the production of a media story that allows them to both reflect upon and become aware of their situation, as well as to share their experience and create an understanding among groups.
In particular, collaborating towards the creation of media content, listening to one another and becoming producers of their own story, allows communities to transform conflict at all levels:
Individual change – participatory video activities contribute to instilling participants’ confidence in re-establishing peace, helping them identify themselves as agents of change, and also guiding them in the discovery of new skills. The storytelling process people engage with encourages reflection on their actions during the violence and greater awareness of their present situation and the need to rebuild peace.
Relational change – the participatory video-making process can establish harmony among those who work together in the mixed-tribe workshops. These involve not only those who are in front of the camera but also those who cover other roles during the production process. Those who watch the final videos through public screenings can exchange views and develop an understanding of the situation for both victims and perpetrators.
Social change – Thanks to the power shifts resulting from newly-developed perceptions of the conflict and of their post-conflict environment, members of different groups begin to engage in dialogue. The existence of different realities of the violence and of the need to move forward are acknowledged, laying the foundations that are needed to begin to build a new social fabric.
A Communication for Social Change approach to peacebuilding recognises how changes at the individual and relational level can be addressed both through the media content production process and the screening of the final media outputs in the community. Within this context, participatory video is seen as a catalyst that can initiate processes of conflict transformation that lead to a wider social change.
Valentina Baú is completing a PhD at Macquarie University (Sydney, Australia). Both as a practitioner and as a researcher, her work has focused on the use of communication in international development. Valentina has collaborated with different international NGOs, the United Nations and the Italian Development Cooperation, in various African countries. Her doctoral research has looked at the use of Communication for Development in Peacebuilding, particularly through the use of participatory media. Valentina Baú is the author of ‘Building peace through social change communication: participatory video in conflict-affected communities‘, in the Community Development Journal.
Community Development Journal is the leading international journal in its field, covering a wide range of topics, reviewing significant developments and providing a forum for cutting-edge debates about theory and practice. It adopts a broad definition of community development to include policy, planning and action as they impact on the life of communities.
Subscribe to the OUPblog via email or RSS.
Subscribe to only social work articles on the OUPblog via email or RSS.
Image credit: Flow chart of social change, by Valentina Baú. Do not re-use without permission.
As the European Society of Cardiology gets ready to welcome a new journal to its prestigious family, we meet the Editor-in-Chief, Professor Stefan Agewall, to find out how he came to specialise in this field and what he has in store for the European Heart Journal – Cardiovascular Pharmacotherapy.
What encouraged you to pursue a career in the field of cardiology?
I qualified as a doctor at Göteborg University in Sweden in 1986. I became fascinated by emergency medicine early on in my career. I was soon drawn to cardiology as it covers such a broad spectrum of medicine, from acute emergency medicine to physiology, invasive and non-invasive examination and treatment techniques, pharmacology and cardiovascular prevention. I have mainly worked at coronary care units; first at the coronary care unit of Sahlgrenska University Hospital and then at Karolinska University Hospital in Sweden. At Karolinska, I was the head of the coronary care unit. In 2006 I became professor in Cardiology and moved to Oslo University Hospital.
What do you think are the challenges being faced in the field of cardiovascular pharmacotherapy today?
Professor Stefan Agewall, the new Editor-in-Chief of European Heart Journal – Cardiovascular Pharmacotherapy
Pharmacological treatment is very good now and the mortality rate in patients with acute coronary syndrome is quite low. Clinical studies therefore need to be huge in order to demonstrate beneficial effects on hard end-points. We need to put more focus on quality of life in these larger studies and it is also extremely important that some emphasis is placed on preventive medicine, both with and without pharmacotherapy.
How do you see this field developing in the future?
Although the market place for cardiology-related journals is crowded and competitive, I believe the new publication will cover an area that has changed dramatically over the last few decades. This new journal will focus specifically on clinical cardiovascular pharmacology. The production of papers within this area is enormous; in Medline there are almost 500,000 references to the search term ‘cardiovascular pharmacology’ and the rate of publication in this field appears to be steadily increasing. Despite this fast development, we still need even more data from pharmacology studies aimed at improving prognosis for cardiovascular disease as it remains the most common cause of death world-wide.
What are you most looking forward to about being Editor-in-Chief for EHJ-Cardiovascular Pharmacotherapy?
I am looking forward to launching this key new journal and establishing it as a member of the European Society of Cardiology journal family. I hope and believe the Journal will help readers to improve their knowledge in pharmacological treatment of patients with cardiovascular disease through the publication of high quality original research and reviews.
What does your typical day as the Editor-in-Chief look like?
Each day, I will start by handling new submissions and making decisions on papers which have been reviewed by experts within the field. If the submitted papers are of potential interest, they will be sent out for review. We have already recruited a fantastic editorial board, which guarantees a high quality review process. Time will be spent at different kinds of meetings to consider how to develop the journal, how to market it, and how to attract quality submissions from authors in the field.
How do you see the journal developing in the future?
The number of submissions to the journal will hopefully increase every year. In 2015 we aim for four issues and the number of issues will increase year on year. Monthly publication is a goal to achieve within five years. We will of course aim for an increasing impact factor and to become number one within the field of cardiovascular pharmacotherapy.
What do you think readers will take away from the journal?
We hope that by inviting respected and well-known authors, readers will be provided with excellent review papers. We want to provide readers with new information about cardiovascular therapy and, above all, we hope to help the readers to interpret and integrate new scientific developments within the area of cardiovascular pharmacotherapy.
Subscribe to the OUPblog via email or RSS.
Subscribe to only health and medicine articles on the OUPblog via email or RSS.
Image credit: Headshot courtesy of Professor Stefan Agewall. Do not re-use without permission.
In these times of budgetary constraints and demographic change, we need to find new ways of supporting people to live longer in their own homes. Telecare has been suggested as a useful way forward. Some examples of this technology, such as pull-cord or pendant alarms, have been around for years, but these ‘first-generation’ products have given way to more extensive and sophisticated systems. ‘Second-generation’ products literally have more bells and whistles – for instance, alarms for carbon monoxide and floods, and sensors that can detect movement in and out of bed. These sensors send alerts to a call-centre operator who can organise a response, perhaps call out a designated key-holder, organise a visit to see if there is a problem, or ring the emergency services. There are even more elaborate systems that continuously monitor a person’s activity using sensors and analyse these ‘lifestyle’ data to identify changes in usual activity patterns, but these systems are not in mainstream use. In contrast to telehealth – where the recipient is actively involved in transmitting and in many cases receiving information – the sensors in telecare do not require the active engagement of participants to transmit data, as this is done automatically in the background.
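As a purely hypothetical illustration of this kind of passive, background monitoring, a call-centre triage rule for a second-generation system might look something like the sketch below. The sensor names, thresholds, and alert logic are invented for the example and do not describe any real telecare product.

```python
# Hypothetical sketch of passive telecare monitoring: sensors report events
# automatically and alerts are raised without any action from the resident.
# All sensor names and thresholds here are invented for illustration.
from datetime import datetime

ALERT_RULES = {
    "carbon_monoxide": lambda e: True,   # any CO detection is urgent
    "flood": lambda e: True,             # any flood detection is urgent
    # bed occupied unusually long may indicate a problem (threshold invented)
    "bed_occupancy": lambda e: e["duration_hours"] > 12,
}

def check_event(event):
    """Return an alert dict for the call-centre operator, or None."""
    rule = ALERT_RULES.get(event["sensor"])
    if rule and rule(event):
        return {
            "time": event["time"],
            "sensor": event["sensor"],
            "action": "contact key-holder or emergency services",
        }
    return None

event = {"sensor": "bed_occupancy", "time": datetime(2014, 7, 1, 11, 0),
         "duration_hours": 14}
print(check_event(event))  # bed occupied 14 hours: an alert is raised
```

The point of the sketch is the contrast drawn above with telehealth: the resident does nothing; the sensor event itself drives the alert to the operator.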
Take-up of telecare remains below its potential in England. One recent study estimated that some 4.17 million people aged over 50 could potentially use telecare, while only about a quarter of that figure were actually using personal alarms or alerting devices. The Department of Health has similarly suggested that millions of people with social care needs and long term conditions could benefit from telecare and telehealth. To help meet this need, it launched the 3-Million Lives campaign in partnership with industry to promote the scaling-up of telehealth and telecare.
The hope held by government and commissioners in the NHS and local authorities is that these new assistive technologies not only promote independence and improve care quality but also reduce the use of health and social care services. To decide how much funding to allocate to these promising new services, these commissioners need a solid evidence base. In 2008, the Department of Health launched the Whole Systems Demonstrator (WSD) programme in three local authority areas in England engaged in whole-systems redesign to test the impacts of telecare (for people with social care needs) and telehealth (for people with long-term conditions).
The research that accompanied the WSD programme was extensive. It included quantitative studies investigating health and social care service use, mortality, costs, and the effectiveness of these technologies. Parallel qualitative studies explored the experiences of people using telecare and telehealth and their carers. The research also examined the ways in which local managers and frontline professionals were introducing the new technologies.
Some results from these streams of research have been published with more to come. From the quantitative research, three articles were published in Age and Ageing over the past year. Steventon and colleagues report on the use of hospital, primary care and social services, and mortality for all participants in the trial – around 2,600 people – based on routinely collected data. Two papers report the results of the WSD telecare questionnaire study (Hirani, Beynon et al. 2013; Henderson, Knapp et al. 2014). The questionnaire study included participants from the main trial who filled out questionnaires about their psychological outcomes, their quality of life, and their use of health and social care services.
The most recent paper to be published in Age and Ageing is the cost-effectiveness analysis of WSD telecare. Participants used a second-generation package of sensors and alarms that was passively and remotely monitored. On average, about five items of telecare equipment were provided to people in the ‘intervention’ group. The whole telecare package accounted for just under 10% of the estimated total yearly health and social care costs of £8,625 (adjusting for case mix) for these people. This was more costly than the care packages of people in the ‘usual care’ group (£7,610 per year), although the difference was not statistically significant. The extra cost of gaining a quality-adjusted life year (QALY) associated with the telecare intervention was £297,000. This is much higher than the threshold range – £20,000 to £30,000 per QALY – used by the National Institute for Health and Care Excellence (NICE) when judging whether an intervention should be used in the NHS (National Institute for Health and Clinical Excellence 2008). Given these results, we would, therefore, caution against thinking that second-generation telecare is the cure-all solution for providing good quality care to increasing numbers of people with social care needs while containing costs.
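To make the arithmetic behind these figures concrete: the incremental cost-effectiveness ratio (ICER) is simply the difference in costs between the two groups divided by the difference in QALYs gained. The sketch below uses the cost figures reported above; the incremental QALY value is not stated in this post, so it is back-calculated from the reported ICER purely for illustration.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY.
# Costs are those reported above for the WSD telecare study; the incremental
# QALY figure is back-calculated from the reported ICER, for illustration only.

def icer(delta_cost, delta_qaly):
    """Incremental cost divided by incremental QALYs gained."""
    return delta_cost / delta_qaly

cost_telecare = 8625   # mean yearly cost per person, intervention group (£)
cost_usual = 7610      # mean yearly cost per person, usual-care group (£)
delta_cost = cost_telecare - cost_usual   # £1,015 extra per person per year

# Implied incremental QALY gain (~0.0034), derived from the reported ICER:
delta_qaly = delta_cost / 297_000

nice_threshold = 30_000  # upper end of NICE's £20,000-£30,000 range

result = icer(delta_cost, delta_qaly)
print(f"ICER: £{result:,.0f} per QALY")
print("Within NICE threshold?", result <= nice_threshold)
```

The tiny implied QALY gain is what drives the very large ICER: even a modest extra cost per person becomes enormous when divided by a near-zero health gain.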
As with any research, it is important to understand how to best use the findings. The telecare tested during the pilot period was ‘second generation’, so conclusions from this research cannot be applied, for instance, to existing pendant alarm systems currently in widespread use. And telecare systems have continued to evolve since this research started. Moreover, while the results summarised here relate to the telecare participants and do not cover any potential impacts on family carers, there is some evidence that telecare alleviates carer strain.
These findings inevitably raise further questions. What are the broader experiences of those using telecare? What makes a telecare experience positive? And what detracts from the experience? Who can benefit most from telecare? Some answers will emerge as we look across all the findings from the WSD research programme. We also need to look forward to findings from new research, such as the current trial of telecare for people with dementia and their carers (Leroi, Woolham et al. 2013). The ‘big’ question is not whether we should implement a ‘one-size fits all’ solution to meet the increasing demands on social care but for whom these new assistive technologies work best and for whom they are the most cost-effective response.
Age and Ageing is an international journal publishing refereed original articles and commissioned reviews on geriatric medicine and gerontology. Its range includes research on ageing and clinical, epidemiological, and psychological aspects of later life.
Crime is a hot issue on the policy agenda in the United States. Despite a significant fall in crime levels during the 1990s, the costs to taxpayers have soared together with the prison population. The US prison population has doubled since the early 1980s and currently stands at over 2 million inmates. According to the latest World Prison Population List (ICPS, 2013), the prison population rate in 2012 stood at 716 inmates per 100,000 inhabitants, against about 480 in the United Kingdom and the Russian Federation – the two OECD countries with the next highest rates – and against a European average of 154. The rise in the prison population is not just a phenomenon in the United States. Over the last twenty years, prison population rates have grown by over 20% in almost all countries in the European Union and by at least 40% in one half of them. The pattern appears remarkably similar in other regions, with a growth of 50% in Australia, 38% in New Zealand and about 6% worldwide.
In many countries – such as the United States and Canada – this fast-paced growth has occurred against a backdrop of stable or decreasing crime rates and is mostly due to mandatory and longer prison sentencing for non-violent offenders. But how much does prison actually cost? And who goes to jail?
The average annual cost per prison inmate in the United States was close to 30,000 dollars in 2008. Costs are even higher in countries like the United Kingdom and Canada. Punishment is an expensive business. These figures have prompted a shift of interest, among both academics and policymakers, from tougher sentencing to other forms of intervention. Prison populations overwhelmingly consist of individuals with poor education and even poorer job prospects. Over 70% of US inmates in 1997 did not have a high school degree. In an influential paper, Lochner and Moretti (2004) establish a sizable negative effect of education, in particular of high school graduation, on crime. There is also a growing body of evidence on the positive effect of education subsidies on school completion rates. In light of this evidence, and given the monetary and human costs of crime, it is crucial to quantify the relative benefits of policies promoting incarceration vis-à-vis alternatives such as boosting educational attainment, and in particular high school graduation.
When it comes to reducing crime, prevention may be more efficient than punishment. Resources devoted to running jails could profitably be employed in productive activities if the same crime reduction could be achieved through prevention.
Establishing which policies are more efficient requires a framework that accounts for individuals’ responses to alternative policies and can compare their costs and benefits. In other words, one needs a model of education and crime choices that allows for realistic heterogeneity in individuals’ labor market opportunities and propensity to engage in property crime. Crucially, this analysis must be empirically relevant and account for several features of the data, in particular for the crime response to changes in enrollment rates and the enrollment response to graduation subsidies.
The findings from this type of exercise are fairly clear and robust. For the same crime reduction, subsidizing high school graduation entails large output and efficiency gains that are absent in the case of tougher sentences. By improving the education composition of the labor force, education subsidies increase the differential between labor market and illegal returns for the average worker and reduce crime rates. The increase in average productivity is also reflected in higher aggregate output. The responses in crime rate and output are large. A subsidy equivalent to about 9% of average labor earnings during each of the last two years of high school induces almost a 10% drop in the property crime rate and a significant increase in aggregate output. The associated welfare gain for the average worker is even larger, as education subsidies weaken the link between family background and lifetime outcomes. In fact, one can show that the welfare gains are twice as large as the output gains. This compares to negligible output and welfare gains in the case of increased punishment. These results survive a variety of robustness checks and alternative assumptions about individual differences in crime propensity and labor market opportunities.
To sum up, the main message is that, although interventions which improve lifetime outcomes may take time to deliver results, given enough time they appear to be a superior way to reduce crime. We hope this research will advance the debate on the relative benefits of alternative policies.
Giulio Fella is a Senior Lecturer in the School of Economics and Finance at Queen Mary University, United Kingdom. Giovanni Gallipoli is an Associate Professor at the Vancouver School of Economics (University of British Columbia) in Canada. They are the co-authors of the paper ‘Education and Crime over the Life Cycle‘ in the Review of Economic Studies.
Review of Economic Studies aims to encourage research in theoretical and applied economics, especially by young economists. It is widely recognised as one of the core top-five economics journals, with a reputation for publishing path-breaking papers, and is essential reading for economists.
After years of intense basic and clinical research, hepatitis C is now curable for the vast majority of the millions of people who have it. The major barrier is access (diagnosis, getting care, and paying for it), because the scientific problem has been solved.
Not only that — but the situation will soon get even better.
For those who haven’t followed this medical miracle closely, here’s a Spark Notes version to bring you up to speed:
Pre-1989: Many blood transfusion recipients, injection drug users, and people with hemophilia have a form of chronic hepatitis, but they test negative for hepatitis A or B. Their infection is cleverly called “non-A, non-B hepatitis,” kind of a placeholder for a future discovery.
1989: A government-industry collaboration discovers the virus that causes “NANB hepatitis” (as it is sometimes further abbreviated). Good thing for that placeholder, because the new virus is called “hepatitis C”, abbreviated “HCV.” A few years later, a reasonably accurate blood test arrives, helping protect the blood supply and also giving us a much better sense of the natural history of HCV (generally slow but progressive liver disease), and finding a vast number of people infected, most of them unaware of it.
1990s: Remarkably, interferon therapy alone sometimes cures hepatitis C. That’s right, cures it. Unlike HIV and hepatitis B, HCV has no phase where it’s integrated into the host genome, so complete clearance of the virus can occur, provided the host and treatment factors are right. That’s the good news, but the rest, not so much: cure rates are terrible (generally <10% for genotype 1, the most common form in the United States), interferon has to be injected three times a week, and, perhaps worst of all, side effects are legion — fatigue, fever, muscle aches, anorexia, depression, irritability — and tend to worsen over the year or so of required therapy.
Late 1990s: Ribavirin — a mysterious antiviral whose mechanism of action still remains unclear — is added to interferon treatment, boosting cure rates up to 30-40% for genotype 1, 70% or higher for genotypes 2 and 3. Cause for celebration? Usually not, for several reasons: ribavirin has its own tricky side effects (hemolytic anemia, for one, and severe teratogenicity), so treatment is even more difficult than with interferon alone. Furthermore, the viral kinetics of successful treatment remain poorly defined, and hence patients are often given months of toxic therapy before it is ultimately stopped for “futility”.
Early 2000s: Attaching polyethylene glycol (PEG) to interferon greatly slows its clearance, so injections are now required only once a week. These “pegylated” forms of interferon plus ribavirin increase cure rates a bit further, as the reduced frequency of injections markedly improves adherence. (They also engender one of the best trade names ever for a drug – what marketing genius thought of Pegasys?) Side effects, alas, are no better. “I feel like I’m slowly killing myself,” says one of my patients, memorably, as he abandons treatment after 36 weeks of fatigue, snapping at his wife and co-workers, and general misery because his blood tests still show a bit of detectable virus – with no guarantee that continuing on to week 48 will cure him.
2011: The first “directly acting antivirals” (DAAs) are approved, the HCV protease inhibitors boceprevir and telaprevir. For patients completing treatment with these drugs — again, in addition to interferon and ribavirin — cure rates for genotype 1 reach 70-80%. Certainly a big improvement, yes, but a few major caveats: first, though the treatment can sometimes (but not always) be shortened to 24 weeks with these three rather than two drugs, interferon and ribavirin side effects remain extremely problematic, with some of them (in particular the cytopenias) made even worse. Second, these first-generation protease inhibitors have their own set of nasty toxicities (anemia, rashes, taste disturbance, diarrhea, pain with defecation — another memorable patient quote: “I feel like I’m shitting glass shards.”) Third, both drugs have a high pill burden and, with telaprevir, stringent food requirements, making adherence extremely challenging.
Given the limitations of interferon (pegylated or not), ribavirin, telaprevir and boceprevir, it’s not surprising that many clinicians and patients decide it’s best to wait for better treatments to come. In fact, the cure rates from clinical trials are huge overestimates of the proportions actually cured in clinical practice, since there is intense clinician and patient self-selection about who should launch into these tough treatments. Meanwhile, research proceeds rapidly (competition in this field is a good thing) to find other anti-HCV drugs, and several promising early clinical trial results are presented at academic meetings.
The practical culmination of this research finally arrives in late 2013 with the approval of first simeprevir — another protease inhibitor, only given as just one pill a day and with very few side effects — and, a few weeks later, sofosbuvir. The first HCV nucleotide polymerase inhibitor, sofosbuvir is also one pill a day, is highly potent, has few side effects or drug interactions, and is so effective it can help you get a better deal on your car insurance. (That last part was made up, but for the price — $1000 a pill — sofosbuvir better be pretty good.)
Simeprevir and sofosbuvir have been studied together in the COSMOS study and the bottom line is that more than 90% of genotype 1 patients are cured with 12 weeks of therapy. Some of the patients in COSMOS received no ribavirin, and most importantly none received interferon. It’s a small study, yes, and so we can’t take that response rate as applicable to everyone – some very difficult to treat individuals have already failed “SIM-SOF,” as the combination is being called by the HCV cognoscenti. But both in the clinical trial and thus far in clinical practice, this two-pill, once-daily regimen has shockingly few side effects.
So what’s next? How can this happy state of affairs get even better? Within the next 12 months, we’ll have a combination pill that gives HCV treatment as one pill a day. Some patients will be cured in 8 rather than 12 weeks. Other options will arrive that have the same astounding cure rates – because a greater than 90% response is the price of entry into this HCV treatment arena. It’s hoped (and expected by many) that these expanded options will bring the cost of HCV therapy down, because that’s the way markets are supposed to work.
More than 90% cured. Sure beats the 9% rate from the interferon-only days.
And that, my friends, is reason to celebrate World Hepatitis Day.
Paul Edward Sax, MD is Clinical Director of the Division of Infectious Diseases at Brigham and Women’s Hospital and Professor of Medicine, Harvard Medical School. He is the editor-in-chief of the Infectious Diseases Society of America’s new peer-reviewed, open access journal, Open Forum Infectious Diseases (OFID).
Open Forum Infectious Diseases provides a global forum for the rapid publication of clinical, translational, and basic research findings in a fully open access, online journal environment. The journal reflects the broad diversity of the field of infectious diseases, and focuses on the intersection of biomedical science and clinical practice, with a particular emphasis on knowledge that holds the potential to improve patient care in populations around the world.
Visual illusions, such as the rabbit-duck (shown below) and café wall are fascinating because they remind us of the discrepancy between perception and reality. But our knowledge of such illusions has been largely limited to studying humans.
That is now changing. There is mounting evidence that other animals can fall prey to the same illusions. Understanding whether these illusions arise in different brains could help us understand how evolution shapes visual perception.
For neuroscientists and psychologists, illusions not only reveal how visual scenes are interpreted and mentally reconstructed, they also highlight constraints in our perception. They can take hundreds of different forms and can affect our perception of size, motion, colour, brightness, 3D form and much more.
Artists, architects and designers have used illusions for centuries to distort our perception. Some of the most common types of illusory percepts are those that affect the impression of size, length, or distance. For example, Ancient Greek architects designed columns for buildings so that they tapered and narrowed towards the top, creating the impression of a taller building when viewed from the ground. This type of illusion is called forced perspective, commonly used in ornamental gardens and stage design to make scenes appear larger or smaller.
As visual processing needs to be both rapid and generally accurate, the brain constantly uses shortcuts and makes assumptions about the world that can, in some cases, be misleading. For example, the brain uses assumptions and the visual information surrounding an object (such as light level and presence of shadows) to adjust the perception of colour accordingly.
Known as colour constancy, this perceptual process can be illustrated by the illusion of the coloured tiles. Both squares with asterisks are of the same colour, but the square on top of the cube in direct light appears brown whereas the square on the side in shadow appears orange, because the brain adjusts colour perception based on light conditions.
These illusions are the result of visual processes shaped by evolution. The underlying process may once have been beneficial (or may still be), but it also allows our brains to be tricked. If it happens to humans, it might happen to other animals too. And if animals are fooled by the same illusions, then working out why different evolutionary paths converge on the same visual process might help us understand why evolution favours it.
The idea that animal colouration might appear illusory was raised more than 100 years ago by American artist and naturalist Abbott Thayer and his son Gerald. Thayer was aware of the “optical tricks” used by artists and he argued that animal colouration could similarly create special effects, allowing animals with gaudy colouration to apparently become invisible.
In a recent review of animal illusions (and other forms of sensory manipulation), we found evidence in support of Thayer’s original ideas. Although the evidence is only recently emerging, it seems that, like humans, animals can perceive and create a range of visual illusions.
Animals use visual signals (such as their colour patterns) for many purposes, including finding a mate and avoiding being eaten. Illusions can play a role in many of these scenarios.
Great bowerbirds could be the ultimate illusory artists: males construct forced-perspective illusions that make them more attractive to mates. As with the Greek architects’ columns, this illusion may affect the female’s perception of size.
Animals may also change their perceived size by changing their social surroundings. Female fiddler crabs prefer to mate with large-clawed males. When a male has two smaller-clawed males on either side of him, he is more attractive to a female (because he looks relatively larger) than if he were surrounded by two larger-clawed males.
This effect is known as the Ebbinghaus illusion, and suggests that males may easily manipulate their perceived attractiveness by surrounding themselves with less attractive rivals. However, there is not yet any evidence that male fiddler crabs actively move to court near smaller males.
We still know very little about how non-human animals process visual information, so the perceptual effects of many illusions remain untested. There is variation among species in how illusions are perceived, highlighting that every species occupies its own unique perceptual world with different sets of rules and constraints. But the 19th-century physiologist Johannes Purkinje was onto something when he said: “Deceptions of the senses are the truths of perception.”
In the past 50 years, scientists have become aware that the sensory abilities of animals can be radically different from our own. Visual illusions (and those in the non-visual senses) are a crucial tool for determining what perceptual assumptions animals make about the world around them.
Bringing together significant work on all aspects of the subject, Behavioral Ecology is broad-based and covers both empirical and theoretical approaches. Studies on the whole range of behaving organisms, including plants, invertebrates, vertebrates, and humans, are welcomed.
Image credit: Duck-Rabbit illusion, by Jastrow, J. (1899). Public domain via Wikimedia Commons.
Cognitive impairment is a common problem in older adults, and one which increases in prevalence with age, with or without the presence of pathology. Persons with mild cognitive impairment (MCI) have difficulties in daily functioning, especially in complex everyday tasks that rely heavily on memory and reasoning. This potentially affects the safety and quality of life of the person with MCI and increases the burden on care-givers and on society as a whole. Individuals with MCI are at high risk of progressing to Alzheimer’s disease (AD) and other dementias, with a reported conversion rate of up to 60-100% in 5-10 years. This signifies the need to identify effective interventions to delay or even revert disease progression in populations with MCI.
At present, there is no proven or established treatment for MCI, although the beneficial effects of physical activity/exercise in improving the cognitive functions of older adults with cognitive impairment or dementia have long been recognized. Exercise regulates different growth factors which facilitate neuroprotection and anti-inflammatory effects on the brain. Studies have also found that exercise promotes cerebral blood flow and improves learning. However, recent reviews report that the evidence for the effects of physical activity/exercise on cognition in older adults is still insufficient.
Surprisingly, studies have found that although numerous new neurons can be generated in the adult brain, about half of the newly generated cells die during the first 1-4 weeks. Nevertheless, research has also found that spatial learning or exposure to an enriched environment can rescue the newly generated immature cells and promote their long-term survival and functional connection with other neurons in the adult brain.
It has been proposed that exercise in the context of a cognitively challenging environment induces more new neurons and benefits the brain more than exercise alone. A combination of mental and physical training may have additive effects on the adult brain, which may further promote cognitive functions.
Daily functional tasks are innately cognitively demanding and involve components of stretching, strengthening, balance, and endurance, as seen in traditional exercise programs. In particular, visuospatial functional tasks, such as locating a key or finding one’s way through a familiar or new environment, demand complex cognitive processes and play an important part in everyday living.
In our recent study, a structured functional tasks exercise program, using placing/collection tasks as a means of intervention, was developed to compare its effects on cognition with those of a cognitive training program in a population with mild cognitive impairment.
Patients with subjective memory complaint or suspected cognitive impairment were referred by the Department of Medicine and Geriatrics of a public hospital in Hong Kong. Older adults (age 60+) with mild cognitive decline living in the community were eligible for the study if they met the inclusion criteria for MCI. A total of 83 participants were randomized to either a functional task exercise (FcTSim) group (n = 43) or an active cognitive training (AC) group (n = 40) for 10 weeks.
We found that the FcTSim group had significantly higher improvements in general cognitive functions, memory, executive function, functional status, and everyday problem solving ability, compared with the AC group, at post-intervention. In addition, the improvements were sustained during the 6-month follow-up.
Although the functional tasks involved in the FcTSim program are simple placing/collection tasks that most people do in their everyday life, complex cognitive interplay is required to see, reach, and place the objects in the target positions. Indeed, these goal-directed actions require the integration of information (e.g. object identity and spatial orientation) and simultaneous manipulation of that integrated information, placing intensive loads on attentional and executive resources. In fact, misplacing objects is commonly reported in MCI and AD.
Importantly, we need to appreciate that simple daily tasks can be cognitively challenging to persons with cognitive impairment. It is important to firstly educate the participant as well as the carer about the rationale and the goals of practicing the exercise in order to initiate and motivate their participation. Significant family members or caregivers play a vital role in the lives of persons with cognitive impairment, influencing their level of activities and functional interaction in their everyday environment. Once the participants start and experience the challenges in performing the functional tasks exercise, both the participants and the carer can better understand and accept the difficulties a person with cognitive impairment can possibly encounter in his/her everyday life.
Furthermore, we need to be aware that task demands will decrease once the task becomes more automatic through practice. The novelty of the practiced task has to be maintained in order to ensure a task demand that allows successful performance and maintains an advantage for the intervention. Novelty can be maintained in an existing task by adding unfamiliar features, so that performance of the task remains challenging and does not become subject to automation.
Dr. Lawla Law has been a practicing Occupational Therapist for more than 24 years, with extensive experience in acute and community settings in Hong Kong and Tasmania, Australia. She is currently the Head of Occupational Therapy at the Jurong Community Hospital of Jurong Health Services in Singapore and will take up a position as Lecturer in Occupational Therapy at the University of the Sunshine Coast, Queensland, Australia in August 2014. Her research interests are in geriatric rehabilitation, with a special emphasis on assessments and innovative interventions for cognitive impairment. Dr. Law is an author of the paper ‘Effects of functional tasks exercise on older adults with cognitive impairment at risk of Alzheimer’s disease: a randomised controlled trial’, published in the journal Age and Ageing.
Age and Ageing is an international journal publishing refereed original articles and commissioned reviews on geriatric medicine and gerontology. Its range includes research on ageing and clinical, epidemiological, and psychological aspects of later life.
Image credit: Brain aging. By wildpixel, via iStockphoto.
The downing of Malaysian Airlines Flight MH17 on 17 July 2014 sent shockwaves around the world. The airliner was on its way from Amsterdam to Kuala Lumpur when it was shot down over Eastern Ukraine by a surface-to-air missile, killing all 298 people on board: 283 passengers, including 80 children, and 15 crew members. The victims were nationals of at least 10 different states, with the Netherlands losing 192 of its citizens.
With new information being released hourly, strong evidence indicates that the airliner was downed by a sophisticated military surface-to-air missile system, the SA-17 BUK. This self-propelled air defence system was introduced in 1980 into the armed forces of the then Soviet Union and is still in service with the armed forces of both Russia and Ukraine. There is growing suspicion that the airliner was shot down by pro-Russian separatist forces operating in the area, with one AP report having identified the presence of a rebel BUK unit in close proximity to the crash site. The United States and its intelligence services were quick to identify the pro-Russian separatists as responsible for launching the missile. This view is supported further by incriminating communications between the rebels and their Russian handlers immediately after the aircraft hit the ground, and by a now-deleted announcement on social media by the self-declared rebel commander, Igor Strelkov. The evidence points to the possibility that MH17 was mistaken for a Ukrainian military plane and targeted accordingly. Given that two Ukrainian military aircraft were shot down over Eastern Ukraine in the two days preceding 17 July 2014, that is not an unlikely possibility.
It will be crucial to establish the extent of Russia’s involvement in the atrocity. While there seems to be evidence that the rebels may have taken possession of BUK units of the Ukrainian armed forces, it seems unlikely that they would have been able to operate these systems without assistance from Russian military experts and even radar assets.
Makeshift memorial at Amsterdam Schiphol Airport for the victims of Malaysian Airlines flight MH17, which crashed in Ukraine on 17 July 2014, killing all 298 people on board. Photo by Roman Boed. CC BY 2.0 via romanboed Flickr.
Russia was quick to shift the blame onto Ukraine itself, asking why civil aircraft hadn’t been barred completely from overflying the region and directly blaming Ukraine’s aviation authorities during the emergency meeting of the UN Security Council (UNSC) on 18 July 2014. Russia even went so far as to blame Ukraine indirectly for shooting down MH17, by comparing the incident with the accidental shooting down of a Russian civilian airliner en route from Tel Aviv to Novosibirsk in 2001. Despite Russia’s call for an independent investigation of the incident, Moscow’s rebels reportedly and actively blocked international observers from the OSCE from accessing the site.
While any civilian airliner crash is a catastrophe, and in cases of terrorist involvement an international crime, the shooting down of a passenger jet by a state is particularly shocking: it always affects non-combatants and falls outside the parameters of lawful military action (such as distinction, necessity, and proportionality). Any such act would lead to global condemnation and would hurt the perpetrator state’s international reputation. Consequently, there have been only a few such incidents over the last 60 years.
What could be the possible consequences? The rebels are still formally Ukrainian citizens and as such subject to Ukraine’s criminal judicial system, according to the active personality principle. Such a prosecution could extend to the Russian co-rebels, as Ukraine could exercise its jurisdiction as the state where the crime was committed, under the territoriality principle. In addition, prosecutions could be initiated by the states whose citizens were murdered, under the passive personality principle of international criminal law. With the Netherlands, as the nation with the highest number of victims, having a particularly strong interest in swift criminal justice, memories of the Pan Am 103 bombing come to mind, where Libyan terrorists murdered 270 people when an airliner exploded over Lockerbie in Scotland. Following international pressure, Libya agreed to surrender key suspects to a Scottish court sitting in the Netherlands.
The establishment of an international(-ised) criminal forum for the prosecution of the perpetrators would require Russia’s cooperation, which seems unlikely given Putin’s increasing defiance of the international community’s call for justice. A prosecution by the International Criminal Court (ICC) in The Hague under its Statute, the Rome Statute, is unlikely to happen, as neither Russia nor Ukraine has ratified the Statute. A UNSC referral to the ICC — if one accepts that the murder of 298 civilians would amount to a crime against humanity or even a war crime under Article 5 of the ICC Statute — would fail, given that Russia and its new strategic partner China are veto powers on the Council and would veto any resolution for a referral.
Other responses could include the imposition of unilateral and international sanctions and embargos against Moscow and high-profile individuals. Related to such economic countermeasures is the possibility of holding Russia responsible as a state for its complicity in the shooting down of MH17; the International Court of Justice (ICJ) would be the forum where such a case against Russia could be brought by a state affected by the tragedy. An example of such an interstate case arising from a breach of international law can be found in the ICJ case Aerial Incident of 3 July 1988 (Islamic Republic of Iran v. United States of America), arising from the unlawful shooting down of Iran Air Flight 655 by the United States in 1988. The case ended with an out-of-court settlement by the US in 1996. Again, it seems quite unlikely that Russia would accept any ruling by the ICJ on the matter, and even less likely that it would comply with a damages order by the court.
One alternative could be a truly US solution to the accountability gap for Russia’s complicity in the disaster. If the US Congress were to designate the rebel groups as terrorist organizations, this would make Russia a state sponsor of terrorism and, as such, subject to US federal jurisdiction in a terrorism civil litigation case brought under the Anti-Terrorism Act (ATA – 18 USC Sections 2331-2338) as an amendment to the Alien Torts Statute (ATS/ATCA – 28 USC Section 1350). Another avenue is the so-called “State Sponsors of Terrorism” exception to the Foreign Sovereign Immunities Act (FSIA Exception – 28 USC Section 1605(a)(7)), which allows lawsuits against designated state sponsors of terrorism. The FSIA Exception of 1996 limits the defense of state immunity in cases of state-sponsored terrorism and can be seen as a direct judicial response to the growing threat of international state-sponsored terrorism directed against the United States and her citizens abroad, as exemplified in Flatow v. Islamic Republic of Iran (76 F. Supp. 2d 28 (D.D.C. 1999)). Utilising US law to bring a civil litigation case against Russia as a designated state sponsor of international terrorism would certainly send a strong signal to Putin; it remains to be seen whether the US call for stronger unified sanctions against Russia will translate into such unilateral action.
Time will tell if the downing of MH17 will turn out to be a Lusitania moment for Russia’s relations with the West (the sinking of the British passenger ship Lusitania by a German U-boat, with significant loss of US lives, contributed to the entry of the US into World War I), which might pave the way to a new ‘Cold War’ along new conflict lines with different allies and alliances. What has already become clear is Russia’s potential new role as a state sponsor of terrorism.
Sascha-Dominik Bachmann is an Associate Professor in International Law (Bournemouth University); State Exam in Law (Ludwig-Maximilians Universität, Munich), Assessor Jur, LL.M (Stellenbosch), LL.D (Johannesburg). Sascha-Dominik is a Lieutenant Colonel in the German Army Reserves and had multiple deployments in peacekeeping missions in operational and advisory roles as part of NATO/KFOR from 2002 to 2006. During that time he was also an exchange officer to the 23rd US Marine Regiment. He wants to thank Noach Bachmann for his input. This blog post draws from Sascha’s article “Targeted Killings: Contemporary Challenges, Risks and Opportunities” in the Journal of Conflict & Security Law, which is available to read for free for a limited time. Read his previous blog posts.
The Journal of Conflict & Security Law is a refereed journal aimed at academics, government officials, military lawyers and lawyers working in the area, as well as individuals interested in the areas of arms control law, the law of armed conflict and collective security law. The journal aims to further understanding of each of the specific areas covered, but also aims to promote the study of the interfaces and relations between them.
Adaptation to climate change is currently high on the agenda of EU bureaucrats exploring the regulatory scope of the topic. Climate change may bring about changes in the frequency of extreme weather events such as heat waves, floods, or thunderstorms, which in turn may require us to adapt our living conditions. Adaptation cannot stop climate change, but it can reduce its cost. Building dikes protects the landscape from rising sea levels. New vaccines protect the population from diseases that may spread as the climate changes. Leading politicians, the media and prominent interest groups call for more efforts in adaptation.
But who should be in charge? Do governments have to play a leading role in adaptation? Will firms and households make the right choices? Or do governments have to intervene to correct insufficient or false adaptation choices? If intervention is necessary, will the policy have to be decided on a local level or on a national or even supranational (EU) level? In a recent article we review the main arguments for government intervention in climate change adaptation. Overall, we find that the role of the state in adaptation policy is limited.
In many cases, adaptation decisions can be left to private individuals or firms. This is true if private sector decision-makers both bear the cost and enjoy the benefits of their own decisions. Superior insulation of buildings is a good example. It shields the occupants of a building from extreme temperatures during cold winters and hot summers. The occupants – and only the occupants – benefit from the improved insulation. They also bear the costs of the new insulation. If the benefit exceeds the cost, they will invest in the superior insulation. If it does not pay off, they will refrain from the adaptation measure (and they should do so from an efficiency point of view). There is no need for government intervention in the form of building regulation or rehabilitation programmes.
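The occupants’ calculus can be sketched as a simple discounted cost-benefit comparison. The figures below are purely illustrative assumptions, not from the article:

```python
def npv_of_insulation(upfront_cost, annual_saving, years, discount_rate):
    """Net present value of an insulation upgrade: the discounted
    stream of annual energy savings minus the one-off upfront cost."""
    pv_savings = sum(annual_saving / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return pv_savings - upfront_cost

# Hypothetical household: 10,000 paid upfront, 700 saved per year,
# a 25-year horizon, and a 4% discount rate.
npv = npv_of_insulation(10_000, 700, 25, 0.04)
invest = npv > 0  # invest only if discounted benefits exceed the cost
```

With these numbers the investment just pays off; shorten the horizon to ten years and the same household would rationally refrain — and in either case the private decision is efficient without government intervention.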
In some other cases, adaptation affects an entire community, as with dikes. A single household will hardly be able – nor have the incentive – to build a dike of the appropriate size. But the local municipality can and should be able to do so. All inhabitants of the municipality can share the costs and appropriate the benefit of flood protection. The decision on the dike could be made at the state level if not at the municipal level. The local population will probably have long-standing experience and superior knowledge of flood events and their potential damage. The subsidiarity principle, a major principle of policy task assignment in the European Union, suggests that decisions should be made at the most decentralized level at which there are no major externalities between decision-makers. In the case of the dike, the appropriate level for the adaptation measure would be the municipality. Again, there is no need for intervention from upper-level governments.
So what role is left for the upper echelons of government in climate change adaptation? Firstly, the government has to help in improving our knowledge. Information about climate change and information about technical adaptation measures are typical public goods: the cost of generating the information has to be incurred once, whereas the information can be used at no additional cost. Without government intervention, too little information would be generated. Therefore, financing basic research in this area is one of the fundamental tasks for a central government.
Secondly, the government has to provide the regulatory framework for insurance markets. The economic consequences of natural disasters can be cushioned through insurance markets. However, the incentives to buy insurance are insufficient for several reasons. For instance, whenever a major disaster threatens the economic existence of a larger group of citizens, the government is under social pressure and will typically provide help to all those in need. Anticipating government support in case of a disaster, citizens have little or no incentive to buy insurance in the market. Why should they pay the premium for private insurance, or invest in self-insurance or self-protection measures, if they enjoy a similar amount of free protection from the government? If the government wants to avoid being pressured for disaster relief, it has to make disaster insurance mandatory. And to induce citizens to undertake the appropriate amount of self-protection, insurance premiums have to be differentiated according to local disaster risks.
Thirdly, fostering growth helps societies cope with the consequences of climate change and facilitates adaptation. Poor societies and population groups with low levels of education have the highest exposure to climate change, whereas richer societies have the means to cope with its implications. Hence, economic growth – properly measured – and education should not be dismissed easily, as they act as powerful self-insurance devices against the uncertain future challenges of climate change.
Kai A. Konrad is Director at the Max Planck Institute for Tax Law and Public Finance. Marcel Thum is Professor of Economics at TU Dresden and Director of ifo Dresden. They are the authors of the paper ‘The Role of Economic Policy in Climate Change Adaptation’ published in CESifo Economic Studies.
CESifo Economic Studies publishes provocative, high-quality papers in economics, with a particular focus on policy issues. Papers by leading academics are written for a wide and global audience, including those in government, business, and academia. The journal combines theory and empirical research in a style accessible to economists across all specialisations.
Subscribe to the OUPblog via email or RSS.
Subscribe to only business and economics articles on the OUPblog via email or RSS.
Image credit: Flooding, July 2007, by Mat Fascoine. CC-BY-SA-2.0 via Wikimedia Commons.
The recent firing of Jill Abramson, the first female executive editor of the New York Times, after less than three years on the job, focused the news cycle on gender inequity, with discussions of glass cliffs (women get shorter leashes even when they get the top jobs) and reports showing the persistence of glass ceilings and pay disparities (e.g. Abramson was paid less than her male predecessor). In the United States, women now represent a substantial majority of those earning advanced degrees. Yet as we look higher and higher up the ladders of career attainment, we see smaller and smaller percentages of women – as well as the persistence of pay gaps for women, even in senior positions. In other words, even as women break through one glass ceiling, they encounter another on the next rung.
Take law firms. Women make up almost half of US law school graduates (up from 5% in 1950). But they represent only 20% of US law firm partners and an even smaller share (16%) of the more elite class of equity partners. And the higher one looks within the partnership stratosphere, the less diverse it gets. Furthermore, the leaders of the profession, as well as clients of law firms, express frustration with the slow pace of progress in generating more gender and ethnic equality at the top of the profession. These efforts can be aided by improving our understanding of the work and career processes within law firms and, by extension, partnerships in other professional fields, such as accounting, consulting, and investment banking.
So how exactly do partners rise to different levels within the partnership hierarchy, and how do those processes challenge female partners? To date, researchers have analyzed the challenge of becoming a partner, but we know curiously little about how professional careers unfold after that. Although partners at large law firms may all be one-percenters, they are certainly not equal, with distinctions made between equity and non-equity partners, and recent surveys showing some “super-partners” earn up to 25 times more than their peers.
To get at these questions, we studied how partners gain power within a partnership, as measured by their “book of business” – the fees paid to the firm by clients with whom the partner holds the primary relationship. The more client revenue a partner is responsible for, the more influence that partner will hold in the firm, the more respect they will command, and the more career mobility options they will generate in the wider profession. To understand power in a partnership, then, is to understand how partners come to obtain books of business.
What we found was intriguing. In short, although women may be disadvantaged in a primary “path to power” in the partnership, they may have opportunities along a second pathway of growing importance.
The primary pathway involves “inheriting” clients from an established power partner. To build a book of business, one needs to either pursue that strategy, or the alternative of “making rain” by bringing new clients to the firm. A newly minted partner thus needs to decide which path to invest in—or how much to invest in each path. Do you spend time working for clients of power partners nearing retirement—or pounding the pavement (or the cocktail circuit) seeking new clients of your own? Of course, each path has its risks. Investing in the inheritance path can backfire, for example, if a retiring benefactor bequeaths a client to a rival partner. And the rainmaking strategy can backfire if nibbles of new-client business don’t eventually turn into a large revenue stream for the firm. Since both investments require time and energy, what’s the optimal career strategy?
Deepening the puzzle, both paths are also likely to pose particular challenges to female attorneys, as they depend on forming social relationships with either the senior power partners or with decision makers at potential new client firms. Much research shows the existence of “homophily” in interpersonal relationships, or the tendency for people to be drawn to and feel greater affinity for people who are like themselves in terms of race and gender. So where senior partners and/or client decision makers are largely male, female junior partners may be at a disadvantage in forming the bonds of affinity or trust that help win the client business.
Analysis of the internal records of law firms shows, unsurprisingly, that female partners have smaller books of business than their male peers. More interestingly, though, we are finding that the rate of return on investments in the two paths to power differs between men and women. In fact, the inheritance strategy appears to be a particularly poor investment for women. For women, larger investments in the inheritance path are associated with lower future books of business. Why? We speculate this could be because of “selective affinity.” That is, when it comes time for the power partners to pass on their clients, they may unconsciously favor partners who are more demographically similar to them.
Yet, when it comes to the rainmaking strategy, the opposite may be true. For female partners, investments in the rainmaking path appear to pay handsomely – in fact, even better than for male partners. Why could that be? Perhaps female partners recruit new clients in different ways than male partners, or perhaps “selective affinity” can actually favor female partners in the open marketplace (rather than the closed ecosystem of the firm’s internal networks).
What does it all mean? First off, for partnerships, there may be considerable value in studying the inheritance and rainmaking processes going on in their own organizations. Virtually all firms now have the relevant internal data waiting to be analyzed. Second, our findings are important for managing diversity in partnerships. For example, the results suggest there could be a “double payoff” to supporting rainmaking efforts for newly-made female partners – double in the sense of the firm’s overall revenue generation as well as diversity goals.
What is the role of a regional oral history organization?
The Board of Officers of Oral History in the Mid-Atlantic Region (OHMAR) recently wrestled with this question over the course of a year-long strategic planning process. Our organization had reached an inflection point. New technologies, shifting member expectations, and changing demographics compelled us to re-think our direction. What could we offer new and existing members that local or national organizations did not, and how would we offer it?
Our strategic planning committee set out to answer these questions, and to chart a course for 2014 and beyond. Four board members served on the committee: Kate Scott of the Senate Historical Office; LuAnn Jones of the National Park Service; Anne Rush of the University of Maryland; and myself, of the Library of Congress, acting as director. OHMAR dates back to 1976 and has been a vibrant organization for nearly 40 years. Therefore, our goal was not to re-invent but rather to re-focus. To start, we identified OHMAR’s core values. We determined them to be:
Whatever our new direction, we would stay true to these ideals.
For months, the committee discussed how OHMAR could better serve members with these values in mind. We also polled membership and consulted with past organization presidents about what they valued in OHMAR and what they wanted in the future. What emerged was a plan with several key considerations for how any regional organization can serve its membership:
Build community. Through digital technology, formal and informal events, and low-cost membership, regional organizations can foster meaningful professional networks, offer support, and create opportunities for intimate interaction on an ongoing basis.
Provide targeted resources. Local knowledge can allow regional organizations like OHMAR to provide targeted educational, professional, and monetary resources. For example, oral historians working for the federal government in and around Washington, D.C., have unique challenges to which OHMAR can provide specific tools, tips, and advice.
Leverage expertise. Our region boasts tremendous expertise courtesy of oral historians such as Don Ritchie, Linda Shopes, Roger Horowitz, and more. These experts can help educate new members on best practices, especially those from fields such as journalism, the arts, public history, and advocacy.
Offer meaningful opportunities. By forming new committees, we can offer members meaningful ways to get involved and gain leadership experience.
We presented our findings in the form of a new Strategic Plan at our April 2014 annual meeting. The intimate two-day event was attended by more than 60 oral historians and reaffirmed the value of regional conferences. In fact, feedback stated that for some, ours was the best conference they had ever attended. On the afternoon of the second day, our members ratified OHMAR’s Strategic Plan for 2015-2020. Accordingly, next year, we will focus on improving our internal operations, updating our bylaws, and overhauling our website, member management system, and e-newsletter. In the following years, we will also introduce several new initiatives, including a Martha Ross Memorial Prize for students, named for our beloved founder.
Jason Steinhauer serves on the Board of Oral History in the Mid-Atlantic Region (OHMAR). He directed the organization’s strategic planning process from 2013-2014. You can follow Jason on Twitter at @JasonSteinhauer and OHMAR at @OHMidAtlantic.
The Oral History Review, published by the Oral History Association, is the U.S. journal of record for the theory and practice of oral history. Its primary mission is to explore the nature and significance of oral history and advance understanding of the field among scholars, educators, practitioners, and the general public. Follow them on Twitter at @oralhistreview, like them on Facebook, add them to your circles on Google Plus, follow them on Tumblr, listen to them on Soundcloud, or follow their latest OUPblog posts via email or RSS to preview, learn, connect, discover, and study oral history.
Major trauma impacts the lives of young and old alike. Most of us know or are aware of somebody who has suffered serious injury. In the United Kingdom over five thousand people die from trauma each year. It is the most common cause of death in people under forty. Many of the fifteen thousand people who survive major trauma suffer life-changing injuries; some will never fully recover and will require life-long care. Globally, it is estimated that injuries are responsible for sixteen thousand deaths per day, together with a large burden of people left with permanent disability. These sombre statistics are driving a revolution in trauma care.
A key aspect of the changes in trauma management in the United Kingdom and around the world is the organisation of networks to provide trauma care. People who have been seriously hurt, for example in a road traffic accident, may have suffered a head injury, injuries to the heart and lungs, abdominal trauma, broken limbs, and serious loss of skin and muscle. The care of these injuries may require specialist surgery including neurosurgery, cardiothoracic surgery, general (abdominal and pelvic) surgery, orthopaedic surgery, and plastic surgery. These must be supported by high-quality anaesthetic, intensive care, radiological, and laboratory services. Few hospitals are able to provide all of these services in one location. It therefore makes sense for the most seriously injured patients to be transported not to the nearest hospital but to the hospital best equipped to provide the care that they need. Many trauma services around the world now operate on this principle, and since 2010 these arrangements have been established in England. Hospitals are designated to one of three tiers: major trauma centres, trauma units, and local emergency hospitals. The most seriously injured patients are triaged to bypass trauma units and local emergency hospitals and are transported directly to major trauma centres. While this is a new system and some major trauma centres in England have only “gone live” in the past two years, it has already had an impact on trauma outcomes, with monitoring by the Trauma Audit and Research Network (TARN) indicating a 19% improvement in survival after major trauma in England.
Not only have there been advances in the organisation of trauma services, but there have also been advances in the immediate clinical management of trauma. In many cases it is appropriate to undertake “early definitive surgery/early total care” – that is, definitive repair of long bone fractures within twenty-four hours of injury. However, patients who have suffered major trauma often have severe physiological and biochemical derangements by the time they arrive at hospital. The concepts of damage control surgery and damage control resuscitation have emerged for the management of these patients. In this approach resuscitation and surgery are directed towards stopping haemorrhage, performing essential life-saving surgery, and stabilising and correcting the patient’s physiological state. This may require periods of surgery followed by intervals for the administration of blood and clotting factors and time for physiological recovery before further surgery is undertaken. The decision as to whether to undertake early definitive care or to institute a damage control strategy can be complex and is made by senior clinicians working together to formulate an overview of the state of the patient.
Modern radiology and clinical imaging have helped to revolutionise trauma management. There is increasing evidence to suggest that early CT scanning may improve outcome in the most unstable patients by identifying life-threatening injuries and directing treatment. When a source of bleeding is identified it may be treated surgically, but in many cases interventional radiology – the placement of glue or metal coils into blood vessels to stop the bleeding – offers an alternative and less invasive solution.
The evolution of the trauma team is at the core of modern trauma management. Advances in resuscitation, surgery, and imaging have undoubtedly moved trauma care forward. However, the care of the unstable, seriously injured patient is a major challenge. Transporting someone who is suffering serious bleeding to and from the CT scanner requires excellent teamwork; parallel working so that several tasks are carried out at the same time requires coordination and leadership; making the decision between damage control and definitive surgery requires effective joint decision-making. The emergence of modern trauma care has been matched by the development of the modern trauma team and of specialists dedicated to the care of seriously injured patients. It is to this, above all, that the increasing numbers of survivors from serious trauma owe their lives.
Dr Simon Howell is on the Board of the British Journal of Anaesthesia (BJA) and is the Editor of this year’s Postgraduate Educational Issue: Advances in Trauma Care. This issue contains a series of reviews that give an overview of the revolution in trauma care. The reviews expand on a number of presentations given at a two-day meeting on trauma care organised by the Royal College of Anaesthetists in the spring of 2014. They follow the trauma patient’s journey from the moment of injury to care in the field, on to triage and arrival in a trauma centre, and finally to resuscitation and surgical care.
Founded in 1923, one year after the first anaesthetic journal was published by the International Anaesthesia Research Society, the British Journal of Anaesthesia remains the oldest and largest independent journal of anaesthesia. It became the Journal of The College of Anaesthetists in 1990. The College was granted a Royal Charter in 1992. Since April 2013, the BJA has also been the official Journal of the College of Anaesthetists of Ireland and members of both colleges now have online and print access. Although there are links between BJA and both colleges, the Journal retains editorial independence.