Blog Posts Tagged with: journals, Most Recent at Top
Results 1 - 25 of 152
1. World Water Monitoring Day 2014

World Water Monitoring Day is an annual celebration reaching out to the global community to build awareness and increase involvement in the protection of water resources around the world. The hope is that individuals will feel motivated and empowered to investigate basic water monitoring in their local area. Championed by the Water Environment Federation, the awareness day, celebrated on September 18th each year, has grown into a broader challenge. Simple water testing kits are available, and individuals are encouraged to go out and test the quality of local waterways.

Water monitoring can refer to anything from assessing whether a particular water source is fit for drinking, to taking more responsibility for our own consumption of water as an energy source, to the technology needed for alternative energies. Discover more about water issues from around the world.

Image credit: Ocean beach at low tide against the sun, by Brocken Inaglory. CC-BY-3.0 via Wikimedia Commons

Published on OUPblog, 17 September 2014.
2. RestUK, international law, and the Scottish referendum

With Scotland voting on independence on 18 September 2014, the UK coalition government sought advice on the relevant law from two leading international lawyers, James Crawford and Alan Boyle. Their subsequent report has a central argument. An independent Scotland would be separatist, breaking away from the remainder of the UK. Therefore, the latter (known as restUK or rUK) would be the continuator state – enjoying all the rights and duties of the existing UK – while Scotland would be a new state having none of rUK's rights and, in particular, no membership of the international organizations it currently enjoys as part of the UK. The bargaining power of rUK as to what it might concede of the UK's rights would be complete, e.g. with respect to a common currency. This legal opinion has created a confrontational atmosphere around the referendum vote and caused anxiety among Scottish voters about to 'jump into the unknown'.

It is essential to unpack the distracting complexity of this expert international law advice. Firstly, Crawford and Boyle gloss over the actual legal circumstances of the contract of union between Scotland and England, in particular that the Union was a bargain among powers equal in the eyes of international law at that time. More specifically, the England which, with Wales, concluded the Treaty of Union is exactly the same entity standing opposite to Scotland now as then (leaving aside the North of Ireland, which has the option under the Belfast Agreement of leaving the UK by referendum).

There is no international standard, in the event of a dissolution of a union, which can provide any objective criterion to determine that Scotland is the breakaway entity. In international law, recognition of new states is largely a matter of the political discretion of existing states. It depends on an international consensus, or lack of it, where political preference may or may not trump any possibly objective standard of political legitimacy, e.g. self-determination by democratic consent. The vast amount of state practice which Crawford and Boyle's legal opinion displays is misleading insofar as there is, in fact, no definitive legal marker of guidance. This is shown by the fact that the only ground offered for treating England as the continuator state is that it is larger than Scotland. Legally, there has to be a continuator state; since this obviously cannot be Scotland, it must be England. Even Scotland assumes this to be the case.

Scottish Parliament Building. © andy2673 via iStock.

It is necessary to focus upon an international legal history of the individual states, rather than the more general international law offered by Crawford and Boyle. The Anglo-Scottish Union displays a phenomenon that Linda Colley has referred to as the composite state. This is where two or more sovereign nations agree to merge their highest governmental level institution (parliament) into a single state made up of several nations – a state-nation – but other lesser local institutions might remain. In the Europe of the 15th to the 17th century this was a common phenomenon, the most celebrated being in Scandinavia, involving Sweden, Denmark and Norway in a variety of partnerships from the Kalmar Union (1397) onwards. The logic of these partnerships was that they were always open to renegotiation. Now, this is precisely what the English generously recognize in the Edinburgh Agreement. The logic of the composite state does not cover the many cases in which a core nation forms itself into a state and then jealously guards its territorial integrity against dissident minorities, which are then regarded as separatist and destructive of national unity. It is possible that an aura of this type of scenario runs through the legal opinion of Crawford and Boyle, although they have to accept the consensual context of the advice they are being asked to give.

The real issues facing Scotland have to be confronted on a basis of equality and mutual consent in accordance with the international law established as apposite for this case. These issues are a matter of history, not merely that of the 17th-18th century, but also the evolution of the 1707 Treaty of Union (implemented through separate Acts of Union passed in the Scottish and English Parliaments) to the very recent past – especially the Thatcher years and the neo-liberal revolution in English-dominated UK politics. It has to be recognized that there are profound differences of social philosophy now between Scotland and England around the issue of neo-liberalism and the defense of community. These provide good reasons to revisit that 1707 bargain. This revisiting should be on the basis of complete equality. The sharing of common institutions of the United Kingdom, such as the currency, would have to be negotiated after reaching an agreement in which neither side – as so-called continuator state – would have a higher standing.

Published on OUPblog, 16 September 2014.
3. Out with the old?

Innovation is a primary driver of economic growth and of the rise in living standards, and a substantial body of research has been devoted to documenting the welfare benefits from it (an example being Trajtenberg's 1989 study). Few areas have experienced more rapid innovation than the personal computer (PC) industry, with much of this progress being associated with a particular component, the Central Processing Unit (CPU). The past few decades have seen a consistent process of CPU innovation, in line with Moore's Law: the observation that the number of transistors on an integrated circuit doubles every 18-24 months (see figure below). This remarkable innovation process has clearly benefitted society in many, profound ways.

“Transistor Count and Moore's Law – 2011” by Wgsimon – Own work. CC BY-SA 3.0 via Wikimedia Commons.
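
As a rough sense of scale (a back-of-the-envelope sketch in Python, not taken from the post), the stated doubling rate compounds into roughly a 30- to 100-fold increase in transistor counts per decade:

```python
# Rough illustration of Moore's Law as stated above: doubling every 18-24 months.
for doubling_months in (18, 24):
    growth_per_decade = 2 ** (120 / doubling_months)  # 120 months = 10 years
    print(f"Doubling every {doubling_months} months -> "
          f"roughly {growth_per_decade:,.0f}x more transistors per decade")
```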

A notable feature of this innovation process is that a new PC is often considered “obsolete” within a very short period of time, leading to the rapid elimination of non-frontier products from the shelf. This happens despite the heterogeneity of PC consumers: while some (e.g., engineers or gamers) have a high willingness-to-pay for cutting edge PCs, many consumers perform only basic computing tasks, such as word processing and Web browsing, that require modest computing power. A PC that used to be on the shelf, say, three years ago, would still adequately perform such basic tasks today. The fact that such PCs are no longer available (except via a secondary market for used PCs which remains largely undeveloped) raises a natural question: is there something inefficient about the massive elimination of products that can still meet the needs of large masses of consumers?

Consider, for example, a consumer whose currently-owned, four-year old laptop PC must be replaced since it was severely damaged. Suppose that this consumer has modest computing-power needs, and would have been perfectly happy to keep using the old laptop, had it remained functional. This consumer cannot purchase the old model since it has long vanished from the shelf. Instead, she must purchase a new laptop model, and pay for much more computing power than she actually needs. Could it be, then, that some consumers are actually hurt by innovation?

A natural response to this concern might be that the elimination of older PC models from the shelves likely indicates that demand for them is low. After all, if we believe in markets, we may think that high levels of demand for something would provide ample incentives for firms to offer it. This intuition, however, is problematic: as shown in seminal theoretical work by Nobel Prize laureate Michael Spence, the set of products offered in an oligopoly equilibrium need not be efficient due to the misalignment of private and social incentives. The possibility that yesterday’s PCs vanish from the shelf “too fast” cannot, therefore, be ruled out by economic theory alone, motivating empirical research.

A recent article addresses this question by applying a retrospective analysis of the U.S. home personal computer market during the years 2001-2004. Data analysis is used to explore the nature of consumers' demand for PCs, and firms' incentives to offer different types of products. Product obsolescence is found to be a real issue: the average household's willingness-to-pay for a given PC model is estimated to drop by US$257 as the model ages by one year. Nonetheless, substantial heterogeneity is detected: some consumers' valuation of a PC drops at a much faster rate, while from the perspective of other consumers, PCs become "obsolete" at a much slower pace.

Laptop and equipment. Public domain via Pixabay.

The paper focuses on a leading innovation: Intel's introduction of its Pentium M® chip, widely considered a landmark in mobile computing. This innovation is found to have crowded out laptops based on older Intel technologies, such as the Pentium III® and Pentium 4®. It is also found to have made a substantial contribution to aggregate consumer surplus, boosting it by 3.2-6.3%.

These substantial aggregate benefits were, however, far from being uniform across different consumer types: the bulk of the benefits were enjoyed by the 20% least price-sensitive households, while the benefits to the remaining 80% were small and sometimes negligible. The analysis also shows that the benefits from innovation could have “trickled down” to the masses of price-sensitive households, had the older laptop models been allowed to remain on the shelf, alongside the cutting-edge ones. This would have happened since the presence of the new models would have exerted a downward pressure on the prices of older models. In the market equilibrium, this channel is shut down, since the older laptops promptly disappear.

Importantly, while the analysis shows that some consumers benefit from innovation much more than others, no consumers were found to be actually hurt by it. Moreover, the elimination of the older laptops was not found to be inefficient: the social benefits from keeping such laptops on the shelf would have been largely offset by fixed supplier costs.

So what do we make of this analysis? The main takeaway is that one has to go beyond aggregate benefits and consider the heterogeneous effects of innovation on different consumer types, and the possibility that rapid elimination of basic configurations prevents the benefits from trickling down to price-sensitive consumers. Just the same, the paper’s analysis is constrained by its focus on short-run benefits. In particular, it misses certain long-term benefits from innovation, such as complementary innovations in software that are likely to trickle down to all consumer types. Additional research is, therefore, needed in order to fully appreciate the dramatic contribution of innovation in personal computing to economic growth and welfare.

Published on OUPblog, 14 September 2014.
4. Increasing income inequality

Quite abruptly, income inequality has returned to the political agenda as a prominent societal issue. At least part of this can be attributed to Piketty's provocative premise of rising concentration at the top end of the income and wealth distribution in Capital in the Twenty-First Century (2014), providing some academic ground for the 'We are the 99 percent' Occupy movement slogan. Yet this revived interest in inequality rests on broader concerns than concentration at the very top alone. There is growing evidence that earnings in the bottom and the middle of the distribution have hardly risen, if at all, during the last 20 years or so. Incomes are becoming more dispersed not only at the top, but also more generally within developed countries.

We should distinguish between increasing concentration at the top and the rise of inequality across the entire population. Even though both developments might take place simultaneously, the causes, consequences, and possible policy responses differ.

The most widely accepted explanation for rising inequality across the entire population is so-called skill-biased technological change. Current technological developments are particularly suited for replacing routine jobs, which disproportionally lie in the middle of the income distribution. In addition, low- and middle-skilled manufacturing jobs are gradually being outsourced to low-wage countries (see for instance Autor et al., 2013). Decreasing influence of trade unions and more decentralised levels of wage coordination are also likely to play a role in creating more dispersed earnings patterns.

Increased globalisation or technological change are not likely to be the main drivers of rising top income shares, though the larger size of markets allows for higher rewards at the top. Since the rise of top income shares was especially an Anglo-Saxon phenomenon, and as the majority of the top 1 per cent in these countries comes from the financial sector, executive compensation practices play a role. Cuts in top marginal tax rates implemented in these countries, and inherited wealth, are potentially important as well.

So should we care about these larger income differences? At the end of the day this remains a normative question. Yet whether higher levels of inequality have negative effects on the size of our total wealth is a more technical issue, albeit not a less contested one in political economy. Again, we should differentiate between the effects of increasing concentration at the top and of broader, higher levels of inequality. To start with the latter, higher dispersion could incite people to put forth additional effort, as the rewards will be higher as well. Yet when inequality of income disequalises opportunities, there will be an economic cost, as Krugman also argues. Investment in human capital, for instance, will be lower, as Standard & Poor's notes for the US.

Coins on a scale, © asafta, via iStock Photo.

High top income shares do not lead to suboptimal human capital investment, but will disrupt growth if the rich use their wealth for rent-seeking activities. Both Stiglitz and, in Winner-Take-All Politics (2010), Hacker and Pierson argue that this indeed takes place in the US. On the other hand, a concentration of wealth could facilitate large and risky investments with positive externalities.

If large income differences indeed come at the price of lower total economic output, then the solution seems simple: redistribute income from the rich to the poor. Yet, both means-tested transfers and progressive taxes based on economic outcomes such as income will negatively affect economic growth as they lower the incentives to gain additional wealth. It might thus be that ‘the cure is worse than the disease’, as the IMF phrases this dilemma. Nevertheless, there can be benefits of redistribution in addition to lessening any negative effects of inequality on growth. The provision of public insurance could have stimulating effects by allowing individuals to take risks to generate income.

Where do we go from here? First of all, examining whether inequality or redistribution affects growth requires data that make a clean distinction between inequality before and after redistribution, across countries and over time. There are interesting academic endeavours trying to decompose inequality into a part resulting from differences in effort and a part due to fixed circumstances, such as gender, race, or parents' educational level. This can help us understand which 'types' of inequality negatively affect growth and which might boost it. Moreover, redistribution itself can be achieved through multiple means, some of which, such as higher inheritance taxes, are likely to be more pro-growth than others, such as higher income tax rates.
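
To make the before/after-redistribution distinction concrete, here is a minimal Python sketch using the Gini coefficient, one standard inequality measure. The lognormal income draws, the flat-tax-plus-lump-sum scheme, and the 25% rate are illustrative assumptions, not figures from the post:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient from a vector of incomes (0 = perfect equality)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # Standard formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(0)
market_income = rng.lognormal(mean=10.0, sigma=0.8, size=10_000)   # pre-redistribution

# A stylised redistribution scheme: a flat tax financing an equal lump-sum transfer.
tax_rate = 0.25
disposable_income = (1 - tax_rate) * market_income + tax_rate * market_income.mean()

print(f"Gini before redistribution: {gini(market_income):.3f}")
print(f"Gini after redistribution:  {gini(disposable_income):.3f}")
```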

All things considered, whether inequality or redistribution hampers growth is too broad a question. Inequality at which part of the distribution, due to what economic factors, and how the state intervenes all matter a great deal for total growth.

Published on OUPblog, 10 September 2014.
5. Migratory patterns: H-OralHist finds a new home on H-Net Commons

It is hard to believe that it has been nearly one year now since I was approached with a unique opportunity. I was working as a newly appointed staff member of the Baylor University Institute for Oral History (BUIOH) when then-Senior Editor Elinor Maze asked if I would be interested in joining the ranks of H-OralHist and guiding the listserv's transition to a new web-based format, the H-Net Commons.

My journey began with a nomination to the H-OralHist editorial team, a journey I took with BUIOH Editor Michelle Holland. For the uninitiated, H-OralHist originally served as an e-mail subscription listserv for those interested in current topics in the field of oral history. Participants could submit a question, a news announcement, or details on an upcoming conference or event, and H-OralHist would circulate that information to its membership. For every topic, members had the ability to respond and provide further information or answers as they saw fit. The H-OralHist editors would moderate this discussion, making sure the flow of information stayed relevant.

After our induction in October 2013, Michelle and I became the first editors trained in the new web-based system. Previously, editors merely interacted with listserv members via e-mail exchanges using the H-Net mail server. The H-Net Commons has a much more robust interface to navigate, including both the public face through which the entire membership interacts and the back-end review system where editors select and work with submissions. The new features and training are quite substantial. H-Net Commons now provides multiple avenues of interaction, ranging from the familiar discussion posts to the ability to upload photos, write blog posts, and more.

While Michelle took the editorial reins of H-OralHist in early 2014, still operating under the old listserv system, I worked with the H-Net administrators to prepare our list for migration to the new Commons platform. In late March, it was our turn in the migration schedule, and we went live on the new platform in April 2014. Michelle and I worked out the initial bugs, and pretty soon the conversations were flowing again. Users of the new H-OralHist may now choose how they stay on top of new discussions. They can continue to have individual topics pop up in their e-mail inbox, receive daily digest summaries, or work exclusively with the new online platform. The Commons functions much like a typical online forum now, allowing one to reply to discussions from the topic page. For those interested, the archive of prior discussions still exists and is available from the splash page sidebar under "Discussion Logs."

At the moment, the remainder of the H-OralHist editorial team is working through the new training. We have had one successful editorial transition already this summer, with two more planned for the rest of the year. My hope is that as we enter 2015, the entire staff will have the necessary experience under their belts and editorial shifts will proceed like clockwork. As for me, I am currently revisiting the old resource materials and adding/cleaning links to the various oral history collections and centers across the world. Additionally, with the help of Oral History Association President Cliff Kuhn, we have planned an H-OralHist open forum event for this year’s annual meeting in Madison, WI. It is scheduled for noon on Thursday, October 9th. It will be an opportunity for anyone — especially our 3690 subscribers — to stop by and ask questions about the new web interface or offer suggestions on what other tools we should employ on the Commons. I hope I will get an opportunity to meet many of you there as we continue the discussion on the future of this invaluable resource we call H-OralHist!

Headline image credit: Migrating birds. Public domain via Pixabay.

Published on OUPblog, 5 September 2014.
6. The crossroads of sports concussions and aging

The consequences of traumatic brain injury (TBI) are sizable in both human and economic terms. In the USA alone, about 1.7 million new injuries happen annually, making TBI the leading cause of death and disability in people younger than 35 years of age. Survivors usually exhibit lifelong disabilities involving both motor and cognitive domains, leading to an estimated annual cost of $76.5 billion in direct medical services and loss of productivity in the USA. This issue has received even more intense scrutiny in the popular media with respect to sports-related concussions, where there is a proposed link between having suffered multiple injuries, regardless of severity, and later neurodegeneration. At present, there is a dearth of evidence to either support or undermine the role of sports concussions in the later development of neurodegenerative processes, much less the influence of those brain injuries on the normal aging process.

While most people agree that no two concussions are alike, they all share at least one feature in common: the near-instant transfer of kinetic energy to the brain. The brain absorbs kinetic energy as a result of acceleration forces, while deceleration forces cause it to release kinetic energy when colliding with the skull. Coup-contrecoup injury is one of the oldest and best-supported biomechanical models of traumatic brain injury. Acceleration/deceleration forces can either be transferred to the brain in a straight line passing through the head's centre of gravity or in a tangential line and arc around its centre of gravity. Shearing and stretching of axons are common manifestations of inertial forces applied to the brain, and this type of damage is commonly referred to as traumatic axonal injury. Although traumatic axonal injury has been robustly demonstrated in both animal and post-mortem models of TBI, the limitations of neuroimaging techniques have long prevented us from accurately tracking projecting axonal assemblies, also called white matter fibers, in living humans. A recently emerged magnetic resonance imaging (MRI)-based tool called Diffusion Tensor Imaging (DTI) can reveal abnormalities in white matter fibers with increasing sensitivity. DTI has quickly gained in popularity among TBI researchers, who have long sought to characterize the neurofunctional repercussions of traumatic axonal injury in living humans. One particularly appealing clinical application of DTI is with athletes who have sustained sports concussions, in whom conventional MRI assessments typically turn out negative despite the persistence of long-lasting, cumulative neurofunctional symptoms. A follow-up DTI study conducted in our laboratory, first applied to young concussed athletes, revealed subtle white matter tract anomalies in the first few days after the injury and again six months later. Interestingly, these young concussed athletes were all asymptomatic at follow-up, and performance on concussion-sensitive neuropsychological tests had returned to normal.

MRT big, by Helmut Januschka. CC BY-SA 3.0 via Wikimedia Commons.

In parallel, our group became increasingly interested in characterizing the remote neurofunctional repercussions of concussions sustained decades earlier by former elite athletes now in late adulthood. Quantifiable cognitive (i.e. memory and attention) and motor function alterations were found on age-sensitive clinical tests, a finding that contrasts sharply with the full recovery typically found within a few days post-concussion in young, active athletes on equivalent neurofunctional measures. This finding was the first of many demonstrations that a remote history of sports concussion synergistically interacts with advancing age to precipitate brain function decline. These performance alterations on neuropsychological tests, specific to formerly concussed athletes, were soon found to correlate significantly with markers of structural damage restricted to ventricular enlargement and age-dependent cortical thinning. However, besides the significant interaction of age and a prior history of concussion on cortical thinning, former concussed athletes could not be differentiated from age-matched unconcussed teammates using highly sophisticated measures of grey matter morphometry. Disruptions of white matter integrity therefore appeared a likely candidate to explain the significant ventricular enlargement found in former concussed athletes. We thus turned to state-of-the-art DTI metrics to conduct the first study of white matter integrity in older but clinically normal retired athletes with a history of sports-related concussions. A particular emphasis was put on bringing together former elite athletes who were free from confounding factors, such as clinical comorbidities, drug/alcohol abuse, and genetic predisposition, that too often confound the long-term effects of concussions on brain health. Our results show that aging with a history of sports-related concussions induces a diffuse pattern of white matter anomalies affecting many major inter-hemispheric, intra-hemispheric, and projection fiber tracts. Of crucial clinical significance in relation to our previous findings on former concussed athletes, we found ventricular enlargement to correlate significantly with widespread alterations of key markers of white matter integrity, including not only peri-ventricular white matter tracts but also an extensive network of fronto-parietal connections. Most of all, these white matter integrity losses were found to be associated with altered neurocognitive functions, including memory and learning.
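
For readers unfamiliar with DTI metrics, fractional anisotropy (FA) is one commonly reported marker of white matter integrity, computed from the three eigenvalues of the diffusion tensor in each voxel. The sketch below shows only the standard FA formula with made-up eigenvalues; it is an illustration, not the processing pipeline used in the studies described here:

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion tensor eigenvalues (0 = isotropic, ~1 = highly directional)."""
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

# Illustrative eigenvalues (in mm^2/s): a coherent fiber bundle vs. a more isotropic voxel.
print(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3))   # high FA: intact, well-aligned tract
print(fractional_anisotropy(1.0e-3, 0.8e-3, 0.7e-3))   # low FA: degraded or crossing tissue
```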

Taken together with previous functional and structural characterizations of the remote effects of concussion in otherwise healthy older former athletes, the pattern of white matter alterations, being more pronounced over fronto-parietal brain areas, more closely resembles what has been observed in normal aging. From this interpretation, we suggest that concussion induces a latent microstructural injury that synergistically interacts with the aging process to precipitate late-life decline in both brain structure and function.

Published on OUPblog, 3 September 2014.
7. Stop and search, and the UK police

The recent announcement made jointly by the Home Office and College of Policing is a vacuous document that will do little or nothing to change police practice or promote better police-public relations.

Let us be clear: objections to police stop and search are not just a little local difficulty, experienced solely in this country. Similar powers are felt to be just as discriminatory throughout North America, where they are regarded as tantamount to an offence of 'driving whilst black' (DWB). This and other cross-national similarities persist despite differences in the statutory powers upon which the police rely. It would, therefore, seem essential to ask whether differences in legislation or policy have proven more or less effective in different jurisdictions. Needless to say, absolutely no evidence of experience elsewhere is to be found in this latest Home Office document. Instead, to assuage the concerns of the Home Secretary, more meaningless paperwork will be created.

One reason why evidence seems to be regarded as unnecessary is the commonplace assumption that ‘everyone knows’ why minorities experience disproportionate levels of stop and search: namely that officers rely not upon professional judgement, but upon prejudice, when exercising this power. Enticing though such an assumption is, it has serious weaknesses. As Professor Marion Fitzgerald discovered, when officers are deciding who to stop and search entirely autonomously, they act less disproportionately than when acting on specific information, such as a description.

Research that Kevin Stenson and I conducted in the early 2000s also found that the profile of those stopped and searched very largely corresponded to the so-called 'available population' of people out and about in public places at the times when stop and search is most prevalent. This is not to say that these stops and searches were conducted either lawfully or properly. Indeed, a former Detective Chief Superintendent interviewed a sample of 60 officers about their most recent stops and searches as part of this research. What he found was quite alarming, for in around a third of cases the accounts that officers freely gave about the circumstances of these 128 stops and searches could not convince any of us that they were lawful. There was also a woeful lack of knowledge amongst these officers about the statutory basis for the powers upon which they were relying.

UK police officer watches traffic at roadside. © RussDuparcq via iStock.

If officers were much better informed about their powers, then perhaps the experience of stop and search may be less disagreeable — it is unlikely ever to be welcomed — than it often is. Paragraph 1.5 of the Code of Practice governing how police stop and search states:

1.5   An officer must not search a person, even with his or her consent, where no power to search is applicable. Even where a person is prepared to submit to a search voluntarily, the person must not be searched unless the necessary legal power exists, and the search must be in accordance with the relevant power and the provisions of this Code.

The implication of this is quite clear: police may stop and search someone with their consent, but may not use such consent as a means of subverting the requirements under which the search would be lawful. Yet few officers seem even to be aware of this, and they conduct stop and search solely on the basis of their formal powers. I believe they do this as a 'shield'; they imagine that if they go through the formal motions then no one can object to the lawfulness of the search. But people do object, and do so most vocally, which gravely damages the public reputation of the police.

Research evidence aplenty confirms that it is not the possession of this power by the police that irks even those who are most at risk of stop and search. What they really object to is the manner in which the stop and search is conducted. A more consensual approach by police officers might make the use of this power just a little more palatable.

Published on OUPblog, 1 September 2014.
8. Does industry sponsorship restrict the disclosure of academic research?

Long-run trends suggest a broad shift is taking place in the institutional financing structure that supports academic research. According to data compiled by the OECD and reported in Figure 1, industry sources are financing a growing share of academic research while "core" public funding is generally shrinking. This ongoing shift from public to private sponsorship is a cause for concern because these sponsorship relationships are fundamentally different. Available evidence suggests that industry financing does not simply replace dwindling public money, but imposes additional restrictions on academic researchers. In particular, industry sponsors frequently limit disclosure of research findings, methods, or materials by delaying or banning public release.

Recent economic research highlights why public disclosure of academic research is important. Disclosure permits the stock of public knowledge to be cumulative, accessible, and reliable. It limits duplication of research efforts, allows new knowledge to be replicated and verified by professional peers, and permits access and use by other researchers which enhances opportunities for complementary research. Some work finds that greater access to ideas and materials in academic research not only increased incentives for direct follow-on research, but led to an increase in the diversity of research by increasing the number of experimental research lines. Other work, examining the theoretical conditions supporting “open science” versus “secrecy”, stressed that maintaining and growing the stock of public knowledge requires a limit on the private financial returns obtained through secrecy.

[Figure 1: industry-financed share of academic research, OECD data]

To better understand the potential implications of increased industry funding, we implemented a research project that examined the relationship between industry sponsorship and restrictions on publication disclosure using individual-level data on German academic researchers. Germany is an apt setting for examining this relationship. It has a strong tradition of public financial support for academic research and, according to the OECD, Germany experienced the most dramatic growth in its share of industry sponsorship, an 11.3 percentage point increase from 1995 to 2010 (see Figure 1).

German academic researchers were surveyed about the degree of publication disclosure restrictions experienced during research projects sponsored by government, foundations, industry, and other sources. To examine if industry sponsorship jeopardizes disclosure of academic research, we modeled the degree of restrictiveness (i.e. delay and secrecy) as a function of the researcher’s budget share financed by industry. This formulation allows us to examine two potential effects of industry sponsored research contracts. The first is an adoption effect that takes place when academic researchers commit to industry funding. The second is an intensity effect that captures how publication restrictions depend on the researcher’s exposure to greater ex post review and evaluation by industry sponsors. Our models include covariates that control for non-industry extramural sponsorship, personal characteristics, research characteristics, institutional affiliations, and scientific fields of study.
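
As a rough illustration of this kind of two-part specification (an adoption dummy plus an intensity term), here is a sketch in Python on synthetic data. The variable names, the data-generating process, and the plain logit are assumptions made for exposition; the study's actual model additionally addresses selection, endogeneity, and a far richer set of covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Synthetic researchers: roughly half have some industry funding.
industry_share = rng.uniform(0, 1, n) * rng.integers(0, 2, n)
industry_any = (industry_share > 0).astype(int)      # adoption indicator
applied_field = rng.integers(0, 2, n)                 # stand-in for richer controls

# Assumed data-generating process: both adoption and intensity raise secrecy risk.
log_odds = -2.0 + 1.2 * industry_any + 1.5 * industry_share + 0.4 * applied_field
secrecy = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({"secrecy": secrecy, "industry_any": industry_any,
                   "industry_share": industry_share, "applied_field": applied_field})

# Logit of publication secrecy on the adoption and intensity terms plus a control.
fit = smf.logit("secrecy ~ industry_any + industry_share + applied_field", data=df).fit()
print(fit.params)
```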

Both the descriptive and regression results show a positive relationship between the degree of publication restrictions and industry sponsorship. The percentage of respondents who reported higher secrecy (partial or full) is significantly larger for industry sponsored researchers than it is for researchers with other extramural sponsors, 41% and 7% respectively. Controlling for selection, adopting industry sponsorship more than doubles the expected probabilities of publication delay and secrecy. The intensity effect is positive and significant with a larger effect on publication secrecy than on publication delay when academic researchers become heavily supported by industrial firms. These results are robust to the possibility that researchers self-select into extramural sponsorship and to the possibility that the share of industry sponsorship is endogenous due to unobserved variables.

Based on our analysis, the shift from public to private sponsorship seen in the OECD aggregate data reflects changes in the microeconomic environment shaping incentives for disclosure by academic researchers. On average, academic researchers are willing to restrict disclosure in exchange for financial support by industry sponsors. Our results shed light on an important challenge facing policymakers. Understanding the trade-off between public and private sponsorship of academic research involves gauging the impact of disclosure restrictions on the quantity, quality, and evolution of academic research to better understand how these restrictions may ultimately influence innovation and economic growth.

Image credit: Computer research, © Jürgen François, via iStock Photo.

Published on OUPblog, 31 August 2014.
9. The burden of guilt and German politics in Europe

Since the outbreak of the First World War just over one hundred years ago, the debate concerning the conflict's causes has been shaped by political preoccupations as well as historical research. Wartime mobilization of societies required governments to explain the justice of their cause, the "war guilt" clause of the Treaty of Versailles became a focal point of German revisionist foreign policy in the 1920s, and the Fischer debate in West Germany in the 1960s took place against a backdrop of the Cold War and the efforts of German society to come to terms with the Nazi past. More recently critics of Sir Edward Grey's foreign policy, such as Niall Ferguson and John Charmley, are writing in the context of intense debates about Britain's relationship with Europe, while accounts that emphasise the strength of the great power peace before 1914 are informed in part by contemporary discussions of globalization and the improbability of a war between the world's leading powers today – the conflict in the Ukraine notwithstanding.

The persistent political backdrop to debates about the origins of the war is evident in the reception of Christopher Clark’s best-selling work, The Sleepwalkers, particularly its resonance within Germany. Clark’s references to the Euro-crisis, 9/11, and the Yugoslav wars of the 1990s, dotted throughout the book, nod to the contemporary relevance of the collapse of the international system in 1914.

While Clark seeks to eschew debates about war guilt or responsibility, preferring to concentrate on the 'how' rather than the 'why', his conclusion contends that leaders in the capitals of the five Great Powers and in Belgrade bear broadly equal responsibility for the war. This thesis has attracted considerable attention in Germany, where the last major public reckoning over the origins of the war took place in the 1960s, when Fritz Fischer's thesis that German leaders planned for war from December 1912 and therefore bore the largest responsibility for its outbreak was the subject of intense and often vindictive debate. Fischer carried the day in the 1960s, but now Clark's argument, comparative in a way that Fischer did not claim to be, has overturned what appeared to be a publicly accepted orthodoxy.

The centenary debate has also coincided with a particular moment in German political and cultural debate. The post-unification economic slowdown has now given way to a booming economy, while much of the rest of Europe is mired in austerity. In tandem with economic prosperity, German elites are displaying growing political confidence as Europe’s dominant state.

In this context Clark’s thesis about shared responsibility for the war has been read in two ways. One group, whose most notable advocates include Thomas Weber (Aberdeen/Harvard) and Dominik Geppert (Bonn), argue that the ongoing belief in German ‘war guilt’ is an historic fiction that damages both German and European politics. It has contributed to the unwillingness of successive German governments to take on greater leadership within Europe. The marginalization of the German national interest after 1945, they claim, is partly the product of a misinformed reading of history that holds the pursuit of the German national interest as responsible for two catastrophic global conflicts. This has resulted in a damaging approach to European politics, which holds that the national is inherently opposed to the European interest. By neglecting the national interest German leaders are creating instability within Europe and alienating many German citizens from participating in a European project that must take account of national diversity. Hence they welcome Clark’s book and the enormous public interest it has aroused in Germany.

Parade of Cuirassier Guards Marching to the Parade Ground, Berlin, Germany. Keystone View Company, copyrighted Underwood & Underwood. Public domain via Wikimedia Commons.

However Clark’s thesis has not met with universal approval. Leading critics include Gerd Krumeich and John Röhl, both representatives of a generation of historians who came to the fore during and soon after the Fischer debate. They criticize Clark for downplaying the responsibility of German political and military leaders for the war, both by stressing the comparatively restrained character of German foreign policy up to the July crisis and by his criticisms of the aggressive nature of Russian, French, and British foreign policy before 1914. Not only do they take issue with Clark’s arguments, they also express concern that the ‘relativizing’ of German responsibility for the outbreak of the war will lead to a recrudescence of a more assertive German nationalism, undoing the successful integration of the Federal Republic into a community of democratic, European nations. From their perspective, a more assertive German nationalism, freed from the historic burden of war guilt, constitutes a potential danger.

The debate blends divergent generational perspectives on German national identity and European politics, as well as different interpretations of the sources and methodological approaches to studying the origins of the war. For the record, this author finds Clark's account persuasive. On balance there is a greater risk in Germany not playing a leading role in European politics than there is of a re-assertion of a muscular German national interest and identity. Yet both groups may overestimate the significance of "war guilt" in shaping perspectives in German and European politics. While the centenary has created a privileged space for the First World War in public discussion, the politics of history within Germany remain firmly fixed on the crimes of the Third Reich. When Europeans today think of Germany's historical burden, they think primarily of the Nazi past. After all, disaffected protesters in countries hit by austerity after 2008 compared current German policies to those of the Third Reich, not the Kaiserreich. Grotesque and unfounded as the comparison was, it was striking that protesters did not think about Wilhelm II. While historians may revise their views of German responsibility for the First World War, no serious historian disputes the primacy of Hitler's regime in starting a genocidal war in Europe in 1939.

Published on OUPblog, 30 August 2014.
10. Time to see the end

Imagine that you’re watching a movie. You’re fully enjoying the thrill of different emotions, unexpected changes, and promising developments in the plot. All of a sudden, the projection is abruptly halted with no explanation whatsoever. You’re unable to learn how things unfold. You can’t see the end of the movie and you’re left with a sense of incompleteness you won’t ever be able to overcome.

Now imagine that the movie is the existence of a human being which, out of the blue, is interrupted. Enforced disappearance cuts the life-flow of a person, and it is often impossible to discover how it truly ends. The secrecy that shrouds the fate of the disappeared is the distinctive element of this heinous practice and differentiates it from other crimes. All that you can imagine is that the end is not likely to be a happy one, but you will never give up hope. The impossibility of unveiling the truth also paralyses the lives of family members, friends, colleagues, and, to a certain extent, society at large. If you don't see the end, you're unable to move on. You can't grieve. You can't rejoice. You're trapped between hope and despair.

Today is the International Day of the Victims of Enforced Disappearances. Besides commemorating thousands of human beings who have been subjected to enforced disappearance throughout the world and honouring the memory of brave family members and human rights defenders who continue to combat against this scourge, is there anything to celebrate?

While the UN General Assembly decided to observe this Day beginning in 2011, associations of relatives of disappeared persons in Latin America had been doing so since 1981.

Over more than 30 years, much has been done to eradicate enforced disappearance, both at domestic and international levels. Specific human rights bodies, such as the United Nations Working Group on Enforced or Involuntary Disappearances (WGEID) and the Committee on Enforced Disappearances (CED), have been established. Legal instruments, both of international human rights law and of international criminal law, deal with this crime in depth and establish detailed obligations and severe sanctions. Regional human rights courts and UN Treaty Bodies have developed a rich, although not always coherent, jurisprudence. Domestic courts have delivered some landmark sentences, holding perpetrators accountable.

Ceremony organised by the Asian Federation against Involuntary Disappearances, held in Manila on 30 August 2009, to commemorate the International Day of the Victims of Enforced Disappearances. Photo by Gabriella Citroni.

However, much remains to be done. First, the phenomenon has evolved: once mainly perpetrated in the context of military dictatorships, it is nowadays also committed under supposedly democratic regimes, and is being used to counter terrorism, to fight organised crime, or to suppress legitimate movements of civil protest. Enforced disappearance is practiced in a widespread and systematic manner in complex situations of internal armed conflict, as highlighted, among others, in the recent report "Without a Trace" concerning enforced disappearances in Syria.

During its latest session, held in February 2014, the WGEID transmitted 87 newly reported cases of enforced disappearance to 11 states. More than 43,000 cases, committed in a total of 84 states, remain under the WGEID’s active consideration.

Against this discouraging scenario, fewer than 15 states have codified enforced disappearance as an autonomous offence under their criminal legislation; the rest thus lack an adequate legal framework to tackle this crime. Only a handful of states have adopted specific measures to regulate the legal situation of disappeared persons in fields such as welfare, financial matters, family law, and property rights. This causes additional anguish to the relatives of the disappeared and may also hamper investigation and prosecution. Amnesty laws or similar measures that have the effect of exempting perpetrators from any criminal proceedings or sanctions are in force in various countries and are in the process of being adopted in others. Recourse to military tribunals is often used to grant impunity.

Relatives of disappeared men from Lebanon and Algeria taking part in a gathering organised by the Fédération Euro-méditerranéenne contre les disparitions forcées in Beirut on 21 February 2013. Photo by Gabriella Citroni.

States do not seem to be proactive in engaging in a serious struggle against enforced disappearance at the international level either. Opened for signature in February 2007, the International Convention on the Protection of All Persons from Enforced Disappearance has so far been ratified by 43 states, out of which only 18 have recognized the competence of the CED to receive and examine individual and inter-state communications.

Furthermore, states often fail to cooperate with international human rights mechanisms, hindering the fact-finding process, and prove reluctant to enforce judgments. For their part, some of these international mechanisms, such as the European Court of Human Rights, have narrowed their jurisprudence on enforced disappearance, taking a particularly restrictive approach when assessing their competence ratione temporis, when evaluating states' compliance with their positive obligations to investigate cases of disappearance and to prosecute and sanction those responsible, and when awarding measures of redress and reparation.

One may wonder why 30 August was chosen by relatives of disappeared persons as the International Day against this crime. Purportedly, they picked a random date. They didn’t want it to be related to the enforced disappearance of anyone in particular: anyone can be subjected to enforced disappearance, anytime, and anywhere.

That was the idea back in 1981. Sadly, it still seems to be the case in 2014. It’s about time the obligations set forth in international treaties on enforced disappearance are duly implemented, domestic legal frameworks are strengthened, and legislative or procedural obstacles to investigation and prosecution are removed. It’s time to see the end of the movie. The end of enforced disappearance.

Published on OUPblog, 30 August 2014.
11. A preview of the 2014 OHA Annual Meeting

In a few months, Troy and I hope to welcome you all to the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” This year’s meeting will take place in our lovely, often frozen hometown of Madison, Wisconsin, from 8-12 October 2014. I am sure most of you have already registered and booked your hotel room. For those of you still dragging your feet, hopefully these letters from OHA Vice President/President Elect Paul Ortiz and Program Committee co-chairs Natalie Fousekis and Kathy Newfont will kick you into gear.

*   *   *   *   *

Madison, Wisconsin. The capital city of the Badger State evokes images of social movements of all kinds. This includes the famed "Wisconsin Idea," a belief put forth during an earlier, tumultuous period of American history that this place was to become a "laboratory for democracy," where new ideas would be developed to benefit the entire society. In subsequent years, Madison became equally famous for the Madison Farmers Market, hundreds of locally-owned businesses, live music, and a top-ranked university. Not to mention world-famous cafes, microbreweries, and brewpubs! [Editor's note: And fried cheese curds!] Our theme, "Oral History in Motion: Movements, Transformations and the Power of Story," is designed to speak directly to the rich legacies of Wisconsin and the upper Midwest, as well as to the interests and initiatives of our members. Early on, we decided to define "movements" broadly — and inclusively — to encompass popular people's struggles, as well as the newer, exciting technological changes oral history practitioners are implementing in our field.

Creating this year’s conference has been a collaborative effort. Working closely with the OHA executive director’s office, our program and local arrangements committees have woven together an annual meeting with a multiplicity of themes, as well as an international focus tied together by our belief in the transformative power of storytelling, dialog, and active listening. Our panels also reflect the diversity of our membership’s interests. You can attend sessions ranging from the historical memories of the Haitian Revolution and the future of the labor movement in Wisconsin to the struggles of ethnic minority refugees from Burma. We’ll explore the legacies left by story-telling legends like Pete Seeger and John Handcox, even as we learn new narratives from Latina immigrants, digital historians and survivors of sexual abuse.

Based on the critical input we've received from OHA members, this year's annual meeting will build on the strengths of previous conferences and address their weaknesses. New participants will have the opportunity to be matched with veteran members through the OHA Mentoring Program. We will also invite all new members to the complimentary Newcomers' Breakfast on Friday morning. Building on its success at last year's annual meeting, we are also holding Interest Group Meetings on Thursday, in order to help members continue to knit together national—and international—networks. The conference program features four hands-on oral history workshops on Wednesday, and a "Principles and Best Practices for Oral History Education (grades 4-12)" workshop on Saturday morning. This year's plenary and special sessions are also superb.

With such an exciting program, it is little wonder that early pre-registration was so high! I hope that you will join us in Madison, Wisconsin for what will be one of the most memorable annual meetings in OHA history!

In Solidarity,
Paul Ortiz
OHA Vice President/President Elect

*   *   *   *   *

The 2014 OHA Annual Meeting in Madison, Wisconsin is shaping up to be an especially strong conference. The theme, “Oral History in Motion: Movements, Transformations and the Power of Story,” drew a record number of submissions. As a result, the slate of concurrent sessions includes a wide variety of high quality work. We anticipate that most conference-goers will, even more so than most years, find it impossible to attend all sessions that pique their interest!

The local arrangements team in Madison has done a wonderful job lining up venues for the meeting and its special sessions, including sites on the University of Wisconsin-Madison campus, the Wisconsin Historical Society and the Madison Public Library. The meeting will showcase some of Madison’s richest cultural offerings. For instance, we will open Wednesday evening in Sterling Hall with an innovative, oral-history inspired performance on the 1970 bomb explosion, which proved a key flashpoint in the Vietnam-era anti-war movement. After Thursday evening’s Presidential Reception, we will hear a concert by Jazz Master bassist Richard Davis — who will also do a live interview Saturday evening.

In keeping with our theme, many of our feature presentations will address past and present fights for social and political change. Thursday afternoon’s mixed-media plenary session will focus on the music and oral poetry of sharecropper “poet laureate” John Handcox, whose songs continue to inspire a broad range of justice movements in the U.S. and beyond. Friday morning’s “Academics as Activists” plenary session will offer a report from the front lines of contemporary activism. It will showcase an interdisciplinary panel of scholars who have emerged as leading voices in recent pushes for social change in Wisconsin, North Carolina and nationwide. The Friday luncheon keynote will feature John Biewen of Duke University’s Center for Documentary Studies, who has earned recognition for—among other things—his excellent work on disadvantaged groups. Finally, on Friday evening we will screen Private Violence, a film featured at this year’s Sundance festival. Private Violence examines domestic violence, long a key concern in women’s and children’s rights movements. The event will be hosted by Associate Producer Malinda Maynor Lowery, who is also Director of the University of North Carolina’s Southern Oral History Program.

Join us for all this and much more!

Natalie Fousekis and Kathy Newfont
Program Committee

*   *   *   *   *

See you all in October!

Headline image credit: Resources of Wisconsin. Edwin Blashfield’s mural “Resources of Wisconsin”, Wisconsin State Capitol dome, Madison, Wisconsin. Photo by Jeremy Atherton. CC BY 2.0 via jatherton Flickr.

The post A preview of the 2014 OHA Annual Meeting appeared first on OUPblog.

0 Comments on A preview of the 2014 OHA Annual Meeting as of 8/29/2014 12:53:00 PM
Add a Comment
12. Special events and the dynamical statistics of Twitter

A large variety of complex systems in ecology, climate science, biomedicine, and engineering have been observed to exhibit so-called tipping points, where the dynamical state of the system abruptly changes. Typical examples are the rapid transition in lakes from clear to turbid conditions or the sudden extinction of species after a slight change in environmental conditions. Data and models suggest that detectable warning signs may precede some, though clearly not all, of these drastic events. This view is also corroborated by recently developed abstract mathematical theory for systems in which processes evolve at different rates and are subject to internal and/or external stochastic perturbations.

One main idea to derive warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.

Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and can allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given; predicting a priori unknown events in a social system is genuinely hard. Therefore, as an initial step, we reduce the problem to a much simpler one: understanding whether the same mechanisms that have been observed in the natural sciences and engineering could also be present in sociological domains.

Courtesy of Christian Kuehn.

In our work, we provide a very first step towards tackling a substantially simpler question by focusing on a priori known events. We analyse a social media data set with a focus on the classical variance and autocorrelation scaling-law warning signs. In particular, we consider a few events that are known to occur at a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. We then consider time series of the frequency of Twitter hashtags related to these events in the weeks before each event, excluding the event date itself and a short period immediately before it.
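
To make the warning-sign idea concrete, here is a minimal sketch of how rolling variance and lag-1 autocorrelation might be computed for a daily hashtag-count series. The `warning_signs` helper, the 14-day window, and the synthetic counts are illustrative assumptions, not code or data from the study itself.

```python
import numpy as np
import pandas as pd

def warning_signs(counts, window=14):
    """Rolling variance and lag-1 autocorrelation of a daily count series.

    Sustained growth in both quantities as the event date approaches is the
    kind of early-warning signal discussed in the text.
    """
    s = pd.Series(counts, dtype=float)
    # Remove the slow trend so the fluctuations, not the trend, are measured.
    detrended = s - s.rolling(window, center=True, min_periods=1).mean()
    variance = detrended.rolling(window).var()
    autocorr = detrended.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False)
    return variance, autocorr

# Hypothetical example: increasingly noisy daily counts of a seasonal hashtag.
rng = np.random.default_rng(0)
days = np.arange(60)
counts = 100 + 0.5 * days + rng.normal(0, 1 + days / 20, size=60)
variance, autocorr = warning_signs(counts)
print(variance.tail(), autocorr.tail())
```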

Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?

We answer this question affirmatively as we find that the a priori known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish different dynamical processes that lead to large outbursts of activity on social media.

The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in the research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.

The post Special events and the dynamical statistics of Twitter appeared first on OUPblog.

0 Comments on Special events and the dynamical statistics of Twitter as of 8/27/2014 4:05:00 AM
Add a Comment
13. Research replication in social science: reflections from Nathaniel Beck

Introduction from Michael Alvarez, co-editor of Political Analysis:

Questions about data access, research transparency and study replication have recently become heated in the social sciences. Professional societies and research journals have been scrambling to respond; for example, the American Political Science Association established the Data Access and Research Transparency committee to study these issues and to issue guidelines and recommendations for political science. At Political Analysis, the journal that I co-edit with Jonathan N. Katz, we require that all of the papers we publish provide replication data, typically before we send the paper to production. These replication materials get archived at the journal’s Dataverse, which provides permanent and easy access to these materials. Currently we have over 200 sets of replication materials archived there (more arriving weekly), and our Dataverse has seen more than 13,000 downloads of replication materials.

Due to the interest in replication, data access, and research transparency in political science and other social sciences, I’ve asked a number of methodologists who have been front-and-center in political science with respect to these issues to provide their thoughts and comments about what we do in political science, how well it has worked so far, and what the future might hold for replication, data access, and research transparency. I’ll also be writing more about what we have done at Political Analysis.

The first of these discussions is a set of reflections from Nathaniel Beck, Professor of Politics at NYU, who is primarily interested in political methodology as applied to comparative politics and international relations. Neal is a former editor of Political Analysis, chairs our journal’s Advisory Board, and is now heading up the Society for Political Methodology’s own committee on data access and research transparency. Neal’s reflections provide some interesting perspectives on the importance of replication for his research and teaching efforts, and shed some light more generally on what professional societies and journals might consider for their policies on these issues.

Research replication in social science: reflections from Nathaniel Beck

Replication and data access have become hot topics throughout the sciences. As a former editor of Political Analysis and the chair of the Society for Political Methodology‘s Data Access and Research Transparency (DA-RT) committee, I have been thinking about these issues a lot lately. But here I simply want to share a few recent experiences (two happy, one at this moment less so) which have helped shape my thinking on some of these issues. I note that in none of these cases was I concerned that the authors had done anything wrong, though of course I was concerned about the sensitivity of results to key assumptions.

The first happy experience relates to an interesting paper on the impact of having an Islamic mayor on educational outcomes in Turkey by Meyerson published recently in Econometrica. I first heard about the piece from some students, who wanted my opinion on the methodology. Since I am teaching a new (for me) course on causality, I wanted to dive more deeply into the regression discontinuity design (RDD) as used in this article. Coincidentally, a new method for doing RDD was presented at the recent (2014) meetings of the Society for Political Methodology by Rocio Titiunik. I wanted to see how her R code would work with interesting comparative data. All recent Econometrica articles are linked to both replication and supplementary materials on the Econometrica web site. It took perhaps 15 minutes to make sure that I could run Stata on my desktop and get the same results as in the article. So thanks to both Meyerson and Econometrica for making things so easy.

I gained from this process, getting a much better feel for real RDD data analysis so I can say more to my students than “the math is correct.” My students gain by seeing a first rate application that interests them (not a toy, and not yet another piece on American elections). And Meyerson gains a few readers who would not normally peruse Econometrica, and perhaps more cites in the ethnicity literature. And thanks to Titiunik for making her R code easily accessible.

The second happy experience was similar to the first, but also opened my eyes to my own inferior practice. At the same Society meetings, I was the discussant on a paper by Grant and Lebo on using fractional integration methods. I had not thought about such methods in a very long time, and believed (based on intuition and no evidence to the contrary) that using fractional integration methods led to no changes in substantive findings. But clearly one should base arguments on evidence and not intuition. I decided to compare the results of a fractional integration study by Box-Steffensmeier and Smith with the results of a simpler analysis. Their piece had a footnote saying the data were available through the ICPSR (excellent by the standards of 1998). Alas, on going to the ICPSR web site I could not find the data (noting that lots of things have happened since 1998 and who knows if my search was adequate). Fortunately I know Jan so I wrote to her, and she kindly replied that the data were on her Dataverse at Harvard. A minute later I had the data and was ready to try to see if my intuitions might indeed be supported by evidence.

Typing on Keyboard – Male Hand by Dave Dugdale. CC BY-SA 2.0 via Flickr.

This experience made me think: could someone find my replication data sets? For as long as I can remember (at least back to 1995), I always posted my replication data sets somewhere. Articles written until 2003 sent readers to my public ftp site at UCSD. But UCSD has changed the name and file structure of that server several times since 2003, and for some reason they did not feel obligated to keep my public ftp site going (and I was not worried enough about replication to think of moving that ftp site to NYU). Fortunately I can usually find the replication files if anyone writes me, and if I cannot, my various more careful co-authors can find the data. But I am sure that I am not the only person to have replication data on obsolete servers. Thankfully Political Analysis has required me to put my data on the Political Analysis Dataverse so I no longer have to remember to be a good citizen. And my resolution is to get as many replication data sets from old pieces as possible onto my own Harvard Dataverse. I will feel less hypocritical once that is done. It would be very nice if other authors emulated Jan!

The possibly less happy outcome relates to the recent article in PNAS on a Facebook experiment on social contagion. The authors, in a footnote, said that replication data was available by writing to the authors. I wrote twice, giving them a full month, but heard nothing. I then wrote to the editor of PNAS who informed me that the lead author had both been on vacation and was overwhelmed with responses to the article. I am promised that the check is in the mail.

What editor wants to be bothered by fielding inquiries about replication data sets? What author wants to worry about going on vacation (and forgetting to set a vacation message)? How much simpler the world would have been for the authors, the editor, and me if PNAS simply followed the good practice of Political Analysis, the American Journal of Political Science, the Quarterly Journal of Political Science, Econometrica, and (if rumors are correct) soon the American Political Science Review, and demanded that authors post all replication materials, either on the journal web site or the journal Dataverse, before an article is actually published. Why doesn’t every journal do this?

A distant second best is to require authors to post their replication materials on their personal websites. As we have seen from my experience, this often leads to lost or non-working URLs. While the simple solution here is the Dataverse, surely at a minimum authors should provide a standard Digital Object Identifier (DOI), which should persist even as machine names change. But the Dataverse solution does this, and so much more, so it seems odd in this day and age for all journals not to use it. And we can all be good citizens and put our own pre-replication standard datasets on our own Dataverses. All of this is as easy as (and maybe easier than) maintaining private data web pages, and one can rest easy that one’s data will be available until either Harvard goes out of business or the sun burns out.

Featured image: BalticServers data center by Fleshas CC-BY-SA-3.0 via Wikimedia Commons.

The post Research replication in social science: reflections from Nathaniel Beck appeared first on OUPblog.

0 Comments on Research replication in social science: reflections from Nathaniel Beck as of 8/24/2014 8:31:00 AM
Add a Comment
14. New words, new dialogues

In August 2014, OxfordDictionaries.com added numerous new words and definitions to their database, and we invited a few experts to comment on the new entries. Below, Janet Gilsdorf, President-elect of Pediatric Infectious Diseases Society, discusses anti-vax and anti-vaxxer. The views expressed do not necessarily reflect the opinions or positions of Oxford Dictionaries or Oxford University Press.

It’s beautiful, our English language — fluid and expressive, colorful and lively. And it’s changeable. New words appear all the time. Consider “selfie” (a noun), “problematical” (an adjective), and “Google” (a noun that turned into a verb). Now we have two more: “anti-vax” and “anti-vaxxer.” (Typical of our flexible vernacular, “anti-vaxxer” is sometimes spelled with just one “x.”) I guess inventing these words was inevitable; a specific, snappy short-cut was needed when speaking about something as powerful and almost cult-like as the anti-vaccine movement and its disciples.

When we string our words together, either new ones or the old reliables, we find avenues for telling others of our joys and disappointments, our loves and hates, our passions and indifferences, our trusts and distrusts, and our fears. The words we choose are windows into our minds. Searching for the best terms to use helps us refine our thinking, decide what, exactly, we are contemplating, and what we intend to say.

Embedded in the force of the new words “anti-vax” and “anti-vaxxer” are many of the tales we like to tell: our joy in our children, our disappointment with the world; our love of independence and autonomy, our hate of things that hurt us or those important to us; our passion for coming together in groups, our indifference to the worries of strangers; our trust, fueled by hope rather than evidence, in whatever nutty things may soothe our anxieties, our distrust of our sometimes hard-to-understand scientific, medical, and public health systems; and, of course, our fears.

Fear is usually a one-sided view. It is blinding, so that in the heat of the moment we aren’t distracted by nonsense (the muddy foot prints on the floor, the lawn that needs mowing) and can focus on the crisis at hand. Unfortunately, fear may also prevent us from seeing useful things just beyond the most immediate (the helping hands that may look like claws, the alternatives that, in the end, are better).

Image credit: Vaccination. © Sage78 via iStockphoto.

For the anti-vax group, fear is the gripping terror that awful things will happen from a jab (aka shot, stick, poke). Of course, it isn’t the jab that’s the problem. Needles through the skin, after all, deliver medicines to cure all manner of illnesses. For anti-vaxxers, the fear is about the immunization materials delivered by the jab. They dread the vaccine antigens, the molecules (i.e. pieces of microbes-made-safe) that cause our bodies to think we have encountered a bad germ so we will mount a strong immune response designed to neutralize that bad germ. What happens after a person receives a vaccine is, in effect, identical to what happens after we recover from a cold or the flu — or anthrax, smallpox, or possibly ebola (if they don’t kill us first). Our blood is subsequently armed with protective immune cells and antibodies so we don’t get infected with that specific virus or bacterium again. Same for measles, polio, or chicken-pox. If we either get those diseases (which can be bad) or the vaccines to prevent them (which is good), our immune system can effectively combat these viruses in future encounters and prevent infections.

So what should we do with our new words? We can use them to express our thoughts about people who haven’t yet seen the value of vaccines. Hopefully, these new words will lead to constructive dialogues rather than attacks. Besides being incredibly valuable, words are among the most vicious weapons we have and we must find ways to use them responsibly.

The post New words, new dialogues appeared first on OUPblog.

0 Comments on New words, new dialogues as of 8/22/2014 11:15:00 AM
Add a Comment
15. The road to egalitaria

In 1985, Nobel Laureate Gary Becker observed that the gap in employment between mothers and fathers of young children had been shrinking since the 1960s in OECD countries. This led Becker to predict that such sex differences “may only be a legacy of powerful forces from the past and may disappear or be greatly attenuated in the near future.” In the 1990s, however, the shrinking of the mother-father gap stalled before Becker’s prediction could be realized. In today’s economy, how big is this mother-father employment gap, what forces underlie it, and are there any policies which could close it further?

A simple way to characterize the mother-father employment gap is to sum up how much more work is done by fathers compared to mothers of children from ages 0 to 10. In 2010, fathers in the United States worked 3.1 more years on average than mothers over this age range. In the United Kingdom, the comparable number is 3.8, while in Canada it is 2.9 and in Germany 4.5. The figure below traces the evolution of this mother-father employment gap for all four of these countries.
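
As a rough illustration of how such a gap can be computed, here is a minimal sketch that sums the yearly difference in employment between fathers and mothers across each age of the child from 0 to 10. The employment rates used are hypothetical placeholders, not the actual figures behind the numbers quoted above.

```python
# Hypothetical fractions of a year worked, by the child's age (0 through 10).
father_rate = [0.95] * 11
mother_rate = [0.55, 0.60, 0.64, 0.67, 0.70, 0.72, 0.74, 0.75, 0.76, 0.77, 0.78]

# The mother-father employment gap: cumulative difference in years worked
# while the child is between ages 0 and 10.
gap_years = sum(f - m for f, m in zip(father_rate, mother_rate))
print(f"Mother-father employment gap: {gap_years:.1f} years")
```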

Graph shows the difference in years worked by mothers and fathers when their children are between the ages of 0 to 10. (Graph credit: Milligan, 2014 CESifo Economic Studies)

Becker’s theorizing about the family can help us to understand the development of this mother-father employment gap. Becker’s theoretical models suggest that if there are even slight differences between the productivity of mothers and fathers in the home vs. the workplace, spouses will tend to specialize completely in either in-home or out-of-home work. These kinds of productivity differences could arise because of cultural conditioning, as society pushes certain roles and expectations on women and men. Also, biology could be important, as women bear a heavier physical burden during pregnancy and have an advantage in breastfeeding after the birth of a child. It is possible that the initial impact of these unique biological roles for mothers lingers as their children age. Biology is not destiny, but should be acknowledged as a potential barrier that contributes to the origins of the mother-father work gap.

Will today’s differences in mother-father work patterns persist into the future? To some extent that may depend on how cultural attitudes evolve. But there’s also the possibility that family-friendly policy can move things along more quickly. Both parental leave and subsidized childcare are options to consider.

Analysis of some data across the four countries suggests that these kinds of policies can make some difference, but the impact is limited.

Parental leave makes a very big difference when the children are age zero and the parent is actually taking the leave—but because mothers take much more parental leave than fathers, this increases the mother-father employment gap rather than shrinking it. Evidence suggests that after age 0 when most parents return to work, there doesn’t seem to be any lasting impact of having taken a maternity leave on mothers’ employment patterns when their children are ages 1 to 10.

Another policy that might matter is childcare. In the Canadian province of Quebec, a subsidized childcare program was put in place in 1997 that required parents to pay only $5 per day for childcare. This program not only increased mothers’ work at pre-school ages, but also seems to have had a lasting impact when their children reach older ages, as employment of women in Quebec increased at all ages from 0 to 10. When summed up over these ages, Quebec’s subsidized childcare closed the mother-father employment gap by about half a year of work.

Gary Becker’s prediction about the disappearance of mother-father work gaps hasn’t come true – yet. Evidence from Canada, Germany, the United States, and the United Kingdom suggests that policy can contribute to a shrinking of the mother-father employment gap. However, the analysis makes clear that policy alone may not be enough to overcome the combination of strong cultural attitudes and any persistence of intrinsic biological differences between mothers and fathers.

Image credit: Hispanic mother with two children, © Spotmatik, via iStock Photo.

The post The road to egalitaria appeared first on OUPblog.

0 Comments on The road to egalitaria as of 8/20/2014 4:51:00 AM
Add a Comment
16. Pigment profile in the photosynthetic sea slug Elysia viridis

How can sacoglossan sea slugs perform photosynthesis – a process usually associated with plants?

Kleptoplasty describes a special type of endosymbiosis in which a host organism retains photosynthetic organelles from its algal prey. Kleptoplasty is widespread in ciliates and foraminifera; however, among metazoans (animals whose bodies are composed of cells differentiated into tissues and organs, usually with a digestive cavity lined with specialized cells), sacoglossan sea slugs are the only group known to harbour functional plastids. This characteristic gives these sea slugs their very special feature.

The “stolen” chloroplasts are acquired by the ingestion of macroalgal tissue and retention of undigested functional chloroplasts in special cells of their gut. These “stolen” chloroplasts (hereafter called kleptoplasts) continue to photosynthesize for varied periods of time, in some cases up to one year.

In our study, we analyzed the pigment profile of Elysia viridis in order to evaluate appropriate measures of photosynthetic activity.

The pigments siphonaxanthin, trans and cis-neoxanthin, violaxanthin, siphonaxanthin dodecenoate, chlorophyll (Chl) a and Chl b, ε,ε- and β,ε-carotenes, and an unidentified carotenoid were observed in all Elysia viridis. With the exception of the unidentified carotenoid, the same pigment profile was recorded for the macro algae C. tomentosum (its algal prey).

In general, carotenoids found in animals are either directly accumulated from food or partially modified through metabolic reactions. Therefore, the unidentified carotenoid was most likely a product modified by the sea slugs since it was not present in their food source.

Image credit: Lettuce sea slug, by Laszlo Ilyes. CC-BY-SA-2.0 via Flickr.

Pigments characteristic of other macroalgae present in the sampling locations were not detected in the sea slugs. These results suggest that these Elysia viridis retained chloroplasts exclusively from C. tomentosum.

In general, the carotenoid-to-Chl a ratios were significantly higher in Elysia viridis than in C. tomentosum. Further analysis using starved individuals suggests carotenoid retention over chlorophylls during the digestion of kleptoplasts. It is important to note that, despite a loss of 80% of Chl a in Elysia viridis starved for two weeks, measurements of the maximum capacity for photosynthesis indicated a decrease of only 5% in the photosynthetic capacity of the kleptoplasts that remain functional.

This result clearly illustrates that measurement of photosynthetic activity using this approach can be misleading when evaluating the importance of kleptoplasts for the overall nutrition of the animal.

Finally, concentrations of violaxanthin were low in C. tomentosum and Elysia viridis and no detectable levels of antheraxanthin or zeaxanthin were observed in either organism. Therefore, the occurrence of a xanthophyll cycle as a photoregulatory mechanism, crucial for most photosynthetic organisms, seems unlikely to occur in C. tomentosum and Elysia viridis but requires further research.

The post Pigment profile in the photosynthetic sea slug Elysia viridis appeared first on OUPblog.

0 Comments on Pigment profile in the photosynthetic sea slug Elysia viridis as of 8/20/2014 4:51:00 AM
Add a Comment
17. Why referendum campaigns are crucial

As we enter the potentially crucial phase of the Scottish independence referendum campaign, it is worth remembering more broadly that political campaigns always matter, but they often matter most at referendums.

Referendums are often classified as low information elections. Research demonstrates that it can be difficult to engage voters on the specific information and arguments involved (Lupia 1994, McDermott 1997) and consequently they can be decided on issues other than the matter at hand. Referendums also vary from traditional political contests, in that they are usually focused on a single issue; the dynamics of political party interaction can diverge from national and local elections; non-political actors may often have a prominent role in the campaign; and voters may or may not have strong, clear views on the issue being decided. Furthermore, there is great variation in the information environment at referendums. As a result the campaign itself can be vital.

We can understand campaigns through the lens of LeDuc’s framework which seeks to capture some of the underlying elements which can lead to stability or volatility in voter behaviour at referendums. The essential proposition of this model is that referendums ask different types of questions of voters, and that the type of question posed conditions the behaviour of voters. Referendums that ask questions related to the core fundamental values and attitudes held by voters should be stable. Voters’ opinions that draw on cleavages, ideology, and central beliefs are unlikely to change in the course of a campaign. Consequently, opinion polls should show very little movement over the campaign. At the other end of the spectrum, volatile referendums are those which ask questions on which voters do not have pre-conceived fixed views or opinions. The referendum may ask questions on new areas of policy, previously un-discussed items, or items of generally low salience such as political architecture or institutions.

Another essential component determining the importance of the campaign is undecided voters. When voters start from a low base of political knowledge, the campaign contributes greatly to increasing it. This point is particularly clear from Farrell and Schmitt-Beck (2002), who demonstrated that voter ignorance is widespread and that levels of political knowledge among voters are often overestimated. As Ian McAllister argues, partisan de-alignment has created a more volatile electoral environment and the number of voters who make their decisions during campaigns has risen. In particular, there has been a sharp rise in the number of voters who decide quite late in a campaign. In such cases, campaign learning is vital and the campaign may change voters’ initial disposition. Opinions may form only during the campaign, when voters acquire information, and these opinions may be changeable, leading to volatility.

The experience of referendums in Ireland is worth examining, as Ireland is one of a small but growing number of countries which make frequent use of referendums. It is also worth noting that Ireland has a highly regulated campaign environment. In the Oireachtas Inquiries referendum of October 2011, Irish voters were asked to decide on a parliamentary reform proposal (Oireachtas Inquiries – OI). The issue was of limited interest to voters and was co-scheduled with a second referendum on reducing the pay of members of the judiciary, along with a lively presidential election.

The OI referendum was defeated by a narrow margin and the campaign period witnessed a sharp fall in support for the proposal. Only a small number of polls were taken but the sharp decline is clear from the figure below.

Figure One – The Campaign Matters (OI Referendum)

Few voters had any existing opinion on the proposal and the post-referendum research indicated that voters relied significantly on heuristics or shortcuts emanating from the campaign and to a lesser extent on either media campaigns or rational knowledge. The evidence showed that just a few weeks after the referendum, many voters were unable to recall the reasons for their voting decision. An interesting result was that while there was underlying support for the reform with 74% of all voters in support of Oireachtas Inquiries in principle, it failed to pass. There was a very high level of ignorance of the issues where some 44% of voters could not give cogent reasons for why they voted ‘no’, underlining the common practice of ‘if you don’t know, vote no’.

So are there any lessons we can draw for the Scottish independence campaign? Scottish independence would likely be placed on the stable end of the LeDuc spectrum, in that some voters could be expected to have an ideological predisposition on this question. Campaigns matter less at these types of referendums. However, they are by no means a foregone conclusion. We would expect that the number of undecided voters will be key and these voters may use shortcuts to make their decision. In other words, the positions of parties, celebrities, unions, businesses, and others will likely matter. In addition, the extent to which voters feel fully informed on the issues will also possibly be a determining factor. It may be instructive to look at another Irish referendum, on the introduction of divorce in the 1980s, during which voters’ opinions moved sharply during the campaign, even though the referendum question drew largely from the deep-rooted conservative-liberal cleavage in Irish politics (Darcy and Laver 1990). The Scottish campaign might thus still conceivably see some shifts in opinion.

Headline image: Scottish Parliament Building via iStockphoto.

The post Why referendum campaigns are crucial appeared first on OUPblog.

0 Comments on Why referendum campaigns are crucial as of 1/1/1900
Add a Comment
18. Can changing how prosecutors do their work improve public safety?

In the 1990s, policing in major US cities was transformed. Some cities embraced the strategy of “community policing” under which officers developed working relationships with members of their local communities on the belief that doing so would change the neighborhood conditions that give rise to crime. Other cities pursued a strategy of “order maintenance” in which officers strictly enforced minor offenses on the theory that restoring public order would avert more serious crimes. Numerous scholars have examined and debated the efficacy of these approaches.

A companion concept, called “community prosecution,” seeks to transform the work of local district attorneys in ways analogous to how community policing changed the work of big-city cops. Prosecutors in numerous jurisdictions have embraced the strategy. Indeed, Attorney General Eric Holder was an early adopter of the strategy when he was US Attorney for the District of Columbia in the mid-1990s. Yet, community prosecution has not received the level of public attention or academic scrutiny that community policing has.

A possible reason for community prosecution’s lower profile is the difficulty of defining it. Community prosecution contrasts with the traditional model of a local prosecutor, which is sometimes called the “case processor” approach. In the traditional model, police provide a continuous flow of cases to the prosecutor, and she prioritizes some cases for prosecution and declines others. The prosecutor secures guilty pleas in most of the pursued cases, often through plea bargains, and trials are rare. The signature feature of the traditional prosecutor’s work is quickly resolving or processing a large volume of cases.

Community prosecution breaks with the traditional paradigm and changes the work of prosecutors in several ways. It removes prosecutors from the central courthouse and relocates them to a small office in a neighborhood, often in a retail storefront. This permits the prosecutor to develop relationships with community groups and individual residents, even allowing residents to walk into the prosecutor’s office and express concerns. It frees the prosecutors from responsibility for managing the flow of cases supplied by police and allows them to undertake two main tasks. The first is that prosecutors partner with community members to identify the sources of crime within the neighborhood and formulate solutions that will prevent crime before it occurs. The second is that when community prosecutors seek to impose criminal punishments, they develop their own cases rather than rely on those presented by police, and they typically focus on the cases they anticipate will have the greatest positive impact on the local community.

In the past fifteen years, Chicago, Illinois, has had a unique experience with community prosecution that allowed the first examination of its impact on crime rates. The State’s Attorney in Cook County (in which Chicago is located) opened four community prosecution offices between 1998 and 2000. Each of these offices had responsibility for applying the community prosecution approach to a target neighborhood in Chicago, and collectively, about 38% of Chicago’s population resided in a target neighborhood. Other parts of the city received no community prosecution intervention. The efforts continued until early 2007, when a budget crisis compelled the closure of these offices and the cessation of the county’s community prosecution program. For more than two years, Chicago had no community prosecution program. In 2009, a new State’s Attorney re-launched the program, and during the next three years, the four community prosecution offices were re-opened.

Window of an apartment block at night. © bartosz_zakrzewski via iStockphoto.

This sequence of events provided an opportunity to evaluate the impact of community prosecution on crime. The first adoption of community prosecution in the late 1990s lent itself to differences-in-differences estimation. The application of community prosecution to four sets of neighborhoods, each beginning at four different dates, enabled comparisons of crime rates before and after the program’s implementation within those neighborhoods. The fact that other neighborhoods received no intervention permitted these comparisons to be drawn relative to the crime rates in a control group. Furthermore, Chicago’s singular experience with community prosecution – its launch, cancellation, and re-launch – furnished a sequence of three policy transitions (off to on, on to off again, and off again to on again). By contrast, the typical policy analysis observes only one policy transition (commonly from off to on). These multiple rounds of program application enhanced the opportunity to detect whether community prosecution affected public safety.
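
For readers unfamiliar with the design, the sketch below shows what a basic differences-in-differences estimate of this kind might look like in code. The neighborhood and month identifiers, the treatment indicator, and the simulated crime counts are entirely hypothetical and are not drawn from the Chicago study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: monthly crime counts by neighborhood, with an indicator
# for whether a community prosecution office was active there in that month.
rng = np.random.default_rng(1)
n_hoods, n_months = 20, 120
df = pd.DataFrame({
    "hood": np.repeat(np.arange(n_hoods), n_months),
    "month": np.tile(np.arange(n_months), n_hoods),
})
df["treated"] = ((df["hood"] < 8) & (df["month"] >= 60)).astype(int)
df["crime"] = 50 - 3 * df["treated"] + rng.normal(0, 5, len(df))

# Two-way fixed effects differences-in-differences: neighborhood and month
# fixed effects absorb level differences, so the coefficient on `treated`
# estimates the effect of an active office, with errors clustered by neighborhood.
model = smf.ols("crime ~ treated + C(hood) + C(month)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hood"]})
print(model.params["treated"])
```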

The estimates from this differences-in-differences approach showed that community prosecution reduced crime in Chicago. The declines in violent crime were large and statistically significant. For example, the estimates imply that aggravated assaults fell by 7% following the activation of community prosecution in a neighborhood. The estimates for property crime also showed declines, but they were too imprecisely estimated to permit firm statistical inferences. These results are the first evidence that community prosecution can produce reductions in crime and that the reductions are sizable.

Moreover, there was no indication that community prosecution simply displaced crime, moving it from one neighborhood to another. Neighborhoods just over the border of each community prosecution target area experienced no change in their average rates of crime. The declines thus appeared to reflect a true reduction instead of a reallocation of crime. In addition, the drops in offending were immediate and sustained. One might expect that responses in crime rates would arrive slowly and gain momentum over time as prosecutors’ relationships with the community grew. But the estimates instead suggest that community prosecutors were able to immediately identify and exploit opportunities to improve public safety.

This evaluation of the community prosecution in Chicago offers broad lessons about the role of prosecutors. As with any empirical study, some caveats apply. The highly decentralized and flexible nature of community prosecution forbids reducing the program to a fixed set of principles and steps that can be readily implemented elsewhere. To the degree that its success depends on bonds of trust between prosecutor and community, its success may hinge on the personality and talents of specific prosecutors. (Indeed, the article’s estimates show variation in the estimated impacts across offices within Chicago.) At minimum, the results demonstrate that, under circumstances that require more study, community prosecution can reduce crime.

More broadly, the estimates suggest that the role of prosecutors is more far-reaching than typically thought. Crime control is conventionally understood to be primarily the responsibility of police. It was for this very reason that in the 1990s so much attention was devoted to the cities’ choice of policing style – community policing or order maintenance. Restructuring the work of police was thought to be a key mechanism through which crime could be reduced. By contrast, a conventional view of prosecutors is that their responsibilities pertain to the selection of cases, adjudication in the courtroom, and striking plea bargains. This article’s estimates show that this view is unduly narrow. Just as altering the structure and tasks of police may affect crime, so too can changing how prosecutors perform their work.

The post Can changing how prosecutors do their work improve public safety? appeared first on OUPblog.

0 Comments on Can changing how prosecutors do their work improve public safety? as of 8/18/2014 6:57:00 AM
Add a Comment
19. Oral history, historical memory, and social change in West Mount Airy

By Caitlin Tyler-Richards


There are many exciting things coming down the Oral History Review pipeline, including OHR volume 41, issue 2, the Oral History Association annual meeting, and a new staff member. But before we get to all of that, I want to take one last opportunity to celebrate OHR volume 41, issue 1 — specifically, Abigail Perkiss’ “Reclaiming the Past: Oral History and the Legacy of Integration in West Mount Airy, Philadelphia.” In this article, Abigail investigates an oral history project launched in her hometown in the 1990s, which sought to resolve contemporary tensions by collecting stories about the area’s experience with racial integration in the 1950s. Through this intriguing local history, Abigail digs into the connection between oral history, historical memory, and social change.

Abigail Perkiss. Photo credit: Laurel Harrish Photography

If that weren’t enough to whet your academic appetite, the article also went live the same week her first daughter, Zoe, was born.


How awesome is that?

But back to business. Earlier this month I chatted with Abigail about the article and the many other projects she has had in the works this year. So, please enjoy this quick interview and her article, which is currently available to all.

How did you become interested in oral history?

I’ve been gathering people’s stories in informal ways for as long as I can remember, and as an undergraduate sociology major at Bryn Mawr College, my interests began to coalesce around the intersection of storytelling and social change. I took classes in ethnography, worked as a PA on a few documentary projects, and interned at a documentary theater company. All throughout, I had the opportunity to develop and hone my skills as an interviewer.

I began taking history classes my junior year, and through that I started to think about the idea of oral history in a more intentional way. I focused my research around oral history, which culminated in my senior thesis, in which I interviewed several folksingers to examine the role of protest music in creating a collective memory of the Vietnam War, and how that memory was impacting the way Americans understood the war in Iraq. A flawed project, but pretty amazing to speak with people like Pete Seeger, Janis Ian, and Mary Travers!

After college, I studied at the Salt Institute for Documentary Studies in Portland, Maine, and when I began my doctoral studies at Temple University, I knew that I wanted to pursue research that would allow me to use oral history as one of the primary methodological approaches.

What sparked your interest in the Mount Airy project?

When I started my graduate work at Temple, I was pursuing a joint JD/PhD in US history. I knew I wanted to do something in the fields of urban history and racial justice, and I kept coming back to the Mount Airy integration project. I actually grew up in West Mount Airy, and even as a kid, I was very much aware of the lore of the neighborhood integration project. There was a real sense that the community was unique, special.

I knew that there had to be more to the utopian vision that was so pervasive in public conversations about the neighborhood, and I realized that by contextualizing the community’s efforts within the broader history of racial justice and urban space in the mid-twentieth century, I would be able to look critically about the concept and process of interracial living. I could also use oral history as a key piece of my research.

Your article focuses on an 1990s oral history project led by a local organization, the West Mount Airy Neighbors. Why did you choose to augment the interviews they collected with your own?

The 1993 oral history project was a wonderful resource for my book project (from which this article comes); but for my purposes, it was also incomplete. Interviewers focused largely on the early years of integration, so I wasn’t able to get much of a sense of the historical evolution of the efforts. The questions were also framed according to a very particular set of goals that project coordinators sought to achieve — as I argue, they hoped to galvanize community cohesion in the 1990s and to situate the local community organization at the center of contemporary change.

So, while the interviews were quite telling about the West Mount Airy Neighbors’ efforts to maintain institutional control in the neighborhood, they weren’t always useful for me in getting at some of the other questions I was trying to answer: about the meaning of integration for various groups in the community, about the racial politics that emerged, about the perception of Mount Airy in the city at large. To get at those questions, it was important for me to conduct additional interviews.

Is there anything you couldn’t address in the article that you’d like to share here?

As I alluded to above, it is part of a larger book project on postwar residential integration, Making Good Neighbors: Civil Rights, Liberalism, and Integration in Postwar Philadelphia (Cornell University Press, 2014). There, I look at the broader process of integrating and the challenges that emerged as the integration efforts coalesced and evolved over the decades. Much of the research for the book came from archival collections, but the oral histories from the 1990s, and the ones I collected, were instrumental in fleshing out the story and humanizing what could otherwise have been a rather institutional history of the West Mount Airy Neighbors organization.

Are you working on any projects the OHR community should know about?

I’ve spent the past 18 months directing an oral history project on Hurricane Sandy, Staring out to Sea, which came about through a collaboration with Oral History in the Mid-Atlantic Region (remember them?) and a seminar I taught in Spring 2013. That semester, I worked intensively with six undergraduates, studying the practice of oral history and setting up the project’s parameters. The students developed the themes and questions, recruited participants, conducted and transcribed interviews. They then processed and analyzed their findings, looking specifically at issues of race, power and representation in the wake of the storm.

In addition to blogging about their experience, the students presented their work at the 2013 OHMAR and OHA meetings. You can read a bit more about that and the project in Perspectives on History. This fall, I’ll be working with Professor Dan Royles and his digital humanities students to index the interviews we’ve collected and develop an online digital library for the project. I’ll also be attending the OHA annual meeting this year to discuss the project’s transformative impact on the students themselves.

Excellent! I look forward to seeing you (and the rest of our readers) in Madison this October.

Caitlin Tyler-Richards is the editorial/media assistant at the Oral History Review. When not sharing profound witticisms at @OralHistReview, Caitlin pursues a PhD in African History at the University of Wisconsin-Madison. Her research revolves around the intersection of West African history, literature and identity construction, as well as a fledgling interest in digital humanities. Before coming to Madison, Caitlin worked for the Lannan Center for Poetics and Social Practice at Georgetown University.

The Oral History Review, published by the Oral History Association, is the U.S. journal of record for the theory and practice of oral history. Its primary mission is to explore the nature and significance of oral history and advance understanding of the field among scholars, educators, practitioners, and the general public. Follow them on Twitter at @oralhistreview, like them on Facebook, add them to your circles on Google Plus, follow them on Tumblr, listen to them on Soundcloud, or follow their latest OUPblog posts via email or RSS to preview, learn, connect, discover, and study oral history.

Subscribe to the OUPblog via email or RSS.
Subscribe to only history articles on the OUPblog via email or RSS.

The post Oral history, historical memory, and social change in West Mount Airy appeared first on OUPblog.

0 Comments on Oral history, historical memory, and social change in West Mount Airy as of 8/8/2014 10:56:00 AM
Add a Comment
20. Improving survey methodology: a Q&A with Lonna Atkeson

By R. Michael Alvarez


I recently had the opportunity to talk with Lonna Atkeson, Professor of Political Science and Regents’ Lecturer at the University of New Mexico. We discussed her opinions about improving survey methodology and her thoughts about how surveys are being used to study important applied questions. Lonna has written extensively about survey methodology, and has developed innovative ways to use surveys to improve election administration (her 2012 study of election administration is a wonderful example).

In the current issue of Political Analysis is the Symposium on Advances in Survey Methodology, which Lonna and I co-edited; in addition to the five research articles in the Symposium, we wrote an introduction that puts each of the research articles in context and talks about the current state of research in survey methodology. Also, Lonna and I are co-editing the Oxford Handbook on Polling and Polling Methods, which is in initial stages of development.

It’s well-known that response rates for traditional telephone surveying have declined dramatically. What’s the solution? How can survey researchers produce quality data given low response rates with traditional telephone survey approaches?

What we’ve learned about response rates is that they are not the be-all and end-all as an evaluative tool for the quality of the survey, which is a good thing because response rates are ubiquitously low! There is mounting evidence that response rates per se are not necessarily reflective of problems in nonresponse. Nonresponse error appears to be more related to the response rate interacting with the characteristics of the nonrespondents. Thus, if survey topic salience leads to response bias, then nonresponse error becomes a problem, but in and of itself the response rate is only indirect evidence of a potential problem. One potential solution to falling response rates is to use mixed mode surveys and find the best contact and response option for the respondent. As polling becomes more and more sophisticated, we need to consider best contact and response methods for different types of sample members. Survey researchers need to be able to predict the most likely response option for the individual and pursue that strategy.

Close up of a man smiling on the line through a headset. © cenix via iStockphoto.

Much of your recent work uses “mixed-mode” survey methods. What’s a “mixed-mode” survey? What are the strengths and weaknesses of this approach?

Mixed mode surveys use multiple methods to contact or receive information from respondents. Thus, mixed mode surveys involve both mixtures of data collection and communications with the respondent. For example, a mixed mode survey might contact sample members by phone or mail and then have them respond to a questionnaire over the Internet. Alternatively a mixed mode survey might allow for multiple forms of response. For example, sample frame members may be able to complete the interview over the phone, by mail, or on the web. Thus a respondent who does not respond over the Internet may in subsequent contact receive a phone call or a FTF visit or may be offered a choice of response mode on the initial contact.

When you see a poll or survey reported online or in the news media, how do you determine if the poll was conducted in a way that has produced reliable data? What indicates a high-quality poll?

This is a difficult question because all polls are not created equal, and many reported polls might have problems with sampling, nonresponse bias, question wording, and so on. The point is that there are many places, not just one, where error creeps into a survey, and to evaluate a poll researchers like to think in terms of total survey error; but the tools for that evaluation are still in the development stage, and this is an area of opportunity for survey researchers and political methodologists. We also need to consider a total survey error approach in how survey context, which now varies tremendously, influences respondents and what that means for our models and inferences. This is an area for continued research. Nevertheless, the first criterion for examining a poll ought to be its transparency. Polling data should include information on who funded the poll, a copy of the instrument, a description of the sampling frame and sampling design (e.g. probability or non-probability), the study size, estimates of sampling error for probability designs, information on any weighting of the data, and how and when the data were collected. These are basic criteria that are necessary to evaluate the quality of a poll.

Clearly, as our symposium on survey methodology in the current issue of Political Analysis discusses, survey methodology is at an important juncture. What’s the future of public opinion polling?

Survey research is a rapidly changing environment with new methods for respondent contacting and responding. Perhaps the biggest change in the most recent decade is the move away from predominantly interviewer-driven data collection methods (e.g. phone, FTF) to respondent-driven data collection methods (e.g. mail, Internet, CASI), the greater use of mixed mode surveys, and the introduction of professional respondents who participate over long periods of time in discontinuous panels. We are just beginning to figure out how all these pieces fit together and we need to come up with better tools to assess the quality of data we are obtaining. The future of polling and its importance in the discipline, in marketing, and in campaigns will continue, and as academics we need to be at the forefront of evaluating these changes and their impact on our data. We tend to brush over the quality of data in favor of massaging the data statistically or ignoring issues of quality and measurement altogether. I’m hoping the changing survey environment will bring more political scientists into an important interdisciplinary debate about public opinion as a methodology as opposed to the study of the frequencies of opinions. To this end, I have a new Oxford Handbook, along with my co-editor Mike Alvarez, on polling and polling methods that will take a closer look at many of these issues and be a helpful guide for current and future projects.

In your recent research on election administration, you use polling techniques as tools to evaluate elections. What have you learned from these studies, and, based on your research, what issues do you see that we might want to pay close attention to in this fall’s midterm elections in the United States?

We’ve learned so much from our election administration work about designing polling places, training poll workers, mixed mode surveys, and, more generally, evaluating the election process. In New Mexico, for example, we have been interviewing both poll workers and voters since 2006, giving us five election cycles, including 2014, that provide an overall picture of the current state of election administration and how it is doing relative to past cycles. Our multi-method approach provides continuous evaluation, review, and improvement of New Mexico elections. This fall there are many interesting questions. We are interested in election reform questions about purging voter registration files, open primaries, the straight-party ballot option, and felon re-enfranchisement. We are also especially interested in how voters decide whether to vote early or on Election Day, and, on Election Day, where they decide to vote if they are using voting convenience centers instead of precincts. This is an important policy question, because where we place vote centers might affect turnout, voter satisfaction, or voter confidence. We are also very interested in election lines and their impact on voters. In 2012 we found that voters can, on average, fairly easily tolerate lines of about half an hour, but feel there are administrative problems when lines grow longer. We want to continue to drill down on this question and examine when lines deter voters or create experiences that reduce the quality of voting.

Lonna Rae Atkeson is Professor of Political Science and Regents’ Lecturer at the University of New Mexico. She is a nationally recognized expert in the area of campaigns, elections, election administration, survey methodology, public opinion and political behavior and has written numerous articles, book chapters, monographs and technical reports on these topics. Her work has been supported by the National Science Foundation, the Pew Charitable Trusts, the JEHT Foundation, the Galisano Foundation, the Bernalillo County Clerk, and the New Mexico Secretary of State. She holds a BA in political science from the University of California, Riverside and a Ph.D. in political science from the University of Colorado, Boulder.

R. Michael Alvarez is a professor of Political Science at Caltech. His research and teaching focuses on elections, voting behavior, and election technologies. He is editor-in-chief of Political Analysis with Jonathan N. Katz.

Political Analysis chronicles the exciting developments in the field of political methodology, with contributions to empirical and methodological scholarship outside the diffuse borders of political science. It is published on behalf of The Society for Political Methodology and the Political Methodology Section of the American Political Science Association. Political Analysis is ranked #5 out of 157 journals in Political Science by 5-year impact factor, according to the 2012 ISI Journal Citation Reports. Like Political Analysis on Facebook and follow @PolAnalysis on Twitter.


The post Improving survey methodology: a Q&A with Lonna Atkeson appeared first on OUPblog.

0 Comments on Improving survey methodology: a Q&A with Lonna Atkeson as of 8/9/2014 7:06:00 AM
Add a Comment
21. What goes up must come down

Biomechanics is the study of how animals move. It is a very broad field, covering concepts such as how muscles are used and even how the timing of respiration is coordinated with movement. Biomechanics dates its beginnings to the 1600s, when Giovanni Alfonso Borelli first began investigating animal movements. More detailed analyses by pioneers such as Etienne Jules Marey and Eadweard Muybridge in the late 1800s examined the individual frames of image sequences of moving animals. These early efforts led to the field known as kinematics, the description of animal movement, but this is only one side of the coin. Kinetics, the study of motion and the forces that cause it, together with kinematics provides a very strong tool for fully understanding the strategies animals use to move, as well as why they move the way they do.

One factor that really changes the way an animal moves is its body size. Small animals tend to have a much more z-shaped leg posture (when viewed laterally) and so are considered more crouched, as their joints are more flexed. Larger animals, on the other hand, have straighter legs; at the extreme (e.g. the elephant), the legs are essentially columnar. This one change in morphology has a significant effect on the way an animal can move.

We know that the environment animals live in is not uniform but is cluttered with many different obstacles that must be overcome to move successfully and survive. One type of terrain that animals frequently encounter is slopes: inclines and declines. Each type of slope imposes different mechanical challenges on the locomotor system. Inclines require much greater work from the muscles to move uphill against gravity. On declines, an animal is moving with gravity, so the limbs need to brake to prevent a headlong rush down the slope. In theory there are many ways an animal can achieve successful locomotion on slopes, but to date there has been no consensus across species or body sizes as to whether animals use similar strategies.
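
As a minimal sketch of why inclines demand extra muscular work, the snippet below computes the mechanical work needed to raise the body against gravity along a slope (W = m g d sin θ). The masses, slope angle, and distance are hypothetical values chosen only for illustration.

    import math

    def work_against_gravity(mass_kg, slope_deg, distance_m, g=9.81):
        # Work (in joules) to raise the centre of mass while travelling
        # distance_m along a slope inclined at slope_deg degrees
        return mass_kg * g * distance_m * math.sin(math.radians(slope_deg))

    # Hypothetical animals: a 0.01 kg insect and a 500 kg horse on a 20-degree incline
    for mass in (0.01, 500):
        w = work_against_gravity(mass, slope_deg=20, distance_m=10)
        print(f"{mass} kg over 10 m: {w:.1f} J of extra work uphill")

On a decline the same quantity becomes energy that the limbs must dissipate by braking rather than work the muscles must generate.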


From the published literature we generated an overview of how animals ranging in size from ants to horses move on slopes. We also investigated how strategies for moving uphill and downhill change with body size, using a traditional method for scaling analyses, as sketched below. What really took us by surprise was the lack of information on how animals move down slopes: there were nearly twice as many studies on inclines as on declines. This is remarkable given that, if an animal climbs up something, it inevitably has to find a way to come back down, either on its own or by having its owner call the fire department out to help!
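
The post does not spell out the fitting procedure, so the following is only a rough sketch of the kind of log-log regression traditionally used in scaling analyses; the body masses and stride lengths are invented numbers, not data from the study.

    import numpy as np

    # Hypothetical body masses (kg) and stride lengths (m) across species
    mass = np.array([0.001, 0.02, 0.5, 5.0, 70.0, 500.0])
    stride = np.array([0.004, 0.03, 0.25, 0.9, 2.1, 4.5])

    # Fit stride = a * mass^b by ordinary least squares on log-log axes
    b, log_a = np.polyfit(np.log10(mass), np.log10(stride), deg=1)
    print(f"scaling exponent b ~ {b:.2f}, coefficient a ~ {10 ** log_a:.2f}")

The exponent b summarises how steeply a trait changes with body size, which is how statements such as "large animals take relatively longer strides downhill" can be compared across species.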

Most animals tend to move more slowly up inclines and keep their limbs in contact with the ground longer; this allows more time for the muscles to generate the work needed to fight gravity. Although larger animals have to do more absolute work than smaller animals to move up inclines, relative stride length did not change with body size or on inclines. Even though there is much less data in the literature on downhill locomotion, we did notice that smaller animals (below roughly 10 kg) seem to use different strategies from large animals: small animals use much shorter strides going downhill than on level terrain, whereas large animals use longer strides. This difference may be due to stability issues that become more problematic (more likely to result in injury) as an animal’s size increases.

Our study highlights the lack of information we have about how size affects non-level locomotion and emphasises what future work should focus on. We really do not have any idea of how animals deal with stability issues going downhill, nor whether both small and large animals are capable of moving downhill without injuring themselves. It is clear that body size is important in determining the strategies an animal will use as it moves on inclines and declines. Gaining a better understanding of this relationship will be crucial for demonstrating how these mechanical challenges have affected the evolution of the locomotor system and the diversification of animals into various ecological niches.

Image credit: Mountain goat, near Masada, by mogos gazhai. CC-BY-2.5 via Wikimedia Commons.

The post What goes up must come down appeared first on OUPblog.

0 Comments on What goes up must come down as of 8/15/2014 9:24:00 AM
Add a Comment
22. Publishing tips from a journal editor: selecting the right journal

One of the most common questions that scholars confront is trying to find the right journal for their research papers. When I go to conferences, often I am asked: “How do I know if Political Analysis is the right journal for my work?”

This is an important question, in particular for junior scholars who don’t have a lot of publishing experience — and for scholars who are nearing important milestones (like contract renewal, tenure, and promotion). In a publishing world where it may take months for an author to receive an initial decision from a journal, and then many additional months if they need to revise and resubmit their work to one or more subsequent journals, selecting the most appropriate journal can be critical for professional advancement.

So how can a scholar try to determine which journal is right for their work?

The first question an author needs to ask is how suitable their paper is for a particular journal. When I meet with my graduate students, and we talk about potential publication outlets for their work, my first piece of advice is that they should take a close look at the last three or four issues of the journals they are considering. I’ll recommend that they look at the subjects that each journal is focusing on, including both substantive topics and methodological approaches. I also tell them to look closely at how the papers appearing in those journals are structured and how they are written (for example, how long the papers typically are, and how many tables and figures they have). The goal is to find a journal that is currently publishing papers that are most closely related to the paper that the student is seeking to publish, as assessed by the substantive questions typically published, the methodological approaches generally used, paper framing, and manuscript structure.

Potential audience is the second consideration. Different journals have different readers, which means that authors have some control over who might be exposed to their paper when they decide which journals to target for their work. This is particularly true for authors working on highly interdisciplinary projects, where they might be able to frame their paper for publication in related but different academic fields. In my own work on voting technology, for example, some of my recent papers have appeared in journals whose primary audience is in computer science, while others have appeared in more typical political science journals. So authors need to decide in many cases which audience they want to appeal to, and make sure that when they submit their work to a journal serving that audience, the paper is written in a manner appropriate for that journal.

Peer reviewer for Scientific Review by Center for Scientific Review. Public domain via Wikimedia Commons.

However, most authors will want to concentrate on journals in a single field. For those papers, a third question arises: whether to target a general interest journal or a more specialized field journal. This is often a very subjective question, as it is quite hard to know before submission whether a particular paper will interest the editors and reviewers of a general interest journal. Because general interest journals often have higher impact factors (I’ll say more about impact factors next), many authors will be drawn to submit their papers to general interest journals even when that is not the best strategy for their work. Many authors will “start high”, that is, begin with general interest journals, and then, once the rejection letters pile up, move to the more specialized field journals. While this strategy is understandable (especially for authors nearing promotion or tenure deadlines), it may also be counterproductive: the author will likely face a long and frustrating process getting their work published if they submit first to general interest journals, collect the inevitable rejections, and only then move to specialized field journals. Thus, my advice (and my own practice with my work) is to avoid that approach and to be realistic about the appeal of the particular research paper. That is, if your paper is going to appeal only to readers in a narrow segment of your discipline, send it to the appropriate specialized field journal.

A fourth consideration is the journal’s impact factor. Impact factors play an increasingly important role in many professional decisions, and they may be a consideration for many authors. Other things being equal, an author should seek to publish in higher-impact journals. But again, authors should be realistic about their work and make sure that, regardless of the journal’s impact factor, their submission is appropriate for the journal they are considering.
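
For authors unsure what an impact factor actually measures, here is a small illustration of the standard two-year calculation; the citation and article counts are made up, and real figures come from the Journal Citation Reports.

    # Hypothetical journal: invented counts for illustration only
    items_published = {2012: 40, 2013: 45}   # citable items in the two prior years
    citations_received_2014 = 230            # 2014 citations to those items

    impact_factor_2014 = citations_received_2014 / sum(items_published.values())
    print(f"2014 impact factor: {impact_factor_2014:.2f}")  # 230 / 85 ~ 2.71

The five-year variant, such as the one quoted for Political Analysis later in this post, works the same way over a five-year window.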

Finally, authors should always seek the input of their faculty colleagues and mentors if they have questions about selecting the right journal. And in many fields, journal editors, associate editors, and members of the journal’s editorial board will often be willing to give an author some quick and honest advice about whether a particular paper is right for their journal. While many editors shy away from giving prospective authors advice about a potential submission, giving authors some brief and honest advice can actually save the editor and the journal a great deal of time. It may be better to save the author (and the journal) the time and effort that might get sunk into a paper that has little chance at success in the journal, and help guide the author to a more appropriate journal.

Selecting the right journal for your work is never an easy process. All scholars would like to see their work published in the most widely read and highest impact factor journals in their field. But very few papers end up in those journals, and authors can get their work into print more quickly and with less frustration if they first make sure their paper is appropriate for a particular journal.

Heading image: OSU William Oxley Thompson Memorial Library Stacks by Ibagli. Public Domain via Wikimedia Commons.

The post Publishing tips from a journal editor: selecting the right journal appeared first on OUPblog.

0 Comments on Publishing tips from a journal editor: selecting the right journal as of 8/17/2014 8:01:00 AM
Add a Comment
23. Miles Davis’s Kind of Blue

What is a classic album? Not a classical album – a classic album. One definition would be a recording that is both of superb quality and of enduring significance. I would suggest that Miles Davis’s 1959 recording Kind of Blue is indubitably a classic. It presents music making of the highest order, and it has influenced — and continues to influence — jazz to this day.

Cover art for Kind of Blue by the artist Miles Davis (c) Columbia Records via Wikimedia Commons.

There were several important records released in 1959, but no event or recording matches the importance of the release of the new Miles Davis album Kind of Blue on 17 August 1959. There were people waiting in line at record stores to buy it on the day it appeared. It sold very well from its first day, and it has sold increasingly well ever since. It is the best-selling jazz album in the Columbia Records catalogue, and at the end of the twentieth century it was voted one of the ten best albums ever produced.

But popularity and commercial success do not necessarily correlate with musical worth, and it is in the music on the recording that we find both quality and significance. From the very first notes we know we are hearing something new. Piano and bass draw the listener into a new world of sound: contemplative, dreamy, and yet intense.

The pianist here is Bill Evans, who was new to Davis’s band and a vital contributor to the whole project. Evans played spaciously and had an advanced harmonic sense. His sound was floating and open. The lighter sound and less crowded manner were more akin to the understated way in which Davis himself played. “He plays the piano the way it should be played,” said Davis about Bill Evans. And although Davis’s speech was often sprinkled with blunt Anglo-Saxon expressions, he waxed poetic about Evans’s playing: “Bill had this quiet fire. . . . [T]he sound he got was like crystal notes or sparkling water cascading down from some clear waterfall.” The admiration was mutual. Evans thought of Davis and the other musicians in his band as “superhumans.”

Evans makes his mark throughout the album, though Wynton Kelly substitutes for him on the bluesier and somewhat more traditional second track “Freddie Freeloader.”

Musicians refer to the special sound on Kind of Blue as “modal,” and the term “modal jazz” is often found in writings about jazz styles and jazz history. What exactly is modal jazz? Two characteristic features set this style apart. The first is the use of scales different from the standard major and minor ones, so the first secret of the album’s special sound is its unusual scales. The second characteristic is even more noticeable: the music is grounded on long passages of unchanging harmony. “So What” is in AABA form, with each A section built on a single harmony and the B section on a different harmony a half step higher:

A [D harmony]
A [D harmony]
B [Eb harmony]
A [D harmony]
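
The A-section harmony of “So What” is commonly identified as D Dorian, with the bridge a half step higher in E-flat Dorian. As a rough, non-authoritative illustration of what such a mode is, the sketch below spells a Dorian scale on any root by walking the Dorian step pattern; note names are simplified to flats.

    NOTES = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']
    DORIAN_STEPS = [2, 1, 2, 2, 2, 1]  # semitone steps between successive scale degrees

    def dorian_scale(root):
        # Seven notes of the Dorian mode built on the given root
        pitch = NOTES.index(root)
        scale = [root]
        for step in DORIAN_STEPS:
            pitch = (pitch + step) % 12
            scale.append(NOTES[pitch])
        return scale

    print(dorian_scale('D'))   # ['D', 'E', 'F', 'G', 'A', 'B', 'C']  -- the A sections
    print(dorian_scale('Eb'))  # ['Eb', 'F', 'Gb', 'Ab', 'Bb', 'C', 'Db']  -- the bridge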

Unusual scales are most clearly heard on “All Blues.”

And for hypnotic and meditative, you can’t do better than “Flamenco Sketches,” the last track, which brings the modal conception to its most developed point. It is based upon five scales or modes, and each musician improvises in turn upon all five in order. A clear analysis of this track is given in Mark Gridley’s excellent jazz textbook Jazz Styles.

An aside here:
It is possible — even likely — that the titles of these two tracks are reversed. In my Musical Quarterly article (link below), I suggest that “Flamenco Sketches” is the correct title for the strumming medium-tempo music on the track that is now known as “All Blues” and that “All Blues” is the correct title for the last, very slow, track on the album. I also show how the mixup occurred in 1959, just as the album was released.

Perhaps the most beautiful piece on the album is the Evans composition “Blue in Green,” for which Coltrane fashions his greatest and most moving solo. Of the five tracks on the album, four are quite long, ranging from nine to eleven and a half minutes, and they are placed two before and two after “Blue in Green.” Regarding the program as a whole, therefore, one sees “Blue in Green” as the small capstone of a musical arch. But “Blue in Green” itself is in arch form, with a palindromic arrangement of the solos. The capstone of this arch upon an arch is the thirty seconds or so of Coltrane’s solo.

                      Saxophone (Coltrane)
                 Piano                 Piano
            Trumpet                         Trumpet
       Piano                                     Piano
                       “Blue in Green”
    “Freddie Freeloader”                  “All Blues”
“So What”                              “Flamenco Sketches”
                        Kind of Blue

The great strength of Kind of Blue lies in the consistency of its inspiration and the palpable excitement of its musicians. “See,” wrote Davis in his autobiography, “If you put a musician in a place where he has to do something different from what he does all the time . . . that’s where great art and music happens.”

The post Miles Davis’s Kind of Blue appeared first on OUPblog.

0 Comments on Miles Davis’s Kind of Blue as of 8/17/2014 10:53:00 AM
Add a Comment
24. A Fields Medal reading list

One of the highest points of the International Congress of Mathematicians, currently underway in Seoul, Korea, is the announcement of the Fields Medal prize winners. The prize is awarded every four years to up to four mathematicians under the age of 40, and is viewed as one of the highest honours a mathematician can receive.

This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.

We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for International Mathematics Research Notices. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.

“Ergodic Theory of the Earthquake Flow” by Maryam Mirzakhani, published in International Mathematics Research Notices

Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle PMg of geodesic measured laminations.

“Ergodic Theory of the Space of Measured Laminations” by Elon Lindenstrauss and Maryam Mirzakhani, published in International Mathematics Research Notices

A classification of locally finite invariant measures and orbit closure for the action of the mapping class group on the space of measured laminations on a surface.

“Mass Formulae for Extensions of Local Fields, and Conjectures on the Density of Number Field Discriminants” by Manjul Bhargava, published in International Mathematics Research Notices

Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field F having a given degree n.

“Model theory of operator algebras” by Ilijas Farah, Bradd Hart, and David Sherman, published in International Mathematics Research Notices

Several authors, some of them speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II1 factor depend on the choice of the ultrafilter.

“Small gaps between products of two primes” by D. A. Goldston, S. W. Graham, J. Pintz, and C. Y. Yildirim, published in Proceedings of the London Mathematical Society

Speaking on the subject at the International Congress, Dan Goldston and colleagues prove several results relating to the representation of numbers with exactly two prime factors by linear forms.

“On Waring’s problem: some consequences of Golubeva’s method” by Trevor D. Wooley, published in the Journal of the London Mathematical Society

Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers, concentrating on situations inaccessible to the Hardy-Littlewood method.

 

Image credit: (1) Inner life of human mind and maths, © agsandrew, via iStock Photo. (2) Maryam Mirzakhani 2014. Photo by International Mathematical Union. Public Domain via Wikimedia Commons.

The post A Fields Medal reading list appeared first on OUPblog.

0 Comments on A Fields Medal reading list as of 8/18/2014 4:03:00 AM
Add a Comment
25. The First World War and the development of international law

On 28 June 1914, Archduke Franz Ferdinand and his wife Sophie, Duchess of Hohenberg, were assassinated in Sarajevo, setting off a six-week diplomatic crisis that resulted in the start of the First World War. The horrors of that war, from chemical weapons to civilian casualties, led to the first forays into modern international law. The League of Nations was established to prevent future international crises, and a Permanent Court of International Justice was created to settle disputes between nations. While these measures did not prevent the Second World War, this vision of a common law for all humanity remains essential to international law today. To mark the centenary of the start of the Great War, and to better understand how international law arose from it, we’ve compiled a brief reading list.

The Oxford Handbook of the History of International Law, Edited by Bardo Fassbender, Anne Peters, and Simone Peter

How did international law develop from the 15th century until the end of World War II? This 2014 ASIL Certificate of Merit winner looks at the history of international law in relation to themes such as peace and war, the sovereignty of states, hegemony, and the protection of the individual person. It includes Milos Vec’s ‘From the Congress of Vienna to the Paris Peace Treaties of 1919’ and Peter Krüger’s ‘From the Paris Peace Treaties to the End of the Second World War’.

Formalizing Displacement: International Law and Population Transfers by Umut Özsu

A detailed study of the 1922-34 exchange of minorities between Greece and Turkey, supported by the League of Nations, in which two million people were forcibly relocated. Check out the specific chapters on Wilson and international law; US jurisprudence and international law in the wake of WWI; and the failed marriage of the US and the League of Nations and America’s retreat into isolationism through WWII.

The Birth of the New Justice: The Internationalization of Crime and Punishment, 1919-1950 by Mark Lewis

How could the world repress aggressive war, war crimes, terrorism, and genocide in the wake of the First World War? Mark Lewis examines attempts to create specific criminal justice courts to address these crimes, and the competing ideologies behind them.

A History of Public Law in Germany 1914-1945 by Michael Stolleis, Translated by Thomas Dunlap

How did the upheaval of the first half of the 20th century impact the creation of public law within and across states? Germany offers an interesting case given its central role in many of the events.

“Neutrality and Multilateralism after the First World War” by Aoife O’Donoghue in the Journal of Conflict and Security Law

What exactly did ‘neutrality’ mean before, during, and after the First World War? The newly independent Ireland exemplified many of the debates surrounding neutrality and multilateralism.

The Signing of Peace in the Hall of Mirrors, Versailles, 28th June 1919 by William Orpen. Imperial War Museum. Public domain via Wikimedia Commons.

“What is Aggression? : Comparing the Jus ad Bellum and the ICC Statute” by Mary Ellen O’Connell and Mirakmal Niyazmatov in the Journal of International Criminal Justice

The Treaty of Versailles marked the first significant attempt to hold an individual — Kaiser Wilhelm — accountable for unlawful resort to major military force. Mary Ellen O’Connell and Mirakmal Niyazmatov discuss the prohibition on aggression, the Jus ad Bellum, the ICC Statute, successful prosecution, Kampala compromise, and protecting the right to life of millions of people.

“Delegitimizing Aggression: First Steps and False Starts after the First World War” by Kirsten Sellars in the Journal of International Criminal Justice

Following the First World War, there was a general movement in international law towards the prohibition of aggressive war. So why is there an absence of legal milestones marking the advance towards the criminalization of aggression?

“The International Criminal Tribunal for the Former Yugoslavia: The Third Wang Tieya Lecture” by Mohamed Shahabuddeen in the Chinese Journal of International Law

What is the bridge between the International Military Tribunal, formed following the Treaty of Versailles, and the International Criminal Tribunal for the former Yugoslavia? Mohamed Shahabuddeen examines the first traces of the development of international criminal justice before the First World War and today’s ideas of the responsibility of the State and the criminal liability of the individual.

“Collective Security, Demilitarization and ‘Pariah’ States” by David J. Bederman in the European Journal of International Law

When are sanctions doomed to failure? David J. Bederman analyzes the historical context of the demilitarization sanctions imposed against Iraq in the aftermath of the Gulf War of 1991 from the 1919 Treaty of Versailles through to the present day.

“Peace Treaties after World War I” by Randall Lesaffer and Mieke van der Linde in the Max Planck Encyclopedia of Public International Law

How did legal terminology and provisions concerning hostilities, prisoners of war, and other wartime-related concerns change following the introduction of modern warfare during the First World War?

“League of Nations” by Christian J Tams in the Max Planck Encyclopedia of Public International Law

What lessons does the first body of international law hold for the United Nations and individual nations today?

“Alliances” by Louise Fawcett in the Max Planck Encyclopedia of Public International Law

Peace was once ensured through a complex web of diplomatic alliances. However, those same alliances proved fatal as they ensured that various European nations and their empires were dragged into war. How did the nature of alliances between nations change following the Great War?

“International Congress of Women (1915)” by Freya Baetens in the Max Planck Encyclopedia of Public International Law

In the midst of tremendous suffering and loss, suffragists continued to march and protest for the rights of women. How did the First World War hinder the women’s suffrage movement, and how did it change many of the demands and priorities of the suffragists?

“History of International Law, World War I to World War II” by Martti Koskenniemi in the Max Planck Encyclopedia of Public International Law

A brief overview of the development of international law during the interwar period: where there was promise, and where there was failure.
 
Headline image credit: Stanley Bruce chairing the League of Nations Council in 1936. Joachim von Ribbentrop is addressing the council. Bruce Collection, National Archives of Australia. Public domain via Wikimedia Commons.

The post The First World War and the development of international law appeared first on OUPblog.

0 Comments on The First World War and the development of international law as of 8/18/2014 6:57:00 AM
Add a Comment

View Next 25 Posts