Modern society requires a reliable and trustworthy Internet infrastructure. To achieve this goal, cybersecurity research has previously drawn from a multitude of disciplines, including engineering, mathematics, and social sciences, as well as the humanities. Cybersecurity is concerned with the study of the protection of information – stored and processed by computer-based systems – that might be vulnerable to unintended exposure and misuse.
Sore throats are an inevitable part of childhood, no matter where in the world one lives. However for those children living in poor, under-resourced and marginalised societies of the world, this could mean a childhood either cut short by crippling heart failure or the need for open-heart surgery.
Migrant farmworkers plant and pick most of the fruits and vegetables that you eat. Seasonal crop farmers, who employ workers only a few weeks of the year, rely on workers who migrate from one job to another. However, farmers’ ability to rely on migrants to fill their seasonal labor needs is in danger. From 1989 through 1998, roughly half of all seasonal crop farmworkers migrated, that is, traveled at least 75 miles for a U.S. job. Since then, the share of workers who migrate has dropped by more than half, hitting 18% in 2012.
January saw the critically acclaimed and award-winning Broadchurch return to our TV screens for a second series. There was a publicity blackout in an attempt to prevent spoilers or leaks; TV critics were not sent the usual preview DVDs. The opening episode sees Joe Miller plead not guilty to the murder of Danny Latimer, a shock as the previous season’s finale ended with his admission of guilt. The change of plea means that the programme shifts from police procedural to courtroom drama – both staples of the TV schedules. Witnesses have to give evidence, new information is revealed through cross-examination, and old scores are settled by witnesses and barristers.
Meet Professor Adam Timmis, the Editor in Chief of the latest member of the European Society of Cardiology journal family, the European Heart Journal -- Quality of Care and Clinical Outcomes (EHJ-QCCO). We spoke to Timmis about how he became involved in cardiology, the challenges and developments in his field, and his plans for EHJ-QCCO.
A few months ago, we asked you to tell us about the work you’re doing. Many of you responded, so for the next few months, we’re going to be publishing reflections, stories, and difficulties faced by fellow oral historians. This week, we bring you the first post in this series, focusing on a multimedia project from Mark Larson. We encourage you to engage with these posts by leaving comments on the post or on social media, or by reaching out directly to the authors.
Renowned English cosmologist Stephen Hawking has made his name through his work in theoretical physics and as a bestselling author. His life – his pioneering research, his troubled relationship with his wife, and the challenges imposed by his disability – is the subject of a poignant biopic, The Theory of Everything. Directed by James Marsh, the film stars Eddie Redmayne, who has garnered widespread critical acclaim for his moving portrayal.
The scraps of an archive often speak in ways that standard histories cannot. In 2005, I spent my days at the Paul Sacher Foundation in Basel, a leading archive for twentieth-century concert music, where I transcribed the papers of the German-Jewish émigré composer Stefan Wolpe (1902-1971). The task was alternately exhilarating and grim. Wolpe had made fruitful connections with creators and thinkers across three continents, from Paul Klee to Anton Webern to Hannah Arendt to Charlie Parker. An introspective storyteller and exuberant synthesizer of ideas, Wolpe narrated a history of modernism in migration as a messy, real-time chronicle in his correspondence and diaries. Yet, within this narrative, the composer had also reckoned with more than his share of death and loss as a multiply-displaced Nazi-era refugee. He had preserved letters from friends as symbols of the ties that had sustained him, in some cases carrying them over dozens of precarious border crossings during his 1933 flight. By the 1950s, his circumstances had calmed down, after he had settled in New York following some years in Mandatory Palestine. Amidst his mid-century papers, I was surprised to come across a cache of artfully spaced poems typewritten on thick leaves of paper, with the attribution “Yoko Ono.” The poems included familiar, stark images of death, desolation, and flight. It was only later that I realized they responded not to Wolpe’s life history, but likely to Ono’s own. The poems inspired a years-long path of research that culminated in my article, “Limits of National History: Yoko Ono, Stefan Wolpe, and Dilemmas of Cosmopolitanism,” recently published in The Musical Quarterly.
Yoko Ono befriended Stefan Wolpe and his wife the poet Hilda Morley in New York City around 1957. Although of different backgrounds and generations, Wolpe and Ono were both displaced people in a city of immigrants. Both had been wartime refugees, and both endured forms of national exile, though in different ways. Ono had survived starvation conditions as an internal refugee after the Tokyo firebombings. She was twelve when her family fled the city to the countryside outside Nagano, while her father was stranded in a POW camp. By then, she had already felt a sense of cultural apartness, since she had spent much of her early childhood shuttling back and forth between Japan and California, following her father’s banking career. When she began her own career as an artist in New York in the 1950s, Ono entered what art historian Midori Yoshimoto has called a gender-based exile from Japan. Her career and lifestyle clashed with a society where there were “few alternatives to the traditional women’s role of becoming ryōsai kenbo (good wives and wise mothers).” Though Ono eventually became known primarily as a performance and visual artist, she identified first as a composer and poet. After she moved to the city to pursue a career in the arts, Ono’s family disowned her. It was around this time that she befriended the Wolpe-Morleys, who often hosted her at their Upper West Side apartment, where she “loved the intellectual, warm, and definitely European atmosphere the two of them had created.”
In 2008, I wrote a letter to Ono, asking her about the poems in Wolpe’s collection. Given her busy schedule, I was surprised to receive a reply within a week. She confirmed that she had given the poems to Wolpe and Morley in the 1950s. She also shared other poems and prose from her early adulthood, alongside a written reminiscence of Wolpe and Morley. She later posted this reminiscence on her blog several months before her 2010 exhibit “Das Gift,” an installation in Wolpe’s hometown Berlin dedicated to addressing histories of violence. The themes of the installation trace back to the earliest phase of her career when she knew Wolpe. During their period of friendship, both creators devoted their artistic projects to questions of violent history and traumatic memory, refashioning them as a basis for rehabilitative thought, action, and community.
Virtually no historical literature acknowledges Ono’s and Wolpe’s connection, which was premised on shared experiences of displacement, exile, and state violence. Their affiliation remains virtually unintelligible to standard art and music histories of modernism and the avant-garde, which tend to segregate their narratives along stable lines of genre, medium, and nation—by categories like “French symbolist poetry,” “Austro-German Second Viennese School composition,” and “American experimental jazz.” From this narrow perspective, Wolpe the German-Jewish, high modernist composer would have little to do with Ono the expatriate Japanese performance artist.
What do we lose by ignoring such creative bonds forged in diaspora? Wolpe and Ono both knew what it was to be treated as less than human. They had both felt the hammer of military state violence. They both knew what it was to not “fit” in the nation—to be neither fully American, Japanese, nor German. And they both directed their artistic work toward the dilemmas arising from these difficult experiences. The record levels of forced displacement during their lifetimes have not ended, but have only risen in our own. According to the most recent report from the UN High Commissioner for Refugees, “more people were forced to flee their homes in 2013 than ever before in modern history.” Though the arts cannot provide refuge, they can do healing work by virtue of the communities they sustain, with the call-and-response of human recognition exemplified in boundary-crossing friendships like Wolpe’s and Ono’s. And to recognize such histories of connection is to recognize figures of history as fully human.
Headline image credit: Cards and poems made for Yoko Ono’s Wish Tree and sent to Hirshhorn Museum and Sculpture Garden (Washington), 7 November 2010. Photo by Gianpiero Actis & Lidia Chiarelli. CC BY 3.0 via Wikimedia Commons.
Global Summitry is a new journal published by Oxford University Press in association with University of Toronto’s Munk School of Global Affairs and Rotman School of Management. The journal features articles on the organization and execution of global politics and policy. The first issue is slated to publish in summer 2015. We sat down with editors Alan Alexandroff and Don Brean to discuss the changing global summitry field and their plans for the journal’s digital scope, including audio podcasts, and videos.
* * * * *
What new approaches will Global Summitry bring to its field?
Global Summitry is concerned with examining today’s international governance in all of its dimensions. The Journal, it is hoped, will describe, analyse, and evaluate the evolution, the contemporary setting, and the future of collaboration in the global order. Global Summitry has emerged to capture contemporary global policy-making in all its complexity.
Global Summitry is dedicated to raising public knowledge of the global order and its policy outcomes. The Journal seeks to bring informed commentary and analysis to the process and, more particularly, to the outcomes of global summitry. Global Summitry will feature articles on the organization and execution of global politics and policy from a variety of perspectives — political, historical, economic, and legal — from academics, policy experts, and media personnel, as well as from distinguished officials and professionals in the field.
How has the field changed in the last 25 years?
There has been dramatic change in the global order and its actors. The end of the Cold War and the demise of the Soviet Union left the United States as the sole superpower and saw the rise of global governance under its primary leadership. Increasingly, the problems of the international system focused on questions of growing economic and political interdependence. Alongside the formal institutions of international organization — the UN and Bretton Woods systems — new informal institutions — the G7/8, APEC, EAS, NSS, and the G20 — emerged to meet the growing challenges: climate change, human rights and justice, nuclear material security, global poverty and development, and global security. And alongside the traditional great powers, we saw the emergence of new large emerging-market states, like Brazil and India, but most spectacularly, China.
Today global governance involves a variety of actors — international organizations, both formal, and informal, states, transgovernmental networks, and select non-state entities. All of these actors are involved in the organization and execution of global politics and policy today. Global summitry today is concerned with the architecture, the institutions and, most critically, the political behavior and outcomes in coordinated global initiatives. We will reach out to scholars from all across the globe from the traditional academic centers, to the new centers in the BRICS and the New Frontier states for commentary and insights into the global order.
What do you hope to see in the coming years from both the field and the journal?
The global summitry field will chronicle, we hope, how international governance meets the challenges of economic and political interdependence. But attention will also be directed to understanding how we meet the growing geopolitical tensions that have appeared — conflict with Russia in Europe, new tensions in East Asia, and growing disorder in the Middle East — which have created consequences well beyond those regions. Global Summitry will bring expert description, analysis, and evaluation to a field that until now has not been a stand-alone focus of inquiry by researchers, policy analysts, media, and officials from across the globe.
What are your plans to innovate and engage with your audience?
We see a multi-platform world evolving for all academic publishing. As a result, from the commencement of Global Summitry, we intend to present information through all contemporary digital means. The Journal intends to provide a steady stream of academic and policy articles, of course, but we are determined to offer video interviews with our experts, policy makers, and media guests. We also intend to provide podcast presentations and discussions. As various digital platforms evolve, we anticipate evolving as well.
February is Heart Month in both the United States and the United Kingdom. It is a time to raise awareness of heart and circulatory diseases. Heart Month highlights all forms of heart disease, from certain life-threatening heart conditions that individuals are born with, to heart attacks and heart failure in later life.
To mark Heart Month, we have created the interactive image below to demonstrate different cardiovascular and thoracic surgical procedures from the Multimedia Manual of Cardio-Thoracic Surgery, selected by the Editor-in-Chief, Professor Marko I. Turina. Cardio-thoracic surgery deals specifically with the treatment of diseases affecting organs inside the chest, principally the heart and lungs. Heart conditions that may be treated through cardio-thoracic surgery include heart valve disease and congenital heart disease.
Heading image: Human heart and circulatory system by Bryan Brandenburg. CC BY-SA 3.0 via Wikimedia Commons.
Many attempts have been made to explain the historic and current lack of women working in STEM fields. During her two years of service as Director of Policy Planning for the US State Department, from 2009 to 2011, Anne-Marie Slaughter suggested a range of strategies for corporate and political environments to better support women at work, spanning from social-psychological interventions to the introduction of role models and self-affirmation practices. Slaughter has written and spoken extensively on the topic of equality between men and women. Beyond abstract policy change, and continuing our celebration of women in STEM, here are practical tips and guidance for young women pursuing a career in Science, Technology, Engineering, or Mathematics.
(1) Be open to discussing your research with interested people.
From in-depth discussions at conferences in your field to a quick catch up with a passing colleague, it can be endlessly beneficial to bounce your ideas off a range of people. New insights can help you to better understand your own ideas.
(2) Explore research problems outside of your own.
Looking at problems from multiple viewpoints can add huge value to your original work. Explore peripheral work, look into the work of your colleagues, and read about the achievements of people whose work has influenced your own. New information has never been so discoverable and accessible as it is today. So, go forth and hunt!
(3) Collaborate with people from different backgrounds.
The chance of two people having read exactly the same works in their lifetimes is negligible, so teaming up with others is guaranteed to bring you new ideas and perspectives you might never have found alone.
(4) Make sure your research is fun and fulfilling.
As with any line of work, if it stops being enjoyable, your performance can be at risk. Even highly self-motivated people have off days, so look for new ways to motivate yourself and drive your work forward. Sometimes this means taking some time to investigate a new perspective or angle from which to look at what you are doing. Sometimes this means allowing yourself time and distance from your work, so you can return with a fresh eye and a fresh mind!
(5) Surround yourself with friends who understand your passion for scientific research.
The life of a researcher can be lonely, particularly if you are working in a niche or emerging field. Choose your company wisely, ensuring your valuable time is spent with friends and family who support and respect your work.
Image Credit: “Board” by blickpixel. Public domain via Pixabay.
Life is the most exquisite natural outcome on our planet, arising as an evolutionary experiment that has persisted since the formation of this planet 4.5 billion years ago.
The enormous biodiversity we see today represents only a small fraction of life that has existed on earth. As the most intelligent (and probably lucky!) species, we humans, with our unique and conscious minds, have never stopped inquiring where we came from.
By unearthing and examining fossils—petrified remains left behind by prehistoric organisms—paleontology deciphers the biological messages of past organisms.
Fossils were discovered early in human history, and for over 2,000 years their meaning was interpreted in various ways by Western thinkers and Chinese naturalists alike.
Paleontology as a scientific discipline took shape in 18th-century Europe and grew quickly during the 19th century. After Charles Darwin published On the Origin of Species in 1859, paleontology as a school of natural sciences refocused on understanding the evolutionary path of life.
Along with developments in geology, biology, and modern technology in the 19th and 20th centuries, the traditional practice of paleontology using morphology, taxonomy, and biochronology evolved into a form that is equipped with multidisciplinary approaches and is technically and methodologically sophisticated. New concepts, theories, and methods that developed along with the appearance and progress of plate tectonics, radiometric dating, stable isotopic studies, and molecular biology are now woven into the fabric of traditional paleontology.
Searching for the mechanisms behind the diversification of life, mass extinctions, and the paleoenvironmental background, paleontology has been brought to a new stage in which organisms and their surroundings have become a single multifaceted research subject commonly tackled by joint international teamwork.
Paleontology in China has blossomed into a strong research enterprise during the last two decades, thanks to an enriched intellectual atmosphere, the energy of a promising economy, and the groundwork laid by generations of scientists.
China contains rich and unique fossil resources, such as the Precambrian Weng’an Biota, the early Cambrian Chengjiang Biota, and the Early Cretaceous Jehol Biota, to name but a few. Numerous important fossils, some of which are considered to be ‘missing links’ in the chain of organismal evolution, have been discovered in the strata of various geologic time intervals.
Research on these fossils has significantly advanced our knowledge of the history of life as a whole.
Discoveries are forever, and our efforts to search for the history of life are endless. What has been achieved in paleontology in China is undoubtedly superb, but it is only the opening statement of an influential speech; much remains to be said in the decades to come. The great potential for research opportunities needs to be cultivated and numerous scientific problems remain to be solved. Looking into the history of life, we see a bright future for the study of paleontology in China and the rest of the world.
Image Credit: “Death Throw.” Photo by Mike Beauregard. CC BY 2.0 via Flickr.
Today’s data scientist must know how to write good code. Whether they are working with a commercial off-the-shelf statistical software package, R, Python, or Perl, good coding practices are required. Large and complex datasets need lots of manipulation to wrangle them into shape for analytics, statistical estimation often is complex, and presentation of complicated results sometimes requires writing lots of code. To make sure that code is understandable to the author and to others, good coding practices are essential.
Many who teach methodology, statistics, and data science are increasingly teaching their students how to write good computer code. As a practical matter, if a professor requires that students turn in their code for a problem set, that code needs to be well-crafted to be legible to the instructor. But as increasing numbers of our students write and distribute their code and software tools to the public, professionally we need to do more to train students to write good code. Finally, good code is critical for research replication and transparency — if you can’t understand someone’s code, it might be difficult or impossible to reproduce their analysis.
When I first started teaching methods to graduate students, there was little in the methodological literature that I found useful for teaching graduate students good coding practices. But in 1995, my colleague Jonathan Nagler wrote out some great guidance on good methodological practices, in particular guidelines for good coding style. His piece is available online (“Coding Style and Good Computing Practices”), and his advice from 1995 is as relevant today as it was then. I use Jonathan’s guidelines in my graduate teaching.
Over the past few years, as Political Analysis has focused resources on research replication and transparency, it’s become clear that we need to develop better guidance for researchers and authors regarding how to write good code. One of the biggest issues that we run into when we review replication materials that are submitted to the journal is poor documentation and unclear code; and if we can’t figure out how the code works, I’m sure that our readers will have the same problem.
We’ve been thinking of developing some guidelines for the documentation of replication materials, and standards for coding practices. As part of that research, I asked Jonathan if he would write an update of his 1995 essay and reflect on how good computing practices have evolved since then. His thoughts are below, and I encourage readers to also read Jonathan’s original 1995 essay.
* * * * *
Coding style and good computing practices: it is easy to get the style right, harder to get good practice, by Jonathan Nagler, NYU
Many years ago I was prompted to write Coding Style and Good Computing Practices, an article laying out guidelines for coding style for political scientists. The article was reprinted in a symposium on replication in PS (September 1995, Vol. 28, No. 3, 488-492). According to Google Scholar, it has rarely been cited, but I’m convinced it has been read quite often, because I’ve seen some of its idiosyncratic suggestions in the code of other political scientists. Still, re-reading the article reminds me how many people have not read it, or have just ignored it.
Here is a list of basic points reproduced from that article:
Command files: they should be kept.
Data-manipulation vs. data-analysis: these should be in distinct files.
Keep tasks compartmentalized (‘modularity’).
Know what the code is supposed to do before you start.
Don’t be too clever.
Variable names should mean something.
Use parentheses and white-space to make code readable.
Documentation: all code should include comments meaningful to others.
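To make these points concrete, here is a minimal, hypothetical Python sketch (the variable names, survey codes, and recode scheme are invented for illustration, not taken from the original article): data manipulation and data analysis live in separate functions, names mean something, and comments explain intent.

```python
def recode_party_id(raw_value):
    """Data manipulation: map a raw 7-point party ID to a 3-category label."""
    if raw_value in (1, 2, 3):
        return "democrat"
    if raw_value == 4:
        return "independent"
    if raw_value in (5, 6, 7):
        return "republican"
    return None  # preserve missing data explicitly rather than guessing


def tabulate(values):
    """Data analysis: frequency counts, kept apart from the recoding step."""
    counts = {}
    for value in values:
        counts[value] = counts.get(value, 0) + 1
    return counts


raw_responses = [1, 4, 7, 2, 9]  # 9 is a (hypothetical) missing-data code
recoded = [recode_party_id(v) for v in raw_responses]
print(tabulate(recoded))
```

In a real project the two functions would sit in distinct files, per the data-manipulation vs. data-analysis rule; they are shown together here only to keep the sketch self-contained.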
And I concluded with a list of rules:
Maintain a labbook from the beginning of a project to the end.
Code each variable so that it corresponds as closely as possible to a verbal description of the substantive hypothesis the variable will be used to test.
Errors in code should be corrected where they occur and the code re-run.
Separate tasks related to data-manipulation vs data-analysis into separate files.
Each program should perform only one task.
Do not try to be as clever as possible when coding. Try to write code that is as simple as possible.
Each section of a program should perform only one task.
Use a consistent style regarding lower and upper case letters.
Use variable names that have substantive meaning.
Use variable names that indicate direction where possible.
Use appropriate white-space in your programs, and do so in a consistent fashion to make them easy to read.
Include comments before each block of code describing the purpose of the code.
Include comments for any line of code if the meaning of the line will not be unambiguous to someone other than yourself.
Rewrite any code that is not clear.
Verify that missing data is handled correctly on any recode or creation of a new variable.
After creating each new variable or recoding any variable, produce frequencies or descriptive statistics of the new variable and examine them to be sure that you achieved what you intended.
When possible, automate things and avoid placing hard-wired values (those computed ‘by-hand’) in code.
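Two of these rules — verifying missing-data handling after a recode, and avoiding hard-wired values computed by hand — can be sketched in a few lines of Python. The variable names and missing-data codes below are hypothetical:

```python
MISSING_CODES = {8, 9}  # defined once, not scattered through the code


def recode_turnout(raw):
    """Recode: 1 = voted, anything else = did not vote; 8/9 = missing."""
    if raw in MISSING_CODES:
        return None
    return 1 if raw == 1 else 0


raw = [1, 2, 9, 1, 8, 2]
turnout = [recode_turnout(v) for v in raw]

# Rule: after creating the new variable, examine its descriptives
# to be sure the recode achieved what was intended.
n_missing = sum(1 for v in turnout if v is None)
valid = [v for v in turnout if v is not None]
turnout_rate = sum(valid) / len(valid)  # computed, never hard-wired
print(n_missing, turnout_rate)
```

Because the turnout rate is computed from the data rather than typed in by hand, re-running the file after a data correction automatically updates every downstream number.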
Those are still very good rules; I would not change any of them. I would add one: put comments in any paper citing the piece of code that produced each figure or table in the paper. In 20 years a lot has changed about how we do computing. It has gotten much easier to follow good computing practices. GitHub has made it easy to share code, maintain revision history, and publish code. And the set of people who seamlessly collaborate by sharing files over Dropbox or one of its competitors probably dwarfs the number of political scientists using GitHub. But to paraphrase a common computing aphorism (GIGO), sharing or publishing badly written code won’t make it easy for people to replicate or build on your work.
I was motivated to write that article because as I stated then, most political scientists aren’t trained as computer programmers. Nor were most political scientists trained to work in a laboratory. So the article covered both style of code, and computing practice to make sure that an entire research project could be reproduced by someone else. That means keeping track of where you got your data, how it was processed, etc.
Any computer code is a set of instructions that produces results when read by a machine, and we can evaluate the code based on the results it produces. But when we share code we expect it to be read by humans. Two pieces of code can be functionally equivalent — they could produce identical results when read by a machine — even though one is easy for a human to read and understand while the other is pretty much unintelligible. If you expect people to use your code, you need to make the code easy to read. I try to ask every graduate student I am going to work with to read several chapters from Brian W. Kernighan and Rob Pike’s The Practice of Programming (1999), especially the Preface, Chapters 1, 3, 5, 6, and the Epilogue.
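The point about functionally equivalent but differently readable code can be illustrated with a small Python sketch (the data and names here are made up):

```python
data = [2.0, 4.0, 6.0]

# Hard to read: a clever one-liner with an opaque name.
c = [x - sum(data) / len(data) for x in data]

# Easy to read: named intermediate steps, intent stated in comments.
mean_value = sum(data) / len(data)            # grand mean of the series
centered = [value - mean_value for value in data]  # deviations from the mean

assert c == centered  # identical results when read by a machine
print(centered)
```

A machine sees no difference between the two versions; a human reader recovers the intent of the second at a glance.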
It has turned out to be easier to write clean code than to maintain good computing practices overall that would lead to easy reproducibility of an entire research project. It is fairly easy to post a ‘replication’ dataset, and the code used to produce the figures and tables in a paper. But that doesn’t really tell someone everything they need to know to try to reproduce your work, or extend it to other data. They need to know how your data was generated. And those steps occur in the production of the replication dataset, not in the use of it.
Most research projects in political science pull in data from many sources. And many, many coding decisions are made along the way to a finished product. All of those decisions may be visible in the code, but keeping coherent lab-books is essential for sifting through all the lines of code of any large project. And ‘projects’ rarely stand alone anymore. Work on one dataset is linked to many projects, often with overlapping sets of co-authors.
At the beginning of a research project it’s important for everyone to agree where the code is, where the data is, and what the overall structure of the documentation is. That means decisions about whether documentation is grouped by project (which could mean by individual paper), or by dataset. And it means reaching some agreement on whether there is a master document that points to many smaller documents describing individual tasks, or whether the whole project description sits in a single document. None of this is exciting to work out, certainly not as exciting as doing the research. But it is essential. A good goal of doing all this is to make it as easy as possible to make the whole bundle of documentation and code public as soon as it is time to do so. It both saves time when it is time to release documentation, and imposes some good habits and structure along the way.
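One hypothetical way such an agreement might look on disk — the file and directory names here are purely illustrative, not a standard:

```
project/
├── README.md      # master document: points to the task-level docs
├── data/
│   ├── raw/       # source data as obtained, with provenance notes
│   └── derived/   # everything here reproducible from code/
├── code/
│   ├── 01_clean   # data-manipulation scripts only
│   └── 02_analyze # data-analysis scripts only
└── labbook/       # dated notes recording each coding decision
```

Whether documentation is grouped by project or by dataset, the aim is the same: the whole bundle of code and documentation can be made public with minimal extra work when the time comes.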
Heading image: Typing computer screen reflection by Almonroth. CC BY-SA 3.0 via Wikimedia Commons.
This week, we’re excited to bring you another podcast, featuring Mark Cave, Stephen M. Sloan, and Managing Editor Troy Reeves. Cave and Sloan are the editors of a recently published book, Listening on the Edge: Oral History in the Aftermath of Crisis, which includes stories of practicing oral history in traumatic situations from around the world. Here, they discuss the process of putting the book together, the healing potential of oral history, and the ways oral historians can take care of themselves when recording difficult interviews. Enjoy the interview, and send us your proposals if you’d like to share your work with the OHR blog.
Image Credit: Refugees from DR Congo board a UNHCR truck in Rwanda. Photo by Graham Holliday. CC BY-NC 2.0 via Flickr.
This time the fuss is about the already critically acclaimed biopic Selma (The New York Times critic A. O. Scott called it “a triumph of efficient, emphatic cinematic storytelling”), starring David Oyelowo as the Rev Dr Martin Luther King, Jr.
The film starts with King’s acceptance of the Nobel Peace Prize in December 1964 and focuses on the three 1965 marches in Alabama that eventually led to the adoption of the Voting Rights Act later that year.
The King estate has not expressly objected to the making of this film. However, back in 2009 the same estate granted DreamWorks and Warner Bros a licence to reproduce King’s speeches in a film that Steven Spielberg is set to produce but that has yet to see the light of day. The producers of Selma apparently tried, in vain, to get permission to reproduce King’s speeches in their film. What happened in the end was that the authors of the script had to convey the same meaning as King’s speeches without using the actual words he had employed.
Put otherwise: Selma is a film about Martin Luther King that does not feature any actual extracts from his historic speeches.
In the same NYT review, A.O. Scott wrote that “Dr. King’s heirs did not grant permission for his speeches to be quoted in ‘Selma,’ and while this may be a blow to the film’s authenticity, [the film director] turns it into an advantage, a chance to see and hear him afresh.”
Indeed, the problem of authenticity has been raised by some commentators who have argued that, because of copyright constraints, historical accuracy has been negatively affected.
But is this all copyright’s fault? Is it really true that if you are not granted permission to reproduce a copyright-protected work, you cannot quote from it?
“The social benefit in having a truthful depiction of King’s actual words would be much greater than the copyright owners’ loss.”
Well, probably not. Copyright may have many faults and flaws, but certainly does not prevent one from quoting from a work, provided that use of the quotation can be considered a fair use (to borrow from US copyright language) of, or fair dealing (to borrow from other jurisdictions, e.g. UK) with such work. Let’s consider the approach to quotation in the country of origin, i.e. the United States.
§107 of the US Copyright Act states that the fair use of a work is not an infringement of copyright. As the US Supreme Court stated in the landmark Campbell decision, the fair use doctrine “permits and requires courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity that the law is designed to foster.”
Factors to consider to determine whether a certain use of a work is fair include:
the purpose and character of the use, including whether the use is commercial or for nonprofit educational purposes (though the fact that a use is commercial is not per se a bar to a finding of fair use);
the nature of the copyright-protected work, e.g. if it is published or unpublished;
amount and substantiality of the taking; and
the effect upon the potential market for or value of the copyright-protected work.
There is fairly abundant case law on fair use as applied to biographies. With particular regard to the re-creation of copyright-protected works (as would have been the case in Selma, had Oyelowo’s King reproduced actual extracts from King’s speeches), it is worth recalling the recent (2014) decision of the US District Court for the Southern District of New York in Arrow Productions v The Weinstein Company.
This case concerned Lovelace, the biopic of Deep Throat star Linda Lovelace, starring Amanda Seyfried. The holders of the rights to the “famous pornographic film replete with explicit sexual scenes and sophomoric humor” claimed that the 2013 film infringed – among other things – their copyright because three scenes from Deep Throat had been recreated without permission. In particular, the claimants argued that the defendants had reproduced dialogue from these scenes word for word, positioned the actors identically or nearly identically, recreated camera angles and lighting, and reproduced costumes and settings.
The court found in favour of the defendants, holding that unauthorised reproduction of Deep Throat scenes was fair use of this work, also stressing that critical biographical works (as are both Lovelace and Selma) are “entitled to a presumption of fair use”.
In my opinion reproduction of extracts from Martin Luther King’s speeches would not necessarily need a licence. It is true that the fourth fair use factor might weigh against a finding of fair use (the Martin Luther King estate has actually engaged in the practice of licensing use of his speeches). However, the social benefit in having a truthful depiction of King’s actual words would be much greater than the copyright owners’ loss. Also, it is not required that all four fair use factors weigh in favour of a finding of fair use, as recent judgments, e.g. Cariou v Prince or Seltzer v Green Day, demonstrate. Additionally, in the context of a film like Selma in which Martin Luther King is played by an actor (rather than incorporating the filmed speeches actually delivered by King), it is arguable that the use of extracts would be considered highly transformative.
In conclusion, it would seem that in principle US law would not stand in the way of reproducing actual extracts from copyright-protected works (speeches) for the sake of creating a new work (a biographical film).
This article originally appeared on The IPKat in a slightly different format on Monday 12 January 2015.
Featured image credit: Dr. Martin Luther King speaking against war in Vietnam, St. Paul Campus, University of Minnesota, by St. Paul Pioneer Press. Minnesota Historical Society. CC-BY-2.0 via Flickr.
February 2nd marks Groundhog Day, an annual tradition in which we rouse a sleepy, burrowing rodent to give us winter-weary humans the forecast for spring. Although Punxsutawney Phil does his best as an ambassador for his species, revelers in Gobbler’s Knob and elsewhere likely know little about the true life of a wild groundhog beyond its penchant for vegetable gardens and large burrow entrances. In celebration of the only mammal to have its own holiday, I share with you eight lesser-known facts about groundhogs.
1. Groundhogs, whistlepigs, and woodchucks are all names for the same animal. Depending on where you live, you might have heard all three of these names; however, woodchuck is the scientifically accepted common name for the species, Marmota monax. As the first word suggests, the woodchuck is a marmot, a genus comprising 15 species of medium-sized, ground-dwelling squirrels. Although woodchucks are generally solitary and live in lowland areas, most marmot species live in social groups in mountainous parts of Europe, Asia, and North America.
2. How much wood would a woodchuck chuck? As a biologist who studies woodchucks, this is the number one question I am asked about my study species. To set the record straight, woodchucks do not actually chuck wood! The name “woodchuck” is thought to derive from a Native American word for the animal, not from the species’ association with wood. Although they may chew or scent mark on woody debris near their burrows, they do not cut down trees (unlike their cousin, the American beaver, Castor canadensis).
3. Woodchucks are the widest-ranging marmot, and are able to adjust to a variety of habitats and climates to survive. Woodchucks are found in wooded edges, agricultural fields, residential gardens, and suburban office parks as far north as Alaska, eastward throughout Canada, and as far south as Alabama and Georgia. The weather extremes of these areas range from subzero winters to scorching summers, thus woodchucks must employ unique physiological strategies to survive. Woodchucks are considered urban-adapters because of their ability to live around humans by taking advantage of anthropogenic food sources such as garden landscaping and managed vegetation.
4. Woodchucks are considered the largest true hibernators. As herbivores, woodchucks have very little to eat during the winter months when most vegetation has died. To save energy during the winter, woodchucks hibernate. The timing of this slowdown is thought to depend partly on photoperiod, which varies by latitude. They generally seek hibernacula under structures or in wooded areas protected from wind. Prior to hibernating, a woodchuck will go in and out of the burrow for a few days to a few weeks, foraging to build up fat stores until entering one last time to plug the burrow entrance behind it with leaves and debris. As a true hibernator, a woodchuck’s body temperature can drop to just a few degrees above that of the burrow, its breathing decreases, and its heart rate slows to around 10 beats per minute. Although they rarely exit the burrow, hibernating woodchucks awake every 10 days or so, hang out in their burrows, and then go back to sleep after a few days. The length of the hibernation season can range from just 75 days to over 175 days, depending on location. They emerge in early spring, and generally breed soon after.
5. Woodchucks dig complex underground burrow systems, in which they rest, rear young, and escape from predators. If you are a homeowner who has had a woodchuck on your property, you are probably familiar with the large and numerous holes that woodchucks dig in the ground. These many entrances are used as “escape hatches” for a woodchuck to quickly go underground at the first sign of a threat. As escape is their best line of defense, rarely will a woodchuck forage more than 20 meters from a burrow entrance. Underground, burrow systems are comprised of multiple tunnels, some up to 13 meters in length and over 2 meters deep, that lead to multiple chambers, including bedroom chambers, and even a latrine burrow (woodchucks rarely defecate above ground to avoid attracting predators). Based on our research, woodchucks can use up to 25 different burrow systems, likely moving around to avoid predators, look for mates, and find new foraging spots.
6. Woodchucks can swim and climb trees. Although their portly body shape does not suggest agility, woodchucks can move quickly when they really need to. To avoid predators, woodchucks are able to swim short distances across creeks and drainage ditches, and are able to climb trees. They have even been spotted on rooftops and on high branches of mulberry trees, foraging on berries.
7. Woodchucks vocalize. The origin of the name “whistle pig” comes from the high-pitched, loud whistle woodchucks emit when threatened, likely to warn offspring or other adults of an approaching predator. In addition to the whistle, woodchucks will chatter their large incisors as a threatening reminder of the strength of their bite.
8. Woodchucks are easy to observe. My favorite characteristic about woodchucks is that their size (about the size of a house cat) and daytime activity patterns make them easy to observe. Unlike most mammals, you can easily spot them foraging in open fields and roadsides and they generally will tolerate the presence of humans at a distance. If you live in the woodchuck’s native range, keep your eyes peeled for these large squirrels, grab your binoculars, and take a minute to watch them forage and vigilantly observe their environment. It’s a fun way for kids and adults alike to test their skills as a wildlife biologist!
Image Credit: “Groundhog.” Photo by Matt MacGillivray. CC by 2.0 via Flickr.
Two hundred and ninety-eight passengers and crew aboard Malaysia Airlines flight MH17 were killed when Ukrainian rebels shot down the commercial airliner in July 2014. Because of the rebels’ close ties with the Russian Federation, the international community immediately condemned the Putin regime for this tragedy. Yet, while Russia is certainly deserving of moral and political blame, what is less clear is Russian responsibility under international law. The problem is that international law has often struggled to assign state responsibility when national borders are crossed and two (or more) sovereigns are involved. The essence of the problem is that under governing legal standards, a state could provide enormous levels of military, economic, and political support to another state or to a paramilitary group in another state – even with full knowledge that the recipient will thereby violate international human rights and humanitarian law standards – but will not share any responsibility for these international wrongs unless it can be established that the sending state exercised near total control over the recipient.
The leading caselaw in this area has been handed down by the International Court of Justice (ICJ), but what adds another layer of complexity to the present situation is that the Ukraine and Russia are both parties to the European Convention; the European Court of Human Rights (ECtHR) might well provide a different answer.
To be clear, this article concerns itself only with determining Russian responsibility for the downing of MH17. Following this tragic event, approximately five thousand Russian troops took part in what now appears to have been a limited invasion of areas of the Ukraine. Thus, there are elements of both “indirect” and “direct” Russian involvement in the Ukraine, although only the former will be addressed. The larger point involves the legal uncertainty when states act outside their borders and in doing so contribute to the violation of international human rights standards.
International Court of Justice
The two leading cases regarding transnational or extraterritorial state responsibility have been handed down by the International Court of Justice. In Nicaragua v. United States (1986) Nicaragua brought an action against the United States based on two grounds. One related to “direct” actions carried out by US agents in Nicaragua, including the mining of the country’s harbors, and on this claim the Court ruled against the United States. The second claim was based on the “indirect” actions of the United States, namely, its support for the contra rebels who were trying to overthrow the ruling Sandinista regime. Nicaragua’s argument was that because of the very close ties between the United States and the contras, the former should bear at least some responsibility for the massive levels of human rights violations carried out by the latter.
The Court rejected this position employing an “effective control” standard, which in many ways is much closer to an absolute control test. Or to quote from the Court itself: “In light of the evidence and material available to it, the Court is not satisfied that all the operations launched by the contra force, at every stage of the conflict, reflected strategy and tactics wholly devised by the United States” (par. 106, emphases supplied).
Nearly a decade later, the International Court of Justice was faced with a similar scenario in the Genocide Case (Bosnia v. Serbia). The claim made by Bosnia was that because of the deep connections between the Serbian government and its Bosnian Serb allies, the former should have some responsibility for the acts of genocide carried out by the latter. Yet, as in Nicaragua, the ICJ ruled that Serbia had not exercised the requisite level of control over the Bosnian Serbs. Thus, the Court ruled that Serbia was not responsible for carrying out genocide itself, or for directing genocide, or even for “aiding and assisting” or “complicity” in the genocide that occurred following the fall of Srebrenica. However, in a part of its ruling that has received far too little attention, the Court did rule that Serbia had failed to “prevent” genocide when it could have exercised its “influence” to do so, and that it had also not met its Convention obligation to “punish” those involved in genocide due to its failure to fully cooperate with the International Criminal Tribunal for the Former Yugoslavia.
Turning back to the situation involving MH17, while no action has yet been filed with the International Court of Justice (and perhaps never will be filed), according to the Nicaragua-Bosnia line of cases any attempt to hold Russia responsible for the downing of MH17 would appear likely to fail for the simple reason that the relationship between the Russian state and its Ukrainian allies was nowhere near as strong as the relationship between the United States and the contras (Nicaragua) or that between the Serbian government and its Bosnian Serb allies (Genocide Case). The point is that if responsibility could not be established in these other cases it is by no means likely that it could be established in the present situation.
European Court of Human Rights
Because Russia and the Ukraine are both parties to the European Convention of Human Rights, what also needs to be considered is how the European Court of Human Rights (ECtHR) might address this issue if a case were brought either under the inter-state complaint mechanism, or (more likely) by means of an individual complaint filed by a family member killed in the crash.
Although the European Court of Human Rights has increasingly dealt with cases with an extraterritorial element, in nearly every instance the claim has been based on European states carrying out “direct” actions in other states – whether NATO forces dropping bombs in Serbia and killing civilians on the ground (Bankovic), or Turkish officials arresting a suspected terrorist in Kenya (Ocalan), or British troops killing civilians in Iraq (Al-Skeini) – rather than instances where Convention states have acted “indirectly.” The most pertinent ECtHR case is Ilascu v. Russia and Moldova, where the applicants (Moldovan citizens) claimed they were arrested at their homes in Tiraspol by security personnel, some of whom were wearing the insignia of the former USSR. Unlike the ICJ with its “effective control” standard, the ECtHR ruled that Russia had exercised what it termed “effective authority” or “decisive influence” over paramilitary forces in Moldova, and because of this it bore responsibility for violations of the European Convention suffered by the applicants. Thus, on the basis of Ilascu, there is at least some possibility that due to the “effective authority” or “decisive influence” that Russia appeared to exercise over its Ukrainian rebel allies, the ECtHR, unlike the ICJ, could assign responsibility to Russia for the downing of MH17.
Notwithstanding the immediate international condemnation of the Putin regime following the MH17 tragedy, international law seems to exist in a world totally removed from international opinion and consensus. Under the caselaw of the International Court of Justice, Russia would appear not to be responsible for the downing of MH17 on the basis that it would be difficult to establish that the Russian government had exercised the requisite level of “effective control” over its Ukrainian rebel allies. On the other hand, if a case were brought before the European Court of Human Rights, there is at least some chance of establishing Russian responsibility on the basis of the Court’s previous ruling in Ilascu, although it should be said that this is not a particularly strong precedent.
The larger point is to ask why state responsibility is so difficult to establish when international borders are crossed and states act in another country, at least indirectly, as in the present situation. The key element ought to be the extent to which a state has acted in a way that leads to violations of international human rights and humanitarian law standards. Employing such a standard, it would be eminently clear – would it not? – that Russia would be at least partly responsible because of its strong relationship with Ukrainian rebels that were both armed (by Russia) and dangerous, and which had already shown a complete disregard for international law.
I recall a dinner conversation at a symposium in Paris that I organized in 2010, where a number of eminent evolutionary biologists, economists and philosophers were present. One of the economists asked the biologists why it was that whenever the topic of “group selection” was brought up, a ferocious argument always seemed to ensue. The biologists pondered the question. Three hours later the conversation was still stuck on group selection, and a ferocious argument was underway.
Group selection refers to the idea that natural selection sometimes acts on whole groups of organisms, favoring some groups over others, leading to the evolution of traits that are group-advantageous. This contrasts with the traditional ‘individualist’ view which holds that Darwinian selection usually occurs at the individual level, favoring some individual organisms over others, and leading to the evolution of traits that benefit individuals themselves. Thus, for example, the polar bear’s white coat is an adaptation that evolved to benefit individual polar bears, not the groups to which they belong.
The debate over group selection has raged for a long time in biology. Darwin himself primarily invoked selection at the individual level, for he was convinced that most features of the plants and animals he studied had evolved to benefit the individual plant or animal. But he did briefly toy with group selection in his discussion of social insect colonies, which often function as highly cohesive units, and also in his discussion of how self-sacrificial (‘altruistic’) behaviours might have evolved in early hominids.
In the 1960s and 1970s, the group selection hypothesis was heavily critiqued by authors such as G.C. Williams, John Maynard Smith, and Richard Dawkins. They argued that group selection was an inherently weak evolutionary mechanism, and not needed to explain the data anyway. Examples of altruism, in which an individual performs an action that is costly to itself but benefits others (e.g. fighting an intruder), are better explained by kin selection, they argued. Kin selection arises because relatives share genes. A gene which causes an individual to behave altruistically towards its relatives will often be favoured by natural selection—since these relatives have a better than random chance of also carrying the gene. This simple piece of logic tallies with the fact that empirically, altruistic behaviours in nature tend to be kin-directed.
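The kin-selection logic sketched above is standardly compressed into Hamilton's rule: a gene for altruism is favoured by natural selection when

```latex
\[
  r\,b > c
\]
```

where \(b\) is the fitness benefit to the recipient, \(c\) the fitness cost to the altruist, and \(r\) the genetic relatedness between them. Because relatives have a better-than-random chance of sharing the gene, a sufficiently high \(r\) can offset the cost \(c\), which is why altruism in nature tends to be kin-directed.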
Strangely, the group selection controversy seems to re-emerge every generation. Most recently, Harvard’s E.O. Wilson, the “father of sociobiology” and a world expert on ant colonies, has argued that “multi-level selection”—essentially a modern version of group selection—is the best way to understand social evolution. In his earlier work, Wilson was a staunch defender of kin selection, but no longer; he has recently penned sharp critiques of the reigning kin selection orthodoxy, both alone and in a 2010 Nature article co-authored with Martin Nowak and Corina Tarnita. Wilson’s volte-face has led him to cross swords with Richard Dawkins, who says that Wilson is “just wrong” about kin selection and that his most recent book contains “pervasive theoretical errors.” Both parties point to eminent scientists who support their view.
What explains the persistence of the controversy over group and kin selection? Usually in science, one expects to see controversies resolved by the accumulation of empirical data. That is how the “scientific method” is meant to work, and often does. But the group selection controversy does not seem amenable to a straightforward empirical resolution; indeed, it is unclear whether there are any empirical disagreements at all between the opposing parties. Partly for this reason, the controversy has sometimes been dismissed as “semantic,” but this is too quick. There have been semantic disagreements, in particular over what constitutes a “group,” but this is not the whole story. For underlying the debate are deep issues to do with causality, a notoriously problematic concept, and one which quickly lands one in philosophical hot water.
All parties agree that differential group success is common in nature. Dawkins uses the example of red squirrels being outcompeted by grey squirrels. However, as he intuitively notes, this is not a case of genuine group selection, as the success of one group and the decline of another was a side-effect of individual level selection. More generally, there may be a correlation between some group feature and the group’s biological success (or “fitness”); but like any correlation, this need not mean that the former has a direct causal impact on the latter. But how are we to distinguish, even in theory, between cases where the group feature does causally influence the group’s success, so “real” group selection occurs, and cases where the correlation between group feature and group success is “caused from below”? This distinction is crucial; however it cannot even be expressed in terms of the standard formalisms that biologists use to describe the evolutionary process, as these are statistical not causal. The distinction is related to the more general question of how to understand causality in hierarchical systems that has long troubled philosophers of science.
Recently, a number of authors have argued that the opposition between kin and multi-level (or group) selection is misconceived, on the grounds that the two are actually equivalent—a suggestion first broached by W.D. Hamilton as early as 1975. Proponents of this view argue that kin and multi-level selection are simply alternative mathematical frameworks for describing a single evolutionary process, so the choice between them is one of convention not empirical fact. This view has much to recommend it, and offers a potential way out of the Wilson/Dawkins impasse (for it implies that they are both wrong). However, the equivalence in question is a formal equivalence only. A correct expression for evolutionary change can usually be derived using either the kin or multi-level selection frameworks, but it does not follow that they constitute equally good causal descriptions of the evolutionary process.
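The standard formalism at issue here is the multilevel version of the Price equation, which partitions total evolutionary change into between-group and within-group components (a sketch, ignoring transmission bias; notation varies across authors):

```latex
\[
  \bar{w}\,\Delta\bar{z}
  \;=\;
  \underbrace{\operatorname{Cov}_{k}\!\bigl(W_k, Z_k\bigr)}_{\text{between-group term}}
  \;+\;
  \underbrace{\operatorname{E}_{k}\!\bigl[\operatorname{Cov}_{i}\bigl(w_{ik}, z_{ik}\bigr)\bigr]}_{\text{within-group term}}
\]
```

Here \(z_{ik}\) and \(w_{ik}\) are the trait value and fitness of individual \(i\) in group \(k\), and \(Z_k\), \(W_k\) are their group means. Both terms are covariances, i.e. purely statistical quantities, which is exactly why this decomposition cannot by itself tell us whether the between-group term reflects a genuine group-level cause or is merely "caused from below."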
This suggests that the persistence of the group selection controversy can in part be attributed to the mismatch between the scientific explanations that evolutionary biologists want to give, which are causal, and the formalisms they use to describe evolution, which are usually statistical. To make progress, it is essential to attend carefully to the subtleties of the relation between statistics and causality.
Introduction, from Michael Alvarez, co-editor of Political Analysis
Recently I asked Nathaniel Beck to write about his experiences with research replication. His essay, published on 24 August 2014 on the OUPblog, concluded with a brief discussion of a recent experience of his when he tried to obtain replication data from the authors of a recent study published in PNAS, on an experiment run on Facebook regarding social contagion. Since then the story of Neal’s efforts to obtain this replication material has taken a few interesting twists and turns, so I asked Neal to provide an update, because the lessons from his efforts to get the replication data from this PNAS study are useful for the continued discussion of research transparency in the social sciences.
After not hearing from Adam Kramer of Facebook, even after contacting PNAS, I persisted with both the editor of PNAS (Inder Verma, who was most kind) and with the NAS through “well connected” friends. (Getting replication data should not depend on knowing NAS members!). I was finally contacted by Adam Kramer, who offered that I could come out to Palo Alto to look at the replication data. Since Facebook did not offer to fly me out, I said no. I was then offered a chance to look at the replication files in the Facebook office 4 blocks from NYU, so I accepted. Let me stress that all dealings with Adam Kramer were highly cordial, and I assume that delays were due to Facebook higher ups who were dealing with the human subjects firestorm related to the Kramer piece.
When I got to the Facebook office I was asked to sign a standard non-disclosure agreement, which I declined to sign. To my surprise this was not a problem, with the only consequence being that a security officer would have had to escort me to the bathroom. I then was put in a room with a secure Facebook notebook with the data and R-studio loaded; Adam Kramer was there to answer questions, and I was also joined by a security person and an external relations person. All were quite pleasant, and the security person and I could even discuss the disastrous season being suffered by Liverpool.
I was given a replication file which was a data frame with approximately 700,000 rows (one for each respondent) and 7 columns containing the number of positive and negative words used by each respondent as well as the total word count of each respondent, percentages based on these numbers, the experimental condition, and a variable which omitted some respondents for producing the tables. This is exactly the data frame that would have been put in an archive, since it contained all the data needed to replicate the article. I also was given the R-code that produced every item in the article. I was allowed to do anything I wanted with that data, and I could copy the results into a file. That file was then checked by Facebook people and about two weeks later I received the entire file I created. All good, or at least as good as it is going to get.
The data frame I played with was based on aggregating user posts so each user had one row of data, regardless of the number of posts (and the data frame did not contain anything more than the total number of words posted). I can understand why Facebook did not want to give me the data frame, innocuous as it seemed; those who specialize in re-identifying de-identified data and reverse engineering code are quite good these days, and I can surely understand Facebook’s reluctance to have this raw data out there. And I understand why they could not give me all the actual raw data, which included how feeds were changed and so forth; this is the secret sauce that they would not like reverse engineered.
I got what I wanted. I could see their code, could play with density plots to get a sense of words used, I could change the number of extreme points dropped, and I could have moved to a negative binomial instead of a Poisson. Satisfied, I left after about an hour; there are only so many things one can do with one experiment on two outcomes. I felt bad that Adam Kramer had to fly to New York, but I guess this is not so horrible. Had the data been more complicated I might have felt that I could not do everything I wanted, and running a replication with 3 other people in a room is not ideal (especially given my typing!).
My belief is that PNAS and the authors could simply have had a different replication footnote. This would have said that the code used (about 5 lines of R, basically a call to a Poisson regression using GLM) is available at a dataverse. In addition, they could have noted that the GLM call used the data frame I described, with the summary statistics for that data frame. Readers could then see what was done, and I can see no reason for such a procedure to bother Facebook (though I do not speak for them). I also note that a clear statement on a dataverse would have obviated the need for some discussion. Since bytes are cheap, the dataverse could also contain whatever policy statement Facebook has on replication data. This (IMHO) is much better than the “contact the authors for replication data” footnote that was published. It is obviously up to individual editors as to whether this is enough to satisfy replication standards, but at least it is better than the status quo.
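To make the shape of the analysis concrete, here is a sketch of the kind of Poisson regression described, on invented data (the column names, sample size, and effect sizes below are all hypothetical stand-ins, not Facebook's). With a single binary experimental condition and total word count as the exposure, the Poisson GLM estimates even have a closed form, so the "about 5 lines of R" reduce to a rate comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # one row per user, as in the aggregated data frame

# Invented stand-ins for the columns described above
condition = rng.integers(0, 2, n)             # experimental condition (0/1)
total_words = rng.poisson(200, n) + 1         # total words posted by each user
true_rate = np.exp(-3.0 + 0.05 * condition)   # positive words per word posted
pos_words = rng.poisson(true_rate * total_words)

# For one binary covariate with an exposure (offset) term, the Poisson GLM
# maximum-likelihood fit is just the per-condition rate: total positive
# words divided by total words in that condition.
rate = [pos_words[condition == c].sum() / total_words[condition == c].sum()
        for c in (0, 1)]
intercept = np.log(rate[0])
effect = np.log(rate[1] / rate[0])  # the GLM coefficient: log rate ratio
print(intercept, effect)
```

Posting exactly this kind of snippet, plus the summary statistics of the real data frame, on a dataverse would let readers see what was done without exposing any raw Facebook data.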
What if I didn’t work four blocks from Astor Place? Fortunately I did not have to confront this horror. How many other offices does Facebook have? Would Adam Kramer have flown to Peoria? I batted this around, but I did most of the batting and the Facebook people mostly offered no comment. So someone else will have to test this issue. But for me, the procedure worked. Obviously lots more proprietary data is going to be analyzed, and (IMHO) this is a good thing. So Facebook, et al., and journal editors and societies have many details to work out. But, based on this one experience, this can be done. So I close this with thanks to Adam Kramer (but do remind him that I have had auto-responders to email for quite a while now).
On the more trivial issue of my own dataverse, I am happy to report that almost everything that was once on a private ftp site is now on my Harvard dataverse. Some of this was already up because of various co-authors who always cared about replication. And on stuff that was not up, I was lucky to have a co-author like Jonathan Katz, who has many skills I do not possess (and is a bug on RCS and the like, which beats my “I have a few TB and the stuff is probably hidden there somewhere”). So everything is now on the dataverse, except for one data set that we were given for our 1995 APSR piece (and which Katz never had). Interestingly, I checked the original authors’ web sites (one no longer exists, one did not go back nearly that far) and failed to make contact with either author. Twenty years is a long time! So everyone should do both themselves and all of us a favor, and build the appropriate dataverse files contemporaneously with the work. Editors will demand this, but even without this coercion, this is just good practice. I was shocked (shocked) at how bad my own practice was.
Heading image: Wikimedia Foundation Servers-8055 24 by Victorgrigas. CC BY-SA 3.0 via Wikimedia Commons.
From the comfort of a desk, looking at a computer screen or the printed page of a newspaper, it is very easy to ignore the fact that thousands of tons of insecticide are sprayed annually.
Consider the problem of the fall armyworm in Mexico. As scientists and crop advisors, we’ve worked for the past two decades trying to curb its impact on corn yield. We’ve tested dozens of chemicals to gain some control over this pest on different crops.
A couple of years ago, we were comparing information on the number of insecticide applications needed to battle this worm during a break of a technical meeting. Anecdotal information from other parts of the country got into the conversation. Some colleagues reported that the fall armyworm wasn’t the worst pest in a particular region of Mexico and it was easy to control with a couple of insecticide applications. Others mentioned that up to six sprays were necessary in other parts of the country. Wait a second, I said, that is completely ridiculous and tremendously expensive to use so much insecticide in maize production.
At that point we decided to contact more professionals throughout Mexico and put together a geographical and seasonal ‘map’ of the occurrence of corn pests and the insecticides used in their control. Our report was compiled using simple arithmetic, and the findings really surprised us: by a conservative estimate, 3,000 tons of insecticidal active ingredient are used against the fall armyworm alone every year in Mexico. No wonder our country has the highest use of pesticide per hectare of arable land in North America.
Mexican farmers are stuck on what has been called ‘the pesticide treadmill.’ The first insecticide application sometimes occurs at the time that maize seed is put in the ground, then a second one follows a couple of weeks later, then another, and another; this process usually involves the harshest insecticides, or those that are highly toxic for the grower and the environment, because they are the cheapest. These initial applications could be curtailed by genetically modified (GM) maize that produces its own very specific and safe insecticide. Not spraying against pests in the first few weeks of maize development allows the beneficial fauna (lacewings, ladybird beetles, spiders, wasps, etc.) to build their populations and control maize pests; simply put, it enables the use of biological control. The combination of GM crops and natural enemies is an essential part of an integrated pest management program — a successful strategy employed all over the world to control pests, reduce the use of insecticides, and help farmers obtain more from their crop land.
We have good farmers in Mexico, a great diversity of natural enemies of the fall armyworm and other maize pests, and growers that are familiar with the benefits of using integrated pest management in other crop systems. Now we need modern technology to fortify such a program in Mexican maize.
Mexican scientists have developed GM maize to respond to some of the most pressing production needs in the country, such as lack of water. Maize hybrids developed by Mexican research institutions may be useful in local environments (e.g., tolerant to drought and cold conditions). These local genetically engineered maize varieties go through the same regulatory process as those of corporate developers.
At present, maize pest control with synthetic insecticides has been pretty much the only option for Mexican growers. They use pesticides because controlling pests is necessary for obtaining a decent yield, not because they are forced to spray them by chemical corporations or because they are part of a government program. This constitutes an urgent situation that demands solutions. There are a few methods to prevent most of these applications, genetic engineering being one of them. Other countries have reduced their pesticide use by 40% thanks to the adoption of GM crops. Mexico, the birthplace of maize, only produces 70% of the maize it consumes because growers face so many environmental and pest control challenges, with heavy reliance on synthetic pesticides. Accepting the technology of GM crops, and educating farmers on better management practices, is key for Mexico to jump off the pesticide treadmill.
Image Credit: Maize diversity. Photo by Xochiquetzal Fonseca/CIMMYT. CC BY SA NC ND 2.0 via Flickr.
It is becoming widely accepted that women have, historically, been underrepresented and often completely written out of work in the fields of Science, Technology, Engineering, and Mathematics (STEM). Explanations for the gender gap in STEM fields range from genetically-determined interests, structural and territorial segregation, discrimination, and historic stereotypes. As well as encouraging steps toward positive change, we would also like to retrospectively honour those women whose past works have been overlooked.
From astronomer Caroline Herschel to the first female winner of the Fields Medal, Maryam Mirzakhani, you can use our interactive timeline to learn more about the women whose works in STEM fields have changed our world.
With free Oxford University Press content, we tell the stories and share the research of both famous and forgotten women.
Featured image credit: Microscope. Public Domain via Pixabay.
Meet Utricularia. It’s a bladderwort, an aquatic carnivorous plant, and one of the fastest things on the planet. It can catch its prey in a millisecond, accelerating it up to 600g.
Once caught inside, the prey suffocates and digestive enzymes break down the unfortunate creature for its nutrients. Anything small enough to be pulled in won’t know its mistake until it’s too late. But as lethal as the trap is, it did seem to have some flaws. The traps don’t just catch animals; they catch anything that gets sucked in, so often that’s algae and pollen too.
A team at the University of Vienna led by Marianne Koller-Peroutka and Wolfram Adlassnig closely examined Utricularia and found the plants were not very efficient killers. Studying over 2000 traps showed that only about 10% of the objects sucked in were animals. Animals are great if you want nutrients like nitrogen and phosphorus, but half of the catch was algae and a third pollen.
What was more puzzling was that not all the algae entered with an animal. If a bladder is left for a long while, it will trigger anyway. No animal is needed; algae, pollen, and fungi will enter. Is this a sign that the plant is desperate for a meal, and hoping an animal is passing? Koller-Peroutka and Adlassnig found that the traps catching algae and pollen grew larger and had more biomass. Examining the bladders under a microscope showed that algae caught in the traps died and decayed. This was more evidence that it’s happy to eat other plants too. It seems that it’s not just animals that Utricularia is hunting.
Koller-Peroutka and Adlassnig say this is why Utricularia is able to live in places with comparatively few animals. Nitrogen from animals and other elements from plants mean it is happy with a balanced diet. It can grow more and bigger traps, and use these for catching animals or plants or both.
Fortunately even the big traps only catch tiny animals, so if someone has bought you one for Christmas you can leave it on the dinner table without losing your turkey and trimmings in a millisecond.
For our second blog post of 2015, we’re looking back at a great article from Katie Kuszmar in The Oral History Review (OHR), “From Boat to Throat: How Oral Histories Immerse Students in Ecoliteracy and Community Building” (OHR, 41.2.) In the article, Katie discussed a research trip she and her students used to record the oral histories of local fishing practices and to learn about sustainable fishing and consumption. We followed up with her over email to see what we could learn from high school oral historians, and what she has been up to since the article came out. Enjoy the article, and check out her current work at Narrability.com.
In the article, you mentioned that your students’ youthful curiosity, or lack of inhibition, helped them get answers to tough questions. Can you think of particular moments where this made a difference? Were there any difficulties you didn’t expect, working with high school oral historians?
One particular moment was at the end of the trip. Our final interview was with the Monterey Bay Aquarium’s (MBA) Seafood Watch public relations coordinator, who was kind enough to arrange the fisheries historian interviews and offered to be one of the interviewees as well. When we finally interviewed the coordinator, the most burning question the students had was whether or not Seafood Watch worked directly with fishermen. The students didn’t like her answer. She let us know that fishermen are welcome to approach Seafood Watch and that Seafood Watch is interested in fishermen, but they didn’t work directly with fishermen in setting the standards for their sustainable seafood guidelines. The students seemed to think that taking sides with fishermen was the way to react. When we left the interview they were conflicted. The Monterey Bay Aquarium is a well-respected organization for young people in the area. The aquarium itself is full of nostalgic memories for most students in the region who visit the aquarium frequently on field trips or on vacation. How could such a beloved establishment not consider fishermen voices, for whom the students had just built a newfound respect? It was a big learning moment about bureaucracy, research, empathetic listening, and the usefulness of oral history.
After the interview, when the students cooled off, we discussed how the dynamics in an interview can change when personal conflicts arise. The narrator may even change her story and tone because of the interviewer’s biases. We explored several essential questions that I would now use for discussion before interviews were to occur, for I was learning too. Some questions that we considered were: When you don’t agree with your narrator, how do you ask questions that will keep the communication safe and open?
How do you set aside your own beliefs from the narrator, and why is this important when collecting oral history? In other words, how do you take the ego out of it?
The students were given a learning opportunity from which I hoped we all could gain insight. We discussed how if you can capture in your interview the narrator’s perspective (even if different than your own or other narrators for that matter), then the audience will be able to see discrepancies in the narratives and gather the evidence they need to engage with the issues. Hearing that Seafood Watch doesn’t work with fishermen might potentially help an audience to ask questions on a larger public scale. Considering oral history’s usefulness in engaging the public, inspiring advocacy, and questioning bureaucracy might be a powerful way for students to engage in the process without worrying about trying to prove their narrators wrong or telling the audience what to think. Oral history has power in this way: voices can illuminate the issues without the need for strong editorializing. This narrative power can be studied beforehand with samples of oral history, as it can also be a great way for students to reflect metacognitively on what they have participated in and how they might want to extend their learning experiences into the real world. Voice of Witness (VOW) contends that students who engage in oral history are “history makers.” What a powerful way to learn!
How did this project start? Did you start with wanting to do oral history with your students, or were you more interested in exploring sustainability and fall into oral history as a method?
Being a fisherwoman myself and just having started commercial fishing with my husband who is a fishmonger, I found my two worlds of fishing and teaching oral history colliding. Even after teaching English for ten years because of my love of storytelling, I have long been interested in creating experiential learning opportunities for students concerning where food comes from and sustainable food hubs.
Through a series of uncanny events connecting fishing and oral history, the project seemed to fall into place. I first attended an oral history for educators training through a collaborative pilot program created by VOW and Facing History and Ourselves (FHAO). After the training, I mentored ten seniors at my school to produce oral history Senior Service Learning Projects that ended in a public performance at a local art museum’s performance space. VOW was integral in my first year’s experience with oral history education. I still work with VOW and sit on their Education Advisory Board, which helps me to continue my engagement in oral history education.
In the same year as the pilot program with VOW, I attended the annual California Association of Teachers of English conference, at which the National Oceanic and Atmospheric Administration’s (NOAA) Voices of the Bay (VOB) program coordinator offered a training. The training offered curriculum strategies in marine ecology, fishing, economics, and basic oral history skill-building. To record interviews, NOAA would help arrange interviews with local fishermen in classrooms or at nearby harbors. The interviews would eventually go into a national archive called Voices from the Fisheries.
The trainer for VOB and I knew many of the same fishermen and mongers up and down the central and north (Pacific) coast. I arranged a meeting between the two educational directors of VOW and VOB, who were both eager to meet each other, as they both were just firing up their educational programs in oral history education. The meeting was very fruitful for all of us, as we brainstormed new ways to approach interdisciplinary oral history opportunities. As such, I was able to synthesize curriculum from both programs in preparing my students for the immersion trip, considering sustainability as an interdependent learning opportunity in environmental, social, and economic content. When I created the trip I didn’t have a term for what the outcome would be, except that I had hoped they would become more aware of sustainable seafood and how to promote its values. Ecoliteracy was a term that came to fruition after the projects were completed, but I think it can be extremely valuable as a goal in interdisciplinary oral history education.
What pointers can you give to other educators interested in using oral history to engage their students?
With all the material out there, I feel that educators have ample access to help prepare for projects. In the scheme of these projects, I would advise scheduling time for thoughtful processing or metacognitive reflection. All too often, it is easy to focus on the preparation, conducting and capturing the interviews, and then getting something tangible done with it. Perhaps it is embedded in the education world of outcome-based assessment: getting results and evidence that learning is happening. With high school students, the experience of interviewing is an extremely valuable learning tool that could easily get overlooked when we are focusing on a project.
For example, on an immersion trip to El Salvador with my high school students, we were given an opportunity to interview the daughter of the sole survivor of El Mozote, an infamous massacre that happened at the climax of the civil war. The narrator insisted on telling us her and her mother’s story, despite the fact that she had just gotten chemotherapy the day prior. She said that her storytelling was therapeutic for her and helped her feel that her mother, who had passed away, and all those victims of the massacre would not die in vain. This was such heavy content for her and for us as her audience. We all needed to talk, be quiet about it, cry about it, and reflect on the value of the witnessing. In the end, it wasn’t the deliverable that would be the focus of the learning, it was the actual experience. From it, compassion was built in the students, not just for El Salvadorian victims and survivors, but on a broader scale for all people who face civil strife and persecution. After such an experience, statistics were not just numbers anymore, they had a human face. This, to date, for me has been the most valuable part of oral history education: the transformation that can occur during the experience of an interview, as opposed to the product produced from it. For educators, it is vital to facilitate a pointed and thoughtful discussion with the interviewer to hone in on the learning and realize the transformation, if there is one. The discussion about the experience is essential in understanding the value of the oral history interviewing.
Do you have plans to do similar projects in the future?
After such positive experiences with oral history education, I wanted a chance to actively be an oral historian who captures narratives in issues of sustainable food sources. I have transitioned from teaching to running my own business, Narrability, with the mission to build sustainability through community narratives. I just completed a small project collecting oral histories of local fishermen, called “Long Live the King: Storytelling the Value of Salmon Fishing in the Monterey Bay.” Housed on the Monterey Bay Salmon and Trout Project (MBSTP) website, the project highlights some of the realities connected to the MBSTP local hatchery net pen program, which augments the natural Chinook salmon runs from rivers in the Sacramento area to be released into the Monterey Bay. Because of drought, dams, overfishing, and urbanization, the Chinook fishery in the central coast area has been deeply affected, and the need for a net pen program seems strong. In the Monterey Bay, there have been many challenges in implementing the Chinook net pen program due to the unfortunate bureaucracy of a discouraging port commission out of the Santa Cruz harbor. Because of these challenges, the oral histories that I collected help to illustrate that regional Chinook salmon fishing builds environmental stewardship, family bonding, and community building, and provides a healthy protein source.
Through Narrability, I have also been working on developing a large oral history program with a group of organic farming, wholesale, and certification pioneers. As many organic pioneers face retirement, the need for their history to be recorded is growing. Irene Reti sparked this realization in her project through University of California, Santa Cruz: Cultivating a Movement: An Oral History Series on Organic Farming & Sustainable Agriculture on California’s Central Coast. Through collaboration with some of the major players in organics, we aim to build a comprehensive national collection of the history of organics for the public domain.
Is there anything you couldn’t address in the article that you’d like to share here?
I know being a teacher can be time crunched, and once interviews are recorded, students and teachers want to do something tactile with the interviews (podcasts/narratives/documentaries). I encourage educators to implement time to reflect on the process. I wish I had done more reflective processing in this manner: to interview as a class; to discuss the experience of interviewing and the feelings elicited before, during, and after an interview; to authentically analyze how the interviews went, including considering narrator dynamics. In many cases, the skills learned and the personal growth are not the most tangible outcomes. Despite this, I believe oral history education can help to shape our students into compassionate critical thinkers, and may even inspire them to continue to interview and listen empathetically to solve problems in their personal, educational, and professional futures. This might not be something we can grade or present as a deliverable; it might be a long-term effect that grows with a student’s lifelong learning.
Image Credit: Front entrance of the Aquarium. Photo by Amadscientist. CC BY-SA 3.0 via Wikimedia Commons.
Is it better to be positive or negative? Many of the most vivid public health appeals have been negative – “Smoking Kills” or “Drink, Drive, and Die” – but do these negative messages work when it comes to changing eating behavior?
Past literature reviews of positive- or gain-framed versus negative- or loss-framed health messages have been inconsistent. In our content analysis of 63 nutrition education studies, we identified four key questions that can resolve these inconsistencies and help predict which type of health message will work best for a particular target audience. The more of these questions that are answered with a “Yes,” the more effective a negative- or loss-framed health message will be.
Is the target audience highly involved in this issue?
The more knowledgeable or involved a target audience, the more strongly they’ll be motivated by a negative- or loss-based message. In contrast, those who are less involved may not believe the message or may simply wish to avoid bad news. Less involved consumers generally respond better to positive messages that provide a clear, actionable step that leaves them feeling positive and motivated. For instance, telling them to “eat more sweet potatoes to help your skin look younger” is more effective than telling them “your skin will age faster if you don’t eat sweet potatoes.” The former doesn’t require them to know why or to link sweet potatoes to Vitamin A.
Is the target audience detail-oriented?
People who like details – such as most of the people designing public health messages – prefer negative- or loss-framed messages. They have a deeper understanding and knowledge base on which to elaborate on the message. In her coverage of the article for Food Navigator, Elizabeth Crawford noted that most of the general public is not interested in the details and is more influenced by the more superficial features of the message, including whether it is more positive or attractive relative to the other things vying for their attention at that moment.
Is the target audience risk averse?
When a positive outcome is certain, gain-framed messages work best (“you’ll live 7 years longer if you are a healthy weight”). When a negative outcome is certain, loss-framed messages work best (“you’ll die 7 years earlier if you are obese”). For instance, we found that if it is believed that eating more fruits and vegetables leads to lower obesity, a positive message (“eat broccoli and live longer”) is more effective than a negative message.
Is the outcome uncertain?
When claims appear factual and convincing, positive messages tend to work best. If a person believes that eating soy will extend their life by reducing their risk of heart disease, a positive message stating this is best. If they aren’t as convinced, a more effective message could be “people who don’t eat soy have a higher rate of heart disease.”
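Read as a checklist, the four questions can be turned into a toy scoring rule. The key names and the simple majority cutoff below are my own illustration of the “more Yes answers favor a loss-framed message” heuristic, not the authors’ published scoring procedure:

```python
# Hypothetical checklist version of the four audience questions above;
# the dictionary keys and the majority-vote cutoff are illustrative only.
QUESTIONS = (
    "highly_involved",    # Is the target audience highly involved in this issue?
    "detail_oriented",    # Is the target audience detail-oriented?
    "risk_averse",        # Is the target audience risk averse?
    "outcome_uncertain",  # Is the outcome uncertain?
)

def suggested_framing(answers):
    """Return 'loss-framed' when most answers are Yes, else 'gain-framed'."""
    yes = sum(1 for q in QUESTIONS if answers.get(q, False))
    return "loss-framed" if yes > len(QUESTIONS) // 2 else "gain-framed"
```

For example, an involved, detail-oriented audience facing an uncertain outcome scores three Yes answers and gets a loss-framed suggestion, while a casual audience scores low and gets a gain-framed one.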
These findings show that those who design health messages, such as health care professionals, are affected by them differently than the general public. When writing a health message for the public, rather than appealing to the sensibilities of experts, the message will be more effective if it’s presented positively. The general public is more likely to adopt the behavior being promoted if they see a potential positive outcome. Evoking fear may seem like a good way to get your message across, but this study shows that, in fact, the opposite is true: telling the public that a behavior will help them be healthier and happier is actually more effective.
There’s a puzzle around economics. On the one hand, economists have the most policy influence of any group of social scientists. In the United States, for example, economics is the only social science that controls a major branch of government policy (through the Federal Reserve), or has an office in the White House (the Council of Economic Advisers). And though they don’t rank up there with lawyers, economists make a fairly strong showing among prime ministers and presidents, as well.
But as any economist will tell you, that doesn’t mean that policymakers commonly take their advice. There are lots of areas where economists broadly agree, but policymakers don’t seem to care. Economists have wide consensus on the need for carbon taxes, but that doesn’t make them an easier political sell. And on topics where there’s a wider range of economic opinions, like over minimum wages, it seems that every politician can find an economist to tell her exactly what she wants to hear.
So if policymakers don’t take economists’ advice, do they actually matter in public policy? Here, it’s useful to distinguish between two different types of influence: direct and indirect.
Direct influence is what we usually think of when we consider how experts might affect policy. A political leader turns to a prominent academic to help him craft new legislation. A president asks economic advisers which of two policy options is preferable. Or, in the case where the expert is herself the decisionmaker, she draws on her own deep knowledge to inform political choices.
This happens, but to a limited extent. Though politicians may listen to economists’ recommendations, their decisions are dominated by political concerns. They pay particular attention to advice that agrees with what they already want to do, and the rise of think tanks has made it even easier to find experts who support a preexisting position.
Research on experts suggests that direct advisory effects are more likely to occur under two conditions. The first is when a policy decision has already been defined as more technical than political, meaning that experts are seen as the appropriate group to decide. So we leave it to specialists to determine what inventions can be patented, or which drugs are safe for consumers, or (with occasional exceptions) how best to count the population. In countries with independent central banks, economists often control monetary policy in this way.
Experts can also have direct effects when possible solutions to a problem have not yet been defined. This can happen in crisis situations: think of policymakers desperately casting about for answers during the peak of the financial crisis. Or it can take place early in the policy process: consider economists being brought in at the beginning of an administration to inject new ideas into health care reform.
But though economists have some direct influence, their greatest policy effects may take place through less direct routes—by helping policymakers to think about the world in new ways.
For example, economists help create new forms of measurement and decision-making tools that change public debate. GDP is perhaps the most obvious of these. A hundred years ago, while politicians talked about economic issues, they did not talk about “the economy.” “The economy,” that focal point of so much of today’s chatter, only emerged when national income and product accounts were created in the mid-20th century. GDP changes have political, as well as economic, effects. There were military implications when China’s GDP overtook Japan’s; no doubt the political environment will change further when it surpasses that of the United States.
Less visible economic tools also shape political debate. When policymakers require cost-benefit analysis of new regulation, conversations change because the costs of regulation become much more visible, while unquantifiable effects may get lost in the debate. Indicators like GDP and methods like cost-benefit analysis are not solely the product of economists, but economists have been central in developing them and encouraging their use.
The spread of technical devices, though, is not the only way economics changes how we think about policy. The spread of an economic style of reasoning has been equally important.
Philosopher Ian Hacking has argued that the emergence of a statistical style of reasoning first made it possible to say that the population of New York on 1 January 1820 was 100,000. Similarly, an economic style of reasoning—a sort of Econ 101-thinking organized around basic concepts like incentives, efficiency, and opportunity costs—has changed the way policymakers think.
While economists might wish economic reasoning were more visible in government, over the past fifty years it has in fact become much more widespread. Organizations like the US Congressional Budget Office (and its equivalents elsewhere) are now formally responsible for quantifying policy tradeoffs. Less formally, other disciplines that train policymakers now include some element of economics. This includes master’s programs in public policy, organized loosely around microeconomics, and law, in which law and economics is an important subfield. These curricular developments have exposed more policymakers to basic economic reasoning.
The policy effects of an economic style of reasoning are harder to pinpoint than, for example, whether policymakers adopted an economist’s tax policy recommendation. But in the last few decades, new policy areas have been reconceptualized in economic terms. As a result, we now see education as an investment in human capital, science as a source of productivity-increasing technological innovations, and the environment as a collection of ecosystem services. This subtle shift in orientation has implications for what policies we consider, as well as our perception of their ultimate goals.
In the end, then, there is no puzzle. Economists do matter in public policy, even though policymakers, in fact, often ignore their advice. If we are interested in understanding how, though, we should pay attention to more than whether politicians take economists’ recommendations—we must also consider how their intellectual tools shape the very ways that policymakers, and all of us, think.