I recall a dinner conversation at a symposium in Paris that I organized in 2010, where a number of eminent evolutionary biologists, economists and philosophers were present. One of the economists asked the biologists why it was that whenever the topic of “group selection” was brought up, a ferocious argument always seemed to ensue. The biologists pondered the question. Three hours later the conversation was still stuck on group selection, and a ferocious argument was underway.
Group selection refers to the idea that natural selection sometimes acts on whole groups of organisms, favoring some groups over others, leading to the evolution of traits that are group-advantageous. This contrasts with the traditional ‘individualist’ view which holds that Darwinian selection usually occurs at the individual level, favoring some individual organisms over others, and leading to the evolution of traits that benefit individuals themselves. Thus, for example, the polar bear’s white coat is an adaptation that evolved to benefit individual polar bears, not the groups to which they belong.
The debate over group selection has raged for a long time in biology. Darwin himself primarily invoked selection at the individual level, for he was convinced that most features of the plants and animals he studied had evolved to benefit the individual plant or animal. But he did briefly toy with group selection in his discussion of social insect colonies, which often function as highly cohesive units, and also in his discussion of how self-sacrificial (‘altruistic’) behaviours might have evolved in early hominids.
In the 1960s and 1970s, the group selection hypothesis was heavily critiqued by authors such as G.C. Williams, John Maynard Smith, and Richard Dawkins. They argued that group selection was an inherently weak evolutionary mechanism, and not needed to explain the data anyway. Examples of altruism, in which an individual performs an action that is costly to itself but benefits others (e.g. fighting an intruder), are better explained by kin selection, they argued. Kin selection arises because relatives share genes. A gene which causes an individual to behave altruistically towards its relatives will often be favoured by natural selection—since these relatives have a better than random chance of also carrying the gene. This simple piece of logic tallies with the fact that empirically, altruistic behaviours in nature tend to be kin-directed.
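The kin-selection logic described above is usually formalised as Hamilton's rule (the rule isn't spelled out in this post, but it is the standard statement of the idea): altruism is favoured when r × b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient, and c the cost to the actor. A minimal sketch:

```python
def altruism_favoured(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: an altruism gene spreads when r * b > c,
    where r is relatedness, b the recipient's benefit, c the actor's cost."""
    return r * b > c

# Full siblings share half their genes on average (r = 0.5), so a costly
# act is favoured only if it yields more than twice its cost in benefit.
print(altruism_favoured(r=0.5, b=3.0, c=1.0))    # siblings -> True
print(altruism_favoured(r=0.125, b=3.0, c=1.0))  # cousins  -> False
```

Because r falls off quickly outside the immediate family, the rule predicts exactly what the essay notes is observed empirically: altruistic behaviour in nature tends to be kin-directed.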
Strangely, the group selection controversy seems to re-emerge anew every generation. Most recently, Harvard’s E.O. Wilson, the “father of sociobiology” and a world expert on ant colonies, has argued that “multi-level selection”—essentially a modern version of group selection—is the best way to understand social evolution. In his earlier work, Wilson was a staunch defender of kin selection, but no longer: he has recently penned sharp critiques of the reigning kin selection orthodoxy, both alone and in a 2010 Nature article co-authored with Martin Nowak and Corina Tarnita. Wilson’s volte-face has led him to cross swords with Richard Dawkins, who says that Wilson is “just wrong” about kin selection and that his most recent book contains “pervasive theoretical errors.” Both parties point to eminent scientists who support their view.
What explains the persistence of the controversy over group and kin selection? Usually in science, one expects to see controversies resolved by the accumulation of empirical data. That is how the “scientific method” is meant to work, and often does. But the group selection controversy does not seem amenable to a straightforward empirical resolution; indeed, it is unclear whether there are any empirical disagreements at all between the opposing parties. Partly for this reason, the controversy has sometimes been dismissed as “semantic,” but this is too quick. There have been semantic disagreements, in particular over what constitutes a “group,” but this is not the whole story. For underlying the debate are deep issues to do with causality, a notoriously problematic concept, and one which quickly lands one in philosophical hot water.
All parties agree that differential group success is common in nature. Dawkins uses the example of red squirrels being outcompeted by grey squirrels. However, as he notes, this is not a case of genuine group selection, as the success of one group and the decline of the other were a side-effect of individual-level selection. More generally, there may be a correlation between some group feature and the group’s biological success (or “fitness”); but like any correlation, this need not mean that the former has a direct causal impact on the latter. But how are we to distinguish, even in theory, between cases where the group feature does causally influence the group’s success, so “real” group selection occurs, and cases where the correlation between group feature and group success is “caused from below”? This distinction is crucial; yet it cannot even be expressed in the standard formalisms that biologists use to describe the evolutionary process, as these are statistical, not causal. The distinction is related to a more general question that has long troubled philosophers of science: how to understand causality in hierarchical systems.
Recently, a number of authors have argued that the opposition between kin and multi-level (or group) selection is misconceived, on the grounds that the two are actually equivalent—a suggestion first broached by W.D. Hamilton as early as 1975. Proponents of this view argue that kin and multi-level selection are simply alternative mathematical frameworks for describing a single evolutionary process, so the choice between them is one of convention not empirical fact. This view has much to recommend it, and offers a potential way out of the Wilson/Dawkins impasse (for it implies that they are both wrong). However, the equivalence in question is a formal equivalence only. A correct expression for evolutionary change can usually be derived using either the kin or multi-level selection frameworks, but it does not follow that they constitute equally good causal descriptions of the evolutionary process.
This suggests that the persistence of the group selection controversy can in part be attributed to the mismatch between the scientific explanations that evolutionary biologists want to give, which are causal, and the formalisms they use to describe evolution, which are usually statistical. To make progress, it is essential to attend carefully to the subtleties of the relation between statistics and causality.
There’s a puzzle around economics. On the one hand, economists have the most policy influence of any group of social scientists. In the United States, for example, economics is the only social science that controls a major branch of government policy (through the Federal Reserve), or has an office in the White House (the Council of Economic Advisers). And though they don’t rank up there with lawyers, economists make a fairly strong showing among prime ministers and presidents, as well.
But as any economist will tell you, that doesn’t mean that policymakers commonly take their advice. There are lots of areas where economists broadly agree, but policymakers don’t seem to care. Economists have wide consensus on the need for carbon taxes, but that doesn’t make them an easier political sell. And on topics where there’s a wider range of economic opinions, like over minimum wages, it seems that every politician can find an economist to tell her exactly what she wants to hear.
So if policymakers don’t take economists’ advice, do they actually matter in public policy? Here, it’s useful to distinguish between two different types of influence: direct and indirect.
Direct influence is what we usually think of when we consider how experts might affect policy. A political leader turns to a prominent academic to help him craft new legislation. A president asks economic advisers which of two policy options is preferable. Or, in the case where the expert is herself the decisionmaker, she draws on her own deep knowledge to inform political choices.
This happens, but to a limited extent. Though politicians may listen to economists’ recommendations, their decisions are dominated by political concerns. They pay particular attention to advice that agrees with what they already want to do, and the rise of think tanks has made it even easier to find experts who support a preexisting position.
Research on experts suggests that direct advisory effects are more likely to occur under two conditions. The first is when a policy decision has already been defined as more technical than political, so that experts are seen as the appropriate group to decide. Thus we leave it to specialists to determine which inventions can be patented, which drugs are safe for consumers, or (with occasional exceptions) how best to count the population. In countries with independent central banks, economists often control monetary policy in this way.
Experts can also have direct effects when possible solutions to a problem have not yet been defined. This can happen in crisis situations: think of policymakers desperately casting about for answers during the peak of the financial crisis. Or it can take place early in the policy process: consider economists being brought in at the beginning of an administration to inject new ideas into health care reform.
But though economists have some direct influence, their greatest policy effects may take place through less direct routes—by helping policymakers to think about the world in new ways.
For example, economists help create new forms of measurement and decision-making tools that change public debate. GDP is perhaps the most obvious of these. A hundred years ago, while politicians talked about economic issues, they did not talk about “the economy.” “The economy,” that focal point of so much of today’s chatter, only emerged when national income and product accounts were created in the mid-20th century. GDP changes have political, as well as economic, effects. There were military implications when China’s GDP overtook Japan’s; no doubt the political environment will change more when it surpasses the United States.
Less visible economic tools also shape political debate. When policymakers require cost-benefit analysis of new regulation, conversations change because the costs of regulation become much more visible, while unquantifiable effects may get lost in the debate. Indicators like GDP and methods like cost-benefit analysis are not solely the product of economists, but economists have been central in developing them and encouraging their use.
The spread of technical devices, though, is not the only way economics changes how we think about policy. The spread of an economic style of reasoning has been equally important.
Philosopher Ian Hacking has argued that the emergence of a statistical style of reasoning first made it possible to say that the population of New York on 1 January 1820 was 100,000. Similarly, an economic style of reasoning—a sort of Econ 101-thinking organized around basic concepts like incentives, efficiency, and opportunity costs—has changed the way policymakers think.
While economists might wish economic reasoning were more visible in government, over the past fifty years it has in fact become much more widespread. Organizations like the US Congressional Budget Office (and its equivalents elsewhere) are now formally responsible for quantifying policy tradeoffs. Less formally, other disciplines that train policymakers now include some element of economics. This includes master’s programs in public policy, organized loosely around microeconomics, and law, in which law and economics is an important subfield. These curricular developments have exposed more policymakers to basic economic reasoning.
The policy effects of an economic style of reasoning are harder to pinpoint than, for example, whether policymakers adopted an economist’s tax policy recommendation. But in the last few decades, new policy areas have been reconceptualized in economic terms. As a result, we now see education as an investment in human capital, science as a source of productivity-increasing technological innovations, and the environment as a collection of ecosystem services. This subtle shift in orientation has implications for what policies we consider, as well as our perception of their ultimate goals.
In the end, then, there is no puzzle. Economists do matter in public policy, even though policymakers, in fact, often ignore their advice. If we are interested in understanding how, though, we should pay attention to more than whether politicians take economists’ recommendations—we must also consider how their intellectual tools shape the very ways that policymakers, and all of us, think.
Is it better to be positive or negative? Many of the most vivid public health appeals have been negative – “Smoking Kills” or “Drink, Drive, and Die” – but do these negative messages work when it comes to changing eating behavior?
Past literature reviews of gain-framed (positive) versus loss-framed (negative) health messages have been inconsistent. In our content analysis of 63 nutrition education studies, we identified four key questions that can resolve these inconsistencies and help predict which type of health message will work best for a particular target audience. The more of these questions that are answered with a “Yes,” the more effective a negative, loss-framed health message will be.
Is the target audience highly involved in this issue?
The more knowledgeable or involved a target audience, the more strongly they’ll be motivated by a negative- or loss-based message. In contrast, those who are less involved may not believe the message or may simply wish to avoid bad news. Less involved consumers generally respond better to positive messages that provide a clear, actionable step that leaves them feeling positive and motivated. For instance, telling them to “eat more sweet potatoes to help your skin look younger” is more effective than telling them “your skin will age faster if you don’t eat sweet potatoes.” The former doesn’t require them to know why or to link sweet potatoes to Vitamin A.
Is the target audience detail-oriented?
People who like details – such as most of the people designing public health messages – prefer negative, loss-framed messages, because they have a deeper understanding and knowledge base with which to elaborate on the message. In her coverage of the article for Food Navigator, Elizabeth Crawford noted that most of the general public is not interested in the details and is more influenced by the superficial features of a message, including whether it is more positive or attractive than the other things vying for their attention at that moment.
Is the target audience risk averse?
When a positive outcome is certain, gain-framed messages work best (“you’ll live 7 years longer if you are a healthy weight”). When a negative outcome is certain, loss-framed messages work best (“you’ll die 7 years earlier if you are obese”). For instance, we found that when people believe that eating more fruits and vegetables leads to lower obesity, a positive message (“eat broccoli and live longer”) is more effective than a negative one.
Is the outcome uncertain?
When claims appear factual and convincing, positive messages tend to work best. If a person believes that eating soy will extend their life by reducing their risk of heart disease, a positive message stating this is best. If they aren’t as convinced, a more effective message could be “people who don’t eat soy have a higher rate of heart disease.”
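The four questions above can be read as a simple checklist. The sketch below is a toy decision helper, not something from the study itself: it merely counts “yes” answers, and the thresholds are our own assumption; the article claims only that more yeses favour a loss-framed message.

```python
# The four audience questions from the article, as a yes/no checklist.
QUESTIONS = (
    "Is the target audience highly involved in this issue?",
    "Is the target audience detail-oriented?",
    "Is the target audience risk averse?",
    "Is the outcome uncertain?",
)

def recommend_framing(answers: list) -> str:
    """Map the number of 'yes' answers (0-4) to a suggested framing.
    Cutoffs here are illustrative, not taken from the study."""
    assert len(answers) == len(QUESTIONS)
    yeses = sum(bool(a) for a in answers)
    if yeses >= 3:
        return "loss-framed"
    if yeses <= 1:
        return "gain-framed"
    return "either (mixed audience)"

# Health professionals: involved and detail-oriented.
print(recommend_framing([True, True, True, True]))      # loss-framed
# Casual general public: keep it positive.
print(recommend_framing([False, False, False, False]))  # gain-framed
```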
These findings show that those who design health messages, such as health care professionals, respond to them differently than the general public does. A message aimed at the general public will be more effective if it is presented positively rather than in the loss-framed terms that appeal to the experts who write it. The general public is more likely to adopt the behavior being promoted if they can see a potential positive outcome. Evoking fear may seem like a good way to get your message across, but for a general audience this study shows the opposite is true: telling the public that a behavior will help them be healthier and happier is more effective.
For our second blog post of 2015, we’re looking back at a great article from Katie Kuszmar in The Oral History Review (OHR), “From Boat to Throat: How Oral Histories Immerse Students in Ecoliteracy and Community Building” (OHR 41.2). In the article, Katie discussed a research trip she and her students took to record the oral histories of local fishing practices and to learn about sustainable fishing and consumption. We followed up with her over email to see what we could learn from high school oral historians, and what she has been up to since the article came out. Enjoy the article, and check out her current work at Narrability.com.
In the article, you mentioned that your students’ youthful curiosity, or lack of inhibition, helped them get answers to tough questions. Can you think of particular moments where this made a difference? Were there any difficulties you didn’t expect, working with high school oral historians?
One particular moment came at the end of the trip. Our final interview was with the public relations coordinator for the Monterey Bay Aquarium’s (MBA) Seafood Watch program, who had kindly arranged the fisheries historian interviews and offered to be one of the interviewees as well. When we finally interviewed the coordinator, the students’ most burning question was whether or not Seafood Watch worked directly with fishermen. The students didn’t like her answer. She let us know that fishermen are welcome to approach Seafood Watch, and that Seafood Watch is interested in fishermen, but that the organization didn’t work directly with them in setting the standards for its sustainable seafood guidelines. The students seemed to think that taking the fishermen’s side was the way to react. When we left the interview they were conflicted. The Monterey Bay Aquarium is a well-respected organization for young people in the area, and the aquarium itself is full of nostalgic memories for most students in the region, who visit it frequently on field trips or on vacation. How could such a beloved establishment not consider the voices of fishermen, for whom the students had just developed a newfound respect? It was a big learning moment about bureaucracy, research, empathetic listening, and the usefulness of oral history.
After the interview, when the students had cooled off, we discussed how the dynamics of an interview can change when personal conflicts arise. The narrator may even change her story and tone because of the interviewer’s biases. We explored several essential questions that I would now use for discussion before interviews occur, for I was learning too. Some questions that we considered were: When you don’t agree with your narrator, how do you ask questions that will keep the communication safe and open?
How do you keep your own beliefs separate from the narrator’s, and why is this important when collecting oral history? In other words, how do you take the ego out of it?
The students were given a learning opportunity from which I hoped we all could gain insight. We discussed how, if you can capture the narrator’s perspective in your interview (even if it differs from your own, or from other narrators’ for that matter), then the audience will be able to see discrepancies in the narratives and gather the evidence they need to engage with the issues. Hearing that Seafood Watch doesn’t work with fishermen might help an audience to ask questions on a larger public scale. Considering oral history’s usefulness in engaging the public, inspiring advocacy, and questioning bureaucracy might be a powerful way for students to engage in the process without worrying about trying to prove their narrators wrong or telling the audience what to think. Oral history has power in this way: voices can illuminate the issues without the need for strong editorializing. This narrative power can be studied beforehand with samples of oral history, and it can also be a great way for students to reflect metacognitively on what they have participated in and how they might want to extend their learning experiences into the real world. Voice of Witness (VOW) contends that students who engage in oral history are “history makers.” What a powerful way to learn!
How did this project start? Did you begin with wanting to do oral history with your students, or were you more interested in exploring sustainability and fell into oral history as a method?
Being a fisherwoman myself, and having just started commercial fishing with my husband, who is a fishmonger, I found my two worlds of fishing and teaching oral history colliding. Through ten years of teaching English, drawn by my love of storytelling, I have long been interested in creating experiential learning opportunities for students around where food comes from and sustainable food hubs.
Through a series of uncanny events connecting fishing and oral history, the project seemed to fall into place. I first attended a training in oral history for educators through a collaborative pilot program created by VOW and Facing History and Ourselves (FHAO). After the training, I mentored ten seniors at my school as they produced oral history Senior Service Learning Projects, which ended in a public performance at a local art museum’s performance space. VOW was integral to my first year’s experience with oral history education. I still work with VOW and sit on their Education Advisory Board, which helps me continue my engagement in oral history education.
In the same year as the pilot program with VOW, I attended the annual California Association of Teachers of English conference, at which the program coordinator for the National Oceanic and Atmospheric Administration’s (NOAA) Voices of the Bay (VOB) program offered a training. The training covered curriculum strategies in marine ecology, fishing, economics, and basic oral history skill-building. NOAA would help arrange interviews with local fishermen in classrooms or at nearby harbors, and the recorded interviews would eventually go into a national archive called Voices from the Fisheries.
The trainer for VOB and I knew many of the same fishermen and mongers up and down the central and north (Pacific) coast. I arranged a meeting between the educational directors of VOW and VOB, who were eager to meet each other, as both were just firing up their educational programs in oral history education. The meeting was very fruitful for all of us, as we brainstormed new ways to approach interdisciplinary oral history opportunities. I was then able to synthesize curriculum from both programs in preparing my students for the immersion trip, treating sustainability as an interdependent learning opportunity spanning environmental, social, and economic content. When I created the trip I didn’t have a term for what the outcome would be, except that I hoped the students would become more aware of sustainable seafood and how to promote its value. Ecoliteracy was a term that emerged only after the projects were completed, but I think it can be extremely valuable as a goal in interdisciplinary oral history education.
What pointers can you give to other educators interested in using oral history to engage their students?
With all the material out there, I feel that educators have ample resources to help prepare for projects. Within these projects, I would advise scheduling time for thoughtful processing, or metacognitive reflection. All too often, it is easy to focus on the preparation, on conducting and capturing the interviews, and then on getting something tangible done with them. Perhaps this is embedded in the education world of outcome-based assessment: getting results and evidence that learning is happening. With high school students, however, the experience of interviewing is an extremely valuable learning tool that can easily get overlooked when we focus only on a project.
For example, on an immersion trip to El Salvador with my high school students, we were given an opportunity to interview the daughter of the sole survivor of El Mozote, an infamous massacre that happened at the climax of the civil war. The narrator insisted on telling us her own and her mother’s story, despite having received chemotherapy the day before. She said that the storytelling was therapeutic for her and helped her feel that her mother, who had passed away, and all the victims of the massacre would not have died in vain. This was heavy content for her and for us as her audience. We all needed to talk about it, be quiet about it, cry about it, and reflect on the value of the witnessing. In the end, it wasn’t the deliverable that was the focus of the learning; it was the actual experience. From it, compassion was built in the students, not just for Salvadoran victims and survivors, but on a broader scale for all people who face civil strife and persecution. After such an experience, statistics were not just numbers anymore; they had a human face. This, to date, has been for me the most valuable part of oral history education: the transformation that can occur during the experience of an interview, as opposed to the product produced from it. For educators, it is vital to facilitate a pointed and thoughtful discussion with the interviewer to home in on the learning and realize the transformation, if there is one. Discussing the experience is essential to understanding the value of oral history interviewing.
Do you have plans to do similar projects in the future?
After such positive experiences with oral history education, I wanted a chance to be an active oral historian capturing narratives about sustainable food sources. I have transitioned from teaching to running my own business, Narrability, whose mission is to build sustainability through community narratives. I just completed a small project called “Long Live the King: Storytelling the Value of Salmon Fishing in the Monterey Bay,” in which I collected oral histories of local fishermen. Housed on the Monterey Bay Salmon and Trout Project (MBSTP) website, the project highlights some of the realities of the MBSTP’s local hatchery net pen program, which augments the natural Chinook salmon runs from rivers in the Sacramento area with fish released into the Monterey Bay. Because of drought, dams, overfishing, and urbanization, the Chinook fishery on the central coast has been deeply affected, and the need for a net pen program seems strong. In the Monterey Bay, there have been many challenges in implementing the Chinook net pen program due to the unfortunate bureaucracy of a discouraging port commission out of the Santa Cruz harbor. Because of these challenges, the oral histories I collected help to illustrate that regional Chinook salmon fishing builds environmental stewardship, family bonding, and community, and provides a healthy protein source.
Through Narrability, I have also been working on developing a large oral history program with a group of organic farming, wholesale, and certification pioneers. As many organic pioneers face retirement, the need to record their history is growing. Irene Reti sparked this realization with her project through the University of California, Santa Cruz: Cultivating a Movement: An Oral History Series on Organic Farming & Sustainable Agriculture on California’s Central Coast. Through collaboration with some of the major players in organics, we aim to build a comprehensive national collection of the history of organics for the public domain.
Is there anything you couldn’t address in the article that you’d like to share here?
I know being a teacher can be time-crunched, and once interviews are recorded, students and teachers want to do something tactile with them (podcasts, narratives, documentaries). I encourage educators to build in time to reflect on the process. I wish I had done more reflective processing in this manner: interviewing as a class; discussing the experience of interviewing and the feelings elicited before, during, and after an interview; and authentically analyzing how the interviews went, including the narrator dynamics. In many cases, the skills learned and the personal growth are not the most tangible outcomes. Even so, I believe oral history education can help to shape our students into compassionate critical thinkers, and may even inspire them to continue to interview and listen empathetically to solve problems in their personal, educational, and professional futures. This might not be something we can grade or present as a deliverable; it might be a long-term effect that grows with a student’s lifelong learning.
Image Credit: Front entrance of the Aquarium. Photo by Amadscientist. CC by SA 3.0 via Wikimedia Commons.
Meet Utricularia. It’s a bladderwort, an aquatic carnivorous plant, and one of the fastest things on the planet. It can catch its prey in a millisecond, accelerating it at up to 600g.
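Those two figures are enough for a back-of-envelope sanity check. The constant-acceleration assumption below is ours, purely for illustration:

```python
# Rough check of the stated figures: ~600g sustained for ~1 millisecond.
g = 9.81             # standard gravity, m/s^2
a = 600 * g          # trap acceleration, ~5,900 m/s^2
t = 1e-3             # one millisecond, in seconds

v = a * t            # speed reached by the prey, assuming constant acceleration
d = 0.5 * a * t**2   # distance covered in that millisecond

print(f"speed ~{v:.1f} m/s, distance ~{d * 1000:.1f} mm")
# -> speed ~5.9 m/s, distance ~2.9 mm
```

A few millimetres in a millisecond is roughly the scale of a bladder trap, so the numbers hang together.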
Once caught inside, the prey suffocates, and digestive enzymes break the unfortunate creature down for its nutrients. Anything small enough to be pulled in won’t know its mistake until it’s too late. But as lethal as the trap is, it seems to have some flaws. The traps don’t just catch animals; they catch anything that gets sucked in, and often that’s algae and pollen too.
A team at the University of Vienna led by Marianne Koller-Peroutka and Wolfram Adlassnig closely examined Utricularia and found the plants were not very efficient killers. Studying over 2000 traps showed that only about 10% of the objects sucked in were animals. Animals are great if you want nutrients like nitrogen and phosphorus, but half of the catch was algae and a third pollen.
What was more puzzling was that not all the algae entered with an animal. If a bladder is left alone for a long while, it will trigger anyway: no animal is needed, and algae, pollen, and fungi are drawn in. Is this a sign that the plant is desperate for a meal, hoping an animal is passing? Koller-Peroutka and Adlassnig found that traps catching algae and pollen grew larger and had more biomass. Examining the bladders under a microscope showed that algae caught in the traps died and decayed, which was further evidence that the plant is happy to eat other plants too. It seems that it is not just animals that Utricularia is hunting.
Koller-Peroutka and Adlassnig say this is why Utricularia is able to live in places with comparatively few animals. Nitrogen from animals and other elements from plants mean it is happy with a balanced diet. It can grow more and bigger traps, and use these for catching animals or plants or both.
Fortunately even the big traps only catch tiny animals, so if someone has bought you one for Christmas you can leave it on the dinner table without losing your turkey and trimmings in a millisecond.
From the comfort of a desk, looking at a computer screen or the printed page of a newspaper, it is very easy to ignore the fact that thousands of tons of insecticide are sprayed annually.
Consider the problem of the fall armyworm in Mexico. As scientists and crop advisors, we’ve worked for the past two decades trying to curb its impact on corn yield. We’ve tested dozens of chemicals to gain some control over this pest on different crops.
A couple of years ago, during a break at a technical meeting, we were comparing notes on the number of insecticide applications needed to battle this worm. Anecdotal information from other parts of the country entered the conversation. Some colleagues reported that the fall armyworm wasn't the worst pest in a particular region of Mexico and was easy to control with a couple of insecticide applications. Others mentioned that up to six sprays were necessary in other parts of the country. Wait a second, I said: it is completely ridiculous and tremendously expensive to use so much insecticide in maize production.
At that point we decided to contact more professionals throughout Mexico and put together a geographical and seasonal 'map' of the occurrence of corn pests and the insecticides used in their control. Our report was compiled using simple arithmetic, and the findings really surprised us: by a conservative estimate, 3,000 tons of insecticidal active ingredient is used against the fall armyworm alone every year in Mexico. No wonder our country has the highest use of pesticide per hectare of arable land in North America.
Mexican farmers are stuck on what has been called 'the pesticide treadmill.' The first insecticide application sometimes occurs when the maize seed is put in the ground; a second one follows a couple of weeks later, then another, and another. This process usually involves the harshest insecticides, those that are highly toxic for the grower and the environment, because they are the cheapest. These initial applications can be curtailed by planting genetically modified (GM) maize that produces its own very specific and safe insecticide. Not spraying against pests in the first few weeks of maize development allows the beneficial fauna (lacewings, ladybird beetles, spiders, wasps, etc.) to build their populations and control maize pests; simply put, it enables the use of biological control. The combination of GM crops and natural enemies is an essential part of an integrated pest management program, a successful strategy employed all over the world to control pests, reduce insecticide use, and help farmers obtain more from their crop land.
We have good farmers in Mexico, a great diversity of natural enemies of the fall armyworm and other maize pests, and growers that are familiar with the benefits of using integrated pest management in other crop systems. Now we need modern technology to fortify such a program in Mexican maize.
Mexican scientists have developed GM maize to respond to some of the most pressing production needs in the country, such as lack of water. Maize hybrids developed by Mexican research institutions may be useful in local environments (e.g., tolerant of drought and cold conditions). These local genetically engineered maize varieties go through the same regulatory process as those of corporate developers.
At present, maize pest control with synthetic insecticides has been pretty much the only option for Mexican growers. They use pesticides because controlling pests is necessary for obtaining a decent yield, not because chemical corporations force them to spray or because a government program requires it. This is an urgent situation that demands solutions. There are a few methods that could prevent most of these applications, genetic engineering being one of them. Other countries have reduced their pesticide use by 40% through the acceptance of GM crops. Mexico, the birthplace of maize, produces only 70% of the maize it consumes because growers face so many environmental and pest control challenges, with heavy reliance on synthetic pesticides. Accepting the technology of GM crops, and educating farmers on better management practices, is key for Mexico to jump off the pesticide treadmill.
Image Credit: Maize diversity. Photo by Xochiquetzal Fonseca/CIMMYT. CC BY SA NC ND 2.0 via Flickr.
It is becoming widely accepted that women have, historically, been underrepresented and often completely written out of work in the fields of Science, Technology, Engineering, and Mathematics (STEM). Explanations for the gender gap in STEM fields range from genetically determined interests to structural and territorial segregation, discrimination, and historic stereotypes. As well as encouraging steps toward positive change, we would also like to retrospectively honour those women whose past work has been overlooked.
From astronomer Caroline Herschel to the first female winner of the Fields Medal, Maryam Mirzakhani, you can use our interactive timeline to learn more about the women whose works in STEM fields have changed our world.
With free Oxford University Press content, we tell the stories and share the research of both famous and forgotten women.
Featured image credit: Microscope. Public Domain via Pixabay.
Introduction, from Michael Alvarez, co-editor of Political Analysis
Recently I asked Nathaniel Beck to write about his experiences with research replication. His essay, published on 24 August 2014 on the OUPblog, concluded with a brief discussion of a recent experience of his when he tried to obtain replication data from the authors of a study published in PNAS on an experiment run on Facebook regarding social contagion. Since then, the story of Neal's efforts to obtain this replication material has taken a few interesting twists and turns, so I asked Neal to provide an update, because the lessons from his efforts to get the replication data from this PNAS study are useful for the continued discussion of research transparency in the social sciences.
After not hearing from Adam Kramer of Facebook, even after contacting PNAS, I persisted with both the editor of PNAS (Inder Verma, who was most kind) and with the NAS through "well connected" friends. (Getting replication data should not depend on knowing NAS members!) I was finally contacted by Adam Kramer, who offered that I could come out to Palo Alto to look at the replication data. Since Facebook did not offer to fly me out, I said no. I was then offered a chance to look at the replication files in the Facebook office four blocks from NYU, so I accepted. Let me stress that all dealings with Adam Kramer were highly cordial, and I assume that the delays were due to Facebook higher-ups who were dealing with the human subjects firestorm related to the Kramer piece.
When I got to the Facebook office I was asked to sign a standard non-disclosure agreement, which I declined to do. To my surprise this was not a problem, the only consequence being that a security officer would have had to escort me to the bathroom. I was then put in a room with a secure Facebook notebook with the data and RStudio loaded; Adam Kramer was there to answer questions, and I was also joined by a security person and an external relations person. All were quite pleasant, and the security person and I could even discuss the disastrous season being suffered by Liverpool.
I was given a replication file, a data frame with approximately 700,000 rows (one for each respondent) and 7 columns containing the number of positive and negative words used by each respondent, the total word count of each respondent, percentages based on these numbers, the experimental condition, and a variable indicating respondents omitted in producing the tables. This is exactly the data frame that would have been put in an archive, since it contained all the data needed to replicate the article. I was also given the R code that produced every item in the article. I was allowed to do anything I wanted with that data, and I could copy the results into a file. That file was then checked by Facebook people, and about two weeks later I received the entire file I had created. All good, or at least as good as it is going to get.
The data frame I played with was based on aggregating user posts so each user had one row of data, regardless of the number of posts (and the data frame did not contain anything more than the total number of words posted). I can understand why Facebook did not want to give me the data frame, innocuous as it seemed; those who specialize in re-identifying de-identified data and reverse engineering code are quite good these days, and I can surely understand Facebook's reluctance to have this raw data out there. And I understand why they could not give me all the actual raw data, which included how feeds were changed and so forth; this is the secret sauce that they would not like reverse engineered.
I got what I wanted. I could see their code, I could play with density plots to get a sense of the words used, I could change the number of extreme points dropped, and I could have moved to a negative binomial instead of a Poisson. Satisfied, I left after about an hour; there are only so many things one can do with one experiment on two outcomes. I felt bad that Adam Kramer had to fly to New York, but I guess this is not so horrible. Had the data been more complicated I might have felt that I could not do everything I wanted, and running a replication with three other people in a room is not ideal (especially given my typing!).
My belief is that PNAS and the authors could simply have had a different replication footnote. This would have said that the code used (about 5 lines of R, basically a call to a Poisson regression using GLM) is available at a dataverse. In addition, they could have noted that the GLM call used the data frame I described, along with the summary statistics for that data frame. Readers could then see what was done, and I can see no reason for such a procedure to bother Facebook (though I do not speak for them). I also note that a clear statement on a dataverse would have obviated the need for some discussion. Since bytes are cheap, the dataverse could also contain whatever policy statement Facebook has on replication data. This (IMHO) is much better than the "contact the authors for replication data" footnote that was published. It is obviously up to individual editors whether this is enough to satisfy replication standards, but at least it is better than the status quo.
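Beck describes the published analysis as essentially a short call to a Poisson regression via GLM. As a rough illustration of that kind of model (not Facebook's actual code or data), here is a minimal Python sketch on simulated per-user word counts; with a single binary predictor, the Poisson maximum-likelihood estimates have a closed form, so no GLM library is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the data frame described above: one row per
# user, an experimental condition and a count of positive words (invented).
n = 20_000
condition = rng.integers(0, 2, size=n)       # 0 = control, 1 = treatment
lam = np.where(condition == 1, 5.5, 5.0)     # assumed true word rates
pos_words = rng.poisson(lam)

# A Poisson regression of counts on one binary predictor has a closed-form
# MLE: the intercept is the log of the control-group mean, and the
# treatment coefficient is the log rate ratio between the two groups.
mean_control = pos_words[condition == 0].mean()
mean_treated = pos_words[condition == 1].mean()
intercept = np.log(mean_control)
beta_treatment = np.log(mean_treated / mean_control)

print(f"intercept        = {intercept:.3f}")
print(f"treatment effect = {beta_treatment:.3f} (log rate ratio)")
```

In R this would indeed be a one-liner along the lines of `glm(pos_words ~ condition, family = poisson)`; the point of the sketch is only to show how little code the published tables would have required.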
What if I didn't work four blocks from Astor Place? Fortunately I did not have to confront this horror. How many other offices does Facebook have? Would Adam Kramer have flown to Peoria? I batted this around, but I did most of the batting and the Facebook people mostly offered no comment. So someone else will have to test this issue. But for me, the procedure worked. Obviously I am analyzing lots more proprietary data, and (IMHO) this is a good thing. So Facebook et al., journal editors, and societies have many details to work out. But, based on this one experience, it can be done. So I close with thanks to Adam Kramer (but do remind him that I have had auto-responders to email for quite a while now).
On the more trivial issue of my own dataverse, I am happy to report that almost everything that was once on a private FTP site is now on my Harvard dataverse. Some of this was already up because of various co-authors who always cared about replication. And for the stuff that was not up, I was lucky to have a co-author like Jonathan Katz, who has many skills I do not possess (and is a bug on RCS and the like, which beats my "I have a few TB and the stuff is probably hidden there somewhere"). So everything is now on the dataverse, except for one data set that we were given for our 1995 APSR piece (and which Katz never had). Interestingly, I checked the original authors' websites (one no longer exists, one did not go back nearly that far) and failed to make contact with either author. Twenty years is a long time! So everyone should do both themselves and all of us a favor, and build the appropriate dataverse files contemporaneously with the work. Editors will demand this, but even absent such coercion, it is just good practice. I was shocked (shocked) at how bad my own practice was.
Heading image: Wikimedia Foundation Servers-8055 24 by Victorgrigas. CC BY-SA 3.0 via Wikimedia Commons.
Wolves in the panhandle of southeast Alaska are currently being considered for listing as endangered by the US Fish and Wildlife Service in response to a petition by environmental groups. These groups propose that the Alexander Archipelago wolf (Canis lupus ligoni), a subspecies that inhabits the entire region, and a distinct population segment of wolves on Prince of Wales Island are threatened or endangered with extinction.
Whether or not these wolves are endangered with extinction was beyond the scope of our study. However, our research quantified the genetic variation of these wolves in southeast Alaska, which can contribute to assessing their status as a subspecies.
Because the US Endangered Species Act (ESA) defines species as "species, subspecies, and distinct population segments", all of these categories are considered "species" under the ESA. Although this definition is not consistent with the scientific definition of species, it has become the legal definition for the purposes of the ESA.
Therefore we have two questions to consider:
Are the wolves in southeast Alaska a subspecies?
Are the wolves on Prince of Wales Island a distinct population segment?
The literature on subspecies and distinct population segment designation is vast, but it is important to understand that subspecies is a taxonomic category, and basically refers to a group of populations that share an independent evolutionary history.
Taxonomy is the science of biological classification and is based on evolutionary history and common ancestry (called phylogeny). Species, subspecies, and higher-level groups (e.g., a genus such as Canis) are classified based on common ancestry. For example, wolves and foxes share common ancestry and are classified in the same family (Canidae), while bobcats and lions are classified in a different family (Felidae) because they share a common ancestry that is distinct from that of foxes and wolves.
Subspecies designations are often subjective because of uncertainty about the relationships among populations of the same species. This leads many scientists to reject or ignore the subspecies category, but because the ESA is the most powerful environmental law in the United States, the analysis of subspecies is of great practical importance.
Our results and other research showed that the wolves in southeast Alaska differed in allele frequencies compared to wolves in other regions. Allele frequencies reflect the distribution of genetic variation within and among populations. However, the wolves in southeast Alaska do not comprise a homogeneous population, and there is as much genetic variation among the Game Management Units (GMUs) in southeast Alaska as there is between southeast Alaska and other areas.
Our research data showed that the wolves in southeast Alaska are not a homogeneous group, but consist of multiple populations with different histories of colonization, isolation, and interbreeding. The genetic data also showed that the wolves on Prince of Wales Island are not particularly differentiated relative to the overall differentiation in southeast Alaska, and so do not support designation as a distinct population segment.
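The within-region versus between-region comparison described above is commonly summarized with Wright's FST, the proportion of total genetic diversity attributable to differences among populations. Here is a minimal sketch for a single biallelic locus; the allele frequencies are invented for illustration, not taken from the wolf study.

```python
def fst(p_subpops):
    """Wright's F_ST for one biallelic locus: (H_T - H_S) / H_T, where
    H_S is the mean within-subpopulation expected heterozygosity and
    H_T is the heterozygosity at the pooled allele frequency.
    Assumes equally sized subpopulations."""
    p_bar = sum(p_subpops) / len(p_subpops)
    h_t = 2 * p_bar * (1 - p_bar)                                   # total
    h_s = sum(2 * p * (1 - p) for p in p_subpops) / len(p_subpops)  # within
    return (h_t - h_s) / h_t

# Hypothetical allele frequencies in three wolf populations.
print(fst([0.5, 0.5, 0.5]))   # identical populations: no differentiation
print(fst([0.2, 0.5, 0.8]))   # divergent frequencies: substantial F_ST
```

Values near zero indicate populations that are effectively one gene pool; larger values indicate differentiation of the kind that might support a distinct-population-segment argument.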
The overall pattern for wolves in southeast Alaska is not one of long-term isolation and evolutionary independence, and it does not support a subspecies designation. Other authors, including biologists with the US Fish and Wildlife Service, also do not designate wolves in southeast Alaska as a subspecies, and there is general recognition that North American wolf subspecies designations have been arbitrary and are not supported by genetic data.
There is growing recognition in the scientific community of unwarranted taxonomic inflation of wildlife species and subspecies designations to achieve conservation goals. Because the very nature of subspecies is vague, wildlife management and conservation should focus on populations, including wolf populations. This allows all of the same management actions as proposed for subspecies, but with increased scientific rigor.
Headline image credit: Alaskan wolf, by Douglas Brown. CC-BY-NC-SA-2.0 via Flickr.
About 500,000 Canadians are living with Alzheimer’s disease or a related dementia. This number is expected to soar to 1.1 million within 25 years. To date, there is no definitive way for health care professionals to forecast the onset of dementia in a patient with memory complaints. However, new research provides a glimmer of hope.
As a geriatrician, I have been looking at walking speed and variability as a predictor of dementia’s progression and whether it is associated with physical changes in the brain.
The “Gait and Brain Study” is a longitudinal cohort study funded by the Canadian Institutes of Health Research (CIHR). It assessed up to 150 seniors with mild cognitive impairment (MCI) — a pre-dementia syndrome — in order to detect an early predictor of cognitive and mobility decline, and progression to dementia.
While walking has long been considered an automatic motor task, emerging evidence suggests cognitive function plays a key role in the control of walking, avoidance of obstacles, and maintenance of navigation.
In our recent research, my team asked people with mild cognitive impairment to walk on a specially-designed mat linked to a computer. The computer recorded the individual’s walking gait variability and speed. This information was then compared to their walking gait while simultaneously performing a demanding cognitive task, such as counting backwards or doing calculations while walking (“walking-while-talking”).
It was subsequently determined that specific gait characteristics, particularly high variability during walking-while-talking, are associated with cognitive impairment. These gait abnormalities were more marked in MCI individuals with the worst episodic memory and with executive dysfunction, revealing a motor signature of cognitive impairment.
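Gait variability in studies of this kind is typically quantified as the coefficient of variation (CV) of stride time, comparing the single-task and dual-task conditions. A toy sketch of that calculation (the stride times below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical stride-time series (seconds) from a pressure-sensitive mat,
# recorded while walking normally and while "walking-while-talking".
single_task = np.array([1.02, 1.00, 1.03, 0.99, 1.01, 1.00, 1.02])
dual_task   = np.array([1.10, 0.95, 1.18, 0.92, 1.15, 0.98, 1.12])

def stride_time_cv(strides):
    """Gait variability as the coefficient of variation (%) of stride time."""
    return 100.0 * strides.std(ddof=1) / strides.mean()

print(f"single-task CV: {stride_time_cv(single_task):.1f}%")
print(f"dual-task CV:   {stride_time_cv(dual_task):.1f}%")
```

A large jump in CV when a cognitive task is added is the kind of "gait arrhythmia" signal the study associates with cognitive impairment.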
If confirmed in subsequent studies, these gait changes can be an effective predictor of cognitive decline and may eventually help with earlier diagnoses of dementia.
Finding early dementia detection methods is vital. In the future, it is conceivable that we will be able to make diagnoses of Alzheimer’s disease and other dementias before people even have significant memory loss. We believe that gait, as a complex brain-motor task, provides a golden window of opportunity for researchers to see brain function. The high variability observed in people with mild cognitive impairment can be seen as a “gait arrhythmia,” predicting mobility decline, falls, and now, cognitive impairment. Our hope is to combine these methods with promising new medications to slow or halt the progression of mild cognitive impairment to dementia.
The recent tragedies in France have reminded us of the tensions that are often associated with the relations between religious groups and the larger society. A recent article in Social Forces explores whether Islam fundamentally conflicts with mainstream French society, and whether Muslims are more attached to their religion than they are to their French identity. I spoke with its authors, Rahsaan Daniel Maxwell and Erik Bleich, to further understand the relationship between Muslim and French identities.
Is there tension between French and Muslim identities? If so, why? What influences personal identities?
There is longstanding tension between French and Muslim identities. On the one hand, the French tradition of secularism requires French national identity to be dominant over all other identities; there is a long history of the French republic battling the Catholic church for dominance, so there is a fear that when contemporary Muslims claim their religious identity they are threatening the French tradition of secularism. But there is also clearly an element of anti-minority stigmatization in the French fear of Islam. The main religion in France has historically been Christianity, and Islam is perceived by some as a new threat to that tradition. In addition, many Muslims in France are migrants from former French colonies, and there are lingering tensions from that history. Those are the long and deep historical trends, but in the past 10-15 years there has been the added politicization of Islam due to extremist terrorism and the Western-led wars in Afghanistan, Iraq, and elsewhere in the Middle East. Moreover, it is important to note that many Muslims in France feel conflicted about how to balance their complex identity commitments to religion and the French nation.
To what extent do Muslims face anti-Muslim sentiments in France?
There is considerable evidence of anti-Muslim sentiments in France. Recent studies suggest Muslims face a range of discrimination from the labor market to suspicion and hostility in daily life. Moreover, French society is still working out a way for someone to be legitimately considered French and Muslim.
Your recent Social Forces article focused on Muslims’ religious and national identities in France. To what extent is the Muslim experience in France unique or similar to that in the United States?
One major difference is that religion in the United States is not seen as a threat to the dominance of national identity. In addition, Muslims in the United States are not from former American colonies and tend to be wealthier on average so the historical and economic relationships are different. But, as issues of Islamic extremism and the fight to combat terrorism become more important, there will be more similarities between the two countries.
Why do you think religiosity has been so central to scholarship on Muslim identity?
In many respects it is a natural connection because Islam is a religion. But one of the key points we make in our research is that people's attitudes have complex origins that cannot be linked to just one aspect of themselves. Moreover, it is important to note that the same people who are now primarily conceived of as 'Muslims' in France were not always seen through that religious lens. When they first arrived in the 1950s and 1960s they were primarily viewed as 'migrant workers' who came to help France rebuild after the Second World War. That led to various debates about how their socio-economic conditions should be managed. Later the same people were viewed through the lens of their national origins, as it became clear that Algerians and Moroccans may face different integration challenges from Senegalese, Portuguese, or Italian immigrants. It was in the 1990s and early 2000s that religion and Islam became a more prominent way of understanding these integration challenges, in part because of the global politicization of Islam, and it is an important issue. But we want to remind people that identity has multiple sources.
As far as either international policy or narratives in the popular media go, do these findings challenge any prevailing assumptions in those fields?
One main narrative that our findings challenge is the notion that Muslims in France are all alienated and hostile to French identity. Our research speaks to the broad masses of French Muslims who feel a strong connection to France, irrespective of the intensity of their religious practices.
How might Muslim or French identities change in the aftermath of the shootings? How might integration of Muslims in France change in the wake of the terrorist attacks in Paris?
In the short run it is pretty clear that we can expect more tension as non-Muslims in France are afraid of more attacks and Muslims are afraid of anti-Muslim backlashes. One can hope for more unity in the long-term, and there is some historical evidence to support this, as previous groups of immigrants in France at times engaged in violence on French soil. Whatever happens, much will depend on the actions that occur in the upcoming weeks and months.
Amidst the images of burning vehicles and riots in Ferguson, Missouri, the US President, Barack Obama, has responded to growing concerns about policing by pledging to spend $75 million to equip his nation's police with 50,000 Body Worn Video cameras. His initiative will give added impetus to an international movement to make street policing more transparent and accountable. But is this just another example of a political and technical quick fix, or a sign of a different relationship between the police and science?
At the heart of the shift to Body Worn Video is a remarkable story of a Police Chief who undertook an experiment as part of his Cambridge University Masters programme. Rialto Police Department, California serves a city of 100,000 and has just over one hundred sworn officers. Like many other departments, it had faced allegations that its officers used excessive force. Its Chief, Tony Farrar, decided to test whether issuing his officers with Body Worn Video would reduce use of force and complaints against his officers. Instead of the normal police approach to issuing equipment like this, Farrar, working with his Cambridge academic supervisor, Dr Barak Ariel, designed a randomised field trial, dividing his staff’s tours of duty into control – no video – and treatment – with video. The results showed a significant reduction in both use of force and citizen complaints.
Why is this story so different? A former Victoria Police Commissioner described the relationship between the police and research as a “dialogue of the deaf”. The Police did not value research and researchers frequently did not value policing. Police Chiefs often saw research as yet another form of criticism of the organisation. Yet, despite this, research has had a major effect on modern policing. There are very few police departments in the developed world that don’t claim to target “hot-spots” of crime, an approach developed by a series of randomised trials.
However, even with the relative success of "hot-spot policing", police have not owned the science of their own profession. This is why Chief Farrar's story is so important. Not only was Farrar the sponsor of the research, he was also part of the research team. His approach has allowed his department to learn by testing. Moreover, because the Rialto trial has been published for both professional and academic audiences, its lessons have spread, and it is now being replicated not just in the United States but also in the United Kingdom. The UK College of Policing has completed randomised trials of Body Worn Video in Essex Police to test whether the equipment is effective at gathering evidence in domestic violence investigations. The National Institute of Justice in the United States is funding trials in several US cities.
This is the type of approach we have come to expect in medicine to test promising medical treatments. We have not, up to now, seen such a focus on science in policing. Yet there are signs of real transformation, which are being driven by an urgent need to respond to a perfect storm created by a crisis of legitimacy and acute financial pressures. Not only are Chiefs trying to deal with the “Ferguson” factor, but they also have to do so against a backdrop of severe constraint.
“Science can provide a means to transform policing as long as police are prepared to own and adopt the science”
As the case of Body Worn Video has shown, science can provide a means to transform policing as long as police are prepared to own and adopt the science. But for Body Worn Video not to be an isolated case, policing will need to adopt many of the lessons from medicine about how it was transformed from eighteenth century barber surgeons to a modern science-based profession. This means policing needs an education and training system that does not just teach new recruits law and procedure, but also the most effective ways to apply them and why they work. It means that police leaders will need to target their resources using the best available science, test new practices, and track their impact. It will require emerging professional bodies like the College of Policing to work towards a new profession in policing, in which practice is accredited and expertise is valued and rewarded.
Obama’s commitment to Body Worn Video will not, of itself, solve the problems that Ferguson has so dramatically illustrated. The Rialto study suggests it may help – a bit. However, the White House announcement also included money for police education. If that is used wisely and police leaders grasp the opportunity to invest in a new science-based profession, then the future may be brighter.
"Never waste a good crisis," or so Rahm Emanuel (President Obama's former Chief of Staff and now Mayor of Chicago) is reputed to have said. Well, the allegation that Prince Andrew had sex with an underage girl at some time in the distant past looks like a crisis for the Royal Household. Maybe it's an opportunity not to be wasted.
How might it be put to use? It could facilitate a debate into the supposed ‘rights of victims’. Such a debate has been a long time coming. There has been no shortage of inept police investigations that failed to recognise malign intentions even when staring officers in the face. The ‘Yorkshire Ripper’ (Sutcliffe) was interviewed nine times without the West Yorkshire Police appreciating that they were talking to the murderer. A succession of child abuse cases have revealed failures on the part of officers to become sufficiently suspicious of parents. Dr Harold Shipman murdered an unknown — but undoubtedly huge — number of his elderly patients without stirring police suspicions even after a fellow doctor expressed her concerns.
Over the past thirty years, victims have become a more visible and voluble beast in the criminal justice undergrowth. Feminists were in the vanguard of this movement, protesting about crimes against women, especially domestic violence and sex crimes. They were joined by those concerned with the welfare of children. Meanwhile, the Savile Affair and prosecution of a cast list of celebrities on charges of ‘historical child sexual abuse’, plus the shenanigans over the choice of who should chair the inevitable official inquiry, have kept the issue of child abuse at the top of the news agenda.
Enter Prince Andrew who has been accused (along with others) of having a sexual relationship with a young American woman who was under the age of consent. This has prompted Establishment figures, including his ex-wife, to step forward and insist that such allegations are ridiculous. I have no reason to doubt his supporters are genuine, but neither can I shake off the echoes of my own sense of incredulity when Rolf Harris (of all people) was convicted of sex crimes against young women. How do we know that a seemingly inoffensive person — whether a celebrity or a neighbour — has a vile secret?
I don't claim to know the answer, but I do maintain that it is a legitimate question to ask. What I fear is a moral panic in which the police will be encouraged to look more suspiciously on those accused of heinous crimes. This, it seems to me, is the emphasis of two recent and authoritative reports. In March last year Her Majesty's Inspectorate of Constabulary (HMIC) issued a report on the policing of domestic violence. When asked, victims said that the main cause of their dissatisfaction with the police handling of their allegations was that they felt they were not believed. In response the HMIC recommended that the police should be more willing to accept allegations of domestic violence and abuse. Likewise, in the autumn Alexis Jay published her report into child sexual exploitation in Rotherham, revealing an unprecedented criminal conspiracy to abuse vulnerable young girls while agencies charged with their protection disregarded evidence that should have prompted action. Again, her recommendations appeared to emphasise that officers should treat allegations made by young women in care much more seriously than they have in the past.
Should the police accept at face value accusations made by anyone? Or should they weigh the credibility of the accuser as well as the nature of the accusation? The ultimate arbiters of such allegations are juries, and when juries have deliberated, they have not endorsed them all. There have been celebrities aplenty acquitted as well as those who were not and are now serving terms of imprisonment. Rape is a criminal charge that is notoriously difficult to prosecute.
This is not just a question that afflicts ageing celebrities and dilapidated northern cities, but is faced every day by police officers who respond to contested allegations of wrongdoing. One party to a dispute alleges that the other has done wrong, but the other denies it and probably counter-claims that wrong has been done by their accuser. It happens most commonly in episodes of domestic conflict, as anyone who has been on the margins of a ‘messy’ divorce will attest. When viewed in this context, accusations tend to lack credibility because the parties have vested interests in making and denying such allegations.
The issue of the credibility of putative victims arose in the course of research that I and others are hoping to publish with Oxford University Press later this year. We asked focus groups throughout the Black Country region of the West Midlands to evaluate and discuss video clips of encounters between police and members of the public broadcast by the BBC (of the kind I’m sure you will be familiar with). One of the clips focused on the police response to an alleged knife-point robbery of an elderly man and his young female companion in the man’s home. Spontaneously, almost every focus group concluded that the elderly man’s companion was complicit in the robbery. What had ignited their suspicions? Well, wasn’t it odd that such a young woman would spend an occasional evening watching television with an elderly ‘friend of the family’? Wasn’t it suspicious that she became confused, even about whether the robber addressed her by name? How could she insist that the robber was ‘about 20’ years old if she did not see his face? Why didn’t she scream when the man forced his way into the property? There was almost unanimous agreement that there was ‘more to this than met the eye’! Most focus groups were content with how the officers dealt with the investigation, but if they were critical it was because the police had not arrested the young woman who was so ‘obviously’ guilty. What they were not to know was that in the programme from which this episode was extracted, it was revealed that the young woman’s boyfriend was convicted of the robbery, but no charges were brought against her. On the other hand, when an officer could see on CCTV three youths breaking into a car, many of our focus groups felt that the officer too hastily assumed that they were attempting to steal it, rather than rescuing one of the lads’ girlfriend, who had locked herself out of the car (which turned out to be the truth)!
Being ‘innocent until proven guilty’ is a legal principle that receives overwhelming endorsement. If so, the unpalatable corollary must surely be that those who allege guilt must overcome a formidable barrier before conviction can be secured. Crown Prosecutors must be convinced that there is a better than evens chance of overcoming that barrier before prosecuting someone alleged to have done wrong. This undoubtedly works to the disadvantage of those who regard themselves as genuine victims of wrongdoing. It is equally undoubtedly the case that offenders will do all in their power to exploit the ‘presumption of innocence’ to their malign advantage. Yet it also protects the innocent victims of malign false allegations made for whatever reason. To be wrongfully accused is an acutely painful experience from which a system of justice should surely also safeguard the innocent. Amid all this uncertainty, what is surely obvious is that prescriptions for the police to believe accusations at face value are no remedy.
I sat down with Samantha Snyder, a Student Assistant at the University of Wisconsin-Madison Archives, to talk about her work. From time to time, the UW Archives has students test various voice recognition programs, and for the last few months Samantha has been testing the software program Dragon NaturallySpeaking. This is an innovative way of processing oral histories, so we were excited to hear how it was going.
To start off, can you tell me a bit about the project you’re working on?
I started this project in June of 2014, and worked on it most of the summer. The interviews I transcribed included three sessions with a UW-Madison Teaching Assistant who participated in the 2011 Capitol Protests. There was some great content that was waiting to be transcribed, and I decided to dive right in. Each interview session was about fifty minutes.
I was asked to try out Dragon NaturallySpeaking. I had never heard of the software before, and was excited to be the one to test it out. What I didn’t realize is that there is quite the steep learning curve.
Sounds like it started off slow. What did it take to get the program working?
I spent quite a bit of time reading through practice exercises, which are meant to get the program to the point where it will recognize your voice. The exercises include things like Kennedy’s inaugural speech, children’s books, and cover letters. They were actually fun to read, but I knew I had to get down to business.
Yep, Express Scribe allows you to slow down and speed up the interview, which I learned was absolutely necessary. With the programs finally up and running, I plugged in the start/stop pedal, opened a Word document, and began. I immediately realized I had to slow the interview down to about 60% of its regular speed, because I was having a tough time keeping up.
Unfortunately, I think most oral historians are familiar with the drudgery of transcription. Do you think the program helped to make the process any easier?
The first interview session took me around five hours total to complete. This included editing the words and sentences that came out completely different from what I thought I had said clearly, and formatting the interview into its proper transcript form. During the first interview I tried using commands to delete and fix phrases, but I found it was easier just to go back through and edit after finishing the dictation. I was surprised at how long it took me to complete the first interview, and I was stressed that maybe this wasn’t worth it, and I should just listen and type without dictating.
For the second and third interview sessions, it became much easier, and Dragon began to recognize my voice, for the most part. It only took me around two hours to dictate and edit subsequent interviews, a much more manageable timeframe than five hours. I think using the pedal and Express Scribe made the process much easier, because I was able to slow down the interview as well as stop and start when needed. I definitely would recommend using similar products along with Dragon, because it does play audio but does not have the option to slow down or speed up the interview. Without the pedal and Express Scribe I think it would have taken me much longer! My pedal stopped working during one of my days working on transcribing, and it turned into a much more stressful process.
It sounds like the experiment was fairly successful. Two hours to transcribe and edit a 50-minute interview doesn’t seem bad at all!
Overall I would say Dragon NaturallySpeaking is an innovative way to transcribe oral history interviews, but I wouldn’t say it is necessarily the most efficient. I would like to transcribe an interview of similar length by simply listening and typing to compare the amount of time taken, but I haven’t had a chance to do so yet.
Maybe we can get another review when you’ve had the chance to compare the methods. Any final thoughts?
I think I will still be transcribing by doing my old standard, listening and typing along with the recording. Speech recognition software is an innovative tool, but in the end there is still a long way to go before it replaces the traditional transcription process.
I’m sure we all look forward to the days when software can fully take over transcription. Thanks for your help, and for the excellent review!
If you’ve tried voice recognition software, or other creative oral history methods, share your results with us on Twitter, Facebook, Tumblr, even Google+.
Teachers at medical schools have struggled with a basic problem for decades: they want their students not just to be competent doctors, but to be excellent ones. If you understand a little history, you can see why this is such a challenge. Medical schools in the United States and Canada established a standard four-year curriculum over a century ago. Since that time, the volume of medical information has grown exponentially. How should medical schools cram the ever-growing body of knowledge into the same curricular space? This challenge has led to a constant process of curricular reform as faculty cut what was once cutting-edge science to make room for new cutting-edge science. Anatomy has long been a rite of passage of medical school. Bacteriology once exemplified modern life science. But deans of medical education now wonder how much their students really need to learn about these sciences. Can these older fields be displaced to make space for new fields such as genomics, immunology, and neuroscience? Time in the curriculum is increasingly contested.
Given this state of affairs, it might come as a bit of a surprise that faculty representing twenty medical schools met recently to make the case not for the new but for the old, specifically for the history of medicine. Even as medicine remains committed to pushing the frontier of knowledge, there is growing recognition that essential lessons for students and doctors derive from studying history.
Why are historical perspectives invaluable to physicians in training? For starters, it is critical that physicians today understand that the burden of disease and our approach to therapeutics have both changed over time. This is obvious to anyone who has spoken to their grandparents about their childhood, or to anyone who has looked at bills of mortality, old pharmaceutical advertisements, or any other accounts of medicine. The challenge is to have a theory of disease that can account for the rise and fall of various diseases, and an understanding of efficacy that can explain why therapeutic practice changes over time. A condition like obesity may well have a strong genetic component, but genetics alone cannot explain the dramatic rise in obesity prevalence over the past generation. New treatments come and go, only partially in response to evidence of their efficacy. Instead, answers to questions about changing diseases and treatments require careful attention to changing social, economic, and political forces—that is to say, they require careful attention to historical context.
Medical knowledge itself–firmly grounded in science as it may be — is nonetheless the result of specific cultural, economic, and political processes. What we discover in the future will depend on what research we fund now, what rules we set for the approval of new remedies, and what markets we envisage for future therapies. History provides perspective on the contingency of knowledge production and circulation, fostering clinicians’ ability to tolerate ambiguity and make decisions in the setting of incomplete knowledge.
Ethical dilemmas in medical research and practice also change over time. Abortion has been criminalized and decriminalized, and is now at risk of being criminalized once again. Physician-assisted dying, once anathema, has lately become increasingly acceptable. History reveals the specific forces that shape ethical judgments and their consequences.
History can teach many other lessons to students and doctors, lessons that offer invaluable insight into the nature and causes of disease, the meanings of therapeutic efficacy, the structure of medical institutions, and the moral dilemmas of clinical practice. We have not done, and likely cannot do, rigorous outcomes research to prove that better understanding of the history of medicine will produce better doctors. But such research has not been done for many topics in medical school curricula, such as anatomy or genomics, because the usefulness of these topics seems obvious. We argue that the usefulness of history in medical education should be just as obvious.
Making the case for the essential role of history in medical education has the unfortunate effect of making the basic problem — of trying to cram ever more material into the curricula — even worse. Perhaps not every school has yet recruited faculty suited to teach the full range of potential lessons that history offers. But many have, and in others much can be done with thoughtful curriculum design. Just as medical school faculty work constantly to find room for new scientific discoveries, they can make space for the lessons of history, today.
Heading image: Anatomy of the heart; And she had a heart!; Autopsy. By Enrique Simonet (1866-1927). Public domain via Wikimedia Commons.
Oxford University Press is delighted to co-sponsor this year’s Force2015 conference which takes place in Oxford’s new Mathematical Institute on 12-13 January 2015. Conference sessions will be live-streamed for a global audience.
This year marks the 350th anniversary of the scholarly journal, as recorded by the first publication of the Royal Society’s Philosophical Transactions in 1665. In a dedicatory epistle to the Society’s Fellows and the Introduction, editor Henry Oldenburg set forth its purpose to inform the scientific community of the latest and most valuable discoveries.
Roughly eighty years earlier, in 1584, a Supplicatio — effectively a 16th century business case — was made to Robert Dudley, the earl of Leicester and chancellor of the University of Oxford, to establish a university press. In seven brief ‘considerations’ it sets out the reasons why a press was necessary as both intrinsic to the act of scholarship and to establish Oxford’s profile as an international seat of learning.
The turmoil of technology, innovation, and transformation in the 16th and 17th centuries is recognizable today. Then, the printing press had been established for some decades. Now, the World Wide Web hurtles towards age thirty. Then, those institutions that understood and harnessed the press’s power were establishing the beginnings of a scalable scholarly communications infrastructure. Now, previously unfeasible or unimagined businesses are being created, and are transforming established industries.
What can we learn from centuries of successful and continuous publication of science in the face of changing scholarly practice? Is the scholarly article still fit for purpose in this data-driven world? What is the role of scholarly publishers in this evolving landscape?
Over two days in January, attendees of FORCE2015 will be grappling with such questions. Indeed, many of the core questions of scholarly publishing were identified by the authors of the epistle and Supplicatio.
How can we address the challenge of discoverability?
“Whereas there is nothing more necessary for promoting the improvement of Philosophical Matters, than the communicating to such, as apply their Studies and Endeavours that way, such things as are discovered or put in practice by others;” [Philosophical Transactions]
“there lie hidden away in the libraries of that University many excellent manuscripts, now shamefully covered in dust and dirt, which, by the boon of establishing a press in the same city, could be rescued from perpetual obscurity and distributed in other parts of Europe to the great credit of the whole nation.” [Supplicatio]
How can one demonstrate research impact?
“This is my Solicitude, That, as I ought not to be unfaithful to those Counsels you have committed to my Trust, so also that I may not altogether waste any minutes of the leasure [sic] you afford me. And thus I have made the best use of some of them, that I could devise; To spread abroad Encouragements, Inquiries, Directions, and Patterns, that may animate, and draw on Universal Assistances.” [Philosophical Transactions]
“given now the opportunity of a press, they might swiftly and easily remove and shake off the imputation of idleness which foreigners daily lay against them.” [Supplicatio]
How is publication integral to academia and the wider scholarly community?
“To the end, that such Productions being clearly and truly communicated, desires after solid and useful knowledge may be further entertained, ingenious Endeavours and Undertakings cherished, and those, addicted and conversant in such matters, may be invited and encouraged to search, try, and find out new things, impart their knowledge to one another, and contribute what they can to the Grand design of improving Natural knowledge, and perfecting all Philosophy Arts, and Sciences.” [Philosophical Transactions]
“where there is a settlement of learned men there should be printers, so that books may be printed more correctly and texts more diligently collated, universities may not be deprived of printers without the greatest loss to literature.” [Supplicatio]
What considerations might a crowd-sourced epistle or supplicatio for a twenty-first century research communications system include? The boundaries between the act of research and the act of publication are blurring (think data curation and publication). It probably wouldn’t propose a platform per se but a web of protocols, services, and best practice (after all ‘the web is the platform’). To guarantee permanence, core services would be built on a sustainable cyberinfrastructure. It would include identifiers, both for things (datasets, publications, maybe even equipment) and people. It might mandate a schema for attribution. It would need to tackle reproducibility. It would support not just computational sciences, but discipline-specific needs such as digital humanities. It would need to address various concerns around openness (access, source, code, data) while ensuring a sustainable, long-term business model. It should be extensible to a future of networked research objects while retaining the rhetorical power of the scientific paper. There may be several hundred considerations.
The Force2015 conference chair, Oxford’s Professor David de Roure, has argued for the reimagination of scholarly communications as networked research objects – ‘a sense making network of humans and machines.’ This scale of evolution and development will not be achieved by any one organization acting alone. Many of the questions are human or organizational (motivation, reward, governance) rather than technology-based. Much of this new world is in place or on the horizon, but collaboration and engagement by all parties – researchers, universities, funders, publishers – is essential to make it a reality.
As the methods and outputs of research evolve, so too must the services of scholarly publishers. We look forward to joining Force2015 attendees in Oxford and online worldwide, where we’ll continue to engage in discussion and work together on the next phase of scholarly communications.
The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.
In 1968, as the world convulsed in an era of social upheaval, Cuba unexpectedly became a destination for airplane hijackers. The hijackers were primarily United States citizens or residents. Commandeering aircraft from the United States to Cuba over ninety times between 1968 and 1973, Americans committed more air hijackings during this period than all other global incidents combined. Some sought refuge from petty criminal charges. A majority, however, identified with the era’s protest movements. The “skyjackers,” as they were called, included young draft dodgers seeking to make a statement against the Vietnam War, and Black radical activists seeking political asylum. Others were self-styled revolutionaries, drawn by the allure of Cuban socialism and the nation’s bold defiance of US domination. Havana and Washington, diplomatically estranged since 1961, maintained no extradition treaty.
But Cuba was an imperfect site for the realization of American skyjacker dreams. Although the surge in hijackings paralleled the warm relations between the Cuban government and US organizations such as the Black Panther Party and Students for a Democratic Society, leftwing skyjackers were not always welcome in Cuba. Many were imprisoned as common criminals or suspected CIA agents. The mutual discomfort of the United States and Cuban governments over the hijacking outbreak resulted in a rare diplomatic collaboration. Amidst the Cold War stalemate of the Nixon-Ford era, skyjackers inadvertently forced Havana and Washington to negotiate. In 1973, the two governments broke their decade-old impasse to produce a bilateral anti-hijacking accord. The hijacking episode of 1968-’73 marks the unlikely meeting point where political protest, the African American freedom struggle, and US-Cuba relations collided amid the tumult of the sixties.
For a generation of Americans radicalized by the Civil Rights era and the Vietnam War, Cuba’s social gains in universal healthcare, education, and wealth redistribution — campaigns disproportionately supported by Afro-Cubans — had made the Cuban Revolution a beacon of inspiration for the United States Left. By 1970, several thousand Americans, traveling independently or with organizations such as the Student Nonviolent Coordinating Committee, had visited Cuba to witness its transformation up-close. But skyjackers sometimes perceived Cuba in terms that echoed age-old paternalistic tropes about the island, as admiration blurred into entitlement. Cuba, they insisted, should welcome them as revolutionary comrades instead of locking them in jail. Nonetheless, some US skyjackers had fled from circumstances that suggested genuine political repression. Black radical activists, in particular, were often successful in appealing to Cuban officials for political asylum after arriving as skyjackers. The Cuban government allowed these asylees to make lives for themselves in Havana, paying for their living expenses as they transitioned to Cuban society or attended college. Several members of the Black Panther Party, such as William Lee Brent, and members of the Republic of New Afrika, such as Charlie Hill, became long-term residents of Havana.
Hijackers inadvertently forced Washington to face the consequences of American exceptionalism. Cuban émigrés reaching US soil with “dry feet” had been granted sanctuary and accorded a fast-track to citizenship since 1966, when the Cuban Refugee Adjustment Act created a powerful incentive for Cubans to immigrate by any available means, including violence and hijacking, an enticement that Havana had repeatedly protested. Now, Cuba was granting sanctuary to Americans committing similar crimes. The irony was not missed by the State Department. As Henry Kissinger admitted, the United States was now seeking to negotiate with Havana what Washington had earlier refused to negotiate in the aftermath of the Cuban revolution, when Cubans were hijacking planes and boats to the United States and Havana had appealed unsuccessfully to US officials for the return of the vessels. The island’s attractiveness as a legal sanctuary for Americans was in large part a consequence of Washington’s policy of unrelenting hostility, which had severed the normal ties through which the two nations might collaborate, as diplomatic equals, to resolve an issue such as air piracy.
Air hijackings to Cuba declined dramatically after the accord of 1973. A shallow crack appeared in the diplomatic stalemate between Washington and Havana, setting the stage for the mild warming of US-Cuba relations during the coming Carter era. But while mutual cooperation to respond to the hijacking outbreak preceded the brief détente of the late 1970s, air piracy did not itself cause the Cold War thaw. Rather, the significance of hijacking to US-Cuba relations lies in the way in which skyjackers, as radical non-state actors driven by idealism and politics, influenced the terrain of state relations in ways that no one could have anticipated. So too, by granting formal political asylum to Americans, especially African American activists charging racist repression, Havana defied US claims to moral and legal authority in the arena of human rights. As US-Cuba relations now make a historic move toward normalization, it is likely that non-state actors will continue to play unforeseen roles, defying both US and Cuban state power.
Many of you have likely seen the beautiful grand spiral galaxies captured by the likes of the Hubble space telescope. Images such as those below of the Pinwheel and Whirlpool galaxies display long striking spiral arms that wind into their centres. These huge bodies represent a collection of many billions of stars rotating around the centre at hundreds of kilometers per second. Also contained within is a tremendous amount of gas and dust, not much different from that found here on Earth, seen as dark patches on the otherwise bright galactic disc.
The Pinwheel and Whirlpool spiral galaxies, a.k.a. M101 and M51:
Yet, rather embarrassingly, whilst we have many remarkable images of a veritable zoo of galaxies from across the Universe, we have surprisingly little knowledge of the appearance and structure of our own galaxy (the Milky Way). We do not know with certainty, for example, how many spiral arms there are. Does it have two, four, or no clear structure? Is there an inner bar (a long thin concentration of stars and gas), and if so does it rotate with the arms, or faster than them? Unfortunately we cannot simply take a picture from outside the galaxy as we can with those above; even if we could travel at the speed of light, it would take tens of thousands of years to get far enough away to take a good picture!
The main difficulty comes from the fact that we are located inside the disc of our galaxy. Just as we cannot know what the exterior of a building looks like if we are stuck inside it, we cannot get a good picture of what our own galaxy looks like from the Earth’s position. To build a map of our galaxy we rely on measuring the speeds of stars and gas, which we then convert to distances by making some assumptions about the structure. However, the uncertainty in these distances is high, and despite a multitude of measurements we have no resounding consensus on the exact shape of our galaxy.
There is, however, a way around this problem. Instead of trying to calculate distances, we can simply look at the speed of the observed material in the galaxy. The movie above shows the underlying concept. By measuring the speed of material along the line of sight from where the Earth is located in the galaxy, you build up a pseudo-map of the structure. In this example the grey disc is the structure you would see if the galaxy were a featureless disc. If we then superimpose some arm features, where the amount of stars and gas is greater than that in the rest of the galaxy, we see the arms clearly appear in our velocity map. Maps of this kind exist for our galaxy, with those for hydrogen and carbon monoxide (shown below) gas displaying the best arm features.
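To make the idea concrete, here is a minimal sketch of how such a longitude-velocity pseudo-map can be built from an assumed top-down model. The numbers (the Sun's galactocentric radius, a flat rotation curve) are illustrative round values, not the parameters used in the actual study:

```python
import numpy as np

# Assumed round numbers, for illustration only
R0 = 8.0    # Sun's distance from the galactic centre, kpc
V0 = 220.0  # flat rotation speed, km/s

def line_of_sight_velocity(l, d):
    """Radial velocity seen from the Sun toward galactic longitude l
    (radians) for material at distance d (kpc) along that sightline,
    assuming purely circular orbits and a flat rotation curve."""
    # galactocentric radius of the material (law of cosines)
    R = np.sqrt(R0**2 + d**2 - 2.0 * R0 * d * np.cos(l))
    # flat rotation curve: v_los = V0 * sin(l) * (R0/R - 1)
    return V0 * np.sin(l) * (R0 / R - 1.0)

# Accumulate material along each sightline into an (l, v) histogram.
# A featureless disc gives the smooth background map; arm overdensities
# would enter as extra weights at the appropriate (l, d) points.
longitudes = np.radians(np.linspace(-180, 180, 361))
distances = np.linspace(0.1, 20.0, 400)          # kpc
v_edges = np.linspace(-250.0, 250.0, 101)        # km/s bins
lv_map = np.zeros((len(longitudes), len(v_edges) - 1))

for i, l in enumerate(longitudes):
    v = line_of_sight_velocity(l, distances)
    hist, _ = np.histogram(v, bins=v_edges)
    lv_map[i] += hist
```

One useful property of this geometry: along a sightline into the inner galaxy the velocity peaks at the tangent point, where the sightline grazes closest to the centre, which is why the observed hydrogen and carbon monoxide maps show a clear velocity envelope.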
It may appear that the problem is solved: we can simply trace the arm features and map them back onto a top-down map. Unfortunately, doing so reintroduces the same problems as measuring distances in the first place, and there is no single solution for mapping material from velocity to position space.
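The lack of a single solution is easy to demonstrate. Under the same simplifying assumptions as above (circular orbits, a flat rotation curve, illustrative round values for the Sun's position and speed), one measured velocity toward the inner galaxy is consistent with two different distances, the so-called near/far kinematic distance ambiguity:

```python
import numpy as np

# Assumed round numbers, for illustration only
R0, V0 = 8.0, 220.0  # kpc, km/s

def kinematic_distances(l, v_los):
    """Return the (near, far) distances in kpc along galactic longitude l
    (radians) consistent with a measured line-of-sight velocity, assuming
    circular orbits and a flat rotation curve."""
    # invert v_los = V0*sin(l)*(R0/R - 1) for the galactocentric radius R
    R = R0 * V0 * np.sin(l) / (v_los + V0 * np.sin(l))
    # distance d solves R^2 = R0^2 + d^2 - 2*R0*d*cos(l): a quadratic in d
    root = np.sqrt(R**2 - (R0 * np.sin(l))**2)
    return R0 * np.cos(l) - root, R0 * np.cos(l) + root
```

For sightlines inside the solar circle both roots are positive, so gas a few kpc away and gas roughly twice as far on the other side of the tangent point produce exactly the same observed velocity, which is why velocity features cannot be uniquely traced back to positions.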
A different approach is to try and reproduce the map shown above by making informed estimates of what we believe the galaxy may look like. If we choose some top-down structure that re-creates the velocity map shown above, that we have observed directly from here on Earth, then we can assume the top-down map is also a reasonable map of the Milky Way.
Our work then began on a large number of simulations investigating the many different possibilities for the shape of the galaxy, investigating such parameters as the number of arms and speed of the bar. Care had to be taken with creating the velocity map, as what is actually measured by observations is the emission of the gas (akin to temperature). This can be absorbed and re-emitted by any additional gas the emission may pass through en route to the Earth.
In the two videos below are our best-fitting maps for a two-armed and a four-armed model. Two arms tend not to produce enough structure, while the four-armed models can reproduce many of the features. Unfortunately it is very difficult to match all the features at the same time. This suggests that the arms of the galaxy may be of some irregular shape, and are not well encompassed by a regular, symmetric spiral pattern. This still leaves the question somewhat open, but also informs us that we need to investigate more irregular shapes and perhaps more complex physical processes to finally build a perfect top-down map of our galaxy.
Last April, we asked you to help us out with ideas for the Oral History Review’s blog. We got some great responses, and now we’re back to beg for more! We want to use our social media platforms to encourage discussion within the broad community of oral historians, from professional historians to hobbyists. Part of encouraging that discussion is asking you all to contribute your thoughts and experiences.
Whether you have a follow up to your presentation at the Oral History Association Annual Meeting, a new project you want to share, an essay on your experiences doing oral history, or something completely different, we’d love to hear from you.
We are currently looking for posts between 500-800 words or 15-20 minutes of audio or video. These are rough guidelines, however, so we are open to negotiation in terms of media and format. We should also stress that while we welcome posts that showcase a particular project, we can’t serve as a landing page for anyone’s Kickstarter.
Prometheus, a Titan god, was exiled from Mount Olympus by Zeus because he stole fire from the gods and gave it to mankind. He was condemned, punished, and chained to a rock while eagles ate at his liver. His name, in ancient Greek, means “forethinker”, and literary history lauds him as a prophetic hero who rebels against his society to help man progress. The stolen fire is symbolic of creative powers and scientific knowledge. His theft encompasses risk, unintended consequences, and tragedy. Centuries later, modern times have another Promethean hero, Alan Turing. Like the Greek Titan before him, Turing suffers for his foresight and audacity to rebel.
The riveting film The Imitation Game, directed by Morten Tyldum and starring Benedict Cumberbatch, offers us a portrait of Alan Turing that few of us knew before. After this peek into his extraordinary life, we wonder: how is it possible that, within our lifetime, society could condemn such a special person to eternal punishment? Turing accepts his tragic fate and blames himself.
“I am not normal,” he confesses to his ex-fiancée, Joan Clarke.
“Normal?” she responds, angrily. “Could a normal man have shortened World War II by two years and have saved 16 million people?”
The Turing machine, the precursor to the computer, is the result of his “not normal” mind. His obsession was to solve the greatest enigma of his time – to decode Nazi war messages.
In the film, Turing leads a team of cryptologists at Bletchley Park in 1940, and his Bombe deciphers coded messages revealing where German U-boats would decimate British ships. In 1943, the Colossus machine, built by Tommy Flowers, an engineer in the group, was able to decode messages directly from Hitler.
The movie, The Imitation Game, while depicting the life of an extraordinary person, also raises philosophical questions, not only about artificial intelligence but about what it is to be human. Cumberbatch’s Turing recognizes the danger of his invention. He fears what would happen if a thinking machine were programmed to replace a man; if a robot were driven by artificial intelligence rather than by a human being who has a conscience, a soul, a heart.
Einstein experienced a similar dilemma. His theory of relativity created great advances in physics and scientific achievement, but also had tragic consequences – the development of the atomic bomb.
The Imitation Game will open a Pandora’s box. Viewers will ponder what the film passed over quickly. Who was a Russian spy? Why did Churchill not trust Stalin? What was the role of the Americans during this period of decrypting military codes? How did Israel get involved?
And viewers will want to know more about Alan Turing. Did Turing really commit suicide by biting into an apple laced with cyanide? Or does statistical probability tell us that Turing knew too much about too many things and perhaps too many people wanted him silent? This will be an enigma to decode.
The greatest crime, from a sociological perspective, is the one committed by humanity against a unique individual because he is different. The Imitation Game will make us all ashamed of society’s crime of prejudice. Alan Turing stole fire from the gods to give man power and knowledge. In doing so, he showed he was very human. And society condemned him for being so.
Look out, Philadelphia! Oxford University Press has been attending the American Philosophical Association (APA) Eastern Division Meeting for decades. The conference has been held in various cities including Baltimore, MD, Newark, DE, New York, NY, and Boston, MA. This year, we’re gearing up to travel to Philadelphia on Saturday 27th December, and we’ve asked staff across various divisions what they are most looking forward to.
Clare Cashen, Higher Education Marketing: I’m really looking forward to the APA this year. We, in the Higher Education division, publish the majority of our new books in the fall, and the Eastern meeting is the first time we get to display them all at once. It’s always fun to connect with instructors and share what we’ve been working on. I’m also looking forward to a good Philly cheesesteak and maybe a jog up the steps of the Philadelphia Art Museum!
Joy Mizan, Marketing: This will be my first time attending a conference for Oxford University Press. I’m very excited to be representing the company! I’ll be managing the booth from set up to tear down, and it’ll be a very big job. I’m looking forward to putting faces to the names of authors that I’ve been working with. I’m also excited to see what other products the various exhibitors will have. On a personal note, I’m a big fan of Philly and can’t wait to visit it again. I love the historical sites and delicious (albeit, greasy) foods!
Peter Ohlin, Editorial: I look forward to Eastern to see a lot of familiar faces – authors and friends in philosophy, as well as colleagues at other publishers. It’s also a great time to take stock of what we’ve published over the last year and get feedback from readers about those books at the book display. Lastly, it’s good to hear about interesting projects that will hopefully turn into OUP books by the time future APAs roll around.
Emily Sacaharin, Editorial: I’m excited to be attending my first APA this year! It will be great to meet so many of our authors in person, especially those I’ve already gotten to know via phone and email.
We hope to see you at the Oxford University Press booth! We’ll be offering the chance to browse and buy our new titles on display at a 20% conference discount, and free trial access to online products, including Electronic Enlightenment. Electronic Enlightenment is the most wide-ranging online collection of edited correspondence of the early modern period, linking people across Europe, the Americas and Asia from the early 17th to the mid-19th century. You can access correspondence sent between important figures of this period, such as David Hume and Adam Smith. Pop by and say hello, and you can also pick up sample copies of our latest philosophy journals and browse free articles from the British Journal of Aesthetics, Mind, and The Philosophical Quarterly.
We look forward to seeing you there!
Featured image credit: Benjamin Franklin Bridge, Philadelphia, by Khush. CC BY-NC-ND 2.0 via Flickr
Among the earliest, most challenging inventors of troubadour lyric, Marcabru composed songs for the courts of southwestern France during the second quarter of the twelfth century, calling knights to crusade, castigating false lovers, defining and refining courtly values, while developing his own kaleidoscopic image as witty, gritty, biting, rhyming, neologizing, moralizing wordsmith par excellence. As they come down to us in song manuscripts, Marcabru’s forty-some poems — with their wide vocabulary, difficult syntax, and multiple versions — offer a host of problems for modern readers trying to understand their language and fully comprehend them as songs performed live before an engaged public. Marcabru, A critical edition, edited and translated by Simon Gaunt, Ruth Harvey, and Linda Paterson, has been my indispensable tool for taking on that project.
Two of Marcabru’s songs (XXV and XXVI) particularly caught my eye, as they’ve attracted the attention of many others who radically disagree about their import. Estornel, cueill ta volada (Starling, take your flight) and Ges l’estornels no.n s’oblida (The starling did not dally for a moment) outline a series of dramatic exchanges in which a lover first gives the starling a message of complaint for his amia (beloved), demanding that she compensate for her neglect by meeting him in a certain position: flat beneath him. In the second song, the bird delivers the ultimatum, hears the woman’s spirited defense and enticing reply, and returns to anticipate the lover’s lusty triumph. Taken together, Estornel and Ges l’estornels offer a humorous guide to Marcabru’s piebald art of ventriloquism, as they act out the elusive nature of his identity as poet and persona, refracted through multiple voices and changing masks.
To recreate as much as possible the full scope of Marcabru’s dazzling play, I combined popular and scholarly views of ventriloquy. Señor Wences was my first teacher, when he appeared on the Ed Sullivan show in the 1950s and 60s with Pedro, a head in a box (“s’awright?” “s’awright!”), and a soft-spoken boy named Johnny. I can see him holding up one hand to paint lips on his thumb and finger to form Johnny’s mouth, adding eyes and a wig, as low- and high-pitched voices shuttle back and forth between man and dummy. Thanks to YouTube, you can still see how Señor Wences dares us to see the perfection of his art by focusing our gaze right on his lips, as he lights a cigarette and speaks elsewhere through the puppet. He balances a spinning plate on a long stick and spins a three-way conversation (not unlike Marcabru in the starling songs!) with Pedro’s head and Johnny, now tossed behind the table. Why do we get such a kick out of these silly games? The fun of seeing how well the ventriloquist can fool us into not seeing where the voice comes from, or into hearing it come from where we know it isn’t? Because we know it’s a fake, we enjoy all the more how the ventriloquist’s counterfeit art displaces reality.
Exploring the more serious side of ventriloquy, I found in Mary Hayes’ Divine Ventriloquism in Medieval English Literature: Power, Anxiety, Subversion an unexpected connection with the incongruous mix in Marcabru’s starling poems. Hayes highlights how the ventriloquist’s displaced voices sharpen issues of source and authority, the confusion of truth and deception, the possibility of (mis)appropriation. Her reminder that Latin “ventriloquist” goes back to Greek “engastrimythos” (belly speakers, like the Pythian oracle whose divine words of uncertain meaning rose up through womb and mouth) goes straight to the sex-talking orifices that Marcabru conjures up in Estornel and Ges l’estornels, no doubt to the great delight of his courtly audience.
Recognized by fellow troubadours as misogynist, Marcabru criticized but also impersonated women — a trick that may well have inspired real women poets to enter the arena in their own right, as more than twenty trobairitz (women troubadours) did. The female impersonators of my title give a nod to Monty Python’s Piranha brothers (who knew how to treat a female impersonator). But in the world of troubadour lyric, men in drag jostle with trobairitz impersonating men and other women, like the Dolly Parton mimic I learned about while working on the starling poems. Charlene Rose-Masuda’s imitation — as well as the original — can be found on YouTube in all her bursting charms, looking like we might imagine Marcabru’s amia in contemporary dress.
Who or what is the genuine article? The presumption that the poet’s first person pronoun speaks for himself or herself is subverted by their obvious pleasure in inventing personas that may not correspond to historical selves. Of course, when Marcabru sets a woman or a starling to chattering, the ventriloquy is patent, but when he speaks as the ribald but courtly lover in Estornel, the disconnect from his usual image as moralizing scold — a sort of Rush Limbaugh avant la lettre — becomes a puzzle as soon as the poet inserts his signature to specify what “Marcabru says” (“Marcabrus/ditz” 60-1). Monologue or dialogue, one speaker or two? The voice(s) remain entangled in Estornel’s shifting registers.
As I follow the different masks assumed by the poet through his belly-speaking, vaudevillian, Dolly Parton, bird-screeching impersonations, the starling as intermediary leads me finally to notice the bird’s visual appearance, left unmentioned. The iridescence and spotting of its feathers give the starling’s dark plumage what Marcabru calls the “white, brown and bay desire” (XXXI, 33) of false love, while the mottled poet himself has a brown spot (Marca brun) stamped in his nom de plume. He’s the mimic and master of precisely what he criticizes, as if to “truly” condemn false language and bad loving he must incarnate them. Called on stage by his proper name, Marcabru performs brilliantly with all the mixed colors and rainbow plumage of a male-female-bird-impersonator par excellence.
“Butler Library smells like Adderall and desperation.”
That note from a blogger at Columbia University isn’t exactly scientific. But it speaks to the atmosphere that settles in around exam time here, and at other competitive universities. For some portion of the students whose exams I’m grading this week, study drugs, stimulants, and cognitive enhancement are as much a part of finals as all-nighters and bluebooks. Exactly how many completed exams are coming to me via Adderall or Provigil is impossible to pin down. But we do know that studies have found past-year, nonprescribed stimulant use rates as high as 35% among students. We know, according to HHS, that full-time students use nonprescribed Adderall at twice the rate of non-students. We can suspect, too, that academics aren’t so different in this regard from their students. In an unscientific poll, 20% of the readers of Nature acknowledged off-label use of cognitive enhancement drugs (CEDs).
If this sounds like the windup to a drug-panic piece, it’s not. The use of cognitive enhancement drugs concerns me much less than the silence surrounding their use. At universities like Columbia, cognitive enhancement exists in something of an ethical gray zone: technically against rules that are mostly unenforced; an open conversation topic among students in the library at 2 a.m., but a blank spot in “official” academic culture. That blank in itself is worth our concern. CEDs aren’t going away–but more openness about their use could teach us something valuable about the kind of work we do here, and anywhere else focus-boosting pills are popped.
In fact, much of the anti-cognitive enhancement drug literature dwells on the ethics of work, on the question of how much credit we can and should take for our “enhanced” accomplishments. (In focusing on these arguments, I’m setting to one side any health concerns raised by off-label drug use. I’m doing that not because those concerns are unimportant, but because the most challenging bioethics writing on the topic is less about one drug or another than about the promises and limits of cognitive enhancement in general–up to and including drugs that haven’t been invented yet.) In Beyond Therapy, the influential 2003 report on enhancement technologies from the President’s Council on Bioethics, the central argument against CED use had to do with the kind of work we can honestly claim as our own: “The attainment of [excellence] by means of drugs…looks to many people (including some Members of this Council) to be ‘cheating’ or ‘cheap.’” Work done under the influence of CEDs “seems less real, less one’s own, less worthy of our admiration.”
Is that a persuasive argument for keeping cognitive enhancement drug use in the closet, or even for taking stronger steps to ban it on campus? I’m not so sure it is. This kind of anti-enhancement case rests on an assumption about authorship, which I call the individual view. It claims that the dignity and authenticity of our accomplishments lie largely in our ability to claim individual credit for our work. In a word, it’s producer-focused, not product-focused.
That’s a reasonable way to think about authorship–but much of the weight of the anti-cognitive enhancement drug case rests on the presumption that it’s the only way to think about authorship. In fact, there’s another view that’s just as viable: call it the collaborative view. It’s an impersonal way of seeing accomplishment; it’s a product-focused view; it’s less concerned with allocating ownership of our accomplishments and it’s less likely to emphasize originality as the most important mark of quality. It is founded on the understanding that all work, even the most seemingly original, is subject to influences and takes place in a social context.
You can’t tell the history of accomplishments in the arts and sciences without considering those who thought about their work in this way. We can see it in the “thefts” of content that led passages from Plutarch, via Shakespeare, to T.S. Eliot’s poetry, or in the constant musical borrowing that shapes jazz or blues or classical music. We can see it in the medieval architects and writers who, as C.S. Lewis observed, practiced a kind of “shared authorship,” layering changes one on top of the other until they produced cathedrals or manuscripts that are the product of dozens of anonymous hands. We can see it again in the words of writers like Mark Twain, who forcefully argued that “substantially all ideas are second hand,” or Eliot, who advised critics that “to divert interest from the poet to the poetry is a laudable aim.” We can even see it in the history of our language. Consider the evolution of words like genius (from the classical idea of a guardian spirit, to a special ability, to a talented person himself or herself), invent (from a literal meaning of “to find” to a secondary meaning of “to create”), and talent (from a valuable coin to an internal gift). As Owen Barfield has argued, these changes are marks of the way our understanding of accomplishment has become “internalized.” Where earlier writers tended to imagine inspiration as a process that happens from without, we’re more likely to see it as something that happens from within.
The collaborative view is valuable even for those of us who aren’t, say, producing historically-great art. It might relieve us of the anxiety that the work we produce is a commentary on our personal worth. It’s well-tailored to the creative borrowing and sampling that define the “remix culture” celebrated by writers like Lawrence Lessig. And it is, I think, a tonic against the kind of “callous meritocracy” that John Rawls cogently warned us about.
That’s not to suggest that the collaborative view is the one true perspective on accomplishment. I’d call it one of a range of possible emphases that have struggled or prospered with the times. But if that’s the case, then we’re free to think more critically about the view of work we want to emphasize at any given time.
What does any of this have to do with cognitive enhancement? The collaborative view I’ve outlined and a culture of open cognitive enhancement share some important links. It’s certainly not true that one has to use CEDs to take that view, but there are strong reasons why an honest and thoughtful CED user ought to do so.
Consider the case of a journalist like David Plotz, who kept a running diary of his two-day experiment with Provigil: “Today I am the picture of vivacity. I am working about twice as fast as usual. I have a desperate urge to write…. These have been the two most productive days I’ve had in years.”
How might such a writer account for the boost in his performance? Would he chalk it up to his inherent skill or effort, or to the temporary influence of a drug? If someone singled out his enhanced work for praise, would he be right in taking all the credit for himself and leaving none for the enhancement?
I don’t think he would be. There is a dishonesty in failing to acknowledge the enhancement, because that failure willingly creates a false assumption: it allows us to believe that the marginal improvement in performance reflects on the writer’s efforts, growing skill, or some other personal quality, when the truth seems to be otherwise. In other words, I don’t think enhancement is dishonest in itself–it’s failing to acknowledge enhancement that’s dishonest.
There’s nothing objectionable in collaborative work, forthrightly acknowledged. When we take an impersonal view of our work, we share credit and openly recognize our influences. And we can take a similar attitude to work done under the influence of cognitive enhancement drugs. When we speak of creative influences and working “under the influence” of CEDs, I think we’re exposing a similarity that runs deeper than a pun. Of course, one does not literally “collaborate” with a drug. But whether we acknowledge influences that shape our work or acknowledge the influence of a drug that helped us accomplish that work by improving our performance, we are forgoing full, personal credit. We are directing observers toward the quality of the work, rather than toward what the work may say about our personal qualities. We are, in a sense, making less of a “property claim” on the work. Given the history of innovators who willingly made this more modest claim, and given the benefits of the collaborative view that I’ve discussed, I don’t think that’s such bad news.
But could a culture of open cognitive enhancement drug use really one day change the way we think about work? There are no guarantees, to be sure. When I read first-person accounts of CED use, I’m struck by the way users perceive fast, temporary, and often surprising gains in focus, processing speed, and articulateness. With that strong subjective experience comes the experience of leaving, and returning to, an “unenhanced” state. The contrast seems visceral and difficult to overlook; the marginal gains in performance seem especially difficult to take credit for. The subjective experience of CED use looks like short-term growth in our abilities, arising from an external source, to which we cannot permanently lay claim. For just that reason, I have trouble agreeing with those, like Michael Sandel, who associate cognitive enhancement with “hubris.” Why not humility instead? Of course, I don’t claim that CEDs will inspire the same reflections in all of their users. It’s certainly possible to be unreflective about the implications of CED use. I only argue that it’s a little harder to be unreflective.
But that reflectiveness, in turn, requires openness about the enhancement already going on. As long as students fear job-market ramifications for talking on the record about their cognitive enhancement drug use, I wouldn’t nominate them as martyrs to the cause. But why not start with professors and academics–with, say, those 20% of respondents to the Nature poll? What’s tenure for anyway?
We simply can’t separate enhancement, of any kind, from the ends we ask of it and the work we do with it. So I sympathize with the New Yorker’s Margaret Talbot when she writes that “every era, it seems, has its own defining drug. Neuroenhancers are perfectly suited for the anxiety of white-collar competition in a floundering economy…. They facilitate a pinched, unromantic, grindingly efficient form of productivity.” Yet that’s giving the drug too much credit. I’d look instead to the culture that surrounds it. Our culture of cognitive enhancement is furtive, embarrassed, dedicated to one-upping one another on exams or on the tenure track. But a healthier culture of enhancement is conceivable, and it begins with a greater measure of honesty. Adderall and desperation don’t have to be synonymous, but as long as they are, I’d blame the desperation, not the drug.
A patent, like other property rights, is a right to exclude others, not a right or an obligation to make the patented invention. Yet today there is a growing campaign by certain industry sectors and the government against patent holders that do not make any products but enforce their patent rights for licensing revenues, often pejoratively called patent “trolls.” In a recent White House report on the subject, President Barack Obama is quoted as saying that these patent holders “don’t actually produce anything themselves. They’re just trying to essentially leverage and hijack somebody else’s idea and see if they can extort some money out of them.” The government alleges that “trolls” are responsible for a patent litigation crisis. Armed with this narrative, the White House recently announced executive actions for “taking on patent trolls.”
This narrative of harmful patent assertion by non-practicing entities is not new. A century ago, the Wright aircraft company was accused by US government officials of causing such harm because it had attempted to enforce the Wright airplane patent, which it did not practice, by sending licensing demand letters to alleged infringers. Government officials stated that the assertion of the Wright airplane patent was “injurious to the development of aircraft and the aircraft industry in the United States.” They argued that “any adequate return to the Wright stockholders upon their investment [in the Wright patent] must be through the manufacture and sale of airplanes … and not through patent channels,” and that “any return from patents must necessarily fade into insignificance.”
This myth that the assertion of the Wright patent harmed the development of aviation continues to be propagated, and erroneous lessons are drawn from it for patent policy today. In 2014, Goldstone’s book Birdmen was published as the latest in the constant reworking through time of the Wright brothers’ story. The widespread reviews of this book emphasized that the Wright brothers were patent “trolls” – Orville Wright was a “vindictive SOB whose greed and begrudgery [sic] were surpassed only by those of his brother Wilbur” and the brothers were “cursed with an addiction to malice to anyone who challenged their primacy or stood in their path to riches.” The purported tool of their “malice” was, of course, their patent. The ostensible general lesson for today is explicitly and erroneously drawn by Goldstone: “Patent law remains [today] the damper on innovation that it was when airplane development was nearly grounded in its infancy.”
We show using primary evidence that the origins of this century-old myth are factually unsupported letters of government manufacture used to persuade the President and Congress to authorize the appropriation of patented inputs at depressed royalty rates. Today these letters are treated as “primary historical sources” to support the myth that the Wright brothers’ patent retarded US aviation development and that it was therefore necessary for the government to intervene in the market for patents and force patent holders into a patent pool. The story that US government officials manufactured in 1917 is believed by many to be fact and for some people, again there is a simple, general ‘policy lesson’ – sometimes government must intervene in markets because private owners of property rights cannot otherwise strike a bargain and reach the optimum social and economic outcome.
But this entrenched and oft-repeated myth is not true, in any part. The Wright brothers licensed their patent early and with vigour into all developed markets. When they won their suit against the principal infringer of their patent, Glenn Curtiss, they never enjoined Curtiss’ manufacturing activities, leaving him free to develop the leading US aviation business. All the commercial and patenting evidence we collect shows that no patent hold-up or development suppression occurred before the government forced patentees into its preferred patent pool.
The result in 1917 was the government-designed patent pool agreement, which depressed royalties to the Wright patent holder to the extraordinarily low rate of 1% – at a time when Congress recognized in the 1917 ‘Trading with the Enemy Act’ that 5% was a fair royalty to pay for a single patent owned by enemies of the US!
The acceptance of the myth has blinded policymakers to the possibility that the design of the 1917 aircraft patent pool was an abuse of the government’s monopsony power that effectively attenuated the development incentive provided by aviation patents. The patenting rate declined, and aircraft sales exhibited little market growth for a decade after the formation of the aircraft patent pool. We hypothesize that as a result of the government intervention, a period followed in which aviation innovation may have been depressed; future research should investigate this hypothesis further.
The government’s past use of the “troll” narrative encourages one to wonder whether today’s “troll” narrative, advanced once again to alleviate another purported patent litigation “crisis,” will have the unintended consequence of suppressing innovation.
Headline image credit: Wilbur making a turn with the Wright Glider, 1902. Public domain via Wikimedia Commons.
When the Senate of the Free City of Krakow oversaw the renovation of the main gate to the Royal Castle in 1827, it commemorated its action with an inscription: SENATUS POPULUSQUE CRACOVIENSIS RESTITUIT MDCCCXXVII. The phrase ‘Senatus Populusque Cracoviensis’ [the Senate and People of Krakow], and its abbreviation SPQC, clearly and consciously invoked comparison with ancient Rome and its structures of government: Senatus Populusque Romanus, the Senate and People of Rome. Why did a political entity created only in 1815 find itself looking back nearly two millennia to the institutional structures of Rome and to its Senate in particular?
The situation in Krakow can be seen as a much wider phenomenon current in Europe and North America from the late eighteenth century onwards, as revolutionary movements sought models and ideals to underpin new forms of political organisation. The city-states of classical antiquity offered examples of political communities which existed and succeeded without monarchs and in the case of the Roman Republic, had conquered an empire. The Senate was a particularly intriguing element within Rome’s institutional structures. To the men constructing the American Constitution, it offered a body which could act as a check on the popular will and contribute to political stability. During the French Revolution, the perceived virtue and courage of its members offered examples of civic behaviour. But the Roman Senate was not without its difficulties. Its members could be seen as an aristocracy; and for many historians, its weaknesses were directly responsible for the collapse of the Roman Republic and the establishment of the Empire.
In these modern receptions of the Roman Senate, the contrast between Republican Rome and the Roman Empire was key. The Republic could offer positive models for those engaged in reshaping and creating states, whilst the Roman Empire meant tyranny and loss of freedom. This Tacitean view was not, however, universal in the imperial period itself. Not only was the distinction we take for granted, between Republic and Empire, slow to emerge in the first century A.D.; senatorial writers of the period could celebrate the happy relationship between Senate and Emperor, as Pliny the Younger does in his Panegyricus and many of his letters. Indeed, by late antiquity senators could pride themselves on the improvement of their institution in comparison with its unruly Republican form.
The reception history of the Republican Senate of ancient Rome thus defies a simple summary. Neither purely positive nor purely negative, its use depended and continues to depend on a variety of contextual factors. But despite these caveats, the Roman Senate can still offer us a way of thinking about how we choose our politicians, what we ask them to do, and how we measure their achievements. This continuing vitality reflects too the paradoxes of the Republican institution itself. Its members owed their position to election, yet often behaved like a hereditary aristocracy; a body offering advice in a state where the citizen body was sovereign, it nonetheless controlled vast swathes of policy and action and asserted it could deprive citizens of their rights. These peculiarities contributed to making it an extraordinarily fruitful institution in subsequent political theory.
Headline image credit: Representation of a sitting of the Roman senate: “Cicero Denounces Catiline.” Public domain via Wikimedia Commons.