Preparing a new edition of an oral history manual, a decade after the last appeared, highlighted dramatic changes that have swept through the field. Technological development made previous references to equipment sound quaint. The use of oral history for exhibits and heritage touring, for instance, leaped from cassettes and compact discs to QR codes and smartphone apps. As oral historians grew more comfortable with new equipment, they expanded into video and discovered the endless possibilities of posting interviews, transcripts, and recordings on the Internet. Having found a way to get oral history off the archival shelves and into the community, interviewers also had to consider the ethical and legal issues of exposing interviewees to worldwide scrutiny.
Over the last decade, the Internet left no excuses for parochialism. As the practice of oral history grew more international, a manual could no longer address a single nation and ignore the rest of the world. Wherever social, political, or economic turmoil has occurred, oral histories have recorded the change — because state archives tend to reflect the old regimes. War, terrorism, hurricanes, floods, fires, pandemics, and other natural and human-made disasters spurred interviews with those who endured trauma and tragedy, and required interviewers to adjust their approaches. Issues of empathy for those suffering emotional distress increasingly became part of the discourse among oral historians. At the same time, the use of interviewing grew more interdisciplinary, with historians examining the fieldwork techniques and needs of social scientists. Sociologists, anthropologists, and ethnographers have long employed interviewing, usually through participant observation. Many have gradually shifted from quantitative to qualitative analysis, raising questions about identifying their sources rather than rendering them anonymous, and bringing their methods closer to oral history protocols.
New theoretical interests developed, particularly around memory studies. Oral historians became more concerned not only with what people remember, but also with what they forget and how they express these memories. Weighing the relationship between language and thought, and suggesting that outward behavior reflects underlying signs, narrative theory has challenged the notion of objective history: it treats the past, as recalled and recounted, as a construction, shaped by the way it is told. Memory theories have dealt with the way suggestive questions can reshape memories, and the way recent experiences can block out memories of earlier ones. These theories suggest that people reconstruct memories of past experiences rather than mentally retrieve exact copies of them.
An increasingly litigious culture raised other concerns for oral historians. Lawsuits have alleged that some online interviews are defamatory. A court case with international implications arose when the United States supported British police efforts to subpoena closed interviews that might shed light on a murder case in Northern Ireland, exposing the vulnerability of oral history to judicial intervention. Although the courts treated closed interviews seriously and limited the amount of material to be opened, the case reminded oral historians that they could not promise absolute confidentiality when dealing with sensitive and possibly criminal issues.
It has been breathtaking to document the scope of change in oral history over the past decade, and sobering to see how dated it made much of the earlier information and even some of the language. Looking back also provided some reassurance about continuity. While it sometimes seems that everything about the practice of oral history has changed, the personal dynamics of conducting an interview have remained very much intact. Whether sitting down face-to-face or using some means of electronic communication, the human interaction of the interview has stayed the same. So have the basic steps: the interviewer’s need for prior research; for knowing how to operate the equipment; for crafting thoughtful, open-ended questions; for establishing rapport; for listening carefully and following up with further questions; and for doing everything possible to elicit candid and substantive responses.
I was glad to see so many of these new trends prominently displayed at the Oral History Association’s recent meeting in Madison, Wisconsin, (October 8-12) where sessions focused on oral history “in motion.” Motion aptly describes the forward-looking nature of oral history, with its expanding methodology and embrace of the latest technology, as well as its eagerness to confront established narratives with alternative voices.
The Salem Witch Trials of 1692-1693 were by far the largest and most lethal outbreak of witch-hunting in American history. Yet Salem was just one of many incidents during the Great Age of Witch Hunts, which took place throughout Europe and her colonies over many centuries. Indeed, by European standards, Salem was not even a large outbreak. But what exactly were the factors that made Salem stand out?
In A Storm of Witchcraft: The Salem Trials and the American Experience, Emerson Baker places the Salem trials in their broader context and reveals why they have left such an enduring legacy. He explains why the Salem crisis marked a turning point in colonial history from Puritan communalism to Yankee independence, from faith in collective conscience to skepticism toward moral governance. Below is an infographic detailing some of the numbers involved in Salem and other witch hunts.
Beginning in the early 1920s, and continuing through the mid 1940s, record companies separated vernacular music of the American South into two categories, divided along racial lines: the “race” series, aimed at a black audience, and the “hillbilly” series, aimed at a white audience. These series were the precursors to the also racially separated Rhythm & Blues and Country & Western charts, and arguably the source of the frequent racial divisions of today’s recording industry. But a closer examination reveals that the two populations rely heavily on many of the same musical resources, and that early blues and country music exhibit thorough interpenetration.
Many admirers of early blues and country music observe that black and white musicians from the 1920s to the 1940s share much with respect to repertoire and genre, and that the separation of the two on commercial recordings grew out of the prejudices of record companies. It becomes even more apparent how deeply intertwined the two traditions are when we examine blues and country musicians’ shared stock of schemes. Schemes are preexisting harmonic grounds and melodic structures that are common resources for the creation of songs. A scheme generates multiple distinct songs, with different lyrics and titles. Many schemes generated songs in both blues and country music.
There are several different types of blues and country schemes. One type is a harmonic progression that combines with one particular tune. The “Trouble In Mind” scheme, for example, generates both Bertha “Chippie” Hill’s “Trouble in Mind” (1) and the Hackberry Ramblers’ “Fais Pas Ça” (2). Both use the same harmonic progression, and the two melodies differ only slightly. Hill recorded for the “race” series, and the Hackberry Ramblers for the “hillbilly” series.
1. Bertha “Chippie” Hill, “Trouble in Mind” (Bertha “Chippie” Hill—Document Records)
2. Hackberry Ramblers, “Fais Pas Ça” (Jolie Blonde—Arhoolie Productions)
A second type of scheme is a preexisting harmonic progression that musicians associate primarily with a specific tune, which they set to lyrics about various subjects, but which they also use to support original melodies. In the “Frankie and Johnny” scheme, the same melody combines with lyrics about Frankie’s shooting of Johnny (or Albert) (3), the Boll Weevil infestation at the turn of the twentieth century (4), and the gambler Stack O’Lee, who shot and killed fellow gambler Billy Lyons (5). Singers also use the harmonic progression to support original melodies, with lyrics about Frankie (6), Stack O’Lee (7), or another subject (8).
In all of the examples, the same correspondence between lyrics and harmony is evident in the harmonic shift that accompanies the completion of the opening rhyming couplet, on the words “above” (3), “your home” (4), “road” (5), “beer” (6), the first “Stack O’Lee” (7), and “that line” (8), and in the harmonic shifts that accompany emphasized words in the refrain, on the words “man” and “wrong” (3, 5, and 6), “no home” and “no home” (4), “bad man” and “Stack O’Lee” (7), and “bad” and “bad” (8). Four of the recordings given here are from the “race” labels, and two are from the “hillbilly” labels, but the same scheme generates all of them.
3. Jimmie Rodgers, “Frankie and Johnny” (The Essential Jimmie Rodgers—Sony)
4. W. A. Lindsey, “Boll Weevil” (People Take Warning—Tomkins Square)
5. Ma Rainey, “Stack O’Lee Blues” (Ma Rainey’s Black Bottom—Yazoo)
7. Mississippi John Hurt, “Stack O’Lee” (Before the Blues—Yazoo)
8. Henry Thomas, “Bob McKinney” (Texas Worried Blues—Document Records)
A third type of scheme is a preexisting harmonic progression that musicians use primarily to support original melodies. This type of scheme is the most productive, often supporting countless melodies. The best known of this type is the standard twelve-bar blues scheme. All seven of the following recordings (9–15)—four from the “race” series and three from the “hillbilly” series—contain original melodies combined with the standard twelve-bar blues harmonic progression, and all demonstrate the AAB poetic form that typically combines with the scheme, in which singers state the opening A line of a couplet twice and follow it with one statement of the rhyming B line.
9. Ida Cox, “Lonesome Blues” (Ida Cox Complete Recorded Works—Document Records)
10. Charley Patton, “Moon Going Down” (Charlie Patton Founder of the Delta Blues—Mastercopy Pty Ltd)
11. Jesse “Babyface” Thomas, “Down in Texas Blues” (The Stuff that Dreams are Made Of)
12. Lonnie Johnson, “Mr. Johnson’s Blues No. 2” (A Smithsonian Collection of Classic Blues Singers—Sony/Smithsonian)
13. W. Lee O’Daniel & His Hillbilly Boys, “Dirty Hangover Blues” (White Country Blues—Sony)
14. Jesse “Babyface” Thomas, “Down in Texas Blues” (White Country Blues—Sony)
15. Carlisle & Ball, “Guitar Blues” (White Country Blues—Sony)
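The productivity of the twelve-bar scheme is easy to see if you separate the fixed harmonic ground from the interchangeable melody and lyrics. The sketch below is my own illustration, not from the article: the Roman-numeral progression shown is one common harmonization of the twelve-bar blues (variants exist), and the key mapping and lyric lines are hypothetical.

```python
# One common harmonization of the standard twelve-bar blues scheme,
# one chord per bar, written as Roman-numeral degrees.
TWELVE_BAR_BLUES = [
    "I", "I", "I", "I",
    "IV", "IV", "I", "I",
    "V", "IV", "I", "I",
]

def realize(progression, key_chords):
    """Map Roman-numeral degrees to concrete chords in a chosen key."""
    return [key_chords[degree] for degree in progression]

def aab_stanza(a_line, b_line):
    """The AAB poetic form: the A line stated twice, then the rhyming B line."""
    return [a_line, a_line, b_line]

if __name__ == "__main__":
    # Hypothetical realization in the key of E.
    e_chords = {"I": "E7", "IV": "A7", "V": "B7"}
    print(realize(TWELVE_BAR_BLUES, e_chords))
    print("\n".join(aab_stanza(
        "Woke up this morning, blues all around my bed",
        "Didn't have nobody to hold my aching head")))
```

The point of the separation is the article’s: the same fixed ground (`TWELVE_BAR_BLUES`) supports any number of distinct melodies and couplets, which is why it generated songs on both the “race” and “hillbilly” series.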
A fourth type of scheme is a preexisting melodic structure whose harmonizations display considerable variance and yet also certain requirements. The following four examples—two by black musicians and two by white musicians—are all realizations of the “Sitting on Top of the World” scheme, and use the same melodic structure. Their harmonizations are in some ways quite similar—for example, all four harmonize the beginning of the second, rhyming line with the same harmony, and accelerate the rate of harmonic change going into the cadence—but the harmonizations vary more than the melodic structure.
16. Tampa Red, “Things ‘Bout Coming My Way No. 2” (Tampa Red the Guitar Wizard—Sony)
17. Bill Broonzy, “Worrying You Off My Mind” (Big Bill Broonzy Good Time Tonight—Sony)
18. Bob Wills & His Texas Playboys, “Sittin’ on Top of the World” (Bob Wills & His Texas Playboys Anthology—Puzzle Productions)
19. The Carter Family, “I’m Sitting on Top of the World” (On Border Radio—Arhoolie)
Finally, a fifth type of scheme is a preexisting melodic structure for which performers have little shared conception of the harmonic progression. The last four examples—one by a black musician and three by white musicians—are all realizations of the “John Henry” scheme, and use the same melodic structure, but very different harmonic progressions. Riley Puckett, in his instrumental version, uses only one harmony throughout (20). Woody Guthrie uses two harmonies (21). The Williamson Brothers & Curry also use two harmonies, but arrive at a much different harmonization than Guthrie (22). Leadbelly uses three harmonies (23).
20. Riley Puckett, “A Darkey’s Wail” (White Country Blues—Sony)
Record companies presented American vernacular music in the context of a racial divide, but examining the common stock of schemes helps to reveal how extensively black and white musical traditions are intertwined. There are stylistic differences between blues and country music, but many differences lie on the surface, while on a deeper level the two populations frequently rely on the same musical foundations.
The business press and general media often lament that firm executives exhibit “short-termism”, succumbing to pressure from stock market investors to maximize quarterly earnings while sacrificing long-term investments and innovation. In our new article in the Socio-Economic Review, we suggest that this complaint is partly accurate, but partly not.
What seems accurate is that the maximization of short-term earnings by firms and their executives has become somewhat more prevalent in recent years, and that some of the roots of this phenomenon trace back to stock market investors. What is inaccurate, though, is the assumption that investors – even if they were “short-term traders” – would inherently attend to short-term quarterly earnings when making trading decisions. Even “short-term trading” (i.e., buying stocks with the aim of selling them after a few minutes, days, or months) does not equal or necessitate a “short-term earnings focus”, i.e., making trading decisions based on short-term earnings (let alone on short-term earnings alone). This means that when the media observe – or executives perceive – that firms are pressured by stock market investors to focus on short-term earnings, that pressure is partly illusory.
The illusion, in turn, rests on the phenomenon of the “vociferous minority”: a minority of stock investors may focus on short-term earnings, producing a weak correlation between short-term earnings and stock price jumps or drops. The illusion is born when this is interpreted as if most or all investors (i.e., the majority) were focusing on short-term earnings alone. In dynamic markets, such an interpretation can become a self-fulfilling prophecy, whereby a growing number of investors join the vociferous minority and attend increasingly to short-term earnings (even if a majority still does not focus on short-term earnings alone). More importantly – or more unfortunately – firm executives may also begin to maximize short-term earnings, under the inaccurate impression that the majority of investors prefer it.
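The bandwagon dynamic can be made concrete with a toy model. This is my own illustration, not from the article, and the parameter values are arbitrary: it simply shows how, if each period some fraction of remaining long-term-oriented investors misreads the minority’s visible influence as a majority norm and converts, a small minority snowballs even though no majority ever preferred short-termism at the outset.

```python
def simulate(initial_share=0.1, conversion_rate=0.05, periods=20):
    """Toy model of the 'vociferous minority' bandwagon.

    Each period, a fixed fraction of the investors who do NOT yet focus
    on short-term earnings concludes (wrongly) that everyone else does,
    and joins the short-term-focused minority.
    """
    share = initial_share          # share of investors focused on short-term earnings
    history = [share]
    for _ in range(periods):
        share += conversion_rate * (1 - share)  # converts drawn from the remainder
        history.append(share)
    return history

if __name__ == "__main__":
    path = simulate()
    print(f"short-term share after {len(path) - 1} periods: {path[-1]:.2f}")
```

Under these assumed parameters the short-term share roughly sextuples over twenty periods without any change in the underlying preferences of the initial majority, which is the self-fulfilling character the article describes.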
A final paradox is the role of the media. Of course, the media have good intentions in lamenting short-termism in the markets, trying to draw attention to an unsatisfactory state of affairs. However, such lamenting stories may actually contribute to the emergence of the self-fulfilling prophecy. Despite their lamenting tone, the articles still emphasize that market participants focus only on short-term earnings. This reinforces the illusion that all investors are focused on short-term earnings alone – which in turn may lead yet more investors and firms to join the minority’s bandwagon, in the belief that everyone else is doing the same.
Should the media do something different, then? We suggest that in this case, the media should report more “positive stories” – cases in which firms have managed to create great innovations with a patient, longer-term focus. The media could also report on the growing number of investors looking at alternative, long-term measures (such as patents or innovation rates) instead of short-term earnings.
So, more stories like this one about Rolls-Royce – but without claiming or lamenting that most investors just want “quick results” (i.e., without portraying cases like Rolls-Royce as rare exceptions). Such positive stories could, in the best scenario, contribute to a reverse self-fulfilling prophecy, whereby more and more investors, and in turn firm executives, would shed some of the excessive focus on short-term earnings they might currently have.
Open access (OA) publishing stands at something of a crossroads. OA is now part of the mainstream. But with increasing success and increasing volume come increasing complexity, scrutiny, and demand. There are many facets of OA which will prove to be significant challenges for publishers over the next few years. Here I’m going to focus on one — licensing — and discuss how the arguments seen over licensing in recent months shine a light on the difference between OA as a movement, and OA as a reality.
Today’s authors face a number of conflicting pressures. Publish in a high impact journal. Publish in a journal with the correct OA options as mandated by your funder. Publish in a journal with the correct OA options as mandated by your institution. Publish your article in a way which complies with government requirements on research excellence. They are then met by a wide array of options, and it’s no wonder we at OUP sometimes receive queries from authors confused as to which OA option they should choose.
One of the most interesting aspects of the various surveys Taylor & Francis (T&F) have conducted on open access over the past year or two has been the divergence between what authors say they want and what their funders and governments mandate. The T&F findings imply that, whilst there is a generally shared consensus as to what is meant by accessible, funders and researchers hold divergent positions and preferences as to what constitutes reasonable reuse. T&F’s surveys consistently reveal the most restrictive licences in the Creative Commons (CC) suite, such as Creative Commons Attribution Non-Commercial No-Derivs (CC BY-NC-ND), to be the most popular, with the liberal Creative Commons Attribution (CC BY) licence coming in last. This squares neither with the mandates of funders, which are usually, but not always, pro CC BY, nor with author behaviour at OUP, where CC BY-NC-ND usually comes in a resounding third behind CC BY and CC BY-NC where it’s available. It’s not a dramatic logical step to think that proliferation may lead to confusion, but given the conflicting evidence and demand, and the potential for change, it’s logical for publishers to offer myriad options. At the same time, elsewhere in the OA space we have a recent example of pressure to remove choice.
In July 2014, the International Association of Scientific, Technical and Medical Publishers (STM) released their ‘model licences’ for open access. These were, at their core, a series of alternatives to, and extensions of, the terms of the established CC licences. STM’s new addition did not go down well in OA circles: a ‘Global Coalition’ subsequently called for the licences’ withdrawal. One of the interesting elements of the Coalition’s call was that, in amongst some very valid points about interoperability and the like, it fell back on the kind of language more commonly associated with a sermon to make the STM actions seem incompatible with some fundamental precepts about the practice of science: “let us work together in a world where the whole sum of human knowledge… is accessible, usable, reusable, and interoperable.” At root, it could be interpreted that the Coalition was using positive terminology to frame an essentially negative action – barring a new entry to the market. Personally, I don’t have a strong opinion on the new STM licences. We don’t have any plans to adopt them at OUP (we use CC). But it was odd and striking that rather than letting a competitor to the CC status quo exist and in all likelihood fail, some serious OA players felt the need to call for that competitor’s withdrawal.
This illustrates one of the central challenges of the dichotomy of OA. On one hand you have OA as a political movement seeking to replace commercial interests with self-organized and self-governed communities of interest – a bottom-up aspiration for the common good, often applied in quite restricted ways, usually adhering to the Berlin, Budapest, and Bethesda declarations. On the other you have OA as a top-down pragmatic means to an end, aiming to improve the flow of research and, by extension, economic performance. The OA pragmatist might suggest that it’s fine for an author to be given the choice of liberal or less liberal OA licences, as long as they meet the basic criteria of being free to read and easy to re-use. The OA dogmatist might only be satisfied with the most liberal licence, and with OA on the terms they’ve come to believe are the correct interpretation of its core precepts. The danger of this approach is that there is a ‘right’ and a ‘wrong’ and, as can be seen from the language of the Global Coalition in responding to the STM licences, that can very easily translate into: “If you’re not with us, you’re against us.”
Against this backdrop, publishers find themselves in a thorny position. Do you (a) respect author choice, but possibly at some expense of simplicity, or do you (b) offer fewer options, but potentially leave members of the scholarly community feeling dissatisfied or disenfranchised by your standard option?
Oxford University Press at the moment chooses option (a), as we feel this is the more inclusive way to proceed. To me at least it feels right to give your customers choice. But there is an argument for streamlining processes, avoiding confusion, and giving users consistent knowledge of what to expect. Nature Publishing Group (NPG), for example, recently announced that as part of their move to full OA for Nature Communications they would be making CC BY their default, and only allowing other options on request. This is notable in as much as it’s a very strong steer in a particular direction, while not ruling out everything else. NPG has done more than most to examine the choice issue – changing the order of their licences to see what authors select, sometimes varying charges, etc. Empirical evidence such as this is essential for a viable and credible resolution to the future of OA licensing. Perhaps the Global Coalition should have given a more considered and less emotional response to the STM licences. Was repudiation necessary in a broad OA community which should be able to recognise and accept different variants of OA? It would be a shame if all the positive impacts of open access for the consumer come hand in hand with a diminution of scholarly freedom for the producer.
The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.
Voting for the 2014 Atlas Place of the Year is now underway. You may still be curious about the nominees, though: what makes them so special? Each year, we put the spotlight on the locations around the world that make us go “wow”. For good or for bad, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to hear about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December. What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to have been a British district commissioner in the Gold Coast at the turn of the century.
The episode explores the nature of the relationship between Dorothy and George Yates, Reggie’s great grandfather, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast for good in 1917. Although their marriage was relatively short-lived, it appears that Dorothy’s family and the wider community in which she lived regarded it as a respectable union, and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection, and of categorical uncertainty, is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love.” Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer was to “endeavor[] to trace its imperfections, its perversions.” While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
The theme of this year’s International Law Weekend (ILW) is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combating corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
This year’s conference highlights include:
This year’s keynote from Lori Damrosch, Hamilton Fish Professor of International Law and Diplomacy, Columbia Law School, and President of the American Society of International Law. “Democratization of Foreign Policy and International Law, 1914-2014” Friday, 1:30PM (Room 2-02A)
Top practitioners in the field discuss “International Investment Arbitration and the Rule of Law”, Friday 4:45PM (Room 2-02A). (Sign up for our Free Investment Claims Webinar on October 20th to brush up on VCLT in BIT arbitrations in time for this panel.)
Looking for career advice? Attend the roundtable discussion “Careers in International Human Rights, International Development, and International Rule of Law,” Saturday, 3:30PM (Room 2-02B)
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine: epilepsy, for example, “the sacred disease” (Hippocrates), or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law, and medicine, conveyed in the cuneiform script, impressed into clay tablets with reeds; this earliest form of writing began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire, cuneiform writing was replaced by Aramaic and simpler alphabetic scripts, and was only deciphered by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts held in museums in London, Paris, Berlin, and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression, and anxiety. For example, they described most of the common seizure types we know today (tonic-clonic, absence, focal motor, etc.), as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion), and even a comprehensive account of the schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of the brain, spinal cord, or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders, e.g. stroke or facial palsy, had a physical basis requiring the attention of the physician or asû, who used a plant- and mineral-based pharmacology. Most disorders, such as epilepsy, psychoses, and depression, were regarded as supernatural, caused by evil demons and spirits or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias, and psychopathic behaviour, were viewed as a mystery yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these entirely objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. Such subjective phenomena only became a field of description and enquiry in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness” and which only became the province of a professional medical discipline, i.e. psychiatry, in the last 200 years.
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always chime exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc. and not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its recommended OA requirements for authors. What happened next was hugely significant for the progress of open access. As part of an Act which passed through the US legislature, it was made mandatory for all NIH-funded authors to make their works available 12 months after publication. This was transformative in two ways: it meant thousands of articles published from NIH research became available through PubMed Central (PMC), and perhaps just as importantly it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher – and since its inception in 2000 it was closely watched in the industry to see if it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was proving to be more than just an interesting sideline. It also reflected the big players in the industry starting to take OA very seriously, and has been followed by other deals – for example Nature’s investment in Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC it has also massively increased its fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals back to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Typically top journals have tended to have their niche, and be selective. A journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review is methodological (i.e. is the basic science sound) rather than looking for anything else. This enabled PLOS ONE to rapidly turn into the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA which enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal. The price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals and more than any one title it has energised the development of OA, dispelling previously-held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt as time progresses, through the 2014 Higher Education Funding Council for England (HEFCE) policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers, as it directly ties the assessment of a university’s funding to its success in ensuring its authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
The list was interesting to me on many levels, but the omission that struck me immediately was the absence of mixing and mastering (my main areas of work in audio). A relatively short time ago almost half of these categories did not exist. There was no streaming, no project studios, no networked audio, and no game sound. So what is the state of affairs for the young audio engineering student or practitioner?
Interestingly, of the four new fields mentioned, three of them represent diminished opportunities in the field of music recording, with one a singular beacon of hope.
Streaming audio represents the brave new world of audio delivery systems. As these services continue to capture more of the consumer market share, they continue to diminish artists’ ability to earn a decent living (or pay an accomplished audio engineer). A friend of mine with 3 CD releases recently got his Spotify statement and saw that he had more than 60,000 streams of his music. His check was for $17. CDs don’t pay as well as vinyl records used to, downloads don’t pay as well as CDs, and streaming doesn’t pay as well as downloads (not to mention “file-sharing”, which doesn’t pay anything). Sure, there may be jobs at Pandora and Spotify for a few engineers helping with the infrastructure of audio streaming, but generally streaming is another brick in the wall restricting audio jobs by shrinking the earning capacity of recording artists.
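The arithmetic behind that anecdote is stark. A minimal sketch, using only the $17 / 60,000-stream figures reported above (the fee comparison is a hypothetical illustration, not a figure from the statement):

```python
# Per-stream payout implied by the anecdote above:
# a 60,000-stream statement came with a $17 check.
streams = 60_000
payout_dollars = 17.0

per_stream = payout_dollars / streams
print(f"${per_stream:.5f} per stream")  # about $0.00028

# Hypothetical comparison: streams needed to cover a $1,000 engineering fee
fee = 1_000
print(f"{fee / per_stream:,.0f} streams needed")  # about 3.5 million
```

At that rate, even a modest session fee would require millions of streams, which is exactly the shrinkage of earning capacity described above.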
Project studios now dominate most recording projects outside the reasonably well-funded major label records, and even much of that major label work is done in project studios (though they might be quite elaborate facilities). Project studios rarely have spots for interns or assistant engineers, so they provide no entry-level positions for those trying to come up in the engineering ranks. Not only does that limit the available sources of income, but it also prevents the kind of mentoring that actually trains young engineers in the fine points of running sessions. And of course, almost no project studios provide regular, dependable work or any kind of benefits.
Networked audio systems provide new, faster, and more elaborate connectivity of audio using digital technology. While there may be opportunities in the tech realm for engineers designing and building digital audio networks, there is, once again, a shrinking of opportunities for those aspiring to make commercial music recordings. In many instances, these networking systems allow fewer people to do more—a boon only to the small number of audio engineers working with music recordings who can now make remote recordings without having to be present, and without having to employ local recording engineers and studios to complete projects with musicians in other locations.
The one bright spot here is Game Sound. The explosive world of video games is providing many good jobs for audio engineers who want to record music. These recordings have become more interesting and higher quality, featuring more prominent and talented composers and musicians than virtually any other area of music production. The only reservation here is that the music is intended to be secondary to the game play (of course), and there is a preponderance of violent video games, and therefore of musical styles that tend to fit a violent atmosphere. However, this is changing, with a much broader array of game types achieving new levels of popularity (Minecraft!).
I do not fault AES for pointing to these areas of interest for audio engineers (other than the apparent absence of mixing and mastering). These are the places where significant activity, development, and change are occurring. They’re just not very encouraging for those of us who became audio engineers because of our deep love of music and our desire to be engaged in its production.
Headline Image: Sound Mixing via CC0 Public Domain via Pixabay
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started up a new type of research publication at Political Analysis, Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief—only 2-4 pages, or roughly 1,500-3,000 words.
Political Analysis is taking this new direction to publish important results that do not fit the longer article format that is currently the standard in the social sciences, but that fit well with the shorter format often used in the sciences to convey important new findings. In this regard the role models for Political Analysis Letters are the similar formats used in top general-interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers will also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles, and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provides us an opportunity to create an outlet for important work in Political Methodology. It also gives us the opportunity to develop a new format that we hope will enhance the quality and speed of the academic debates in the social sciences.
It’s fairly common knowledge that languages, like people, have families. English, for instance, is a member of the Germanic family, with sister languages including Dutch, German, and the Scandinavian languages. Germanic, in turn, is a branch of a larger family, Indo-European, whose other members include the Romance languages (French, Italian, Spanish, and more), Russian, Greek, and Persian.
Being part of a family of course means that you share a common ancestor. For the Romance languages, that mother language is Latin; with the spread and then fall of the Roman empire, Latin split into a number of distinct daughter languages. But what did the Germanic mother language look like? Here there’s a problem, because, although we know that language must have existed, we don’t have any direct record of it.
The earliest Old English written texts date from the 7th century AD, and the earliest Germanic text of any length is a 4th-century translation of the Bible into Gothic, a now-extinct Germanic language. Though impressively old, this text still dates from long after the breakup of the Germanic mother language into its daughters.
How does one go about recovering the features of a language that is dead and gone, and which has left no records of itself in spoken or written form? This is the subject matter of linguistic necromancy – or linguistic reconstruction, as it is more conventionally known.
The enterprise, dubbed “darkest of the dark arts” and “the only means to conjure up the ghosts of vanished centuries” in the epigraph to a chapter of Campbell’s historical linguistics textbook, really got off the ground in the 19th century thanks to the development of a toolkit of techniques known as the comparative method.
Crucial to the comparative method was a revolutionary empirical finding: the regularity of sound change. Though it has wide-reaching implications, the basic finding is simple to grasp. In a nutshell: it’s sounds that change, not words, and when they change, all words which include those sounds are affected.
Let’s take an example. Lots of English words beginning with a p sound have a German counterpart that begins with pf. Here are some of them:
English path: German Pfad
English pepper: German Pfeffer
English pipe: German Pfeife
English pan: German Pfanne
English post: German Pfoste
If the forms of words simply changed at random, these systematic correspondences would be a miraculous coincidence. However, in the light of the regularity of sound change they make perfect sense. Specifically, at some point in the early history of German, the language sounded a lot more like (Old) English. But then the sound p underwent a change to pf at the beginning of words, and all words starting with p were affected.
There’s much more to be said about the regularity of sound change, since it underlies pretty much everything we know about language family groupings. (If you’re interested in finding out more, Guy Deutscher’s book The Unfolding of Language provides an accessible summary.) But for now let’s concentrate on its implications for necromantic purposes, which are immense.
If we want to invoke the words and sounds of a long-dead language like the mother language Proto-Germanic (the ‘proto-’ indicates that the language is reconstructed, rather than directly evidenced in texts), we just need to figure out what changes have happened to the sounds of the daughter languages, and to peel them back one by one like the layers of an onion. Eventually we’ll reach a point where all the daughter languages sound the same; and voilà, we’ve conjured up a proto-language.
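The peeling-back idea can be sketched in miniature. This is a toy model with hypothetical helper functions; the word list uses modern English spellings purely for illustration, not attested historical forms:

```python
# Toy model of a regular sound change: word-initial p -> pf
# (as in the English/German correspondences above).
# Because sound change is regular, it applies to EVERY word
# that begins with the sound, not to individual words at random.

def apply_sound_change(words, old, new):
    """Apply a word-initial sound change across a whole lexicon."""
    return [new + w[len(old):] if w.startswith(old) else w for w in words]

def undo_sound_change(words, old, new):
    """Peel the change back, one layer of the onion."""
    return [old + w[len(new):] if w.startswith(new) else w for w in words]

pre_shift = ["path", "pepper", "pipe", "pan", "post", "land"]
post_shift = apply_sound_change(pre_shift, "p", "pf")
print(post_shift)  # ['pfath', 'pfepper', 'pfipe', 'pfan', 'pfost', 'land']

# Reconstruction reverses the change to recover the earlier stage:
assert undo_sound_change(post_shift, "p", "pf") == pre_shift
```

Real reconstruction juggles many interacting changes across whole sound systems, but the mechanics are the same in spirit: apply a regular change to every word, or undo it to step back toward the proto-language.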
There’s more to living languages than just sounds and words though. Living languages have syntax: a structure, a skeleton. By contrast, reconstructed protolanguages tend to look more like ghosts: hauntingly amorphous clouds of words and sounds. There are practical reasons why the reconstruction of proto-syntax has lagged behind. One is simply that our understanding of syntax, in general, has come a long way since the work of the reconstruction pioneers in the 19th century.
Another is that syntax shows nothing quite like the regularity of sound change: how can we tell which syntactic structures correspond to each other across languages? These problems have led some to be sceptical about the possibility of syntactic reconstruction, or at any rate about its fruitfulness. Nevertheless, progress is being made. To take one example, English is a language that doesn’t like to leave out the subject of a sentence. We say “He speaks Swahili” or “It is raining”, not “Speaks Swahili” or “Is raining”. Though most of the modern Germanic languages behave the same, many other languages, like Italian and Japanese, have no such requirement; speakers can include or omit the subject of the sentence as the fancy takes them. Was Proto-Germanic like English, or like Italian or Japanese, in this respect? Doing a bit of necromancy based on the earliest Germanic written records suggests that Proto-Germanic was, like the latter, quite happy to omit the subject, at least under certain circumstances. Of course the issue is more complex than that – Italian and Japanese themselves differ with regard to the circumstances under which subjects can be omitted.
Slowly but surely, though, historical linguists are starting to add skeletons to the reanimated spectres of proto-languages.
Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.
From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.
However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).
Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.
And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.
Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.
American higher education is at a crossroads. The cost of a college education has made people question the benefits of receiving one. To better understand the issues surrounding the supposed crisis, we asked Goldie Blumenstyk, author of American Higher Education in Crisis: What Everyone Needs to Know, to comment on some of today’s most hot-button topics.
A discussion on the rising cost of higher education.
What does the future of higher education look like?
Are the salaries of university presidents and coaches too high?
A look into the accountability movement in higher education today.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelop genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent in this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure their scientific accuracy, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach where bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically, or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
Now that Noughth Week has come to an end and the university Full Term is upon us, I thought it might be an appropriate time to investigate the arcane world of Oxford jargon — the University of Oxford, that is. New students, or freshers, do not arrive in Oxford but come up; at the end of term they go down (irrespective of where they live). If they misbehave they may find themselves being sent down by the proctors (a variant of the legal procurator), or — for less heinous crimes — merely rusticated, a form of suspension which, etymologically at least, involves being sent to the countryside (Latin rusticus). The formal beginning of a degree is known as matriculation, a ceremony held in the Sheldonian Theatre, in which membership of the university is conferred by having one’s name entered on the register, or matricula.
Tutors, fellows, and readers
Being a student of the university involves membership of one of the colleges or private halls; despite their names, St Edmund (Teddy) Hall and Lady Margaret Hall are actually colleges; Regent’s Park College is neither a college nor a park. Christ Church should be referred to simply as Christ Church, rather than Christ Church College, although it is also known as ‘the House’. Magdalen is pronounced ‘maudlin’ and should never be confused with another college of the same name at Cambridge University (affectionately known as ‘The Other Place’, originally a euphemism for hell), which is pronounced the same but spelled Magdalene.
Each college has a head of house, referred to by a variety of terms: Principal, President, Dean, Master, Provost, Rector, or Warden. Teaching in college takes the form of tutorials (or tutes), overseen by college tutors (from a Latin word for ‘protector’); the earliest tutors were responsible for a student’s general welfare — a post now known as moral tutor. Colleges are governed by a body of fellows (students at Christ Church), or dons, from Latin dominus ‘master’. The title reader, a medieval term for a teacher used to refer to a lecturer below the rank of professor, has recently been retired at Oxford in favour of the American title associate professor.
Mods and battels
At Oxford, students read rather than study a subject, a usage which goes back to the Middle Ages. Final examinations were originally known as Greats; this term is now used only of the degree of Literae Humaniores (‘more humane letters’) — Classics to everyone else. No longer in use is the equivalent term Smalls for the first year exams; these are now known as Moderations (or Mods) in the Humanities, or Preliminaries (or Prelims) in the Sciences. Sadly, the slang equivalents great go and little go have now fallen out of use. University examinations are sat in Schools, a forbidding edifice on the High Street (or ‘the High’) which gets its name from its original use for holding scholastic disputations. Students are required to wear formal academic dress to sit exams; this is known as subfusc, from Latin subfuscus ‘somewhat dark’.
College exams, rather less formal affairs, are known today as collections, from Latin collectiones, ‘gathering together’, so-called because they occurred at the end of term when fees were due for collection. Confusingly, the term collection is also used to refer to the end-of-term meeting where a progress report is read by a student’s tutor in the presence of the master of the college. As well as fees, students must pay their battels, a bill for food purchased from the College buttery — originally a wine store, from Latin butta ‘cask’, but now extended to include a range of student delicacies.
Lecturers dusting off their notes and preparing for the new term, for whom such usages are second nature, may benefit from the salutary lesson of the wall-lecture — a term coined by their 17th-century forebears for a lecture delivered to an empty room. The term may be obsolete, but the prospect remains all too real.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhist, Jehovah Witness, Mormon, and Baha’i. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self- determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and to how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature: what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh cut grass, and force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress, improving mood, and increasing attentional capacity to facilitating and creating social bonds. A variety of research supports nature as healing and health-promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing, it’s experiencing it fully and being immersed into a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and help me feel that same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up inside, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyle, we cannot always afford the time and money to step away. Integrating nature into our indoor environment is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health-promoting and healing properties of nature, we can instill in individuals an appreciation of the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. I had the feeling that a revolution was ongoing in epidemiology many times. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations and companies, millions of people.
One example: at present, in clinical and epidemiological research, every week “paradoxes” are being deconstructed. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps the simplest way to explain why observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
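Collider bias of this kind is easy to demonstrate numerically. The sketch below is purely illustrative (it is not from Hernán and Robins, and the variable names and effect sizes are my own simplifying assumptions): obesity and some other risk factor independently cause T2D, only the other risk factor affects mortality, and yet restricting the analysis to diabetics manufactures a protective-looking association between obesity and mortality.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent causes of type 2 diabetes (T2D):
obesity = rng.normal(size=n)       # standardized obesity score (illustrative)
other_risk = rng.normal(size=n)    # e.g. an unmeasured frailty/genetic factor

# T2D is the *collider*: it is caused by both obesity and the other risk.
t2d = (obesity + other_risk + rng.normal(size=n)) > 1.5

# Mortality risk here depends ONLY on the other risk factor, not on obesity.
mortality = other_risk + rng.normal(size=n)

# In the whole population, obesity and mortality are (correctly) uncorrelated...
r_all = np.corrcoef(obesity, mortality)[0, 1]

# ...but conditioning on the collider (analyzing diabetics only) induces a
# spurious NEGATIVE association: obese diabetics appear to fare better.
r_t2d = np.corrcoef(obesity[t2d], mortality[t2d])[0, 1]

print(f"corr(obesity, mortality), everyone:  {r_all:+.3f}")
print(f"corr(obesity, mortality), T2D only:  {r_t2d:+.3f}")
```

The intuition: among diabetics, someone who is not obese must be more likely to carry the other risk factor (something had to cause the disease), so within that stratum obesity becomes a marker of lower background risk even though it causes nothing protective.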
The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much more strict than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main emergence took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
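A DAG itself is just a set of directed edges, so the structural rules can be made concrete with very little machinery. The fragment below is a toy sketch (the node names are my own, matching the obesity-paradox example, and this is nowhere near a full d-separation implementation): it encodes the diagram as an edge set and checks the defining rule for a collider, namely that both arrows on a path point into the same node.

```python
# Toy DAG for the obesity paradox, encoded as a set of directed edges
# (hypothetical node names; a real analysis would use richer tooling).
edges = {
    ("obesity", "t2d"),        # obesity       -> T2D
    ("other_risk", "t2d"),     # other risk    -> T2D
    ("other_risk", "mortality")  # other risk  -> mortality
}

def is_collider(a, b, c, edges):
    """On a path a - b - c, node b is a collider iff both arrows point INTO b.
    Such a path is blocked by default and opened by conditioning on b."""
    return (a, b) in edges and (c, b) in edges

# The path obesity -> t2d <- other_risk -> mortality collides at t2d:
print(is_collider("obesity", "t2d", "other_risk", edges))       # True
# By contrast, other_risk is a chain node on t2d <- other_risk -> mortality? No:
# it is a common cause (fork), not a collider:
print(is_collider("t2d", "other_risk", "mortality", edges))     # False
```

Restricting a study to diabetics is exactly “conditioning on t2d,” which opens the otherwise blocked path and lets association flow between obesity and mortality.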
We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).
Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and T. VanderWeele’s forthcoming Explanation in Causal Inference: Methods for Mediation and Interaction (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?
Last weekend we were thrilled to see so many of you at the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” The panels and roundtables were full of lively discussions, and the social gatherings provided a great chance to meet fellow oral historians. You can read a recap from Margo Shea, or browse through the Storify below, prepared by Jaycie Vos, to get a sense of the excitement at the meeting. Over the next few weeks, we’ll be sharing some more in depth blog posts from the meeting, so make sure to check back often.
We’re getting ready for Halloween this month by reading the classic horror stories that set the stage for the creepy movies and books we love today. Check in every Friday this October as we tell Fitz-James O’Brien’s tale of an unusual entity in What Was It?, a story from the spine-tingling collection of works in Horror Stories: Classic Tales from Hoffmann to Hodgson, edited by Darryl Jones. Last we left off, the narrator was headed to bed after a night of opium and philosophical conversation with Dr. Hammond, a friend and fellow boarder at the supposedly haunted house where they are staying.
We parted, and each sought his respective chamber. I undressed quickly and got into bed, taking with me, according to my usual custom, a book, over which I generally read myself to sleep. I opened the volume as soon as I had laid my head upon the pillow, and instantly flung it to the other side of the room. It was Goudon’s ‘History of Monsters,’—a curious French work, which I had lately imported from Paris, but which, in the state of mind I had then reached, was anything but an agreeable companion. I resolved to go to sleep at once; so, turning down my gas until nothing but a little blue point of light glimmered on the top of the tube, I composed myself to rest.
The room was in total darkness. The atom of gas that still remained alight did not illuminate a distance of three inches round the burner. I desperately drew my arm across my eyes, as if to shut out even the darkness, and tried to think of nothing. It was in vain. The confounded themes touched on by Hammond in the garden kept obtruding themselves on my brain. I battled against them. I erected ramparts of would-be blankness of intellect to keep them out. They still crowded upon me. While I was lying still as a corpse, hoping that by a perfect physical inaction I should hasten mental repose, an awful incident occurred. A Something dropped, as it seemed, from the ceiling, plumb upon my chest, and the next instant I felt two bony hands encircling my throat, endeavoring to choke me.
I am no coward, and am possessed of considerable physical strength. The suddenness of the attack, instead of stunning me, strung every nerve to its highest tension. My body acted from instinct, before my brain had time to realize the terrors of my position. In an instant I wound two muscular arms around the creature, and squeezed it, with all the strength of despair, against my chest. In a few seconds the bony hands that had fastened on my throat loosened their hold, and I was free to breathe once more. Then commenced a struggle of awful intensity. Immersed in the most profound darkness, totally ignorant of the nature of the Thing by which I was so suddenly attacked, finding my grasp slipping every moment, by reason, it seemed to me, of the entire nakedness of my assailant, bitten with sharp teeth in the shoulder, neck, and chest, having every moment to protect my throat against a pair of sinewy, agile hands, which my utmost efforts could not confine,—these were a combination of circumstances to combat which required all the strength, skill, and courage that I possessed.
At last, after a silent, deadly, exhausting struggle, I got my assailant under by a series of incredible efforts of strength. Once pinned, with my knee on what I made out to be its chest, I knew that I was victor. I rested for a moment to breathe. I heard the creature beneath me panting in the darkness, and felt the violent throbbing of a heart. It was apparently as exhausted as I was; that was one comfort. At this moment I remembered that I usually placed under my pillow, before going to bed, a large yellow silk pocket-handkerchief. I felt for it instantly; it was there. In a few seconds more I had, after a fashion, pinioned the creature’s arms.
I now felt tolerably secure. There was nothing more to be done but to turn on the gas, and, having first seen what my midnight assailant was like, arouse the household. I will confess to being actuated by a certain pride in not giving the alarm before; I wished to make the capture alone and unaided.
Never losing my hold for an instant, I slipped from the bed to the floor, dragging my captive with me. I had but a few steps to make to reach the gas-burner; these I made with the greatest caution, holding the creature in a grip like a vice. At last I got within arm’s-length of the tiny speck of blue light which told me where the gas-burner lay. Quick as lightning I released my grasp with one hand and let on the full flood of light. Then I turned to look at my captive.
I cannot even attempt to give any definition of my sensations the instant after I turned on the gas. I suppose I must have shrieked with terror, for in less than a minute afterward my room was crowded with the inmates of the house. I shudder now as I think of that awful moment. I saw nothing! Yes; I had one arm firmly clasped round a breathing, panting, corporeal shape, my other hand gripped with all its strength a throat as warm, and apparently fleshly, as my own; and yet, with this living substance in my grasp, with its body pressed against my own, and all in the bright glare of a large jet of gas, I absolutely beheld nothing! Not even an outline,—a vapor!
I do not, even at this hour, realize the situation in which I found myself. I cannot recall the astounding incident thoroughly. Imagination in vain tries to compass the awful paradox.
It breathed. I felt its warm breath upon my cheek. It struggled fiercely. It had hands. They clutched me. Its skin was smooth, like my own. There it lay, pressed close up against me, solid as stone,—and yet utterly invisible!
I wonder that I did not faint or go mad on the instant. Some wonderful instinct must have sustained me; for, absolutely, in place of loosening my hold on the terrible Enigma, I seemed to gain an additional strength in my moment of horror, and tightened my grasp with such wonderful force that I felt the creature shivering with agony.
Just then Hammond entered my room at the head of the household. As soon as he beheld my face—which, I suppose, must have been an awful sight to look at—he hastened forward, crying, ‘Great heaven, Harry! what has happened?’
‘Hammond! Hammond!’ I cried, ‘come here. O, this is awful! I have been attacked in bed by something or other, which I have hold of; but I can’t see it,—I can’t see it!’
Hammond, doubtless struck by the unfeigned horror expressed in my countenance, made one or two steps forward with an anxious yet puzzled expression. A very audible titter burst from the remainder of my visitors. This suppressed laughter made me furious. To laugh at a human being in my position! It was the worst species of cruelty. Now, I can understand why the appearance of a man struggling violently, as it would seem, with an airy nothing, and calling for assistance against a vision, should have appeared ludicrous. Then, so great was my rage against the mocking crowd that had I the power I would have stricken them dead where they stood.
‘Hammond! Hammond!’ I cried again, despairingly, ‘for God’s sake come to me. I can hold the—the thing but a short while longer. It is overpowering me. Help me! Help me!’
‘Harry,’ whispered Hammond, approaching me, ‘you have been smoking too much opium.’
‘I swear to you, Hammond, that this is no vision,’ I answered, in the same low tone. ‘Don’t you see how it shakes my whole frame with its struggles? If you don’t believe me, convince yourself. Feel it,—touch it.’
Hammond advanced and laid his hand in the spot I indicated. A wild cry of horror burst from him. He had felt it! In a moment he had discovered somewhere in my room a long piece of cord, and was the next instant winding it and knotting it about the body of the unseen being that I clasped in my arms.
‘Harry,’ he said, in a hoarse, agitated voice, for, though he preserved his presence of mind, he was deeply moved, ‘Harry, it’s all safe now. You may let go, old fellow, if you’re tired. The Thing can’t move.’
I was utterly exhausted, and I gladly loosed my hold.
The outbreak of Ebola, in Africa and in the United States, is a stark reminder of the clear and present danger that infection represents in all our lives, and we need reminding. Despite all of our medical advances, more familiar infections still take tens of thousands of American lives each year – and too often these deaths are avoidable.
Hospital infections kill 75,000 Americans a year — more than twice the number of people who die in car crashes. Most people know that motor vehicle deaths could be drastically reduced. What’s not as widely appreciated is that the far greater number of hospital infections could be reduced by up to 70%.
Changes that would reduce infections are evidence-based and scientific, supported by the Centers for Disease Control and Prevention. For example, the campaign against hospital-acquired urinary tract infection — one of the most common hospital infections in the world — seeks to minimize the use of internal, Foley catheters, a major vector of infection. Nurses who have always relied on Foleys to deal with patients who have urinary incontinence are told to use straight catheters intermittently instead, which increases their workload. Surgeons who are accustomed to placing Foley catheters in their patients for several days after an operation are told to remove the catheter shortly after surgery – or not to use one at all. Similar approaches can be used to reduce other common infections. If we know what needs to be done to lower the rate of hospital infections, why have the many attempts to do so fallen so woefully short?
Our research shows that a major reason is the unwillingness of some nurses and physicians to support the desired new behaviors. We have found that opposition to hospitals’ infection prevention initiatives comes from the three groups we call Active Resisters, Organizational Constipators, and Timeservers. While we know these types of individuals exist in hospitals since we have seen them in action, we suspect they can also be found in all types of organizations.
Active resisters refuse to abide by and sometimes campaign against an initiative’s proposed changes. Some active resisters refuse to change a practice they have used for years because they fear it might have a negative impact on their patients’ health. Others resist because they doubt the scientific validity of a change, or because the change is inconvenient. For others it’s simply a matter of ego, as in, “Don’t tell me what to do.” Some ignore the evidence. Many initiatives to prevent urinary tract infection ask nurses to remind physicians when it’s time to remove an indwelling catheter, but many nurses are unwilling to confront physicians – and many physicians are unwilling to be so confronted.
Organizational constipators present a different set of challenges. Most are mid- to upper-level staff members who have nothing against an infection prevention initiative per se but simply enjoy exercising their power. Sometimes they refuse to permit underlings to help with an initiative. Sometimes they simply do nothing, allowing memos and emails to pile up without taking action. While we have met some physicians in this category, we have seen, unfortunately, a surprising number of nursing leaders employ this approach.
Timeservers do the least possible in any circumstance. That applies to every aspect of their work, including preventing infection. A timeserver surgeon may neglect to wash her hands before examining a patient, not because she opposes that key infection prevention requirement but because it’s just easier that way. A timeserver nurse may “forget” to conduct “sedation vacations,” which pause sedation for patients on mechanical ventilators to assess whether they can be weaned from the ventilator sooner, for the simple reason that sedated patients are less work.
We have learned that overcoming these human-related barriers to improvement requires different styles of engagement.
To win support among the active resisters, we recommend employing data both liberally and strategically. Doctors are trained to respond to facts, and a graph that shows a high rate of infection in their department can help sway them. Sharing research from respected journals describing proven methods of preventing infection can also help overcome concerns. Nurse resisters are similarly impressed by such data, but we find that they are also likely to be convinced by appeals to their concern for their patients’ welfare – a description, for example, of the discomfort the Foley causes their patients.
Organizational constipators and timeservers are more difficult to win over, largely because their negative behavior is an incidental result of their normal operating style. Managers sometimes try to work around the organizational constipators and assign an authority figure to harass the timeservers, but their success is limited. Firing them outright can be difficult.
Hospitals’ administrative and medical leaders often play an important role in successful infection prevention initiatives by emphasizing their approval in their staff encounters, by occasionally attending an infection prevention planning session, and by making adherence to the goals of the initiative a factor in employee performance reviews. Some innovative leaders also give out physician or nurse champion-of-the-year awards that serve the dual purpose of rewarding the healthcare workers who have been helpful in a successful initiative while encouraging others by showing that they, too, could someday receive similar recognition. It may help to include potential obstructors in planning for an infection prevention campaign; the critics help spot weaknesses and are also inclined to go easy on the campaign once it gets underway.
But the leadership of a successful infection prevention project can also come from lower down in a hospital’s hierarchy, with or without the active support of the senior executives. We found that the key to a positive result is a culture of excellence, in which the hospital staff is fully devoted to patient-centered, high-quality care. Healthcare workers in such hospitals endeavor to treat each patient as a family member. In such institutions, a dedicated nurse can ignite an infection prevention initiative, and the staff’s all-but-universal commitment to patient safety can win over even the timeservers. The closer the nation’s hospitals approach that state of grace, the greater the success they will have in their efforts to lower infection rates.
Preventing infection is a team sport. Cooperation — among doctors, nurses, microbiologists, public health officials, patients, and families — will be required to control the spread of Ebola. Such cooperation is required to prevent more mundane infections as well.
Anti-politics is in the air. There is a prevalent feeling in many societies that politicians are up to no good, that establishment politics are at best irrelevant and at worst corrupt and power-hungry, and that the centralization of power in national parliaments and governments denies the public a voice. Larger organizations fare even worse, with the European Union’s ostensible detachment from and imperviousness to the real concerns of its citizens now its most-trumpeted feature. Discontent and anxiety build up pressure that erupts in the streets from time to time, whether in Tahrir Square or Tottenham. The Scots rail against a mysterious entity called Westminster; UKIP rides on the crest of what it terms patriotism (and others term typical European populism), intimating, as Matthew Goodwin has pointed out in the Guardian, that Nigel Farage “will lead his followers through a chain of events that will determine the destiny of his modern revolt against Westminster.”
At the height of the media interest in Wootton Bassett, when the frequent corteges of British soldiers who were killed in Afghanistan wended their way through the high street while the townspeople stood in silence, its organizers claimed that it was a spontaneous and apolitical display of respect. “There are no politics here,” stated the local MP. Those involved held that the national stratum of politicians was superfluous to the authentic feeling of solidarity that could solely be generated at the grass roots. A clear resistance emerged to national politics trying to monopolize the mourning that only a town at England’s heart could convey.
Academics have been drawn into the same phenomenon. A new Anti-politics and Depoliticization Specialist Group has been set up by the Political Studies Association in the UK dedicated, as it describes itself, to “providing a forum for researchers examining those processes throughout society that seem to have marginalized normative political debates, taken power away from elected politicians and fostered an air of disengagement, disaffection and disinterest in politics.” The term “politics” and what it apparently stands for is undoubtedly suffering from a serious reputational problem.
But all that is based on a misunderstanding of politics. Political activity and thinking isn’t something that happens in remote places and institutions outside the experience of everyday life. It is ubiquitous, rooted in human intercourse at every level. It is not merely an elite activity but one that every one of us engages in consciously or unconsciously in our relations with others: commanding, pleading, negotiating, arguing, agreeing, refusing, or resisting. There is a tendency to insist on politics being mainly about one thing: power, dissent, consensus, oppression, rupture, conciliation, decision-making, the public domain, are some of the competing contenders. But politics is about them all, albeit in different combinations.
It concerns ranking group priorities in terms of urgency or importance—whether the group is a family, a sports club or a municipality. It concerns attempts to achieve finality in human affairs, attempts always doomed to fail yet epitomised in language that refers to victory, authority, sovereignty, rights, order, persuasion—whether on winning or losing sides of political struggle. That ranges from a constitutional ruling to the exasperated parent trying to end an argument with a “because I say so.” It concerns order and disorder in human gatherings, whether parliaments, trade union meetings, classrooms, bus queues, or terrorist attacks—all have a political dimension alongside their other aspects. That gives the lie to a demonstration being anti-political, when its ends are reform, revolution or the expression of disillusionment. It concerns devising plans and weaving visions for collectivities. It concerns the multiple languages of support and withholding support that we engage in with reference to others, from loyalty and allegiance through obligation to commitment and trust. And it is manifested through conservative, progressive or reactionary tendencies that the human personality exhibits.
When those involved in the Wootton Bassett corteges claimed to be non-political, they overlooked their organizational role in making certain that every detail of the ceremony was in place. They elided the expression of national loyalty that those homages clearly entailed. They glossed over the tension between political centre and periphery that marked an asymmetry of power and voice. They assumed, without recognizing, the prioritizing of a particular group of the dead – those that fell in battle.
People everywhere engage in political practices, but they do so in different intensities. It makes no more sense to suggest that we are non-political than to suggest that we are non-psychological. Nor does anti-politics ring true, because political disengagement is still a political act: sometimes vociferously so, sometimes seeking shelter in smaller circles of political conduct. Alongside political philosophy and the history of political thought, social scientists need to explore the features of thinking politically as typical and normal features of human life. Those patterns are always with us, though their cultural forms will vary considerably across and within societies. Being anti-establishment, anti-government, anti-sleaze, even anti-state are themselves powerful political statements, never anti-politics.
Headline image credit: Westminster, by “Stròlic Furlàn” – Davide Gabino. CC-BY-SA-2.0 via Flickr.