Voting for the 2014 Atlas Place of the Year is now underway, but you may still be curious about the nominees. What makes them so special? Each year, we put the spotlight on the top locations in the world that make us go, “wow”. For better or for worse, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to know about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December.

What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection, and of categorical uncertainty, is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love”. Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer was to “endeavor[] to trace its imperfections, its perversions.” While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
The theme of this year’s meeting is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combatting corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
This year’s conference highlights include:
This year’s keynote from Lori Damrosch, Hamilton Fish Professor of International Law and Diplomacy, Columbia Law School, and President of the American Society of International Law. “Democratization of Foreign Policy and International Law, 1914-2014” Friday, 1:30PM (Room 2-02A)
Top practitioners in the field discuss “International Investment Arbitration and the Rule of Law”, Friday 4:45PM (Room 2-02A). (Sign up for our Free Investment Claims Webinar on October 20th to brush up on VCLT in BIT arbitrations in time for this panel.)
Looking for career advice? Attend this roundtable discussion on Saturday afternoon “Careers in International Human Rights, International Development, and International Rule of Law,” Saturday, 3:30PM (Room 2-02B)
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine: for example, epilepsy, “the sacred disease” (Hippocrates), or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law and medicine conveyed in the cuneiform script, impressed into clay tablets with reeds, the earliest form of writing which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire cuneiform writing was replaced by Aramaic and simpler alphabetic scripts and was only revived (translated) by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts located in museums in London, Paris, Berlin and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression and anxiety. For example they described most of the common seizure types we know today e.g. tonic clonic, absence, focal motor, etc, as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion) and even a comprehensive account of schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders e.g. stroke or facial palsy had a physical basis requiring the attention of the physician or asû, using a plant and mineral based pharmacology. Most disorders, such as epilepsy, psychoses and depression were regarded as supernatural due to evil demons and spirits, or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias and psychopathic behaviour were viewed as a mystery, yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these otherwise objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts and sadness in depression. Such subjective phenomena only became a field of description and enquiry in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness”, itself the province of a professional medical discipline, psychiatry, for only the last 200 years.
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always chime exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc. and not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its recommended OA requirements for authors. What happened next was hugely significant for the progress of open access. As part of an Act which passed through the US legislature, it was made mandatory for all NIH-funded authors to make their works available 12 months after publication. This was transformative in two ways: it meant thousands of articles published from NIH research became available through PubMed Central (PMC), and perhaps just as importantly it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher – and since its inception in 2000 it was closely watched in the industry to see if it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was now proving to be more than just an interesting side line. It also reflected the big players in the industry starting to take OA very seriously, and has been followed by other acquisitions – for example Nature purchasing Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC they have also massively increased their fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals back to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Typically, top journals have tended to have their niche, and to be selective. A journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review asks only whether the basic science is sound, not whether the findings are novel or important. This enabled PLOS ONE to rapidly become the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA that enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal; the price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals and, more than any one title, it has energised the development of OA, dispelling previously held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report catalysed, by way of two government investigations, a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and to the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt: through the 2014 policy of the Higher Education Funding Council for England (HEFCE), through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers, as it directly ties the assessment of a university’s funding to its success in ensuring its authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
The list was interesting to me on many levels, but one significant one that struck me immediately was the absence of mixing and mastering (my main areas of work in audio). A relatively short time ago almost half of these categories did not exist. There was no streaming, no project studios, no networked audio and no game sound. So what is the state of affairs for the young audio engineering student or practitioner?
Interestingly, of the four new fields mentioned, three of them represent diminished opportunities in the field of music recording, with one a singular beacon of hope.
Streaming audio represents the brave new world of audio delivery systems. As these services continue to capture more of the consumer market share, they continue to diminish artists’ ability to earn a decent living (or pay an accomplished audio engineer). A friend of mine with three CD releases recently got his Spotify statement and saw that he had more than 60,000 streams of his music. His check was for $17. CDs don’t pay as well as vinyl records used to, downloads don’t pay as well as CDs, and streaming doesn’t pay as well as downloads (not to mention “file-sharing”, which doesn’t pay anything). Sure, there may be jobs at Pandora and Spotify for a few engineers helping with the infrastructure of audio streaming, but generally streaming is another brick in the wall restricting audio jobs by shrinking the earning capacity of recording artists.
Project studios now dominate most recording projects outside of reasonably well-funded major-label records, and even most of that work is done in project studios (though they might be quite elaborate facilities). Project studios rarely have spots for interns or assistant engineers, so they provide no entry-level positions for those trying to come up through the engineering ranks. Not only does that limit the available sources of income, but it also prevents the kind of mentoring that actually trains young engineers in the fine points of running sessions. And, of course, almost no project studios provide regular, dependable work or any kind of benefits.
Networked audio systems provide new, faster, and more elaborate connectivity of audio using digital technology. While there may be opportunities in the tech realm for engineers designing and building digital audio networks, there is, once again, a shrinking of opportunities for those aspiring to make commercial music recordings. In many instances, these networking systems allow fewer people to do more: a boon only to the small number of audio engineers working with music recordings who can now make remote recordings without being present, and without having to employ local recording engineers and studios to complete projects with musicians in other locations.
The one bright spot here is game sound. The explosive world of video games is providing many good jobs for audio engineers who want to record music. These recordings have become more interesting and of higher quality, featuring more prominent and talented composers and musicians than virtually any other area of music production. The only reservation here is that the music is intended to be secondary to the game play (of course), and there is a preponderance of violent video games, and therefore of musical styles that fit well into a violent atmosphere. However, this is changing, with a much broader array of game types achieving new levels of popularity (Minecraft!).
I do not fault AES for pointing to these areas of interest for audio engineers (other than the apparent absence of mixing and mastering). These are the places where significant activity, development, and change are occurring. They’re just not very encouraging for those of us who became audio engineers because of our deep love of music and our desire to be engaged in its production.
Headline Image: Sound Mixing via CC0 Public Domain via Pixabay
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started up a new type of research publication at Political Analysis, Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief: only 2-4 pages, or roughly 1,500-3,000 words.
Political Analysis is taking this new direction to publish important results that do not traditionally fit in the longer format of journal articles that are currently the standard in the social sciences, but fit well with the shorter format that is often used in the sciences to convey important new findings. In this regard the role model for the Political Analysis Letters are the similar formats used in top general interest science journals like Science, Nature, or PNAS where significant findings are often reported in short reports and articles. Our hope is that these shorter papers also facilitate an ongoing and faster paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles, and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provides us an opportunity to create an outlet for important work in Political Methodology. It also gives us the opportunity to develop a new format that we hope will enhance the quality and speed of the academic debates in the social sciences.
It’s fairly common knowledge that languages, like people, have families. English, for instance, is a member of the Germanic family, with sister languages including Dutch, German, and the Scandinavian languages. Germanic, in turn, is a branch of a larger family, Indo-European, whose other members include the Romance languages (French, Italian, Spanish, and more), Russian, Greek, and Persian.
Being part of a family of course means that you share a common ancestor. For the Romance languages, that mother language is Latin; with the spread and then fall of the Roman empire, Latin split into a number of distinct daughter languages. But what did the Germanic mother language look like? Here there’s a problem, because, although we know that language must have existed, we don’t have any direct record of it.
The earliest Old English written texts date from the 7th century AD, and the earliest Germanic text of any length is a 4th-century translation of the Bible into Gothic, a now-extinct Germanic language. Though impressively old, this text still dates from long after the breakup of the Germanic mother language into its daughters.
How does one go about recovering the features of a language that is dead and gone, and which has left no records of itself in spoken or written form? This is the subject matter of linguistic necromancy – or linguistic reconstruction, as it is more conventionally known.
The enterprise, dubbed “darkest of the dark arts” and “the only means to conjure up the ghosts of vanished centuries” in the epigraph to a chapter of Campbell’s historical linguistics textbook, really got off the ground in the 19th century thanks to the development of a toolkit of techniques known as the comparative method.
Crucial to the comparative method was a revolutionary empirical finding: the regularity of sound change. Though it has wide-reaching implications, the basic finding is simple to grasp. In a nutshell: it’s sounds that change, not words, and when they change, all words which include those sounds are affected.
Let’s take an example. Lots of English words beginning with a p sound have a German counterpart that begins with pf. Here are some of them:
English path: German Pfad
English pepper: German Pfeffer
English pipe: German Pfeife
English pan: German Pfanne
English post: German Pfoste
If the forms of words simply changed at random, these systematic correspondences would be a miraculous coincidence. However, in the light of the regularity of sound change they make perfect sense. Specifically, at some point in the early history of German, the language sounded a lot more like (Old) English. But then the sound p underwent a change to pf at the beginning of words, and all words starting with p were affected.
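The mechanics of a regular sound change can be sketched in a few lines of code. The snippet below is a toy illustration, not a real reconstruction tool (the function name and word list are ours): it applies the word-initial p > pf shift to an English-like word list, showing that every word with the sound in that position is affected, with no exceptions.

```python
def apply_sound_change(word: str) -> str:
    """Apply the High German shift p > pf at the beginning of a word."""
    if word.startswith("p"):
        return "pf" + word[1:]
    return word

# Cognates from the list above; the change applies across the board.
cognates = ["path", "pepper", "pipe", "pan", "post"]
shifted = [apply_sound_change(w) for w in cognates]
# -> ["pfath", "pfepper", "pfipe", "pfan", "pfost"]
```

Because the change is stated over sounds rather than individual words, the systematic English–German correspondences fall out automatically; words without a word-initial p pass through untouched.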
There’s much more to be said about the regularity of sound change, since it underlies pretty much everything we know about language family groupings. (If you’re interested in finding out more, Guy Deutscher’s book The Unfolding of Language provides an accessible summary.) But for now let’s concentrate on its implications for necromantic purposes, which are immense.
If we want to invoke the words and sounds of a long-dead language like the mother language Proto-Germanic (the ‘proto-’ indicates that the language is reconstructed, rather than directly evidenced in texts), we just need to figure out what changes have happened to the sounds of the daughter languages, and to peel them back one by one like the layers of an onion. Eventually we’ll reach a point where all the daughter languages sound the same; and voilà, we’ve conjured up a proto-language.
There’s more to living languages than just sounds and words though. Living languages have syntax: a structure, a skeleton. By contrast, reconstructed proto-languages tend to look more like ghosts: hauntingly amorphous clouds of words and sounds. There are practical reasons why the reconstruction of proto-syntax has lagged behind. One is simply that our understanding of syntax, in general, has come a long way since the work of the reconstruction pioneers in the 19th century.
Another is that there is nothing in syntax quite like the regularity of sound change: how can we tell which syntactic structures correspond to each other across languages? These problems have led some to be sceptical about the possibility of syntactic reconstruction, or at any rate about its fruitfulness. Nevertheless, progress is being made. To take one example, English is a language that doesn’t like to leave out the subject of a sentence. We say “He speaks Swahili” or “It is raining”, not “Speaks Swahili” or “Is raining”. Though most of the modern Germanic languages behave the same, many other languages, like Italian and Japanese, have no such requirement; speakers can include or omit the subject of the sentence as the fancy takes them. Was Proto-Germanic like English, or like Italian and Japanese, in this respect? Doing a bit of necromancy based on the earliest Germanic written records suggests that Proto-Germanic was, like the latter, quite happy to omit the subject, at least under certain circumstances. Of course the issue is more complex than that – Italian and Japanese themselves differ with regard to the circumstances under which subjects can be omitted.
Slowly but surely, though, historical linguists are starting to add skeletons to the reanimated spectres of proto-languages.
Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.
From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.
However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).
Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.
And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.
Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.
American higher education is at a crossroads. The cost of a college education has made people question the benefits of receiving one. To better understand the issues surrounding the supposed crisis, we asked Goldie Blumenstyk, author of American Higher Education in Crisis: What Everyone Needs to Know, to comment on some of today’s most hot-button topics.
A discussion on the rising cost of higher education.
What does the future of higher education look like?
Are the salaries of university presidents and coaches too high?
A look into the accountability movement in higher education today.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelop genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach, where bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically, or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
Now that Noughth Week has come to an end and the university Full Term is upon us, I thought it might be an appropriate time to investigate the arcane world of Oxford jargon — the University of Oxford, that is. New students, or freshers, do not arrive in Oxford but come up; at the end of term they go down (irrespective of where they live). If they misbehave they may find themselves being sent down by the proctors (a variant of the legal procurator), or — for less heinous crimes — merely rusticated, a form of suspension which, etymologically at least, involves being sent to the countryside (Latin rusticus). The formal beginning of a degree is known as matriculation, a ceremony held in the Sheldonian Theatre, in which membership of the university is conferred by having one’s name entered on the register, or matricula.
Tutors, fellows, and readers
Being a student of the university involves membership of one of the colleges or private halls; despite their names, St Edmund (Teddy) Hall and Lady Margaret Hall are actually colleges; Regent’s Park College is neither a college nor a park. Christ Church should be referred to simply as Christ Church, rather than Christ Church College, although it is also known as ‘the House’. Magdalen is pronounced ‘maudlin’ and should never be confused with another college of the same name at Cambridge University (affectionately known as ‘The Other Place’, originally a euphemism for hell), which is pronounced the same but spelled Magdalene.
Each college has a head of house, referred to by a variety of terms: Principal, President, Dean, Master, Provost, Rector, or Warden. Teaching in college takes the form of tutorials (or tutes), overseen by college tutors (from a Latin word for ‘protector’); the earliest tutors were responsible for a student’s general welfare — a post now known as moral tutor. Colleges are governed by a body of fellows (students at Christ Church), or dons, from Latin dominus ‘master’. The title reader, a medieval term for a teacher used to refer to a lecturer below the rank of professor, has recently been retired at Oxford in favour of the American title associate professor.
Mods and battels
At Oxford, students read rather than study a subject, a usage which goes back to the Middle Ages. Final examinations were originally known as Greats; this term is now used only of the degree of Literae Humaniores (‘more humane letters’) — Classics to everyone else. No longer in use is the equivalent term Smalls for the first year exams; these are now known as Moderations (or Mods) in the Humanities, or Preliminaries (or Prelims) in the Sciences. Sadly, the slang equivalents great go and little go have now fallen out of use. University examinations are sat in Schools, a forbidding edifice on the High Street (or ‘the High’) which gets its name from its original use for holding scholastic disputations. Students are required to wear formal academic dress to sit exams; this is known as subfusc, from Latin subfuscus ‘somewhat dark’.
College exams, rather less formal affairs, are known today as collections, from Latin collectiones, ‘gathering together’, so-called because they occurred at the end of term when fees were due for collection. Confusingly, the term collection is also used to refer to the end-of-term meeting where a progress report is read by a student’s tutor in the presence of the master of the college. As well as fees, students must pay their battels, a bill for food purchased from the College buttery — originally a wine store, from Latin butta ‘cask’, but now extended to include a range of student delicacies.
Lecturers dusting off their notes and preparing for the new term, for whom such usages are second nature, may benefit from the salutary lesson of the wall-lecture, a term coined by their 17th-century forebears for a lecture delivered to an empty room. The term may be obsolete, but the prospect remains all too real.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
Anti-politics is in the air. There is a prevalent feeling in many societies that politicians are up to no good, that establishment politics are at best irrelevant and at worst corrupt and power-hungry, and that the centralization of power in national parliaments and governments denies the public a voice. Larger organizations fare even worse, with the European Union’s ostensible detachment from and imperviousness to the real concerns of its citizens now its most-trumpeted feature. Discontent and anxiety build up pressure that erupts in the streets from time to time, whether in Tahrir Square or Tottenham. The Scots rail against a mysterious entity called Westminster; UKIP rides on the crest of what it terms patriotism (and others term typical European populism) intimating, as Matthew Goodwin has pointed out in the Guardian, that Nigel Farage “will lead his followers through a chain of events that will determine the destiny of his modern revolt against Westminster.”
At the height of the media interest in Wootton Bassett, when the frequent corteges of British soldiers who were killed in Afghanistan wended their way through the high street while the townspeople stood in silence, its organizers claimed that it was a spontaneous and apolitical display of respect. “There are no politics here,” stated the local MP. Those involved held that the national stratum of politicians was superfluous to the authentic feeling of solidarity that could solely be generated at the grass roots. A clear resistance emerged to national politics trying to monopolize the mourning that only a town at England’s heart could convey.
Academics have been drawn in to the same phenomenon. A new Anti-politics and Depoliticization Specialist Group has been set up by the Political Studies Association in the UK dedicated, as it describes itself, to “providing a forum for researchers examining those processes throughout society that seem to have marginalized normative political debates, taken power away from elected politicians and fostered an air of disengagement, disaffection and disinterest in politics.” The term “politics” and what it apparently stands for is undoubtedly suffering from a serious reputational problem.
But all that is based on a misunderstanding of politics. Political activity and thinking isn’t something that happens in remote places and institutions outside the experience of everyday life. It is ubiquitous, rooted in human intercourse at every level. It is not merely an elite activity but one that every one of us engages in consciously or unconsciously in our relations with others: commanding, pleading, negotiating, arguing, agreeing, refusing, or resisting. There is a tendency to insist on politics being mainly about one thing: power, dissent, consensus, oppression, rupture, conciliation, decision-making, the public domain, are some of the competing contenders. But politics is about them all, albeit in different combinations.
It concerns ranking group priorities in terms of urgency or importance—whether the group is a family, a sports club or a municipality. It concerns attempts to achieve finality in human affairs, attempts always doomed to fail yet epitomised in language that refers to victory, authority, sovereignty, rights, order, persuasion—whether on winning or losing sides of political struggle. That ranges from a constitutional ruling to the exasperated parent trying to end an argument with a “because I say so.” It concerns order and disorder in human gatherings, whether parliaments, trade union meetings, classrooms, bus queues, or terrorist attacks—all have a political dimension alongside their other aspects. That gives the lie to a demonstration being anti-political, when its ends are reform, revolution or the expression of disillusionment. It concerns devising plans and weaving visions for collectivities. It concerns the multiple languages of support and withholding support that we engage in with reference to others, from loyalty and allegiance through obligation to commitment and trust. And it is manifested through conservative, progressive or reactionary tendencies that the human personality exhibits.
When those involved in the Wootton Bassett corteges claimed to be non-political, they overlooked their organizational role in making certain that every detail of the ceremony was in place. They elided the expression of national loyalty that those homages clearly entailed. They glossed over the tension between political centre and periphery that marked an asymmetry of power and voice. They assumed, without recognizing, the prioritizing of a particular group of the dead – those that fell in battle.
People everywhere engage in political practices, but they do so in different intensities. It makes no more sense to suggest that we are non-political than to suggest that we are non-psychological. Nor does anti-politics ring true, because political disengagement is still a political act: sometimes vociferously so, sometimes seeking shelter in smaller circles of political conduct. Alongside political philosophy and the history of political thought, social scientists need to explore the features of thinking politically as typical and normal features of human life. Those patterns are always with us, though their cultural forms will vary considerably across and within societies. Being anti-establishment, anti-government, anti-sleaze, even anti-state are themselves powerful political statements, never anti-politics.
Headline image credit: Westminster, by “Stròlic Furlàn” – Davide Gabino. CC-BY-SA-2.0 via Flickr.
The outbreak of Ebola, in Africa and in the United States, is a stark reminder of the clear and present danger that infection represents in all our lives, and we need reminding. Despite all of our medical advances, more familiar infections still take tens of thousands of American lives each year – and too often these deaths are avoidable.
Hospital infections kill 75,000 Americans a year — more than twice the number of people who die in car crashes. Most people know that motor vehicle deaths could be drastically reduced. What’s not as widely appreciated is that the far greater number of hospital infections could be reduced by up to 70%.
Changes that would reduce infections are evidence-based and scientific, supported by the Centers for Disease Control and Prevention. For example, the campaign against hospital-acquired urinary tract infection — one of the most common hospital infections in the world — seeks to minimize the use of internal, Foley catheters, a major vector of infection. Nurses who have always relied on Foleys to deal with patients who have urinary incontinence are told to use straight catheters intermittently instead, which increases their workload. Surgeons who are accustomed to placing Foley catheters in their patients for several days after an operation are told to remove the catheter shortly after surgery – or not to use one at all. Similar approaches can be used to reduce other common infections. If we know what needs to be done to lower the rate of hospital infections, why have the many attempts to do so fallen so woefully short?
Our research shows that a major reason is the unwillingness of some nurses and physicians to support the desired new behaviors. We have found that opposition to hospitals’ infection prevention initiatives comes from the three groups we call Active Resisters, Organizational Constipators, and Timeservers. While we know these types of individuals exist in hospitals since we have seen them in action, we suspect they can also be found in all types of organizations.
Active resisters refuse to abide by and sometimes campaign against an initiative’s proposed changes. Some active resisters refuse to change a practice they have used for years because they fear it might have a negative impact on their patients’ health. Others resist because they doubt the scientific validity of a change, or because the change is inconvenient. For others it’s simply a matter of ego, as in, “Don’t tell me what to do.” Some ignore the evidence. Many initiatives to prevent urinary tract infection ask nurses to remind physicians when it’s time to remove an indwelling catheter, but many nurses are unwilling to confront physicians – and many physicians are unwilling to be so confronted.
Organizational constipators present a different set of challenges. Most are mid- to upper-level staff members who have nothing against an infection prevention initiative per se but simply enjoy exercising their power. Sometimes they refuse to permit underlings to help with an initiative. Sometimes they simply do nothing, allowing memos and emails to pile up without taking action. While we have met some physicians in this category, we have seen, unfortunately, a surprising number of nursing leaders employ this approach.
Timeservers do the least possible in any circumstance. That applies to every aspect of their work, including preventing infection. A timeserver surgeon may neglect to wash her hands before examining a patient, not because she opposes that key infection prevention requirement but because it’s just easier that way. A timeserver nurse may “forget” to conduct “sedation vacations” for patients who are on mechanical breathing machines to assess if the patient can be weaned from the ventilator sooner for the simple reason that sedated patients are less work.
We have learned that overcoming these human-related barriers to improvement requires different styles of engagement.
To win support among the active resisters, we recommend employing data both liberally and strategically. Doctors are trained to respond to facts, and a graph that shows a high rate of infection in their department can help sway them. Sharing research from respected journals describing proven methods of preventing infection can also help overcome concerns. Nurse resisters are similarly impressed by such data, but we find that they are also likely to be convinced by appeals to their concern for their patients’ welfare – a description, for example, of the discomfort the Foley causes their patients.
Organizational constipators and timeservers are more difficult to win over, largely because their negative behavior is an incidental result of their normal operating style. Managers sometimes try to work around the organizational constipators and assign an authority figure to harass the timeservers, but their success is limited. Efforts to fire them can sometimes be difficult.
Hospitals’ administrative and medical leaders often play an important role in successful infection prevention initiatives by emphasizing their approval in their staff encounters, by occasionally attending an infection prevention planning session, and by making adherence to the goals of the initiative a factor in employee performance reviews. Some innovative leaders also give out physician or nurse champion-of-the-year awards that serve the dual purpose of rewarding the healthcare workers who have been helpful in a successful initiative while encouraging others by showing that they, too, could someday receive similar recognition. It may help to include potential obstructors in planning for an infection prevention campaign; the critics help spot weaknesses and are also inclined to go easy on the campaign once it gets underway.
But the leadership of a successful infection prevention project can also come from lower down in a hospital’s hierarchy, with or without the active support of the senior executives. We found the key to a positive result is a culture of excellence, when the hospital staff is fully devoted to patient-centered, high-quality care. Healthcare workers in such hospitals endeavor to treat each patient as a family member. In such institutions, a dedicated nurse can ignite an infection prevention initiative, and the staff’s all-but-universal commitment to patient safety can win over even the timeservers. The closer the nation’s hospitals approach that state of grace, the greater the success they will have in their efforts to lower infection rates.
Preventing infection is a team sport. Cooperation — among doctors, nurses, microbiologists, public health officials, patients, and families — will be required to control the spread of Ebola. Such cooperation is required to prevent more mundane infections as well.
Last weekend we were thrilled to see so many of you at the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” The panels and roundtables were full of lively discussions, and the social gatherings provided a great chance to meet fellow oral historians. You can read a recap from Margo Shea, or browse through the Storify below, prepared by Jaycie Vos, to get a sense of the excitement at the meeting. Over the next few weeks, we’ll be sharing some more in depth blog posts from the meeting, so make sure to check back often.
We’re getting ready for Halloween this month by reading the classic horror stories that set the stage for the creepy movies and books we love today. Check in every Friday this October as we tell Fitz-James O’Brien’s tale of an unusual entity in What Was It?, a story from the spine-tingling collection of works in Horror Stories: Classic Tales from Hoffmann to Hodgson, edited by Darryl Jones. Last we left off, the narrator was headed to bed after a night of opium and philosophical conversation with Dr. Hammond, a friend and fellow boarder at the supposedly haunted house where they are staying.
We parted, and each sought his respective chamber. I undressed quickly and got into bed, taking with me, according to my usual custom, a book, over which I generally read myself to sleep. I opened the volume as soon as I had laid my head upon the pillow, and instantly flung it to the other side of the room. It was Goudon’s ‘History of Monsters,’—a curious French work, which I had lately imported from Paris, but which, in the state of mind I had then reached, was anything but an agreeable companion. I resolved to go to sleep at once; so, turning down my gas until nothing but a little blue point of light glimmered on the top of the tube, I composed myself to rest.
The room was in total darkness. The atom of gas that still remained alight did not illuminate a distance of three inches round the burner. I desperately drew my arm across my eyes, as if to shut out even the darkness, and tried to think of nothing. It was in vain. The confounded themes touched on by Hammond in the garden kept obtruding themselves on my brain. I battled against them. I erected ramparts of would-be blankness of intellect to keep them out. They still crowded upon me. While I was lying still as a corpse, hoping that by a perfect physical inaction I should hasten mental repose, an awful incident occurred. A Something dropped, as it seemed, from the ceiling, plumb upon my chest, and the next instant I felt two bony hands encircling my throat, endeavoring to choke me.
I am no coward, and am possessed of considerable physical strength. The suddenness of the attack, instead of stunning me, strung every nerve to its highest tension. My body acted from instinct, before my brain had time to realize the terrors of my position. In an instant I wound two muscular arms around the creature, and squeezed it, with all the strength of despair, against my chest. In a few seconds the bony hands that had fastened on my throat loosened their hold, and I was free to breathe once more. Then commenced a struggle of awful intensity. Immersed in the most profound darkness, totally ignorant of the nature of the Thing by which I was so suddenly attacked, finding my grasp slipping every moment, by reason, it seemed to me, of the entire nakedness of my assailant, bitten with sharp teeth in the shoulder, neck, and chest, having every moment to protect my throat against a pair of sinewy, agile hands, which my utmost efforts could not confine,—these were a combination of circumstances to combat which required all the strength, skill, and courage that I possessed.
At last, after a silent, deadly, exhausting struggle, I got my assailant under by a series of incredible efforts of strength. Once pinned, with my knee on what I made out to be its chest, I knew that I was victor. I rested for a moment to breathe. I heard the creature beneath me panting in the darkness, and felt the violent throbbing of a heart. It was apparently as exhausted as I was; that was one comfort. At this moment I remembered that I usually placed under my pillow, before going to bed, a large yellow silk pocket-handkerchief. I felt for it instantly; it was there. In a few seconds more I had, after a fashion, pinioned the creature’s arms.
I now felt tolerably secure. There was nothing more to be done but to turn on the gas, and, having first seen what my midnight assailant was like, arouse the household. I will confess to being actuated by a certain pride in not giving the alarm before; I wished to make the capture alone and unaided.
Never losing my hold for an instant, I slipped from the bed to the floor, dragging my captive with me. I had but a few steps to make to reach the gas-burner; these I made with the greatest caution, holding the creature in a grip like a vice. At last I got within arm’s-length of the tiny speck of blue light which told me where the gas-burner lay. Quick as lightning I released my grasp with one hand and let on the full flood of light. Then I turned to look at my captive.
I cannot even attempt to give any definition of my sensations the instant after I turned on the gas. I suppose I must have shrieked with terror, for in less than a minute afterward my room was crowded with the inmates of the house. I shudder now as I think of that awful moment. I saw nothing! Yes; I had one arm firmly clasped round a breathing, panting, corporeal shape, my other hand gripped with all its strength a throat as warm, and apparently fleshly, as my own; and yet, with this living substance in my grasp, with its body pressed against my own, and all in the bright glare of a large jet of gas, I absolutely beheld nothing! Not even an outline,—a vapor!
I do not, even at this hour, realize the situation in which I found myself. I cannot recall the astounding incident thoroughly. Imagination in vain tries to compass the awful paradox.
It breathed. I felt its warm breath upon my cheek. It struggled fiercely. It had hands. They clutched me. Its skin was smooth, like my own. There it lay, pressed close up against me, solid as stone,—and yet utterly invisible!
I wonder that I did not faint or go mad on the instant. Some wonderful instinct must have sustained me; for, absolutely, in place of loosening my hold on the terrible Enigma, I seemed to gain an additional strength in my moment of horror, and tightened my grasp with such wonderful force that I felt the creature shivering with agony.
Just then Hammond entered my room at the head of the household. As soon as he beheld my face—which, I suppose, must have been an awful sight to look at—he hastened forward, crying, ‘Great heaven, Harry! what has happened?’
‘Hammond! Hammond!’ I cried, ‘come here. O, this is awful!
I have been attacked in bed by something or other, which I have hold of; but I can’t see it,—I can’t see it!’
Hammond, doubtless struck by the unfeigned horror expressed in my countenance, made one or two steps forward with an anxious yet puzzled expression. A very audible titter burst from the remainder of my visitors. This suppressed laughter made me furious. To laugh at a human being in my position! It was the worst species of cruelty. Now, I can understand why the appearance of a man struggling violently, as it would seem, with an airy nothing, and calling for assistance against a vision, should have appeared ludicrous. Then, so great was my rage against the mocking crowd that had I the power I would have stricken them dead where they stood.
‘Hammond! Hammond!’ I cried again, despairingly, ‘for God’s sake come to me. I can hold the—the thing but a short while longer. It is overpowering me. Help me! Help me!’
‘Harry,’ whispered Hammond, approaching me, ‘you have been smoking too much opium.’
‘I swear to you, Hammond, that this is no vision,’ I answered, in the same low tone. ‘Don’t you see how it shakes my whole frame with its struggles? If you don’t believe me, convince yourself. Feel it,— touch it.’
Hammond advanced and laid his hand in the spot I indicated. A wild cry of horror burst from him. He had felt it! In a moment he had discovered somewhere in my room a long piece of cord, and was the next instant winding it and knotting it about the body of the unseen being that I clasped in my arms.
‘Harry,’ he said, in a hoarse, agitated voice, for, though he preserved his presence of mind, he was deeply moved, ‘Harry, it’s all safe now. You may let go, old fellow, if you’re tired. The Thing can’t move.’
I was utterly exhausted, and I gladly loosed my hold.
Check back next Friday, 24 October to find out what happens next. Missed a part of the story? Catch up with part 1 and part 2.
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature: what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh cut grass, and force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress and improving mood to increasing attentional capacity and facilitating social bonds. A variety of research supports nature as healing and health-promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing, it’s experiencing it fully and being immersed in a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and helps me feel that same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up inside, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyle, we cannot always afford the time and money to step away. Integrating nature into our indoor environment is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health-promoting and healing properties of nature, we can instill in individuals an appreciation of the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
Color names have been investigated in almost overwhelming detail, but it is not the etymology but usage that tends to “throw us off the scent.” One can have no quarrel with the statement that different communities will use a certain term differently, for the basis of comparison may be different (so Francis A. Wood, a great specialist in historical semantics). Wood cited the case of “smeared.” Some people associate “smeared” with “dirty” (hence “brown; black”), while others with “oily” (hence “shiny” and even “bright; yellow; white”). It is harder to agree that “in primitive times colors were not carefully distinguished,” because we don’t know what “primitive times” means. The centuries of Classical Greek, Old English, or some remote epoch from which we have no documents and about whose language habits we can judge only from those of modern “primitive peoples” studied by missionaries and anthropologists? Also, how “careful” should one be in distinguishing colors? The idea that some general notion like “smeared” can diverge and yield opposite meanings is fully acceptable. We are in trouble when a word displays seemingly incompatible meanings in the same language or in closely related languages.
Metaphors do not confuse us, and therefore we accept the idiom green years. We can also let greenhorns and our acquaintances who are still green behind the ears enjoy their youthful inexperience. Perhaps green in green cheese, the moon’s main ingredient in folklore, does mean “fresh,” as I have read, but I still feel some discomfort when an Icelandic saga mentions green meat, green fish, and green butter. In the sagas, green also means “safe, excellent” (and green roads in Old Germanic referred to good roads devoid of danger), so perhaps not fresh (unsalted?) meat, fish, and butter are meant but products of exceptional quality, something one can eat without fearing for one’s health?
Red yolk, occurring in Old Icelandic, also amazes me (in English, yolk has the root of yellow), and so does red gold, a collocation used in epic poetry all over Europe. Does red mean “scintillating” here, or do we not know something about ancient minting? And how did red gold become a formula in several traditions? Some such phrases have been explained, but the explanations do not always sound fully convincing. In dealing with color names one cannot be too careful. Etymology is of little help here. For example, green has the same root as grow (thus, green is the color of vegetation) and cats have green eyes; yet we still don’t quite understand why jealousy, if we can trust Shakespeare, is a green-eyed monster. Likewise, red is, from an etymological point of view, the color of ore (as follows from Russian ruda “ore”; stress on the second syllable), but coins were not made from ore.
Brown is no less opaque than green or red. Older scholars traced brown to the root of burn (Old Engl. brinnan ~ birnan, Gothic brinnan, and so forth). Allegedly, that is why brown can refer to both dark and bright shades. But brown and burn are hardly related, and, even if they were, those who spoke Old English and Old Icelandic would not have been aware of the ancient root. As mentioned in Part 1 of this essay, brown horses or possibly shields of Germanic speakers seem to have impressed the Romance world so strongly that the word for “brown” made its way into the speech of the French, Italians, and others. In the Germanic languages, shields and occasionally helmets and swords were called brown (= “shining”). This sense returned from Romance to English, which has burnish from French and the verb to brown; both mean “to polish.” In some parts of the German-speaking world (predominantly in the south), braun “brown” means “violet”; Luther used it in this sense. In medieval German literature, compounds turned up that can be glossed as “scarlet-brown” and “black-brown.” Their second components must have emphasized their sheen.
In the past, several distinguished language historians thought, and some of their followers still think, that brown “shining” and brown “violet” are homonyms, both etymologically distinct from brun (long u, as in Engl. woo) “brown.” Fortunately, there has been no agreement among them, and this explanation has not become dogma, but the idea that braun “violet” owes its existence to Latin prunum “plum” (hence Engl. prune) has gained wide acceptance. For example, it was endorsed by Elmar Seebold, the latest editor of Kluge’s German etymological dictionary, a deservedly authoritative source. According to the rule known as Occam’s razor, entities should not be multiplied (with regard to etymology, I discussed it briefly in the post on qualm). Jacob Grimm suggested that, in dealing with ancient homonyms, it is advisable to treat them as going back to the same root. Given the baffling variety of senses the main color names typically show, it is perhaps more prudent to stay with one basic word that branched off in many unpredictable ways.
What else has been recorded as brown? If the color brown had magical connotations, Germanic shields, swords, and horses may have inspired awe and fear rather than admiration. In the broad Slavic-Iranian belt, brown was a common epithet of stallions and deities. There it was obviously not borrowed from Germanic. In German baroque literature, the phrase braune Nacht “brown night” appeared, and poets began to speak about the brown shadows of night. This usage has been explained as a loan from Romance. Even if so, today we don’t think of night or shadows as brown (compare Byron’s clear obscure, an English version of Italian chiaroscuro).
During the Renaissance, brown competed with black as the color of mourning, especially with reference to mourning women. It suggested merging with the background, being somber, unattractive, inconspicuous. We note with surprise how many Ancient Greek names began with Phryn- “brown” (Phryniskos, Phrynion, and the like). They remind one of Jude the Obscure. Didn’t they originally refer to the insignificance or low status of the bearers? In Part 1, I wrote that the family name Brown ~ Braune needs an explanation but was reminded of Black, White, and Green. Black and White can also be accounted for in several ways. In the population of blonds, would “white” have become a distinguishing feature? To my mind, brown as an allusion to the color of the person’s hair does not look persuasive. How many Greeks had brown hair? If their rarity is the origin of the moniker, then what was so special about Germanic speakers with chestnut-colored hair?
Perhaps an especially revealing phrase is Dante’s sangue bruno “brown blood,” said about gore, that is, blood shed and clotted or simply clotted. English speakers had the word dreor “gore, flowing blood.” It is still alive as the root of the adjective dreary, originally “bloody, gory, grievous, sorrowful,” later “dismal, gloomy.” Homer called blood porphyros “purple” (or “crimson”?), but he also used this adjective when he described descending death. These bridges between “brown” and “red” will perhaps allow us to understand the strange predilection for brown waves (as in Beowulf), wine-colored sea (as in Homer), and the colors of the planet Saturn, which was called by the ancients black, brownish, and fiery. One thing can already be said now: in the history of the Indo-European languages, “brown” designated both a dark and a bright color. Our modern gloss “brown” does it less than full justice.
Question: Can anyone say why Hitler’s SA adopted brown shirts as its uniform? Did the color have any symbolic value?
To be continued.
Image credits: (1) Moon with an unhealthy greenish coating, modified from Michael K. Fairbanks’s photo. Image by Naive cynic, CC-BY-SA-3.0-MIGRATED; GFDL-WITH-DISCLAIMERS via Wikimedia Commons. (2) Illustration from The Innocence of Father Brown, public domain via Project Gutenberg Australia.
Autumn 2014 marked the tenth anniversary of the publication of the Oxford Dictionary of National Biography. In a series of blog posts, academics, researchers, and editors looked at aspects of the ODNB’s online evolution in the decade since 2004. In this final post of the series, Alex May—ODNB’s editor for the very recent past— considers the Dictionary as a record of contemporary history.
When it was first published in September 2004, the Oxford DNB included biographies of people who had died (all in the ODNB are deceased) on or before 31 December 2001. In the subsequent ten years we have continued to extend the Dictionary’s coverage into the twenty-first century—with regular updates recording those who have died since 2001. Of the 4300 people whose biographies have been added to the online ODNB in this decade, 2172 died between 1 January 2001 and 31 December 2010 (our current terminus)—i.e., about 220 per year of death. While this may sound a lot, the average number of deaths per year over the same period in the UK was just short of 500,000, indicating a roughly one in 2300 chance of entering the ODNB. This does not yet approach the levels of inclusion for people who died in the late nineteenth century, let alone earlier periods: someone dying in England in the first decade of the seventeenth century, for example, had a nearly three times greater chance of being included in the ODNB than someone who died in the first decade of the twenty-first century.
‘Competition’ for spaces at the modern end of the dictionary is therefore fierce. Some subjects are certainties—prime ministers such as Ted Heath or Jim Callaghan, or Nobel prize-winning scientists such as Francis Crick or Max Perutz. There are perhaps fifty or sixty potential subjects a year about whose inclusion no-one would quibble. But there are as many as 1500 people on our lists each year, and for perhaps five or six hundred of them a very good case could be made.
This is where our advisers come in. Over the last ten years we have relied heavily on the help of some 500 people, experts and leading figures in their fields whether as scholars or practitioners, who have given unstintingly of their time and support. Advisers are enjoined to consider all the aspects of notability, including achievement, influence, fame, and notoriety. Of course, their assessments can often vary, particularly in the creative fields, but even in those it is remarkable how often they coincide.
Our advisers have also in most cases been crucial in identifying the right contributor for each new biography, whether he or she be a practitioner from the same field (we often ask politicians to write on politicians—Ted Heath and Jim Callaghan are examples of this—lawyers on lawyers, doctors on doctors, and so on), or a scholar of the particular subject area. Sadly, a number of our advisers and contributors have themselves entered the dictionary in this decade, among them the judge Tom Bingham, the politician Roy Jenkins, the journalist Tony Howard, and the historian Roy Porter.
Just as the selection of subjects is made with an eye to an imaginary reader fifty or a hundred years’ hence (will that reader need or want to find out more about that person?), so the entries themselves are written with such a reader in view. ODNB biographies are not always the last word on a subject, but they are rarely the first. Most of the ‘recently deceased’ added to the Dictionary have received one or more newspaper obituaries. ODNB biographies differ from newspaper obituaries in providing more, and more reliable, biographical information, as well as being written after a period of three to four years’ reflection between death and publication of the entry—allowing information to emerge and reputations to settle. In addition, ODNB lives attempt to provide an understanding of context, and a considered assessment (implicit or explicit) of someone’s significance: in short, they aim to narrate and evaluate a person’s life in the context of the history of modern Britain and the broad sweep of a work of historical reference.
The result, over the last ten years, has been an extraordinary collection of biographies offering insights into all corners of twentieth and early twenty-first century British life, from multiple angles. The subjects themselves have ranged from the soprano Elisabeth Schwarzkopf to the godfather of punk, Malcolm McLaren; the high Tory Norman St John Stevas to the IRA leader Sean MacStiofáin; the campaigner Ludovic Kennedy to the jester Jeremy Beadle; and the turkey farmer Bernard Matthews to Julia Clements, founder of the National Association of Flower Arranging Societies. By birth date they run from the founder of the Royal Ballet, Dame Ninette de Valois (born in 1898, who died in 2001), to the ‘celebrity’ Jade Goody (born in 1981, who died in 2009). Mention of the latter reminds us of Leslie Stephen’s determination to represent the whole of human life in the pages of his original, Victorian DNB. Poignantly, in light of the 100th anniversary of the outbreak of the First World War, among the oldest subjects included in the dictionary are three of the ‘last veterans’, Harry Patch, Henry Allingham, and Bill Stone, who, as the entry on them makes clear, reacted very differently to the notion of commemoration and their own late fame.
The work of selecting from thousands of possible subjects, coupled with the writing and evaluation of the chosen biographies, builds up a contemporary picture of modern Britain as we record those who’ve shaped the very recent past. As we begin the ODNB’s second decade this work continues: in January 2015 we’ll publish biographies of 230 people who died in 2011 and we’re currently editing and planning those covering the years 2012 and 2013, including what will be a major article on the life, work, and legacy of Margaret Thatcher.
Links between biography and contemporary history are further evident online—creating opportunities to search across the ODNB by profession or education, and so reveal personal networks, associations, and encounters that have shaped modern national life. Online it’s also possible to make connections between people active in or shaped by national events. Searching for Dunkirk, or Suez, or the industrial disputes of the 1970s brings up interesting results. Searching for the ‘Festival of Britain’ identifies the biographies of 35 men and women who died between 2001 and 2010: not just the architects who worked on the structures or the sculptors and artists whose work was showcased, but journalists, film-makers, the crystallographer Helen Megaw (whose diagrams of crystal structures adorned tea sets used during the Festival), and the footballer Bobby Robson, who worked on the site as a trainee electrician. Separately, these new entries shed light not only on the individuals concerned but on the times in which they lived. Collectively, they amount to a substantial and varied slice of modern British national life.
Headline image credit: Harry Patch, 2007, by Jim Ross. CC-BY-SA-3.0 via Wikimedia Commons.
In honor of the 40th anniversary of Austin City Limits, the longest running live music show on television, we spoke to author Tracey E. W. Laird, author of Austin City Limits: A History, about the challenges the show has faced, the ways that it has adapted to a rapidly changing music industry, and what makes ACL perennially appealing to viewers.
What is the biggest challenge that Austin City Limits (ACL) has faced over the years?
One of the show’s biggest challenges for the first 25 years was funding. In the ups and downs of the public broadcasting world, largely dependent on fundraising and philanthropy, Austin PBS affiliate KLRU could never be certain that the show’s current year would not be the last. This anxiety peaked during the mid-to-late 1990s with a change in structure for PBS program distribution. Stations that once received Austin City Limits as part of their basic subscription package suddenly had to pay extra for the show. To make matters worse, a PBS competitor, Sessions at W. 54th, launched around this time, with slicker production and full Sony underwriting (I still recall seeing Beck on that show, where his performance was interspersed with footage of him walking down the street, looking hip in an all-white suit). Ultimately, for reasons I talk about in the book, Sessions survived only three years. That whole crisis time — when Austin newspapers ran stories about whether or not Austin City Limits would endure — led to a major turning point when the people behind Austin City Limits made the radical decision to redefine its modus operandi.
How has ACL managed to transcend the many changes that have taken place in the way we listen to and discover music?
ACL producers made a conscious decision right around the 25th anniversary to operate differently, recognizing that changes in the television industry and in the way people engage with music demanded flexibility and openness to new ideas. The alternative was obsolescence. They very deliberately articulated the core vision and mission for the show in broad musical terms that crossed a wide range of genres. Sincerity and quality are characteristics that might apply equally to, say, Esperanza Spaulding and Brad Paisley, Grizzly Bear and Ladysmith Black Mambazo. They also conceived ACL as a musical experience that includes the core television broadcast but expands outside it as well. Festival, venue, DVD, website, and so on, are all predicated on an outlook that is open to building on that core in new ways without diluting it.
How are the live performances on Austin City Limits different from other live performances?
That “live-ness” distinguishes ACL from other examples of televised musical performance. It goes back to the show’s beginnings. Its originators were motivated mainly by their own transcendent experiences seeing live music. Trying to capture that experience has been the central goal for Austin City Limits, despite any shifts in equipment, style, or genre. That differs from the central goal for most television productions, normally to produce a highly polished end result that fits the time constraints for commercial broadcasts. They require performers to repeat a song, sometimes multiple times, to allow the best possible camera angles and to tailor a song to fit time parameters shaped by commercial rather than artistic concerns. ACL, by contrast, lets its cameras and mics capture the music that has always been at its center. It is so unusual to see a televised performance unfolding according to the energy and communication between musicians and a live, interactive audience. It’s so simple, yet so rare.
What is your favorite ACL performance, and why?
If I had to pick one it would probably be Tom Waits in 1979 (Season 4). Most of all, it’s a fantastic performance, but it also represents an early turning point for Austin City Limits when it sloughed off any bounded, over-determined expectations for who might appear on its stage. It also shows how important the show’s PBS context is for its long and momentous history – no other media outlet in the United States would have aired an hour of Tom Waits. It is a treasure. But, then, over the years there are so many episodes about which I might say the same. Fats Domino is another one I will never forget. Oftentimes my favorite episode is the one I’ve just seen. I recently watched an episode with Raphael Saadiq that I had missed — they had it streaming on the “acltv” website — and I was excited about his music in a way that I wouldn’t have been if I had just heard a studio recording. I had a similar experience last year when I saw a DVD of a performance by Susan Tedeschi. This happens over and over again with Austin City Limits.
What’s one of your favorite behind-the-scenes stories about ACL?
I love the stories the crew tells about their work, like when sound engineer David Hough explained how they cover up the tally lights on the cameras so that performers never know which one is feeding into the master cut. A little trick like that helps ensure that the performer stays focused on performing for the audience in the room. I also love to hear crew members talk about particular shows that stand out to them. To hear them talk underscores the very personal nature of musical performance; a performance that might leave me flat can deeply move someone else. Everyone there loves the work, so it’s a joy to listen to a staff member reflect. To return to Hough, for instance, when I interviewed him he went into a kind of reverie talking about his approach to mixing the sound for a given show. He’s a wizard – the end results sound good whether you listened through a mono TV speaker in 1976 (as in the first full season) or through digital 5.1 Dolby surround sound. He has been with the show that long, and listening to a wizard talk about his magic is fascinating. Many other crew members are equally inspirational to talk with. Beyond that, there are well-traveled stories, the most famous of which describes how the electricity went off just as a performance (by Kris Kristofferson) was about to begin. Some 800 people filed down six flights of stairs and out of the building by the light of flashlights and cigarette lighters, amiably singing “London Homesick Blues” together. Anecdotes don’t get much better than that.
Featured image: Night view of Austin skyline and Lady Bird Lake as seen from Lou Neff Point. Photo by LoneStarMike. CC BY 3.0 via Wikimedia Commons.
Author of the book Night, Elie Wiesel, in his Nobel Peace Prize acceptance speech stated, “I remember: it happened yesterday, or eternities ago.” This quote holds true for many who have survived terrible tragedies or traumatic events in their lives. Often, survivorship and healing after trauma are long and personalized journeys, individualized paths of learning how to live a meaningful life after surviving trauma or tragedy. Each person’s life trajectory is unique, and however painful that journey may be, hope, renewal, and healing are possible.
The young, frightened mother who huddled in the basement with her infant during a tornado as debris swirled around them; the young man who survived a tragic automobile accident in which friends did not survive; or the child who witnessed a terrifying shooting in her neighborhood – these are all examples of individuals who have survived traumatic events. To hurt emotionally after experiencing a tragedy simply means that you are human. To process the traumatic event, heal emotionally, and move forward in life may be difficult, but it is achievable. When disaster or trauma strikes, the immediate impact and after-effects of the event can feel intensely frightening, anxiety-provoking, and often life-changing. Earthquakes, fires, tornadoes, hurricanes, tsunamis — as well as bombings, shootings, genocide, sexual assaults, domestic violence, child abuse, and traffic accidents — these natural disasters and manmade traumatic events can evoke feelings of fear, anger, anxiety, and grief. Many times, victims and those around them feel a sense of helplessness and hopelessness, because they could neither control the traumatic event nor prevent it from happening. They may blame themselves for the disaster or traumatic incident, even though they are not to blame. Whenever feasible, victims and observers of trauma need to feel safe and protected immediately after the trauma occurs, to help them regain a sense of safety after being threatened in such a personal and direct way.
Children, adolescents, and adults may all experience the impact of trauma. Some people may be easily startled after hearing the sound of cars backfiring or doors slamming. Others may have terrifying nightmares or immobilizing flashbacks of the traumatic events during the day, sometimes when it is least expected. Seeing images on television or online that remind victims of the trauma may be difficult for victims to view. Each person’s experience, and each person’s reaction to that experience is unique.
Immediately after a tragedy occurs, people may experience acute stress, a temporary period of adjustment after surviving a trauma. If symptoms such as hypervigilance, flashbacks, and nightmares continue for a prolonged period, they may be experiencing symptoms associated with post-traumatic stress. Traumatic life experiences can alter the course of people’s lives and can be extremely emotionally painful, but working toward feeling better after surviving a traumatic event is important. It is normal to feel emotional pain after something very frightening happens. Though it may take time, recovery is achievable. Everyone’s healing journey is unique and is shaped by his or her own circumstances, including the impact the tragedy or traumatic event has had on his or her life.
After trauma, some people may seek the comfort and support of friends or family members. Others may find solace in spirituality or their chosen faith. Some may benefit from meditation, yoga, exercise, or spending time in nature. For the creatively inclined, outlets such as storytelling, drama, music, art, or writing can offer release; sharing a trauma-related story, when desired, may help both the teller and other survivors move toward healing and personal growth. Some may find meaning in becoming advocates for other victims of trauma, or in efforts aimed at preventing future tragedies, disasters, or traumatic events. Yet others may choose various therapeutic interventions with the help of a caring mental health professional.
Every person who has survived a disaster or traumatic event has a personal story of survivorship. And every person deserves the chance to process that story in his or her own unique way, in his or her own time. Healing from trauma is individualized as well, and each person finds what promotes healing for him or her. Sometimes the goal is to lessen, soften, or subdue traumatic memories so that people can live in the present. Seeking solace, comfort, and, when needed, professional help after trauma can be beneficial, and can help people move forward toward a healthier, happier, more meaningful life.
World Anaesthesia Day commemorates the first successful demonstration of ether anaesthesia at the Massachusetts General Hospital on 16 October 1846. This was one of the most significant events in medical history, enabling patients to undergo surgical treatments without the associated pain of an operation. To celebrate this important day, we are highlighting a selection of British Journal of Anaesthesia podcasts so you can learn more about anaesthesia practices today.
Fifth National Audit Project on Accidental Awareness during General Anaesthesia
Accidental awareness during general anaesthesia (AAGA) is a rare but feared complication of anaesthesia. Studying such rare occurrences is technically challenging, but following in the tradition of previous national audit projects, the results of the fifth national audit project (NAP5) have now been published, receiving attention from both the academic and national press. In this BJA podcast, Professor Jaideep Pandit (NAP5 Lead) summarises the results and main findings of another impressive and potentially practice-changing national anaesthetic audit. Professor Pandit highlights areas of AAGA risk in anaesthetic practice, discusses some of the factors (both technical and human) that lead to accidental awareness, and describes the review panel’s findings and recommendations for minimising the chances of AAGA.
October 2014 || Volume 113 – Issue 4 || 36 Minutes
Pre-hospital Emergency Anaesthesia
Emergency airway management in trauma patients is a complex and somewhat contentious issue, with opinions varying on both the timing and delivery of interventions. London’s Air Ambulance is a service specialising in the care of the severely injured trauma patient at the scene of an accident, and has produced one of the largest data sets focusing on pre-hospital rapid sequence induction. Professor David Lockey, a consultant with London’s Air Ambulance, talks to the BJA about LAA’s approach to advanced airway management, which patients benefit from pre-hospital anaesthesia, and the evolution of RSI algorithms. Professor Lockey goes on to discuss induction agents, describes how to achieve a 100% success rate for surgical airways, and explains why too much choice can be a bad thing, as he gives us an insight into the exciting world of pre-hospital emergency care.
August 2014 || Volume 113 – Issue 2 || 35 Minutes
Fluid responsiveness: an evolution in our understanding
Fluid therapy is a central tenet of both anaesthetic and intensive care practice, and has been a solid performer in the medical armamentarium for over 150 years. However, mounting evidence from both surgical and medical populations suggests that we may be doing more harm than good by infusing solutions of varying tonicity and pH into the arms of our patients. As anaesthetists we arguably monitor our patients’ responses to fluid-based interventions more closely than most, but in emergency departments and on intensive care units this monitoring may be unavailable or misleading. For this podcast, Dr Paul Marik, Professor and Division Chief of Pulmonary Critical Care at Eastern Virginia Medical School, delivers a masterclass on the physiology of fluid optimisation, tells us which monitors to believe and, importantly, under which circumstances, and reviews some of the current literature and thinking on fluid responsiveness.
April 2014 || Volume 112 – Issue 4 || 43 Minutes
Post-operative Cognitive Decline
Post-operative cognitive decline (POCD) has been detected in some studies in up to 50% of patients undergoing major surgery. With an ageing population and an increasing number of elective surgeries, POCD may represent a major public health problem. However, POCD research is complex and difficult to perform, and the current literature may not tell the full story. Dr Rob Sanders from the Wellcome Department of Imaging Neuroscience at UCL talks to us about the methodological limitations of previous studies and the important concept of a cognitive trajectory. In addition, Dr Sanders discusses the risk factors and the role of inflammation in causing brain injury, and raises the possibility that certain patients may in fact undergo post-operative cognitive improvement (POCI).
March 2014 || Volume 112 – Issue 3 || 20 Minutes
Needle Phobia – A Psychological Perspective
For anaesthetists, intravenous cannulation is the gateway procedure to an increasingly complex and risky array of manoeuvres, and as such becomes more a reflex arc than a planned motor act. For some patients, however, that initial feeling of a needle penetrating epidermis, dermis, and then vessel wall is a dreaded event, and the cause of more anxiety than the surgery itself. Needle phobia can be a deeply debilitating condition, causing patients not to seek help even in the most dire circumstances. Dr Kate Jenkins, a hospital clinical psychologist, describes both the psychology and physiology of needle phobia, what we as anaesthetists need to be aware of, and how we can better serve our patients, for whom ‘just a small scratch’ may be their biggest fear.
July 2014 || Volume 113 – Issue 1 || 32 Minutes
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhists, Jehovah’s Witnesses, Mormons, and Baha’is. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self-determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and to how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. I have had the feeling that a revolution was ongoing in epidemiology many times. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.
One example: at present, in clinical and epidemiological research, “paradoxes” are being deconstructed every week. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps the simplest way to explain why observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
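The collider mechanism behind the “obesity paradox” can be made concrete with a small simulation. The sketch below is illustrative only: all the numbers, and the unmeasured “frailty” variable, are invented for the example, and obesity is given no effect whatsoever on death. Under that assumed data-generating process, restricting the analysis to diabetics (conditioning on the collider T2D) still makes the non-obese appear to fare worse.

```python
import random

random.seed(0)
n = 200_000

diabetics = {True: 0, False: 0}  # counts of diabetics, keyed by obesity
deaths = {True: 0, False: 0}     # deaths among those diabetics

# Hypothetical data-generating process (all numbers invented):
# obesity -> T2D; an unmeasured "frailty" -> T2D and -> death;
# crucially, obesity has NO effect on death at all.
for _ in range(n):
    obese = random.random() < 0.3
    frail = random.random() < 0.3
    t2d = random.random() < 0.05 + 0.40 * obese + 0.40 * frail  # T2D is a collider
    dead = random.random() < 0.02 + 0.30 * frail                # frailty alone kills
    if t2d:  # conditioning on the collider: look only at diabetics
        diabetics[obese] += 1
        deaths[obese] += dead

mort_obese = deaths[True] / diabetics[True]
mort_lean = deaths[False] / diabetics[False]
print(f"mortality, obese diabetics:     {mort_obese:.3f}")
print(f"mortality, non-obese diabetics: {mort_lean:.3f}")
```

Among diabetics, the non-obese are more likely to be frail (something had to give them T2D), so their mortality comes out higher even though obesity is harmless by construction: the “paradox” is a spurious association created by selection, not a causal effect.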
The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much stricter than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main flowering took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance of the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g., clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g., the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e., not furnished with a charter or constitution).
Another way to look at what may be happening with clinical and epidemiological research methods is to trace the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
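One of those newer terms, inverse probability weighting, is easy to see in miniature. In the sketch below (all numbers and variable names are invented for illustration), a confounder L raises both the chance of treatment A and the risk of outcome Y, so the crude comparison of treated versus untreated is biased; weighting each person by the inverse of the probability of the treatment they actually received recovers the true causal risk difference, which is set to exactly +0.10 by construction.

```python
import random

random.seed(1)
n = 200_000

crude = {True: [0, 0], False: [0, 0]}        # a -> [count, deaths]
ipw = {True: [0.0, 0.0], False: [0.0, 0.0]}  # a -> [sum of weights, weighted deaths]

# Hypothetical set-up (all numbers invented): confounder L raises both the
# probability of treatment A and the risk of outcome Y; the true causal
# effect of A on Y is a risk difference of exactly +0.10.
for _ in range(n):
    l = random.random() < 0.4
    p_treat = 0.8 if l else 0.2              # sicker (L=1) patients get treated more often
    a = random.random() < p_treat
    y = random.random() < 0.10 + 0.30 * l + 0.10 * a
    w = 1 / (p_treat if a else 1 - p_treat)  # inverse probability of treatment received
    crude[a][0] += 1
    crude[a][1] += y
    ipw[a][0] += w
    ipw[a][1] += w * y

crude_rd = crude[True][1] / crude[True][0] - crude[False][1] / crude[False][0]
ipw_rd = ipw[True][1] / ipw[True][0] - ipw[False][1] / ipw[False][0]
print(f"crude risk difference: {crude_rd:.3f}")  # inflated by confounding
print(f"IPW risk difference:   {ipw_rd:.3f}")    # close to the true +0.10
```

The weighting creates a pseudo-population in which L no longer predicts treatment, so the weighted contrast has a causal interpretation, provided the usual assumptions (exchangeability given L, positivity, consistency) hold.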
You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published book by T. VanderWeele, Explanation in Causal Inference: Methods for Mediation and Interaction (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods, for their clinical relevance is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?