Blog Posts Tagged with: various links
A powerful technology that continues to evolve, researchers say, has rekindled interest in liquid biopsies as a way to disrupt tumor progression. The technology, genetic sequencing, is allowing researchers a closer look at the genetic trail tumors leave in the blood as cancer develops. That capability, as these new “liquid” blood tests work their way into clinics, may foster a deeper understanding of how tumors alter their molecular masks to defy treatment.
“O, wonder!/How many goodly creatures are there here!/How beauteous mankind is!/O brave new world,/That has such people in't!” Shakespeare’s lines in The Tempest famously inspired Aldous Huxley’s novel Brave New World, first published in 1932. Huxley’s vision of the future has become a byword for the idea that attempts at genetic (and social) engineering are bound to go wrong. With its crude partitioning of society, by stunting human development before birth, and with its use of a drug – soma – to induce a false sense of happiness and suppress dissent, this was the opposite of a ‘beauteous’ world.
Many say this is the century of biology, the study of life. Genomics is therefore “front-and-centre”, as DNA is the software of life. From staring at stars, we are now staring at DNA. We can’t use our eyes, as we do in star gazing, but just as telescopes show us the far reaches of the Universe, DNA sequencing machines are reading out our genomes at an astonishing pace.
Announced on January 13th by President Obama in his eighth and final State of the Union Address, the multi-billion dollar project will be led by US Vice President, Joe Biden, who has a vested interest in seeing new cures for cancer. Using genomics to cure cancer is being held on par with JFK’s desire in 1961 to land men on the moon.
2016 is here. The New Year is a time for renewal and resolution. It is also a time for dieting. Peak enrolment and attendance times at gyms occur after sumptuous holiday indulgences in December and again when beach wear is cracked out of cold storage in summer. As the obesity epidemic reaches across the globe we need new solutions. We need better ways to live healthy lifestyles.
When Simón Bolívar died on this day 185 years ago, tuberculosis was thought to have been the disease that killed him. An autopsy showing tubercles of different sizes in his lungs seemed to confirm the diagnosis, though neither microscopic examination nor bacterial cultures of his tissues were performed.
Knowledge that we all have DNA and what this means is getting around. The informed public is well aware that our cells run on DNA software called the genome. This software is passed from parent to child, in the long line of evolutionary history that dates back billions of years – in fact, research published this year pushes back the origin of life on Earth another 300 million years.
In ampelographic collections, about ten plants of each grape variety or clone are kept alive for future studies or plantings, which requires a great deal of time and money. Yet we estimate that, on average, 5% of the accessions in every collection are mislabelled. These errors can now be identified with DNA profiling, and duplicates can be eliminated, thus saving time and money.
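As a rough illustration of how DNA profiling exposes duplicates and labelling errors, the sketch below groups accessions by marker profile; all variety names and profiles here are invented, and real SSR-marker profiling is far more involved.

```python
# Toy sketch: group accessions by a hypothetical marker profile so that
# duplicates (and mislabelled plants) can be spotted and merged.
# All names and profiles below are invented for illustration.
from collections import defaultdict

accessions = {
    "Pinot Noir #1":  ("136/140", "221/225", "187/187"),
    "Pinot Noir #2":  ("136/140", "221/225", "187/187"),
    "Chardonnay #1":  ("132/140", "219/225", "185/187"),
    "Mislabelled #7": ("136/140", "221/225", "187/187"),  # same profile as Pinot Noir
}

by_profile = defaultdict(list)
for name, profile in accessions.items():
    by_profile[profile].append(name)

# Any profile shared by more than one accession flags a duplicate group.
duplicates = {p: names for p, names in by_profile.items() if len(names) > 1}
```

Once a duplicate group is identified, a curator can keep one accession per distinct profile and retire the rest.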
It is hard to quantify the impact of ‘role-model’ celebrities on the acceptance and uptake of genetic testing and bio-literacy, but it is surely significant. Angelina Jolie is an Oscar-winning actress, Brad Pitt’s other half, mother, humanitarian, and now a “DNA celebrity”. She propelled the topic of familial breast cancer, female prophylactic surgery, and DNA testing to the fore.
A recent meme circulating on the internet mocked a US government programme (ObamaCare) saying that its introduction cost $360 million when there were only 317 million people in the entire country. It then posed the rhetorical question: "Why not just give everyone a million dollars instead?"
Two other major and largely unsolved problems in evolution, at the opposite extremes of the history of life, are the origin of the basic features of living cells and the origin of human consciousness. In contrast to the questions we have just been discussing, these are unique events in the history of life.
Society owes a debt to Henrietta Lacks. Modern life benefits from long-term access to a small sample of her cells that contained incredibly unusual DNA. As Rebecca Skloot reports in her best-selling book, “The Immortal Life of Henrietta Lacks”, the story that unfolded after Lacks died at the age of 31 is one of injustice, tragedy, bravery, innovation and scientific discovery.
Kuwait is changing the playing field. In early July, just days after the June 26th deadly Imam Sadiq mosque bombing claimed by ISIS, Kuwait passed a law instating mandatory DNA testing for all permanent residents. This is the first use of DNA testing at the national level for security reasons, specifically as a counter-terrorism measure. An initial $400 million is set aside for collecting the DNA profiles of all 1.3 million citizens and 2.9 million foreign residents.
Another ‘Awareness Day’, International Kissing Day, is coming up on July 6. It might not seem obvious, but kissing, like most subjects, can now be easily linked to the science of DNA. Thus, there could be no more perfect opener for my Double Helix column, given the elegance and beauty of a kiss. To start, there is the obvious biological link between kissing and DNA: propagation of the species. Kissing is not only pleasurable but seems to be a solid way to assess the quality and suitability of a mate.
One of the most fun and exciting sources of information available for free on the Internet is the collection of videos found on the Technology, Entertainment and Design (TED) website. TED is a hub of stories about innovation, achievement and change, each artfully packaged into a short, highly accessible talk by an outstanding speaker. As of April 2015, the TED website boasts 1900+ videos from some of the most eminent individuals in the world. Selected speakers range from Bill Clinton and Al Gore to Bono and other global celebrities to a range of academic experts.
DNA is the foundation of life. It codes the instructions for the creation of all life on Earth. Scientists are now reading the autobiographies of organisms across the Tree of Life and writing new words, paragraphs, chapters, and even books as synthetic genomics gains steam. Quite astonishingly, the beautiful design and special properties of DNA make it capable of many other amazing feats. Here are five man-made functions of DNA, all of which are contributing to the growing “industrial-DNA” phenomenon.
Today, 25 April is a joint celebration for geneticists, commemorating the discovery of the double-helix structure of DNA by James Watson and Francis Crick in 1953 and the completion of the human genome project fifty years later in 2003. It may have taken half a century to map the human genome, but in the years since its completion the field of genetics has seen breakthroughs increase at an ever-accelerating rate.
This book is part of the Writer's Digest Howdunit Series.
I mentioned in an earlier post that I’ve joined Sisters in Crime and the local chapter, Capitol Crimes. The local chapter meets monthly, and each month guest speakers share their expertise in either writing mysteries or being connected in some way to concerns of the mystery writer. One such concern is always whether a writer is presenting crime scenes or police procedures that are accurate. Last month we were fortunate to have Lee Lofland, the author of Police Procedure & Investigation, as our guest speaker, and he addressed those very concerns.
Lee Lofland is a former police detective, and the bad news is that much of what you see on your favorite crime show is misleading and/or inaccurate. His book, on the other hand, is a very thorough coverage of everything an author would want to ask their local police department. Blurbs by best-selling mystery writers (including two of my favorites, Rhys Bowen and Hallie Ephron) give his book high praise, and I was pleased to find that the writing – entertaining and sobering by turns – is always a good read. He presents facts that you really want to know in a way that doesn’t make your eyes glaze over. A few examples:
The difference between police officers and detectives; how they’re trained; what they do.
Arrest and search procedures.
The differences between homicide, murder, and manslaughter.
The difference between a crime scene and the scene of the crime.
DNA and fingerprinting.
What can send you to prison and what can send you to jail.
A section on different drugs and the effects of each one.
Differences in weapons (with photos) and how they work.
The book’s appendices include a glossary of terms, police 10 codes, a drug quantity table, and a federal sentencing table. It isn’t necessary to read this book straight through, chapter by chapter. There’s a thorough index that helps when you just want to look up something useful at that moment in your writing, along with good visual aids (charts, diagrams, photos of tools, etc.) throughout the book. This is a must-read for any mystery writer who wants their police procedural scenes to ring with accuracy.
Lee also shared with us the Writers’ Police Academy, held in August in Appleton, Wisconsin. Yes, there really is such a thing. You can register now and have hands-on experiences that will enhance your scenes. For more information about what is covered, check out their website HERE.
Lee’s book is available in paperback and Kindle at Amazon HERE.
You can contact Lee Lofland at his website, The Graveyard Shift, HERE, and learn even more about police work to enrich your mysteries from his frequent blog posts.
The author and friendly officer.
A must-have book.
On 25th March 2015, 530 years after his death, King Richard III of England will be interred in Leicester Cathedral. This remarkable ceremony is only taking place because of the success of DNA analysis in identifying his skeletal remains. So what sort of genes might a king be expected to have? Or, more prosaically, how do you identify a long dead corpse from its DNA? Several methods were used, and in particular the deduction of the skeleton’s probable hair and eye colour raises some interesting questions about future trends in forensic DNA analysis.
Richard III is one of England’s best known kings, largely due to the famous play by William Shakespeare in which he is portrayed as an evil villain. He reigned for only two years and was killed at the age of 32 at the battle of Bosworth in 1485. According to the historical records he was unceremoniously buried at Greyfriars Friary in Leicester. At some stage knowledge of the exact location of Richard’s burial was lost. But in 2012 excavations under a car park at the probable site of the former friary yielded “skeleton 1”. Suspicion of his royal identity was excited by the fact that the skeleton had a severely bent spine causing the right shoulder to be higher than the left. This well-known deformity of Richard’s was mentioned in a contemporary source, as well as by Shakespeare. Furthermore, the skeleton was male, the age was about right, the individual had evidently been killed in battle, and the radiocarbon date was consistent with death in 1485.
This was all very suggestive, but it was the DNA analysis that really proved the case. The work was led by a team at the University of Leicester, with participation by many other UK and European centres. It is important to note that this was not the normal type of forensic DNA identification, which relies on comparing a set of highly variable DNA markers to a database. Such analysis is fine so long as your suspect is in the database, but it is no use for identifying a long dead individual who is not in any database.
By far the best evidence for the identity of Richard III comes from the analysis of his mitochondrial DNA. Mitochondria are bodies found in every cell, responsible for the production of energy. They have their own DNA which is passed down the generations only through the female line. Barring the occasional new mutation, the DNA sequence of mitochondrial DNA should be identical from mother to daughter down a particular female line of descent. Like their sisters, males also carry the mitochondrial DNA of their mothers, but they do not pass it down to their own offspring.
Richard will have shared mitochondrial DNA with his sister, Anne of York. Two complete female lines of descent were traced back to Anne of York: one of 17 generations down to Michael Ibsen, a resident of London, and the other of 19 generations down to Wendy Duldig, formerly of New Zealand. Complete sequencing of their mitochondrial DNA showed a 100% match between skeleton 1 and Michael Ibsen, and a single base change compared to Wendy Duldig. One change over this period of time is quite likely to be a new mutation. The sequence family (haplogroup) to which the mitochondrial DNA belongs is a fairly rare one, so few other people in England in 1485 would have shared it; in fact, the team systematically ruled out all the other males of the period who might have shared it through a common female lineage with Richard III. So this match is highly significant and is the best piece of evidence that “skeleton 1” is indeed King Richard.
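At its core, the comparison reported here is a matter of aligning two complete mitochondrial sequences and counting the positions at which the bases differ. A minimal sketch of that idea (the short strings are invented stand-ins for the roughly 16,569-base human mitochondrial genome):

```python
# Toy sketch: count base differences between two aligned mitochondrial
# sequences. The short strings below are invented stand-ins for the
# ~16,569-base human mitochondrial genome.
def base_differences(seq_a: str, seq_b: str) -> int:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

skeleton_1   = "GATCACAGGTCTATCACCCT"
descendant_1 = "GATCACAGGTCTATCACCCT"  # 0 differences: an exact match
descendant_2 = "GATCACAGGTTTATCACCCT"  # 1 difference: plausibly a new mutation
```

Zero differences corresponds to the reported 100% match with one descendant; a single difference, as with the other descendant, is consistent with one new mutation arising over 19 generations.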
A newer method was also applied: a technique for predicting someone’s hair and eye colour from their DNA. The most important variants affecting hair colour are mutations of a gene called MC1R, which encodes a cell-surface receptor for a hormone. Individuals carrying variants of the MC1R gene with reduced function are likely to have red or blond hair rather than the more common dark hair. The pigmentation of the iris of the eye depends significantly on a gene called OCA2, encoding a protein which transports tyrosine into cells. Again, variants of reduced function give less pigmented eyes, meaning that the colour is blueish rather than brownish. Recently a Dutch group created a forensic test based on variants at 24 genetic loci, of which 11 are in the MC1R gene and the rest at 12 other positions including the OCA2 gene. Identification of these 24 variants yields a fairly accurate prediction of hair and eye colour, and in the case of skeleton 1 the prediction was blue eyes and blond hair. The existing portraits of Richard III all date from some time after his death, but the older ones do indeed show light-coloured eyes and reddish-brown hair, an appearance consistent with the prediction.
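To make the logic concrete, here is a toy sketch of variant-based pigmentation prediction. It is emphatically not the Dutch group’s actual forensic model (which is a statistical classifier trained on thousands of genotyped individuals); the variant names, weights, and thresholds below are invented for illustration only.

```python
# Toy sketch of variant-based pigmentation prediction. NOT the real
# forensic test: the variant names, weights, and thresholds are invented.
# A genotype records how many reduced-function copies (0, 1, or 2) of
# each variant an individual carries.
EYE_WEIGHTS  = {"OCA2_v1": 1.0, "OCA2_v2": 0.6, "OTHER_v1": 0.3}
HAIR_WEIGHTS = {"MC1R_v1": 1.0, "MC1R_v2": 0.8, "OTHER_v2": 0.2}

def predict(genotype: dict, weights: dict, threshold: float,
            low: str, high: str) -> str:
    # Sum weights over the reduced-function alleles carried; more
    # reduced-function copies push the prediction toward lighter colours.
    score = sum(weights[v] * genotype.get(v, 0) for v in weights)
    return high if score >= threshold else low

sample = {"OCA2_v1": 2, "MC1R_v1": 1, "MC1R_v2": 1}
eye  = predict(sample, EYE_WEIGHTS, 1.5, "brown", "blue")
hair = predict(sample, HAIR_WEIGHTS, 1.5, "dark", "light")
```

The real test replaces these hand-picked weights with coefficients estimated from reference populations, which is what makes its predictions "fairly accurate" rather than merely plausible.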
These two types of analysis indicate two rather different senses in which we use the word “gene”. The sequence variants of the mitochondrial DNA, like those used in normal forensic identification, do not, in general, affect the characteristics of the individuals carrying them. The DNA changes often lie outside actual genes, in the regions of DNA between genes. They are better described as “markers” than as “genes”. But the hair and eye colour analysis is based at least partly on actual gene variants that might be expected to generate those visible characteristics.
How much further might this kind of analysis be pushed? Could the height, facial features or skin colour of a crime suspect be deduced from their DNA? The essential issue is the number of gene variants in the population that affect a feature. If it is relatively small, as with hair and eye colour, then prediction is possible. If it is very large, as for height, then it is not, because most of the variants affecting height have effects too small to be detected. Most of the human characteristics that have been studied in this way have turned out to depend on a very large number of variants of small effect. So, contrary to popular perception, there are real limits to what is possible in terms of predicting bodily features from DNA data. There will doubtless be some other features that are predictable, and these may eventually include skin colour. But unless a completely new approach is invented, it is unlikely that we shall ever see an identikit picture of a suspect generated from DNA at the crime scene.
Featured image credit: Stained glass, by VeteranMP. CC-BY-SA 3.0 via Wikimedia Commons
Microbiology should be part of everyone’s educational experience. European students deserve to know something about the influence of microscopic forms of life on their existence, as it is at least as important as the study of the Roman Empire or the Second World War. Knowledge of viruses should be as prominent in American high school curricula as the origin of the Declaration of Independence. This limited geographic compass reflects the fact that the science of microbiology is a triumph of Western civilization, but the educational significance of the field is a global concern. We cannot understand life without an elementary comprehension of microorganisms.
Appreciation of the microbial world might begin by looking at pond water and pinches of wet soil with a microscope. Precocious children could be encouraged in this fashion at a very early age. Deeper inquiry with science teachers would build a foundation of knowledge for teenagers, before the end of their formal education or the pursuit of a university degree in the humanities.
Earth has always been dominated by microorganisms. Most genetic diversity exists in the form of microbes and if animals and plants were extinguished by cosmic bombardment, biology would reboot from reservoirs of this bounty. The numbers of microbes are staggering. Tens of millions of bacteria live in a crumb of soil. A drop of seawater contains 500,000 bacteria and tens of millions of viruses. The air is filled with microscopic fungal spores, and a hundred trillion bacteria swarm inside the human gut. Every macroscopic organism and every inanimate surface is coated with microbes. They grow around volcanoes and hydrothermal vents. They live in blocks of sea ice, in the deepest oceans, and thrive in ancient sediment on the seafloor. Microbes act as decomposers, recycling the substance of dead organisms. Others are primary producers, turning carbon dioxide into sugars using sunlight or by tapping chemical energy from hydrogen sulfide, ferrous iron, ammonia, and methane.
Bacterial infections are caused by decomposers that survive in living tissues. Airborne bacteria cause diphtheria, pertussis, tuberculosis, and meningitis. Airborne viruses cause influenza, measles, mumps, rubella, chickenpox, and the common cold. Hemorrhagic fevers caused by Ebola viruses are spread by direct contact with infected patients. Diseases transmitted by animal bites include bacterial plague, as the presumed cause of the Black Death, which killed 200 million people in the 14th century. Typhus spread by lice decimated populations of prisoners in concentration camps and refugees during the Second World War. Malaria, carried by mosquitos, massacres half a million people every year.
Contrary to the impression left by this list of infections, relatively few microbes are harmful and we depend on a lifelong cargo of single-celled organisms and viruses. The bacteria in our guts are essential for digesting the plant part of our diet and other bacteria and yeasts are normal occupants of healthy skin. The tightness of our relationship with microbes is illustrated by the finding that human DNA contains 100,000 fragments of genes that came from viruses. We are surprisingly microbial.
Missing the opportunity to learn something about microbiology is a mistake. The uninformed are likely to be left with a distorted view of biology in which they miscast themselves as the most important organisms. For example, “Sarah” is a significant manifestation of life from Sarah’s perspective, but her body is not the individual organism that she imagines, and nor, despite her talents, is she a major player in the ecology of the planet. Her interactions with microbes will include a healthy relationship with bacteria in her gut, bouts of influenza and other viral illnesses, and death in old age from an antibiotic-resistant infection. Sarah’s microbiology will continue after death with her decomposition by fungi. In happier times she will become an expert on Milton’s poetry, and delight students by reciting Lycidas through her tears, but she will never know a thing about microbiology. This is a pity. Learning about viruses that bloom in seawater and fungi that sustain rainforests would not have stopped her from falling in love with Milton.
Even brief consideration of microorganisms can be inspiring. A simple magnifying lens transforms the surface of rotting fruit into a hedgerow of glittering stalks topped with jet-black fungal spores. Microscopes take us deeper, to the slow revolution of the bright green globe of the alga Volvox as it beats its way through a drop of pond water. Most microbes are duller things to look at, and their appreciation requires greater imagination. Considering that our bodies are huge ecosystems supported by trillions of bacteria is a good place to start, and then we might realize that we fade from view against the grander galaxy of life on Earth. The science of microbiology is a marvel for our time.
Featured image credit: BglII-DNA complex By Gwilliams10. Public domain via Wikimedia Commons
Two of the biggest scientific breakthroughs in paleoanthropology occurred in 2010. Not only did we determine a draft genome of an extinct Neandertal from bones that had lain in the earth for tens of thousands of years, but the genome of another, heretofore unknown ancient human relative, dubbed the Denisovans, was also announced.
A one-hundred-year-old conundrum was finally answered: did we mate with Neandertals? It was now undeniable that modern humans, with all our modern features – our rounded craniums, prominent chins, gracile faces tucked beneath an enlarged forehead, and long, slender skeletons – had met and mated with both of these extinct ancient human-like beings. Comparison with the human genome showed that 2-4% of the genomes of all peoples outside Africa had been directly inherited from Neandertal ancestors. And DNA from the Denisovans (named after the cave in southern Siberia where their bones were discovered) makes up 3% to 6% of the genomes of many peoples living in South East Asia (Filipinos, Melanesians, Australian Aborigines).
We now believe that it is in the Levant, regions just east of the Mediterranean, where humans met and mated with Neandertals. Remains of Neandertals are well known from this region. When modern humans ventured out of Africa into the Levant approximately 50,000 years ago, they mated with Neandertals. When they later spread into South East Asia they mated with Denisovans, although mating probably occurred in other regions of Asia as well. We now have evidence suggesting the ancient Denisovans occupied a very large geographic distribution extending from Southern Siberia all the way to the South East Asian tropics. It is tantalizing that, other than their distinctive genomes and their somewhat robust-looking molars, we know close to nothing about what they looked like.
With these discoveries, the notion that modern humans would hardly have interbred with such dim-witted, brutish, and bent-kneed Neandertals – a reputation that had long dogged Neandertals since French paleontologist Marcellin Boule studied them – was now clearly out of the question. Indeed, more recent research into the skeletons and cultural artifacts of Neandertals has demonstrated their sophisticated material cultures (stone tools, body ornament, and symbolic culture) and that their skeletons, rather than being “primitive,” were adapted for the cold and for rugged daily physical activities. Furthermore, the almost paradigmatically held view of a strict replacement of ancient peoples in Eurasia by colonizing modern humans has now been laid to rest. This view, popularized in the 1980s and 1990s, rested on comparisons between the minute mitochondrial genomes (much less than 1% of our full genomes) of humans and Neandertals. Full genomes, as you can see, tell us a fuller and more fascinating story.
These breakthroughs bring a breath of fresh air to the field of anthropology after decades of speculation. They coincide with advances in detecting the genetic bases of common chronic human diseases like hypertension, obesity, and diabetes. Yet even these diseases have been shaped by our evolutionary past. Genomes tell us that our species underwent contractions in population size during its evolutionary past, which reduced the effectiveness of natural evolutionary constraints and allowed damaging mutations to slip through the cracks and take root in our genome. This is a new view of disease, informed by evolution as well as genomes.
We are also making base-by-base comparisons of our genome with those of chimpanzees, gorillas, orangutans, as well as genomes of other primates, allowing us to start to look for the genomic bases of our unique features – our large and complex brains, our complex cognition, and our use of spoken language. At the same time, we are learning the degree to which there is a genetic continuum between us and our primate relatives. Darwin presciently wrote in The Descent of Man and Selection in Relation to Sex that “the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind.” Today, we are realizing Darwin’s dream.
We are also uncovering details about how different human populations adapted to hot and cold climates, high altitudes, different diets, and to the various pathogens modern humans encountered as we colonized different regions of the world. A large project is already well underway to collect thousands of genomes of modern peoples from different regions of the world. Comparing these genomes allows the search for ancient footprints left by positive selection (the type of natural selection that shapes our adaptations). Surprisingly, the different pathogens we encountered as we left Africa and spread into different environments appear to have made some of the largest footprints on our genome.
The genomic highway has an unchecked speed limit; we are experiencing a unique problem where data is pouring in faster than it can be fully analyzed. Each new issue of our scientific journals is rife with new, exciting discoveries unlocking intriguing secrets of our ancestry.
Poor old king Tut has made the news again – for all the wrong reasons, again.
In a documentary that aired on the BBC two weeks ago, scientists based at the EURAC Institute for Mummies and the Iceman unveiled a frankly hideous reconstruction of Tutankhamun’s mummy, complete with buck teeth, a sway back, Kardashian-style hips, and a club foot. They based it on CT scans of the mummy from 2005 and their own research, claiming to have identified a host of genetic disorders and physical deformities suffered by the boy-king, who died around age 19 some 3,300 years ago.
The English-language newspaper Ahram Online has aired the views of three Egyptian Egyptologists who are just as shocked by the reconstruction as many television viewers were. There are old and understandable sensitivities here: Western scientists have been poking around Egyptian mummies for more than 200 years, while the discovery of Tutankhamun’s tomb in 1922 coincided with the birth of an independent Egyptian nation after decades of European colonialism. The ensuing tussle between excavator Howard Carter and the government authorities, over where the tomb finds would end up (Cairo won, and rightly so), highlighted deep-seated tensions about who ‘owned’ ancient Egypt, literally and figuratively. It’s safe to say that the last century has seen king Tut more involved in politics than he ever was in his own lifetime.
Most Egyptologists can readily debunk the ‘evidence’ presented by the EURAC team – if we weren’t so weary of debunking television documentaries already. (Why do the ancient Romans get academic royalty like Mary Beard, while the ancient Egyptians get the guy from The Gadget Show?) What’s fascinating is how persistent – and how misguided – lurid interest in the dead bodies of ancient Egyptians is, not to mention the wild assumptions made about the skilled and stunning art this culture produced. The glorious gold mask, gilded shrines and coffins, weighty stone sarcophagus, and hundreds of other objects buried with Tutankhamun were never meant to show us a mere human, but to manifest the razzle-dazzle of a god-king.
Around the time of Tutankhamun’s reign, artists depicted the royal family and the gods with almond eyes, luscious lips, and soft, plump bodies. These were never meant to be true-to-life images, as if the pharaoh and his court were posting #nomakeupselfie snaps on Twitter. Each generation of artists developed a style that was distinctive to a specific ruler, but which also linked him to a line of ancestors, emphasizing the continuity and authority of the royal house. The works of art that surrounded Tutankhamun in life, and in death, were also deeply concerned with a king’s unique responsibilities to his people and to the gods.
All the walking sticks buried in the tomb – more than 130 of them, one of which Carter compared to Charlie Chaplin’s ubiquitous prop – emphasize the king’s status at the pinnacle of society (nothing to do with a limp). The chariots were luxury items (quite macho ones, at that), and Tutankhamun’s wardrobe was the haute couture of its day, with delicate embroidery and spangly sequins. Much of the tomb was taken up with deeply sacred objects, too: guardian statues at the doorways, magic figures bricked into the walls, and two dozen bolted shrines protecting wrapped statues of the king and various gods. Not to mention the shrines, sarcophagus, and coffins that held the royal mummy – a sacred object in itself, long before science got a hold of it.
As for the diseases and deformities Tutankhamun is said to have suffered? Allegations of inbreeding don’t add up: scholars have exhaustively combed through the existing historical sources that relate to Tutankhamun (lots and lots of rather dry inscriptions, I’m afraid), and as yet there is no way to identify his biological parents with any certainty. Don’t assume that DNA is an easy answer, either. Not only do we not know the identity of almost any of the ‘royal’ mummies that regularly do the rounds on TV programmes, but also the identification of DNA from ancient mummies is contested – it simply doesn’t survive in the quantity or quality that DNA amplification techniques require. Instead, many of the ‘abnormal’ features of Tutankhamun’s mummy, like the supposed club foot and damage to the chest and skull, resulted from the mummification process, as research on other mummies suggests. Embalming a body to the standard required for an Egyptian king was a difficult and messy task, left to specialist priests. What mattered just as much, if not more, was the intricate linen wrapping, the ritual coating of resin, and the layering of amulets, shrouds, coffins, and shrines that Carter and his team had to work through in order to get to the fragile human remains beneath.
The famous mummy mask and spectacular coffins we can see in the Museum of Egyptian Antiquities in Cairo today, or in copious images online, should stop us in our tracks with their splendour and skill. That’s what they were meant to do, for those few people who saw them and for the thousands more whose lives and livelihoods depended on the king. But they should also remind us of how they got there: the invidious colonial system under which archaeology flourished in Egypt, for a start, and the thick resin that had to be hammered off so that the lids could be opened and the royal mummy laid bare. Did King Tut have buck teeth, waddle like a duck, drag race his chariot? Have a look at that mask: do you think we’ve missed the point? Like so many modern engagements with the ancient past, this latest twist in the Tutankhamun tale says more about our times than his.
Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitate interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations, particularly when children are involved.
One creative approach we developed is narrative genomics, the use of drama with provocative characters and dialogue as an interdisciplinary pedagogical approach to bring to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on the inherent challenges currently posed by the comprehensive interrogation and analysis of DNA through sequencing the human genome with next-generation technologies. It illuminates bioethical issues, provides a stage on which to reflect on the controversies together, and tempers the sometimes contentious debates that ensue.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelop genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach where bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically, or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhist, Jehovah’s Witness, Mormon, and Baha’i. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self-determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and to how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. I had the feeling that a revolution was ongoing in epidemiology many times. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations and companies, millions of people.
One example: at present, in clinical and epidemiological research, every week “paradoxes” are being deconstructed. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, and E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, and artificial intelligence. Perhaps the simplest way to explain why observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, restricting attention to individuals who developed T2D) creates a spurious association between obesity and survival.
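The collider argument can be sketched with a toy simulation. All parameters below are invented for illustration (they come from no study): obesity and an unmeasured severe illness each raise the risk of T2D, the illness alone shortens survival, and obesity has no causal effect on death at all. Restricting to diabetics still makes obesity look protective.

```python
# Toy simulation of collider bias (the "obesity paradox").
# Assumed, made-up structure: obesity -> T2D <- illness -> death.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
obese = rng.random(n) < 0.3            # exposure
illness = rng.random(n) < 0.2          # unmeasured cause of T2D and death

# T2D is more likely with either obesity or the illness (the collider)
t2d = rng.random(n) < (0.05 + 0.25 * obese + 0.35 * illness)

# Death depends on the illness only; obesity has NO causal effect here
died = rng.random(n) < (0.05 + 0.40 * illness)

# Whole population: death rates are the same for obese and non-obese (~0.13)
print(died[obese].mean(), died[~obese].mean())

# Conditioning on the collider (diabetics only): obese patients now appear
# to fare better, because non-obese diabetics are more often severely ill
print(died[t2d & obese].mean(), died[t2d & ~obese].mean())
```

The second pair of numbers splits apart even though obesity never enters the death model: among diabetics, being non-obese is evidence of the illness, and the illness kills.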
The “revolution” is partly founded on complex mathematics, on concepts such as “counterfactuals,” and on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much more strict than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main flowering took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).
Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
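One of the terms listed above, inverse probability weighting, can also be illustrated with a minimal sketch. The scenario and every parameter are invented for this example: a confounder C makes treatment A more likely and independently raises the outcome Y, so the naive treated-versus-untreated comparison is biased; weighting each subject by the inverse of the probability of the treatment they actually received (here the true propensity, which in practice would be estimated) recovers the causal effect.

```python
# Minimal sketch of inverse probability weighting (IPW), with made-up
# parameters: confounder C -> treatment A, and C -> outcome Y.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
c = rng.random(n) < 0.5                  # confounder
p_a = np.where(c, 0.8, 0.2)              # confounded treatment assignment
a = rng.random(n) < p_a

# True causal effect of A on Y is +0.10; C adds +0.30 on its own
y = rng.random(n) < (0.1 + 0.1 * a + 0.3 * c)

naive = y[a].mean() - y[~a].mean()       # biased upward by confounding

# Weight each subject by 1 / P(observed treatment | C)
w = np.where(a, 1 / p_a, 1 / (1 - p_a))
ipw = np.average(y, weights=w * a) - np.average(y, weights=w * (~a))

print(naive, ipw)                        # naive ~ 0.28, IPW ~ 0.10
```

The weights create a pseudo-population in which treatment is independent of C, which is why the weighted contrast lands near the true +0.10 while the naive one does not.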
You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published text by T. VanderWeele, Explanation in causal inference: Methods for mediation and interaction (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature: what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh cut grass, and force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits, from reducing stress and improving mood to increasing attentional capacities and creating social bonds. A variety of work supports nature being healing and health promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing, it’s experiencing it fully and being immersed into a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and help me feel that same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up inside, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyle, we cannot always afford the time and money to step away. Integrating nature into our indoor environment is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health promoting and healing properties of nature, we can impress upon individuals the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
We’re getting ready for Halloween this month by reading the classic horror stories that set the stage for the creepy movies and books we love today. Check in every Friday this October as we tell Fitz-James O’Brien’s tale of an unusual entity in What Was It?, a story from the spine-tingling collection of works in Horror Stories: Classic Tales from Hoffmann to Hodgson, edited by Darryl Jones. When we last left off, the narrator was headed to bed after a night of opium and philosophical conversation with Dr. Hammond, a friend and fellow boarder at the supposedly haunted house where they are staying.
We parted, and each sought his respective chamber. I undressed quickly and got into bed, taking with me, according to my usual custom, a book, over which I generally read myself to sleep. I opened the volume as soon as I had laid my head upon the pillow, and instantly flung it to the other side of the room. It was Goudon’s ‘History of Monsters,’—a curious French work, which I had lately imported from Paris, but which, in the state of mind I had then reached, was anything but an agreeable companion. I resolved to go to sleep at once; so, turning down my gas until nothing but a little blue point of light glimmered on the top of the tube, I composed myself to rest.
The room was in total darkness. The atom of gas that still remained alight did not illuminate a distance of three inches round the burner. I desperately drew my arm across my eyes, as if to shut out even the darkness, and tried to think of nothing. It was in vain. The confounded themes touched on by Hammond in the garden kept obtruding themselves on my brain. I battled against them. I erected ramparts of would-be blankness of intellect to keep them out. They still crowded upon me. While I was lying still as a corpse, hoping that by a perfect physical inaction I should hasten mental repose, an awful incident occurred. A Something dropped, as it seemed, from the ceiling, plumb upon my chest, and the next instant I felt two bony hands encircling my throat, endeavoring to choke me.
I am no coward, and am possessed of considerable physical strength. The suddenness of the attack, instead of stunning me, strung every nerve to its highest tension. My body acted from instinct, before my brain had time to realize the terrors of my position. In an instant I wound two muscular arms around the creature, and squeezed it, with all the strength of despair, against my chest. In a few seconds the bony hands that had fastened on my throat loosened their hold, and I was free to breathe once more. Then commenced a struggle of awful intensity. Immersed in the most profound darkness, totally ignorant of the nature of the Thing by which I was so suddenly attacked, finding my grasp slipping every moment, by reason, it seemed to me, of the entire nakedness of my assailant, bitten with sharp teeth in the shoulder, neck, and chest, having every moment to protect my throat against a pair of sinewy, agile hands, which my utmost efforts could not confine,—these were a combination of circumstances to combat which required all the strength, skill, and courage that I possessed.
At last, after a silent, deadly, exhausting struggle, I got my assailant under by a series of incredible efforts of strength. Once pinned, with my knee on what I made out to be its chest, I knew that I was victor. I rested for a moment to breathe. I heard the creature beneath me panting in the darkness, and felt the violent throbbing of a heart. It was apparently as exhausted as I was; that was one comfort. At this moment I remembered that I usually placed under my pillow, before going to bed, a large yellow silk pocket-handkerchief. I felt for it instantly; it was there. In a few seconds more I had, after a fashion, pinioned the creature’s arms.
I now felt tolerably secure. There was nothing more to be done but to turn on the gas, and, having first seen what my midnight assailant was like, arouse the household. I will confess to being actuated by a certain pride in not giving the alarm before; I wished to make the capture alone and unaided.
Never losing my hold for an instant, I slipped from the bed to the floor, dragging my captive with me. I had but a few steps to make to reach the gas-burner; these I made with the greatest caution, holding the creature in a grip like a vice. At last I got within arm’s-length of the tiny speck of blue light which told me where the gas-burner lay. Quick as lightning I released my grasp with one hand and let on the full flood of light. Then I turned to look at my captive.
I cannot even attempt to give any definition of my sensations the instant after I turned on the gas. I suppose I must have shrieked with terror, for in less than a minute afterward my room was crowded with the inmates of the house. I shudder now as I think of that awful moment. I saw nothing! Yes; I had one arm firmly clasped round a breathing, panting, corporeal shape, my other hand gripped with all its strength a throat as warm, and apparently fleshly, as my own; and yet, with this living substance in my grasp, with its body pressed against my own, and all in the bright glare of a large jet of gas, I absolutely beheld nothing! Not even an outline,—a vapor!
I do not, even at this hour, realize the situation in which I found myself. I cannot recall the astounding incident thoroughly. Imagination in vain tries to compass the awful paradox.
It breathed. I felt its warm breath upon my cheek. It struggled fiercely. It had hands. They clutched me. Its skin was smooth, like my own. There it lay, pressed close up against me, solid as stone,—and yet utterly invisible!
I wonder that I did not faint or go mad on the instant. Some wonderful instinct must have sustained me; for, absolutely, in place of loosening my hold on the terrible Enigma, I seemed to gain an additional strength in my moment of horror, and tightened my grasp with such wonderful force that I felt the creature shivering with agony.
Just then Hammond entered my room at the head of the household. As soon as he beheld my face—which, I suppose, must have been an awful sight to look at—he hastened forward, crying, ‘Great heaven, Harry! what has happened?’
‘Hammond! Hammond!’ I cried, ‘come here. O, this is awful!
I have been attacked in bed by something or other, which I have hold of; but I can’t see it,—I can’t see it!’
Hammond, doubtless struck by the unfeigned horror expressed in my countenance, made one or two steps forward with an anxious yet puzzled expression. A very audible titter burst from the remainder of my visitors. This suppressed laughter made me furious. To laugh at a human being in my position! It was the worst species of cruelty. Now, I can understand why the appearance of a man struggling violently, as it would seem, with an airy nothing, and calling for assistance against a vision, should have appeared ludicrous. Then, so great was my rage against the mocking crowd that had I the power I would have stricken them dead where they stood.
‘Hammond! Hammond!’ I cried again, despairingly, ‘for God’s sake come to me. I can hold the—the thing but a short while longer. It is overpowering me. Help me! Help me!’
‘Harry,’ whispered Hammond, approaching me, ‘you have been smoking too much opium.’
‘I swear to you, Hammond, that this is no vision,’ I answered, in the same low tone. ‘Don’t you see how it shakes my whole frame with its struggles? If you don’t believe me, convince yourself. Feel it,—touch it.’
Hammond advanced and laid his hand in the spot I indicated. A wild cry of horror burst from him. He had felt it! In a moment he had discovered somewhere in my room a long piece of cord, and was the next instant winding it and knotting it about the body of the unseen being that I clasped in my arms.
‘Harry,’ he said, in a hoarse, agitated voice, for, though he preserved his presence of mind, he was deeply moved, ‘Harry, it’s all safe now. You may let go, old fellow, if you’re tired. The Thing can’t move.’
I was utterly exhausted, and I gladly loosed my hold.
Check back next Friday, 24 October to find out what happens next. Missed a part of the story? Catch up with part 1 and part 2.
Last weekend we were thrilled to see so many of you at the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” The panels and roundtables were full of lively discussions, and the social gatherings provided a great chance to meet fellow oral historians. You can read a recap from Margo Shea, or browse through the Storify below, prepared by Jaycie Vos, to get a sense of the excitement at the meeting. Over the next few weeks, we’ll be sharing some more in-depth blog posts from the meeting, so make sure to check back often.
We look forward to seeing you all next year at the Annual Meeting in Florida. And special thanks to Margo Shea for sending in her reflections on the meeting and to Jaycie Vos (@jaycie_v) for putting together the Storify.
Headline image credit: Madison, Wisconsin cityscape at night, looking across Lake Monona from Olin Park. Photo by Richard Hurd (rahimageworks). CC BY 2.0 via Flickr.
The outbreak of Ebola, in Africa and in the United States, is a stark reminder of the clear and present danger that infection represents in all our lives, and we need reminding. Despite all of our medical advances, more familiar infections still take tens of thousands of American lives each year – and too often these deaths are avoidable.
Hospital infections kill 75,000 Americans a year — more than twice the number of people who die in car crashes. Most people know that motor vehicle deaths could be drastically reduced. What’s not as widely appreciated is that the far greater number of hospital infections could be reduced by up to 70%.
Changes that would reduce infections are evidence-based and scientific, supported by the Centers for Disease Control and Prevention. For example, the campaign against hospital-acquired urinary tract infection — one of the most common hospital infections in the world — seeks to minimize the use of indwelling Foley catheters, a major vector of infection. Nurses who have always relied on Foleys to deal with patients who have urinary incontinence are told to use straight catheters intermittently instead, which increases their workload. Surgeons who are accustomed to placing Foley catheters in their patients for several days after an operation are told to remove the catheter shortly after surgery – or not to use one at all. Similar approaches can be used to reduce other common infections. If we know what needs to be done to lower the rate of hospital infections, why have the many attempts to do so fallen so woefully short?
Our research shows that a major reason is the unwillingness of some nurses and physicians to support the desired new behaviors. We have found that opposition to hospitals’ infection prevention initiatives comes from three groups we call Active Resisters, Organizational Constipators, and Timeservers. We know these types of individuals exist in hospitals because we have seen them in action, and we suspect they can also be found in all types of organizations.
Active resisters refuse to abide by and sometimes campaign against an initiative’s proposed changes. Some active resisters refuse to change a practice they have used for years because they fear it might have a negative impact on their patients’ health. Others resist because they doubt the scientific validity of a change, or because the change is inconvenient. For others it’s simply a matter of ego, as in, “Don’t tell me what to do.” Some ignore the evidence. Many initiatives to prevent urinary tract infection ask nurses to remind physicians when it’s time to remove an indwelling catheter, but many nurses are unwilling to confront physicians – and many physicians are unwilling to be so confronted.
Organizational constipators present a different set of challenges. Most are mid- to upper-level staff members who have nothing against an infection prevention initiative per se but simply enjoy exercising their power. Sometimes they refuse to permit underlings to help with an initiative. Sometimes they simply do nothing, allowing memos and emails to pile up without taking action. While we have met some physicians in this category, we have seen, unfortunately, a surprising number of nursing leaders employ this approach.
Timeservers do the least possible in any circumstance. That applies to every aspect of their work, including preventing infection. A timeserver surgeon may neglect to wash her hands before examining a patient, not because she opposes that key infection prevention requirement but because it’s just easier that way. A timeserver nurse may “forget” to conduct the “sedation vacations” used to assess whether a patient on a mechanical ventilator can be weaned sooner, for the simple reason that sedated patients are less work.
We have learned that overcoming these human-related barriers to improvement requires different styles of engagement.
To win support among the active resisters, we recommend employing data both liberally and strategically. Doctors are trained to respond to facts, and a graph that shows a high rate of infection in their department can help sway them. Sharing research from respected journals describing proven methods of preventing infection can also help overcome concerns. Nurse resisters are similarly impressed by such data, but we find that they are also likely to be convinced by appeals to their concern for their patients’ welfare – a description, for example, of the discomfort the Foley causes their patients.
Organizational constipators and timeservers are more difficult to win over, largely because their negative behavior is an incidental result of their normal operating style. Managers sometimes try to work around the organizational constipators and assign an authority figure to harass the timeservers, but their success is limited. Efforts to fire them can sometimes be difficult.
Hospitals’ administrative and medical leaders often play an important role in successful infection prevention initiatives by emphasizing their approval in their staff encounters, by occasionally attending an infection prevention planning session, and by making adherence to the goals of the initiative a factor in employee performance reviews. Some innovative leaders also give out physician or nurse champion-of-the-year awards that serve the dual purpose of rewarding the healthcare workers who have been helpful in a successful initiative while encouraging others by showing that they, too, could someday receive similar recognition. It may help to include potential obstructors in planning for an infection prevention campaign; the critics help spot weaknesses and are also inclined to go easy on the campaign once it gets underway.
But the leadership of a successful infection prevention project can also come from lower down in a hospital’s hierarchy, with or without the active support of the senior executives. We found the key to a positive result is a culture of excellence, when the hospital staff is fully devoted to patient-centered, high-quality care. Healthcare workers in such hospitals endeavor to treat each patient as a family member. In such institutions, a dedicated nurse can ignite an infection prevention initiative, and the staff’s all-but-universal commitment to patient safety can win over even the timeservers. The closer the nation’s hospitals approach that state of grace, the greater the success they will have in their efforts to lower infection rates.
Preventing infection is a team sport. Cooperation — among doctors, nurses, microbiologists, public health officials, patients, and families — will be required to control the spread of Ebola. Such cooperation is required to prevent more mundane infections as well.
Anti-politics is in the air. There is a prevalent feeling in many societies that politicians are up to no good, that establishment politics are at best irrelevant and at worst corrupt and power-hungry, and that the centralization of power in national parliaments and governments denies the public a voice. Larger organizations fare even worse, with the European Union’s ostensible detachment from and imperviousness to the real concerns of its citizens now its most-trumpeted feature. Discontent and anxiety build up pressure that erupts in the streets from time to time, whether in Tahrir Square or Tottenham. The Scots rail against a mysterious entity called Westminster; UKIP rides on the crest of what it terms patriotism (and others term typical European populism) intimating, as Matthew Goodwin has pointed out in the Guardian, that Nigel Farage “will lead his followers through a chain of events that will determine the destiny of his modern revolt against Westminster.”
At the height of the media interest in Wootton Bassett, when the frequent corteges of British soldiers who were killed in Afghanistan wended their way through the high street while the townspeople stood in silence, its organizers claimed that it was a spontaneous and apolitical display of respect. “There are no politics here,” stated the local MP. Those involved held that the national stratum of politicians was superfluous to the authentic feeling of solidarity that could solely be generated at the grass roots. A clear resistance emerged to national politics trying to monopolize the mourning that only a town at England’s heart could convey.
Academics have been drawn into the same phenomenon. A new Anti-politics and Depoliticization Specialist Group has been set up by the Political Studies Association in the UK dedicated, as it describes itself, to “providing a forum for researchers examining those processes throughout society that seem to have marginalized normative political debates, taken power away from elected politicians and fostered an air of disengagement, disaffection and disinterest in politics.” The term “politics” and what it apparently stands for is undoubtedly suffering from a serious reputational problem.
But all that is based on a misunderstanding of politics. Political activity and thinking isn’t something that happens in remote places and institutions outside the experience of everyday life. It is ubiquitous, rooted in human intercourse at every level. It is not merely an elite activity but one that every one of us engages in consciously or unconsciously in our relations with others: commanding, pleading, negotiating, arguing, agreeing, refusing, or resisting. There is a tendency to insist on politics being mainly about one thing: power, dissent, consensus, oppression, rupture, conciliation, decision-making, the public domain, are some of the competing contenders. But politics is about them all, albeit in different combinations.
It concerns ranking group priorities in terms of urgency or importance—whether the group is a family, a sports club, or a municipality. It concerns attempts to achieve finality in human affairs, attempts always doomed to fail yet epitomised in language that refers to victory, authority, sovereignty, rights, order, persuasion—whether on winning or losing sides of political struggle. That ranges from a constitutional ruling to the exasperated parent trying to end an argument with a “because I say so.” It concerns order and disorder in human gatherings, whether parliaments, trade union meetings, classrooms, bus queues, or terrorist attacks—all have a political dimension alongside their other aspects. That gives the lie to a demonstration being anti-political, when its ends are reform, revolution, or the expression of disillusionment. It concerns devising plans and weaving visions for collectivities. It concerns the multiple languages of support and withholding support that we engage in with reference to others, from loyalty and allegiance through obligation to commitment and trust. And it is manifested through conservative, progressive, or reactionary tendencies that the human personality exhibits.
When those involved in the Wootton Bassett corteges claimed to be non-political, they overlooked their organizational role in making certain that every detail of the ceremony was in place. They elided the expression of national loyalty that those homages clearly entailed. They glossed over the tension between political centre and periphery that marked an asymmetry of power and voice. They assumed, without recognizing, the prioritizing of a particular group of the dead – those that fell in battle.
People everywhere engage in political practices, but they do so in different intensities. It makes no more sense to suggest that we are non-political than to suggest that we are non-psychological. Nor does anti-politics ring true, because political disengagement is still a political act: sometimes vociferously so, sometimes seeking shelter in smaller circles of political conduct. Alongside political philosophy and the history of political thought, social scientists need to explore the features of thinking politically as typical and normal features of human life. Those patterns are always with us, though their cultural forms will vary considerably across and within societies. Being anti-establishment, anti-government, anti-sleaze, even anti-state are themselves powerful political statements, never anti-politics.
Headline image credit: Westminster, by “Stròlic Furlàn” – Davide Gabino. CC-BY-SA-2.0 via Flickr.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
Now that Noughth Week has come to an end and the university Full Term is upon us, I thought it might be an appropriate time to investigate the arcane world of Oxford jargon -- the University of Oxford, that is. New students, or freshers, do not arrive in Oxford but come up; at the end of term they go down (irrespective of where they live).
Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitate interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations, particularly when children are involved.
One creative approach we developed is narrative genomics using drama with provocative characters and dialogue as an interdisciplinary pedagogical approach to bring to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on inherent challenges currently posed by the comprehensive interrogation and analysis of DNA through sequencing the human genome with next generation technologies and illuminates bioethical issues, providing a stage to reflect on the controversies together, and temper the sometimes contentious debates that ensue.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelope genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as to provide reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship and ensure that they are scientifically accurate; we also provide extensive references and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach where bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically, or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
American higher education is at a crossroads. The cost of a college education has made people question the benefits of receiving one. To better understand the issues surrounding the supposed crisis, we asked Goldie Blumenstyk, author of American Higher Education in Crisis: What Everyone Needs to Know, to comment on some of the most hot button topics today.
A discussion on the rising cost of higher education.
What does the future of higher education look like?
Are the salaries of university presidents and coaches too high?
A look into the accountability movement in higher education today.
Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.
From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.
However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).
Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.
And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.
Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.
Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr.
It’s fairly common knowledge that languages, like people, have families. English, for instance, is a member of the Germanic family, with sister languages including Dutch, German, and the Scandinavian languages. Germanic, in turn, is a branch of a larger family, Indo-European, whose other members include the Romance languages (French, Italian, Spanish, and more), Russian, Greek, and Persian.
Being part of a family of course means that you share a common ancestor. For the Romance languages, that mother language is Latin; with the spread and then fall of the Roman empire, Latin split into a number of distinct daughter languages. But what did the Germanic mother language look like? Here there’s a problem, because, although we know that language must have existed, we don’t have any direct record of it.
The earliest Old English written texts date from the 7th century AD, and the earliest Germanic text of any length is a 4th-century translation of the Bible into Gothic, a now-extinct Germanic language. Though impressively old, this text still dates from long after the breakup of the Germanic mother language into its daughters.
How does one go about recovering the features of a language that is dead and gone, and which has left no records of itself in spoken or written form? This is the subject matter of linguistic necromancy – or linguistic reconstruction, as it is more conventionally known.
The enterprise, dubbed “darkest of the dark arts” and “the only means to conjure up the ghosts of vanished centuries” in the epigraph to a chapter of Campbell’s historical linguistics textbook, really got off the ground in the 1800s thanks to the development of a toolkit of techniques known as the comparative method.
Crucial to the comparative method was a revolutionary empirical finding: the regularity of sound change. Though it has wide-reaching implications, the basic finding is simple to grasp. In a nutshell: it’s sounds that change, not words, and when they change, all words which include those sounds are affected.
Let’s take an example. Lots of English words beginning with a p sound have a German counterpart that begins with pf. Here are some of them:
English path: German Pfad
English pepper: German Pfeffer
English pipe: German Pfeife
English pan: German Pfanne
English post: German Pfosten
If the forms of words simply changed at random, these systematic correspondences would be a miraculous coincidence. However, in the light of the regularity of sound change they make perfect sense. Specifically, at some point in the early history of German, the language sounded a lot more like (Old) English. But then the sound p underwent a change to pf at the beginning of words, and all words starting with p were affected.
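The regularity idea can be modelled as a single rule applied across the whole word list. The sketch below is a toy illustration only, under the assumption that the rule fires wherever its condition is met; the English cognates stand in for the older, pre-shift forms, and `apply_sound_change` is a hypothetical helper, not part of any real linguistics library:

```python
# Toy model of a regular sound change: word-initial p- > pf-
# (as in the High German consonant shift). Because sound change
# is regular, the rule applies to every word meeting its condition,
# not to individual words one at a time.

def apply_sound_change(word: str) -> str:
    """Apply the shift p- > pf- at the start of a word."""
    if word.startswith("p"):
        return "pf" + word[1:]
    return word

# English cognates standing in for the older, pre-shift forms.
pre_shift = ["path", "pepper", "pipe", "pan", "post"]
post_shift = [apply_sound_change(w) for w in pre_shift]
print(post_shift)  # ['pfath', 'pfepper', 'pfipe', 'pfan', 'pfost']
```

The point of the toy is that no word is exempt from the rule, which is exactly why the English–German correspondences line up so systematically rather than by miraculous coincidence.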
There’s much more to be said about the regularity of sound change, since it underlies pretty much everything we know about language family groupings. (If you’re interested in finding out more, Guy Deutscher’s book The Unfolding of Language provides an accessible summary.) But for now let’s concentrate on its implications for necromantic purposes, which are immense.
If we want to invoke the words and sounds of a long-dead language like the mother language Proto-Germanic (the ‘proto-’ indicates that the language is reconstructed, rather than directly evidenced in texts), we just need to figure out what changes have happened to the sounds of the daughter languages, and to peel them back one by one like the layers of an onion. Eventually we’ll reach a point where all the daughter languages sound the same; and voilà, we’ve conjured up a proto-language.
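Continuing the toy sketch from before (again, hypothetical illustrative code, not a reconstruction tool), “peeling back” the onion amounts to undoing the known sound changes in reverse chronological order, most recent first:

```python
# Toy sketch: undo attested sound changes in reverse chronological
# order to approximate an earlier form (the "onion" layers).
changes = [("p", "pf")]  # (older, newer) pairs, listed oldest first

def peel_back(word, changes):
    for older, newer in reversed(changes):  # undo most recent change first
        if word.startswith(newer):
            word = older + word[len(newer):]
    return word

earlier = peel_back("pfanne", changes)
# "pfanne" -> "panne": one layer peeled back, closer to the earlier form
```

With a fuller list of changes per daughter language, repeating this until the daughters converge is, in miniature, what reconstructing a proto-form involves.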
There’s more to living languages than just sounds and words though. Living languages have syntax: a structure, a skeleton. By contrast, reconstructed protolanguages tend to look more like ghosts: hauntingly amorphous clouds of words and sounds. There are practical reasons why the reconstruction of proto-syntax has lagged behind. One is simply that our understanding of syntax, in general, has come a long way since the work of the reconstruction pioneers in the 19th century.
Another is that there is nothing quite like the regularity of sound change in syntax: how can we tell which syntactic structures correspond to each other across languages? These problems have led some to be sceptical about the possibility of syntactic reconstruction, or at any rate about its fruitfulness. Nevertheless, progress is being made. To take one example, English is a language that doesn’t like to leave out the subject of a sentence. We say “He speaks Swahili” or “It is raining”, not “Speaks Swahili” or “Is raining”. Though most of the modern Germanic languages behave the same, many other languages, like Italian and Japanese, have no such requirement; speakers can include or omit the subject of the sentence as the fancy takes them. Was Proto-Germanic like English, or like Italian or Japanese, in this respect? Doing a bit of necromancy based on the earliest Germanic written records suggests that Proto-Germanic was, like the latter, quite happy to omit the subject, at least under certain circumstances. Of course, the issue is more complex than that – Italian and Japanese themselves differ with regard to the circumstances under which subjects can be omitted.
Slowly but surely, though, historical linguists are starting to add skeletons to the reanimated spectres of proto-languages.
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started up a new type of research publication at Political Analysis, Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief: only 2-4 pages, or roughly 1,500-3,000 words.
Why is Political Analysis taking this new direction, looking for shorter submissions?
Political Analysis is taking this new direction to publish important results that do not traditionally fit in the longer format of journal articles that are currently the standard in the social sciences, but fit well with the shorter format that is often used in the sciences to convey important new findings. In this regard the role models for Political Analysis Letters are the similar formats used in top general interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to only be 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provides us an opportunity to create an outlet for important work in Political Methodology. It also gives us the opportunity to develop a new format that we hope will enhance the quality and speed of the academic debates in the social sciences.
Checking the website for the Audio Engineering Society (AES) convention in Los Angeles, I took note of the slides promoting the event. Each heading was framed as follows: If it’s about ____________, it’s at AES. The slide show contained nine headings for the upcoming convention (in no particular order, because you start at whatever point in the slide show you happen to land on when you log in to the site).
Archiving & Restoration
Networked Audio
Broadcast & Streaming
Product Design
Recording
Project Studios
Sound for Picture
Live Sound
Game Sound
The list was interesting to me on many levels, but one significant one that struck me immediately was the absence of mixing and mastering (my main areas of work in audio). A relatively short time ago almost half of these categories did not exist. There was no streaming, no project studios, no networked audio and no game sound. So what is the state of affairs for the young audio engineering student or practitioner?
Interestingly, of the four new fields mentioned, three represent diminished opportunities in the field of music recording, while one is a singular beacon of hope.
Streaming audio represents the brave new world of audio delivery systems. As these services continue to capture more of the consumer market share, they continue to diminish artists’ ability to earn a decent living (or pay an accomplished audio engineer). A friend of mine with 3 CD releases recently got his Spotify statement and saw that he had more than 60,000 streams of his music. His check was for $17. CDs don’t pay as well as vinyl records used to, downloads don’t pay as well as CDs, and streaming doesn’t pay as well as downloads (not to mention “file-sharing”, which doesn’t pay anything). Sure, there may be jobs at Pandora and Spotify for a few engineers helping with the infrastructure of audio streaming, but generally streaming is another brick in the wall that is restricting audio jobs by shrinking the earning capacity of recording artists.
Project studios now dominate most recording projects outside the reasonably well-funded major label records, and even most of that work is done in project studios (though they might be quite elaborate facilities). Project studios rarely have spots for interns or assistant engineers, so they provide no entry-level positions for those trying to come up in the engineering ranks. Not only does that limit the available sources of income, but it also prevents the kind of mentoring that actually trains young engineers in the fine points of running sessions. Of course, almost no project studios provide regular, dependable work or any kind of benefits.
Networked audio systems provide new, faster, and more elaborate connectivity of audio using digital technology. While there may be opportunities in the tech realm for engineers designing and building digital audio networks, there is, once again, a shrinking of opportunities for those aspiring to make commercial music recordings. In many instances, these networking systems allow fewer people to do more—a boon only to a small number of audio engineers working with music recordings, who can now do remote recordings without having to be present and without having to employ local recording engineers and studios to complete projects with musicians in other locations.
The one bright spot here is Game Sound. The explosive world of video games is providing many good jobs for audio engineers who want to record music. These recordings have become more interesting and of higher quality, featuring more prominent and talented composers and musicians than virtually any other area of music production. The only reservation here is that the music is intended as secondary to the game play (of course), and there is a preponderance of violent video games and therefore musical styles that tend to fit well into a violent atmosphere. However, this is changing with a much broader array of game types achieving new levels of popularity (Minecraft!).
I do not fault AES for pointing to these areas of interest for audio engineers (other than the apparent absence of mixing and mastering). These are the places where significant activity, development, and change are occurring. They’re just not very encouraging for those of us who became audio engineers because of our deep love of music and our desire to be engaged in its production.
Headline Image: Sound Mixing via CC0 Public Domain via Pixabay
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always chime exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc. and not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its recommended OA requirements for authors. What happened next was hugely significant for the progress of open access. As part of an Act which passed through the US legislature, it was made mandatory for all NIH-funded authors to make their works available 12 months after publication. This was transformative in two ways: it meant thousands of articles published from NIH research became available through PubMed Central (PMC), and perhaps just as importantly it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher – and since its inception in 2000 it was closely watched in the industry to see if it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was now proving to be more than just an interesting side line. It also reflected the big players in the industry starting to take OA very seriously, and has been followed by other acquisitions – for example Nature purchasing Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC they have also massively increased their fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals back to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Typically top journals have tended to have their niche, and be selective. A journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review is methodological (i.e. is the basic science sound) rather than looking for anything else. This enabled PLOS ONE to rapidly turn into the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA which enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal. The price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals and more than any one title it has energised the development of OA, dispelling previously-held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt as time progresses, through 2014’s Higher Education Funding Council (HEFCE) for England policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers – as it directly ties the assessment of a university’s funding to their success in ensuring their authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was believed that studies of common disorders of the nervous system began with Greco-Roman Medicine, for example, epilepsy, “The sacred disease” (Hippocrates) or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law and medicine conveyed in the cuneiform script, impressed into clay tablets with reeds, the earliest form of writing which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire cuneiform writing was replaced by Aramaic and simpler alphabetic scripts and was only revived (translated) by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts located in museums in London, Paris, Berlin and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression and anxiety. For example they described most of the common seizure types we know today e.g. tonic clonic, absence, focal motor, etc, as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion) and even a comprehensive account of schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders e.g. stroke or facial palsy had a physical basis requiring the attention of the physician or asû, using a plant and mineral based pharmacology. Most disorders, such as epilepsy, psychoses and depression were regarded as supernatural due to evil demons and spirits, or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias and psychopathic behaviour were viewed as a mystery, yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these entirely objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. The latter subjective phenomena only became a relatively modern field of description and enquiry in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self awareness, which is central to the concept of “mental illness”, which only became the province of a professional medical discipline, i.e. psychiatry, in the last 200 years.
The theme of this year’s meeting is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combatting corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
This year’s conference highlights include:
This year’s keynote from Lori Damrosch, Hamilton Fish Professor of International Law and Diplomacy, Columbia Law School, and President of the American Society of International Law. “Democratization of Foreign Policy and International Law, 1914-2014” Friday, 1:30PM (Room 2-02A)
Several talks on recent events in Crimea. (Check out our OPIL Debate Map: Ukraine Use of Force, to learn more on the subject in advance.)
“European Union – Challenges or Chaos,” Friday, 9:00AM (Room 2-02A)
“Update on the International Criminal Court’s Crime of Aggression: Considering Crimea,” Friday, 10:45AM (Room 2-02B)
“Self-Determination, Secession, and Non Intervention in the Age of Crimea and Kosovo,” Friday, 4:45PM (Room 2-02B)
The “International Adjudication in the 21st Century” panel, including OUP author Cesare Romano, will discuss the key findings of the recently published The Oxford Handbook of International Adjudication. Friday, 9:00AM (Room 2-01B). (Read up on the topic before the event, with free content from the book.)
Top practitioners in the field discuss “International Investment Arbitration and the Rule of Law”, Friday 4:45PM (Room 2-02A). (Sign up for our Free Investment Claims Webinar on October 20th to brush up on VCLT in BIT arbitrations in time for this panel.)
Looking for career advice? Attend this roundtable discussion on Saturday afternoon “Careers in International Human Rights, International Development, and International Rule of Law,” Saturday, 3:30PM (Room 2-02B)
Fordham Law School is located in the wonderful Lincoln Square neighborhood of New York and just around the corner from some great activities after the conference:
ILW Opening Reception. The wine and cheese reception at the Association of the Bar of the City of New York is open to all ILW attendees. 2nd Floor, Reception Area, ABCNY, Thursday at 8:00PM.
Of course, we hope to see you at Oxford University Press booth. We’ll be offering the chance to browse and buy our new and bestselling titles on display at a 20% conference discount, discover what’s new in Oxford Law Online, and pick up sample copies of our latest law journals.
To follow the latest updates about the ILW Conference as it happens, follow us on Twitter at @OUPIntLaw and the hashtag #ILW2014.
See you there!
Headline image credit: 2011, 62nd St by Cornerstones of New York, CC BY-NC 2.0 via Flickr.
As an Africanist historian committed to reaching broader publics, I was thrilled when the research team for the BBC’s genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved the subject of some of my research, mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about shifting practices and perceptions of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but he discovered along the way that his great grandfather was a British mining accountant who worked in the Gold Coast for close to a decade. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father — Reggie’s great-great grandfather — was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious that frequently appeared in the African-owned Gold Coast press during these years. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. The latter helps to account for why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my American Historical Review article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, albeit imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection and of categorical uncertainty surrounding these relationships is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love?” By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. His insight surely reverberates throughout the episode.
Voting for the 2014 Atlas Place of the Year is now underway. However, you may still be curious about the nominees. What makes them so special? Each year, we put the spotlight on the top locations in the world that make us go, “wow”. For good or for bad, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to know about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December. What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
Heading image: Old Globe by Petar Milošević. CC-BY-SA-3.0 via Wikimedia Commons.
It is a safe bet that the name of Pierre Rolland rings very few bells among the British public. In 2012, Rolland, riding for Team Europcar, finished eighth in the overall final classification of the Tour de France, whilst Sir Bradley Wiggins has since become a household name following his fantastic achievement of being the first British person ever to win the most famous cycle race in the world.
In the world of sport, we remember a winner. But the history of science is often also described in similar terms – as a tale of winners and losers racing to the finish line. Nowhere is this more true than in the story of the discovery of the structure of DNA. When James Watson’s book, The Double Helix, was published in 1968, it depicted science as a frantic and often ruthless race in which the winner clearly took all. In Watson’s account, it was he and his Cambridge colleague Francis Crick who were first to cross the finish line, with their competitors Rosalind Franklin at King’s College, London and Linus Pauling at Caltech, Pasadena trailing in behind.
There is no denying the importance of Watson and Crick’s achievement: their double-helical model of DNA not only answered fundamental questions in biology such as how organisms pass on hereditary traits from one generation to the next but also heralded the advent of genetic engineering and the production of vital new medicines such as recombinant insulin. But it is worth asking whether this portrayal of science as a breathless race to the finish line with only winners and losers, is necessarily an accurate one. And perhaps more importantly, does it actually obscure the way that science really works?
William Astbury. Reproduced with the permission of Leeds University Library
To illustrate this point, it is worth remembering that Watson and Crick obtained a vital clue to solving the double-helix thanks to a photograph taken by the crystallographer Rosalind Franklin. Labelled in her lab notes as ‘Photo 51′, it showed a pattern of black spots arranged in the shape of a cross, formed when X-rays were diffracted by fibres of DNA. The effect of this image on Watson was dramatic. The sight of the black cross, he later said, made his jaw drop and pulse race for he knew that this pattern could only arise from a molecule that was helical in shape.
In recognition of its importance in the discovery of the double-helical structure of DNA, a plaque on the wall outside King’s College, London where Franklin worked now hails ‘Photo 51‘ as being ‘one of the world’s most important photographs’. Yet curiously, neither Watson nor Franklin had been the first to observe this striking cross pattern. For almost a year earlier, the physicist William Astbury working in his lab at Leeds had obtained an almost identical X-ray diffraction pattern of DNA.
Yet despite obtaining this clue that would prove to be so vital to Watson and Crick, Astbury never solved the double-helical structure himself, and whilst the Cambridge duo went on to win the Nobel Prize for their work, Astbury remains largely forgotten.
But to dismiss him as a mere ‘also-ran’ in the race for the double-helix would be both harsh and hasty: the questions that Astbury was asking and the aims of his research were subtly but significantly different to those of Watson and Crick. The Cambridge duo were solely focussed on DNA, whereas Astbury felt that by studying a wide range of biological fibres from wool to bacterial flagella, he might uncover some deep common theme based on molecular shape that could unify the whole of biology. It was this emphasis on the molecular shape of fibres and how these shapes could change that formed his core definition of the new science of ‘molecular biology’ which he helped to found and popularise, and one that has had a profound impact on modern biology and medicine.
On 5th July this year, Leeds will host ‘Le Grand Depart’ – the start of the 2014 Tour de France. As the contestants begin to climb the hills of Yorkshire, each will no doubt harbour dreams of wearing the coveted yellow jersey and all will have their sights firmly fixed on crossing the same ultimate finishing line. At first sight scientific discovery may also appear to be a race towards a single finish line, but in truth it is a much more muddled affair, rather like a badly organised school sports day in which several races, all taking place in different directions and over different distances, become jumbled together. For this reason it makes little sense to think of Astbury as having ‘lost’ the race for DNA to Watson and Crick. That Leeds was chosen to host the start of the 2014 Tour de France is an honour for which the city can take pride, but in the life and work of William Astbury it also has a scientific heritage of which it can be equally proud.
Kersten Hall graduated from St. Anne’s College, Oxford with a degree in biochemistry, before embarking on a PhD at the University of Leeds using molecular biology to study how viruses evade the human immune system. He then worked as a Research Fellow in the School of Medicine at Leeds, during which time he developed a keen interest in the historical and philosophical roots of molecular biology. He is now Visiting Fellow in the School of Philosophy, Religion and History of Science, where his research focuses on the origins of molecular biology and in particular the role of the pioneering physicist William T. Astbury and the work of Sir William and Lawrence Bragg. He is the author of The Man in the Monkeynut Coat.
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhist, Jehovah’s Witness, Mormon, and Baha’i. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self-determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and to how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
The post What is African American religion? appeared first on OUPblog.
Related Stories
If a “revolution” in our field or area of knowledge was ongoing, would we feel it and recognize it? And if so, how?
I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. I have had the feeling that a revolution was ongoing in epidemiology many times. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book Causal Inference by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.
One example: at present, in clinical and epidemiological research, “paradoxes” are being deconstructed every week. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps the clearest way to explain why observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
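The collider mechanism described above is easy to see in a toy simulation. The following sketch is not from the Hernán–Robins text; it is a hypothetical data-generating process with made-up numbers, in which an unmeasured illness and obesity both cause T2D, but only the illness raises mortality. In the whole population obesity and death are unassociated, yet restricting to diabetics (conditioning on the collider) makes obesity look protective:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process (illustration only):
# obesity and an unmeasured illness both cause T2D;
# the illness, not obesity, raises mortality in this toy model.
obesity = rng.binomial(1, 0.3, n)
illness = rng.binomial(1, 0.2, n)
t2d = rng.binomial(1, 0.05 + 0.30 * obesity + 0.40 * illness)
death = rng.binomial(1, 0.05 + 0.30 * illness)  # no obesity term

# Whole population: obesity and death are (correctly) unassociated.
full = death[obesity == 1].mean() - death[obesity == 0].mean()

# Conditioning on the collider T2D: among diabetics, the obese are
# less likely to carry the unmeasured illness ("explaining away"),
# so obesity spuriously appears protective.
d, o = death[t2d == 1], obesity[t2d == 1]
stratum = d[o == 1].mean() - d[o == 0].mean()

print(f"risk difference, everyone:  {full:+.3f}")   # near zero
print(f"risk difference, T2D only:  {stratum:+.3f}")  # clearly negative
```

The negative risk difference in the T2D stratum is entirely an artifact of the selection, since death was generated without any obesity term.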
The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much stricter than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main eclosion took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
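Because DAGs follow strict formal rules, some of their bookkeeping can be mechanized. As a minimal sketch (my own illustration, using the same hypothetical obesity/illness/T2D structure as above, not anything from the texts cited), a DAG can be stored as a set of directed edges, and a path between exposure and outcome can be scanned for colliders, i.e. intermediate nodes into which both neighboring path edges point:

```python
# A toy DAG stored as directed (cause, effect) edges.
edges = {("Obesity", "T2D"), ("Illness", "T2D"), ("Illness", "Death")}

def colliders_on_path(path, edges):
    """Return the intermediate nodes of `path` at which both
    neighboring edges point inward (i.e. the colliders)."""
    found = []
    for i in range(1, len(path) - 1):
        prev_into = (path[i - 1], path[i]) in edges
        next_into = (path[i + 1], path[i]) in edges
        if prev_into and next_into:
            found.append(path[i])
    return found

# The only path from Obesity to Death in this graph passes
# through T2D and Illness:
path = ["Obesity", "T2D", "Illness", "Death"]
print(colliders_on_path(path, edges))  # → ['T2D']
```

Since T2D is a collider, this path is blocked and transmits no association — unless we condition on T2D, which is exactly what restricting an analysis to diabetics does.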
We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.
Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).
Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.
You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published text by T. VanderWeele, Explanation in Causal Inference: Methods for Mediation and Interaction (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (Modern Epidemiology, Lippincott-Raven, 2008), M. Szklo and J. Nieto (Epidemiology: Beyond the Basics, Jones & Bartlett, 2014), or L. Gordis (Epidemiology, Elsevier, 2009).
Finally, another good way to assess what might be changing is to read what gets published in top journals such as Epidemiology, the International Journal of Epidemiology, the American Journal of Epidemiology, or the Journal of Clinical Epidemiology. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?
Feature image credit: Test tubes by PublicDomainPictures. Public Domain via Pixabay.
The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.
For many of us, nature is defined as an outdoor space, untouched by human hands, and a place we escape to for refuge. We often spend time away from our daily routines to be in nature, such as taking a backwoods camping trip, going for a long hike in an urban park, or gardening in our backyard. Think about the last time you were out in nature, what comes to mind? For me, it was a canoe trip with friends. I can picture myself in our boat, the sound of the birds and rustling leaves in the background, the smell of cedars mixed with the clearing morning mist, and the sight of the still waters in front of me. Most of all, I remember a sense of calmness and clarity which I always achieve when I’m in nature.
Nature takes us away from the demands of life, and allows us to concentrate on the world around us with little to no effort. We can easily be taken back to a summer day by the smell of fresh cut grass, and force ourselves to be still to listen to the distant sound of ocean waves. Time in nature has a wealth of benefits from reducing stress, improving mood, increasing attentional capacities, and facilitating and creating social bonds. A variety of work supports nature being healing and health promoting at both an individual level (such as being energized after a walk with your dog) and a community level (such as neighbors coming together to create a local co-op garden). However, it can become difficult to experience the outdoors when we spend most of our day within a built environment.
I’d like you to stop for a moment and look around. What do you see? Are there windows? Are there any living plants or animals? Are the walls white? Do you hear traffic or perhaps the hum of your computer? Are you smelling circulated air? As I write now I hear the buzz of the fluorescent lights above me, and take a deep inhale of the lingering smell from my morning coffee. There is no nature except for the few photographs of the countryside and flowers that I keep taped to my wall. I often feel hypocritical researching nature exposure while sitting in front of a computer screen in my windowless office. But this is the reality for most of us. So how can we tap into the benefits of nature in order to create healthy and healing indoor environments that mimic nature and provide us with the same benefits as being outdoors?
Urban spaces often get a bad rap. Sure, they’re typically overcrowded, high in pollution, and limited in their natural and green spaces, but they also offer us the ability to transform the world around us into something that is meaningful and also health promoting. Beyond architectural features such as skylights, windows, and open air courtyards, we can use ambient features to adapt indoor spaces to replicate the outdoors. The integration of plants, animals, sounds, scents, and textures into our existing indoor environments enables us to create a wealth of natural environments indoors.
Notable examples of indoor nature are potted plants or living walls in office spaces, atriums providing natural light, and large mural landscapes. In fact, much research has shown that the presence of such visual aids provides the same benefits as being outdoors. Incorporating just a few pieces of greenery into your workspace can help increase your productivity, boost your mood, improve your health, and help you concentrate on getting your work done. But being in nature is more than just seeing; it’s experiencing it fully and being immersed in a world that engages all of your senses. The use of natural sounds, scents, and textures (e.g. wooden furniture or carpets that look and feel like grass) provides endless possibilities for creating a natural environment indoors, and encouraging built environments to be therapeutic spaces. The more nature-like the indoor space can be, the more apt it is to elicit the same psychological and physical benefits that being outdoors does. Ultimately, the built environment can engage my senses in a way that brings me back to my canoe trip, and help me feel that same clarity and calmness that I did on the lake.
On a broader level, indoor nature may also be a means of encouraging sustainable and eco-friendly behaviors. With more generations growing up inside, we risk creating a society that is unaware of the value of nature. It’s easy to suggest that the solution to our declining involvement with nature is to just “go outside”; but with today’s busy lifestyle, we cannot always afford the time and money to step away. Integrating nature into our indoor environment is one way to foster the relationship between us and nature, and to encourage a sense of stewardship and appreciation for our natural world. By experiencing the health-promoting and healing properties of nature, we can instill in individuals an appreciation of the significance of our natural world.
As I look around my office I’ve decided I need to take some of my own advice and bring my own little piece of nature inside. I encourage you to think about what nature means to you, and how you can incorporate this meaning into your own space. Does it involve fresh cut flowers? A photograph of your annual family campsite? The sound of birds in the background as you work? Whatever it is, I’m sure it’ll leave you feeling a little bit lighter, and maybe have you working a little bit faster.
Image: World Financial Center Winter Garden by WiNG. CC-BY-3.0 via Wikimedia Commons.
The post Going inside to get a taste of nature appeared first on OUPblog.
We parted, and each sought his respective chamber. I undressed quickly and got into bed, taking with me, according to my usual custom, a book, over which I generally read myself to sleep. I opened the volume as soon as I had laid my head upon the pillow, and instantly flung it to the other side of the room. It was Goudon’s ‘History of Monsters,’—a curious French work, which I had lately imported from Paris, but which, in the state of mind I had then reached, was anything but an agreeable companion. I resolved to go to sleep at once; so, turning down my gas until nothing but a little blue point of light glimmered on the top of the tube, I composed myself to rest.
The room was in total darkness. The atom of gas that still remained alight did not illuminate a distance of three inches round the burner. I desperately drew my arm across my eyes, as if to shut out even the darkness, and tried to think of nothing. It was in vain. The confounded themes touched on by Hammond in the garden kept obtruding themselves on my brain. I battled against them. I erected ramparts of would-be blankness of intellect to keep them out. They still crowded upon me. While I was lying still as a corpse, hoping that by a perfect physical inaction I should hasten mental repose, an awful incident occurred. A Something dropped, as it seemed, from the ceiling, plumb upon my chest, and the next instant I felt two bony hands encircling my throat, endeavoring to choke me.
I am no coward, and am possessed of considerable physical strength. The suddenness of the attack, instead of stunning me, strung every nerve to its highest tension. My body acted from instinct, before my brain had time to realize the terrors of my position. In an instant I wound two muscular arms around the creature, and squeezed it, with all the strength of despair, against my chest. In a few seconds the bony hands that had fastened on my throat loosened their hold, and I was free to breathe once more. Then commenced a struggle of awful intensity. Immersed in the most profound darkness, totally ignorant of the nature of the Thing by which I was so suddenly attacked, finding my grasp slipping every moment, by reason, it seemed to me, of the entire nakedness of my assailant, bitten with sharp teeth in the shoulder, neck, and chest, having every moment to protect my throat against a pair of sinewy, agile hands, which my utmost efforts could not confine,—these were a combination of circumstances to combat which required all the strength, skill, and courage that I possessed.
At last, after a silent, deadly, exhausting struggle, I got my assailant under by a series of incredible efforts of strength. Once pinned, with my knee on what I made out to be its chest, I knew that I was victor. I rested for a moment to breathe. I heard the creature beneath me panting in the darkness, and felt the violent throbbing of a heart. It was apparently as exhausted as I was; that was one comfort. At this moment I remembered that I usually placed under my pillow, before going to bed, a large yellow silk pocket-handkerchief. I felt for it instantly; it was there. In a few seconds more I had, after a fashion, pinioned the creature’s arms.
I now felt tolerably secure. There was nothing more to be done but to turn on the gas, and, having first seen what my midnight assailant was like, arouse the household. I will confess to being actuated by a certain pride in not giving the alarm before; I wished to make the capture alone and unaided.
Never losing my hold for an instant, I slipped from the bed to the floor, dragging my captive with me. I had but a few steps to make to reach the gas-burner; these I made with the greatest caution, holding the creature in a grip like a vice. At last I got within arm’s-length of the tiny speck of blue light which told me where the gas-burner lay. Quick as lightning I released my grasp with one hand and let on the full flood of light. Then I turned to look at my captive.
I cannot even attempt to give any definition of my sensations the instant after I turned on the gas. I suppose I must have shrieked with terror, for in less than a minute afterward my room was crowded with the inmates of the house. I shudder now as I think of that awful moment. I saw nothing! Yes; I had one arm firmly clasped round a breathing, panting, corporeal shape, my other hand gripped with all its strength a throat as warm, and apparently fleshly, as my own; and yet, with this living substance in my grasp, with its body pressed against my own, and all in the bright glare of a large jet of gas, I absolutely beheld nothing! Not even an outline,—a vapor!
I do not, even at this hour, realize the situation in which I found myself. I cannot recall the astounding incident thoroughly. Imagination in vain tries to compass the awful paradox.
It breathed. I felt its warm breath upon my cheek. It struggled fiercely. It had hands. They clutched me. Its skin was smooth, like my own. There it lay, pressed close up against me, solid as stone,—and yet utterly invisible!
I wonder that I did not faint or go mad on the instant. Some wonderful instinct must have sustained me; for, absolutely, in place of loosening my hold on the terrible Enigma, I seemed to gain an additional strength in my moment of horror, and tightened my grasp with such wonderful force that I felt the creature shivering with agony.
Just then Hammond entered my room at the head of the household. As soon as he beheld my face—which, I suppose, must have been an awful sight to look at—he hastened forward, crying, ‘Great heaven, Harry! what has happened?’
‘Hammond! Hammond!’ I cried, ‘come here. O, this is awful! I have been attacked in bed by something or other, which I have hold of; but I can’t see it,—I can’t see it!’
Hammond, doubtless struck by the unfeigned horror expressed in my countenance, made one or two steps forward with an anxious yet puzzled expression. A very audible titter burst from the remainder of my visitors. This suppressed laughter made me furious. To laugh at a human being in my position! It was the worst species of cruelty. Now, I can understand why the appearance of a man struggling violently, as it would seem, with an airy nothing, and calling for assistance against a vision, should have appeared ludicrous. Then, so great was my rage against the mocking crowd that had I the power I would have stricken them dead where they stood.
‘Hammond! Hammond!’ I cried again, despairingly, ‘for God’s sake come to me. I can hold the—the thing but a short while longer. It is overpowering me. Help me! Help me!’
‘Harry,’ whispered Hammond, approaching me, ‘you have been smoking too much opium.’
‘I swear to you, Hammond, that this is no vision,’ I answered, in the same low tone. ‘Don’t you see how it shakes my whole frame with its struggles? If you don’t believe me, convince yourself. Feel it,— touch it.’
Hammond advanced and laid his hand in the spot I indicated. A wild cry of horror burst from him. He had felt it! In a moment he had discovered somewhere in my room a long piece of cord, and was the next instant winding it and knotting it about the body of the unseen being that I clasped in my arms.
‘Harry,’ he said, in a hoarse, agitated voice, for, though he preserved his presence of mind, he was deeply moved, ‘Harry, it’s all safe now. You may let go, old fellow, if you’re tired. The Thing can’t move.’
I was utterly exhausted, and I gladly loosed my hold.
Headline image credit: Green Scream by Matt Coughlin, CC 2.0 via Flickr.
The post A Halloween horror story : What was it? Part 3 appeared first on OUPblog.
Last weekend we were thrilled to see so many of you at the 2014 Oral History Association (OHA) Annual Meeting, “Oral History in Motion: Movements, Transformations, and the Power of Story.” The panels and roundtables were full of lively discussions, and the social gatherings provided a great chance to meet fellow oral historians. You can read a recap from Margo Shea, or browse through the Storify below, prepared by Jaycie Vos, to get a sense of the excitement at the meeting. Over the next few weeks, we’ll be sharing some more in depth blog posts from the meeting, so make sure to check back often.
We look forward to seeing you all next year at the Annual Meeting in Florida. And special thanks to Margo Shea for sending in her reflections on the meeting and to Jaycie Vos (@jaycie_v) for putting together the Storify.
Headline image credit: Madison, Wisconsin cityscape at night, looking across Lake Monona from Olin Park. Photo by Richard Hurd (rahimageworks). CC BY 2.0 via Flickr.
The post Recap of the 2014 OHA Annual Meeting appeared first on OUPblog.
The outbreak of Ebola, in Africa and in the United States, is a stark reminder of the clear and present danger that infection represents in all our lives, and we need reminding. Despite all of our medical advances, more familiar infections still take tens of thousands of American lives each year – and too often these deaths are avoidable.
Hospital infections kill 75,000 Americans a year — more than twice the number of people who die in car crashes. Most people know that motor vehicle deaths could be drastically reduced. What’s not as widely appreciated is that the far greater number of hospital infections could be reduced by up to 70%.
Changes that would reduce infections are evidence-based and scientific, supported by the Centers for Disease Control and Prevention. For example, the campaign against hospital-acquired urinary tract infection — one of the most common hospital infections in the world — seeks to minimize the use of internal, Foley catheters, a major vector of infection. Nurses who have always relied on Foleys to deal with patients who have urinary incontinence are told to use straight catheters intermittently instead, which increases their workload. Surgeons who are accustomed to placing Foley catheters in their patients for several days after an operation are told to remove the catheter shortly after surgery – or not to use one at all. Similar approaches can be used to reduce other common infections. If we know what needs to be done to lower the rate of hospital infections, why have the many attempts to do so fallen so woefully short?
Our research shows that a major reason is the unwillingness of some nurses and physicians to support the desired new behaviors. We have found that opposition to hospitals’ infection prevention initiatives comes from three groups we call Active Resisters, Organizational Constipators, and Timeservers. We know these types of individuals exist in hospitals because we have seen them in action, and we suspect they can also be found in all types of organizations.
Active resisters refuse to abide by and sometimes campaign against an initiative’s proposed changes. Some active resisters refuse to change a practice they have used for years because they fear it might have a negative impact on their patients’ health. Others resist because they doubt the scientific validity of a change, or because the change is inconvenient. For others it’s simply a matter of ego, as in, “Don’t tell me what to do.” Some ignore the evidence. Many initiatives to prevent urinary tract infection ask nurses to remind physicians when it’s time to remove an indwelling catheter, but many nurses are unwilling to confront physicians – and many physicians are unwilling to be so confronted.
Organizational constipators present a different set of challenges. Most are mid- to upper-level staff members who have nothing against an infection prevention initiative per se but simply enjoy exercising their power. Sometimes they refuse to permit underlings to help with an initiative. Sometimes they simply do nothing, allowing memos and emails to pile up without taking action. While we have met some physicians in this category, we have seen, unfortunately, a surprising number of nursing leaders employ this approach.
Timeservers do the least possible in any circumstance. That applies to every aspect of their work, including preventing infection. A timeserver surgeon may neglect to wash her hands before examining a patient, not because she opposes that key infection prevention requirement but because it’s just easier that way. A timeserver nurse may “forget” to conduct “sedation vacations” for patients on mechanical breathing machines, pauses in sedation used to assess whether the patient can be weaned from the ventilator sooner, for the simple reason that sedated patients are less work.
We have learned that overcoming these human-related barriers to improvement requires different styles of engagement.
To win support among the active resisters, we recommend employing data both liberally and strategically. Doctors are trained to respond to facts, and a graph that shows a high rate of infection in their department can help sway them. Sharing research from respected journals describing proven methods of preventing infection can also help overcome concerns. Nurse resisters are similarly impressed by such data, but we find that they are also likely to be convinced by appeals to their concern for their patients’ welfare – a description, for example, of the discomfort the Foley causes their patients.
Organizational constipators and timeservers are more difficult to win over, largely because their negative behavior is an incidental result of their normal operating style. Managers sometimes try to work around the organizational constipators and assign an authority figure to harass the timeservers, but their success is limited. Even efforts to fire them can prove difficult.
Hospitals’ administrative and medical leaders often play an important role in successful infection prevention initiatives by emphasizing their approval in their staff encounters, by occasionally attending an infection prevention planning session, and by making adherence to the goals of the initiative a factor in employee performance reviews. Some innovative leaders also give out physician or nurse champion-of-the-year awards that serve the dual purpose of rewarding the healthcare workers who have been helpful in a successful initiative while encouraging others by showing that they, too, could someday receive similar recognition. It may help to include potential obstructors in planning for an infection prevention campaign; the critics help spot weaknesses and are also inclined to go easy on the campaign once it gets underway.
But the leadership of a successful infection prevention project can also come from lower down in a hospital’s hierarchy, with or without the active support of the senior executives. We found that the key to a positive result is a culture of excellence, in which the hospital staff is fully devoted to patient-centered, high-quality care. Healthcare workers in such hospitals endeavor to treat each patient as a family member. In such institutions, a dedicated nurse can ignite an infection prevention initiative, and the staff’s all-but-universal commitment to patient safety can win over even the timeservers. The closer the nation’s hospitals approach that state of grace, the greater the success they will have in their efforts to lower infection rates.
Preventing infection is a team sport. Cooperation — among doctors, nurses, microbiologists, public health officials, patients, and families — will be required to control the spread of Ebola. Such cooperation is required to prevent more mundane infections as well.
The post What will it take to reduce infections in the hospital? appeared first on OUPblog.
Anti-politics is in the air. There is a prevalent feeling in many societies that politicians are up to no good, that establishment politics are at best irrelevant and at worst corrupt and power-hungry, and that the centralization of power in national parliaments and governments denies the public a voice. Larger organizations fare even worse, with the European Union’s ostensible detachment from and imperviousness to the real concerns of its citizens now its most-trumpeted feature. Discontent and anxiety build up pressure that erupts in the streets from time to time, whether in Tahrir Square or Tottenham. The Scots rail against a mysterious entity called Westminster; UKIP rides on the crest of what it terms patriotism (and others term typical European populism) intimating, as Matthew Goodwin has pointed out in the Guardian, that Nigel Farage “will lead his followers through a chain of events that will determine the destiny of his modern revolt against Westminster.”
At the height of the media interest in Wootton Bassett, when the frequent corteges of British soldiers who were killed in Afghanistan wended their way through the high street while the townspeople stood in silence, its organizers claimed that it was a spontaneous and apolitical display of respect. “There are no politics here,” stated the local MP. Those involved held that the national stratum of politicians was superfluous to the authentic feeling of solidarity that could solely be generated at the grass roots. A clear resistance emerged to national politics trying to monopolize the mourning that only a town at England’s heart could convey.
Academics have been drawn in to the same phenomenon. A new Anti-politics and Depoliticization Specialist Group has been set up by the Political Studies Association in the UK dedicated, as it describes itself, to “providing a forum for researchers examining those processes throughout society that seem to have marginalized normative political debates, taken power away from elected politicians and fostered an air of disengagement, disaffection and disinterest in politics.” The term “politics” and what it apparently stands for is undoubtedly suffering from a serious reputational problem.
But all that is based on a misunderstanding of politics. Political activity and thinking isn’t something that happens in remote places and institutions outside the experience of everyday life. It is ubiquitous, rooted in human intercourse at every level. It is not merely an elite activity but one that every one of us engages in consciously or unconsciously in our relations with others: commanding, pleading, negotiating, arguing, agreeing, refusing, or resisting. There is a tendency to insist on politics being mainly about one thing: power, dissent, consensus, oppression, rupture, conciliation, decision-making, the public domain, are some of the competing contenders. But politics is about them all, albeit in different combinations.
It concerns ranking group priorities in terms of urgency or importance—whether the group is a family, a sports club, or a municipality. It concerns attempts to achieve finality in human affairs, attempts always doomed to fail yet epitomised in language that refers to victory, authority, sovereignty, rights, order, persuasion—whether on winning or losing sides of political struggle. That ranges from a constitutional ruling to the exasperated parent trying to end an argument with a “because I say so.” It concerns order and disorder in human gatherings, whether parliaments, trade union meetings, classrooms, bus queues, or terrorist attacks—all have a political dimension alongside their other aspects. That gives the lie to a demonstration being anti-political, when its ends are reform, revolution, or the expression of disillusionment. It concerns devising plans and weaving visions for collectivities. It concerns the multiple languages of support and withholding support that we engage in with reference to others, from loyalty and allegiance through obligation to commitment and trust. And it is manifested through conservative, progressive, or reactionary tendencies that the human personality exhibits.
When those involved in the Wootton Bassett corteges claimed to be non-political, they overlooked their organizational role in making certain that every detail of the ceremony was in place. They elided the expression of national loyalty that those homages clearly entailed. They glossed over the tension between political centre and periphery that marked an asymmetry of power and voice. They assumed, without recognizing, the prioritizing of a particular group of the dead – those that fell in battle.
People everywhere engage in political practices, but they do so in different intensities. It makes no more sense to suggest that we are non-political than to suggest that we are non-psychological. Nor does anti-politics ring true, because political disengagement is still a political act: sometimes vociferously so, sometimes seeking shelter in smaller circles of political conduct. Alongside political philosophy and the history of political thought, social scientists need to explore the features of thinking politically as typical and normal features of human life. Those patterns are always with us, though their cultural forms will vary considerably across and within societies. Being anti-establishment, anti-government, anti-sleaze, even anti-state are themselves powerful political statements, never anti-politics.
Headline image credit: Westminster, by “Stròlic Furlàn” – Davide Gabino. CC-BY-SA-2.0 via Flickr.
The post The chimera of anti-politics appeared first on OUPblog.
Biology Week is an annual celebration of the biological sciences that aims to inspire and engage the public in the wonders of biology. The Society of Biology created this awareness day in 2012 to give everyone the chance to learn and appreciate biology, the science of the 21st century, through varied, nationwide events. Our belief that access to education and research changes lives for the better naturally supports the values behind Biology Week, and we are excited to be involved in it year on year.
Biology, as the study of living organisms, has an incredibly vast scope. We’ve identified some key figures from the last couple of centuries who traverse the range of biology: from physiology to biochemistry, sexology to zoology. You can read their stories by checking out our Biology Week 2014 gallery below. These biologists, in various different ways, have had a significant impact on the way we understand and interact with biology today. Whether they discovered dinosaurs or formed the foundations of genetic engineering, their stories have plenty to inspire, encourage, and inform us.
If you’d like to learn more about these key figures in biology, you can explore the resources available on our Biology Week page, or sign up to our e-alerts to stay one step ahead of the next big thing in biology.
Headline image credit: Marie Stopes in her laboratory, 1904, by Schnitzeljack. Public domain via Wikimedia Commons.
The post Biologists that changed the world appeared first on OUPblog.
Now that Noughth Week has come to an end and the university Full Term is upon us, I thought it might be an appropriate time to investigate the arcane world of Oxford jargon -- the University of Oxford, that is. New students, or freshers, do not arrive in Oxford but come up; at the end of term they go down (irrespective of where they live).
The post Battels and subfusc: the language of Oxford appeared first on OUPblog.
Many bioethical challenges surround the promise of genomic technology and the power of genomic information — providing a rich context for critically exploring underlying bioethical traditions and foundations, as well as the practice of multidisciplinary advisory committees and collaborations. Controversial issues abound that call into question the core values and assumptions inherent in bioethics analysis and thus necessitates interprofessional inquiry. Consequently, the teaching of genomics and contemporary bioethics provides an opportunity to re-examine our disciplines’ underpinnings by casting light on the implications of genomics with novel approaches to address thorny issues — such as determining whether, what, to whom, when, and how genomic information, including “incidental” findings, should be discovered and disclosed to individuals and their families, and whose voice matters in making these determinations particularly when children are involved.
One creative approach we developed is narrative genomics using drama with provocative characters and dialogue as an interdisciplinary pedagogical approach to bring to life the diverse voices, varied contexts, and complex processes that encompass the nascent field of genomics as it evolves from research to clinical practice. This creative educational technique focuses on inherent challenges currently posed by the comprehensive interrogation and analysis of DNA through sequencing the human genome with next generation technologies and illuminates bioethical issues, providing a stage to reflect on the controversies together, and temper the sometimes contentious debates that ensue.
As a bioethics teaching method, narrative genomics highlights the breadth of individuals affected by next-gen technologies — the conversations among professionals and families — bringing to life the spectrum of emotions and challenges that envelope genomics. Recent controversies over genomic sequencing in children and consent issues have brought fundamental ethical theses to the stage to be re-examined, further fueling our belief in drama as an interdisciplinary pedagogical approach to explore how society evaluates, processes, and shares genomic information that may implicate future generations. With a mutual interest in enhancing dialogue and understanding about the multi-faceted implications raised by generating and sharing vast amounts of genomic information, and with diverse backgrounds in bioethics, policy, psychology, genetics, law, health humanities, and neuroscience, we have been collaboratively weaving dramatic narratives to enhance the bioethics educational experience within varied professional contexts and a wide range of academic levels to foster interprofessionalism.
Dramatizations of fictionalized individual, familial, and professional relationships that surround the ethical landscape of genomics create the potential to stimulate bioethical reflection and new perceptions amongst “actors” and the audience, sparking the moral imagination through the lens of others. By casting light on all “the storytellers” and the complexity of implications inherent with this powerful technology, dramatic narratives create vivid scenarios through which to imagine the challenges faced on the genomic path ahead, critique the application of bioethical traditions in context, and re-imagine alternative paradigms.
Building upon the legacy of using case vignettes as a clinical teaching modality, and inspired by “readers’ theater”, “narrative medicine,” and “narrative ethics” as approaches that helped us expand the analyses to implications of genomic technologies, our experience suggests similar value for bioethics education within the translational research and public policy domain. While drama has often been utilized in academic and medical settings to facilitate empathy and spotlight ethical and legal controversies such as end-of-life issues and health law, to date there appears to be few dramatizations focusing on next-generation sequencing (NGS) in genomic research and medicine.
We initially collaborated on the creation of a short vignette play in the context of genomic research and the informed consent process that was performed at the NHGRI-ELSI Congress by a geneticist, genetic counselor, bioethicists, and other conference attendees. The response by “actors” and audience fueled us to write many more plays of varying lengths on different ethical and genomic issues, as well as to explore the dialogues of existing theater with genetic and genomic themes — all to be presented and reflected upon by interdisciplinary professionals in the bioethics and genomics community at professional society meetings and academic medical institutions nationally and internationally.
Because narrative genomics is a pedagogical approach intended to facilitate discourse, as well as to prompt reflection on the interrelatedness of the cross-disciplinary issues posed, we ground our genomic plays in current scholarship, ensure that they are scientifically accurate, provide extensive references, and pose focused bioethics questions that can complement and enhance the classroom experience.
In a similar vein, bioethical controversies can also be brought to life with this approach, in which bioethics teaching incorporates dramatizations and excerpts from existing theatrical narratives, whether to highlight bioethics issues thematically or to illuminate the historical path to the genomics revolution and other medical innovations from an ethical perspective.
Varying iterations of these dramatic narratives have been experienced (read, enacted, witnessed) by bioethicists, policy makers, geneticists, genetic counselors, other healthcare professionals, basic scientists, lawyers, patient advocates, and students to enhance insight and facilitate interdisciplinary and interprofessional dialogue.
Dramatizations embedded in genomic narratives illuminate the human dimensions and complexity of interactions among family members, medical professionals, and others in the scientific community. By facilitating discourse and raising more questions than answers on difficult issues, narrative genomics links the promise and concerns of next-gen technologies with a creative bioethics pedagogical approach for learning from one another.
Heading image: Andrzej Joachimiak and colleagues at Argonne’s Midwest Center for Structural Genomics deposited the consortium’s 1,000th protein structure into the Protein Data Bank. CC-BY-SA-2.0 via Wikimedia Commons.
The post Illuminating the drama of DNA: creating a stage for inquiry appeared first on OUPblog.
American higher education is at a crossroads. The cost of a college education has made people question the benefits of receiving one. To better understand the issues surrounding the supposed crisis, we asked Goldie Blumenstyk, author of American Higher Education in Crisis: What Everyone Needs to Know, to comment on some of the most hot button topics today.
A discussion on the rising cost of higher education.
What does the future of higher education look like?
Are the salaries of university presidents and coaches too high?
A look into the accountability movement in higher education today.
Featured image credit: Grads with diplomas by Saint Louis University Plus Memorial Library. CC BY-NC-SA 2.0 via Flickr.
The post Is American higher education in crisis? appeared first on OUPblog.
Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.
From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.
However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).
Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally discontinuous. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.
And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.
Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.
Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr
The post Efficient causation: Our debt to Aristotle and Hume appeared first on OUPblog.
It’s fairly common knowledge that languages, like people, have families. English, for instance, is a member of the Germanic family, with sister languages including Dutch, German, and the Scandinavian languages. Germanic, in turn, is a branch of a larger family, Indo-European, whose other members include the Romance languages (French, Italian, Spanish, and more), Russian, Greek, and Persian.
Being part of a family of course means that you share a common ancestor. For the Romance languages, that mother language is Latin; with the spread and then fall of the Roman empire, Latin split into a number of distinct daughter languages. But what did the Germanic mother language look like? Here there’s a problem, because, although we know that language must have existed, we don’t have any direct record of it.
The earliest Old English written texts date from the 7th century AD, and the earliest Germanic text of any length is a 4th-century translation of the Bible into Gothic, a now-extinct Germanic language. Though impressively old, this text still dates from long after the breakup of the Germanic mother language into its daughters.
How does one go about recovering the features of a language that is dead and gone, and which has left no records of itself in spoken or written form? This is the subject matter of linguistic necromancy – or linguistic reconstruction, as it is more conventionally known.
The enterprise, dubbed “darkest of the dark arts” and “the only means to conjure up the ghosts of vanished centuries” in the epigraph to a chapter of Campbell’s historical linguistics textbook, really got off the ground in the 19th century thanks to the development of a toolkit of techniques known as the comparative method.
Crucial to the comparative method was a revolutionary empirical finding: the regularity of sound change. Though it has wide-reaching implications, the basic finding is simple to grasp. In a nutshell: it’s sounds that change, not words, and when they change, all words which include those sounds are affected.
Let’s take an example. Lots of English words beginning with a p sound have a German counterpart that begins with pf. Here are some of them:
If the forms of words simply changed at random, these systematic correspondences would be a miraculous coincidence. However, in the light of the regularity of sound change they make perfect sense. Specifically, at some point in the early history of German, the language sounded a lot more like (Old) English. But then the sound p underwent a change to pf at the beginning of words, and all words starting with p were affected.
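The mechanics of a regular sound change can be sketched in a few lines of code. This is an illustrative sketch only: the pre-shift forms below are invented stand-ins, not attested Old English words. The point is simply that the rule applies to every word matching the pattern, not to individual words one at a time:

```python
def apply_sound_change(word, old, new):
    """A regular sound change: every word beginning with `old` shifts to `new`."""
    if word.startswith(old):
        return new + word[len(old):]
    return word

# Hypothetical pre-shift forms (invented for illustration)
early_forms = ["pad", "peper", "pund", "land"]

# Word-initial p -> pf, as in the early history of German
later_forms = [apply_sound_change(w, "p", "pf") for w in early_forms]

print(later_forms)  # ['pfad', 'pfeper', 'pfund', 'land']
```

Note that "land", which does not begin with the affected sound, passes through untouched, while every p-initial word shifts.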
There’s much more to be said about the regularity of sound change, since it underlies pretty much everything we know about language family groupings. (If you’re interested in finding out more, Guy Deutscher’s book The Unfolding of Language provides an accessible summary.) But for now let’s concentrate on its implications for necromantic purposes, which are immense.
If we want to invoke the words and sounds of a long-dead language like the mother language Proto-Germanic (the ‘proto-’ indicates that the language is reconstructed, rather than directly evidenced in texts), we just need to figure out what changes have happened to the sounds of the daughter languages, and to peel them back one by one like the layers of an onion. Eventually we’ll reach a point where all the daughter languages sound the same; and voilà, we’ve conjured up a proto-language.
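The onion-peeling procedure can likewise be sketched: given a list of known sound changes in chronological order, undo the most recent one first. The change list here is invented for illustration and is not a real Germanic sound-change chronology:

```python
# Invented change list for illustration, oldest first: (earlier sound, later sound)
changes = [
    ("th", "d"),   # hypothetical: word-initial th -> d
    ("p", "pf"),   # word-initial p -> pf
]

def reconstruct(word, changes):
    """Undo each sound change in reverse chronological order."""
    for earlier, later in reversed(changes):
        if word.startswith(later):
            word = earlier + word[len(later):]
    return word

print(reconstruct("pfund", changes))  # -> 'pund'
print(reconstruct("ding", changes))   # -> 'thing'
```

Running each daughter-language form back through its own change list, and finding the point where the results converge, is the essence of reconstructing a proto-form.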
There’s more to living languages than just sounds and words though. Living languages have syntax: a structure, a skeleton. By contrast, reconstructed protolanguages tend to look more like ghosts: hauntingly amorphous clouds of words and sounds. There are practical reasons why the reconstruction of proto-syntax has lagged behind. One is simply that our understanding of syntax, in general, has come a long way since the work of the reconstruction pioneers in the 19th century.
Another is that there is nothing quite like the regularity of sound change in syntax: how can we tell which syntactic structures correspond to each other across languages? These problems have led some to be sceptical about the possibility of syntactic reconstruction, or at any rate about its fruitfulness. Nevertheless, progress is being made. To take one example, English is a language that doesn’t like to leave out the subject of a sentence. We say “He speaks Swahili” or “It is raining”, not “Speaks Swahili” or “Is raining”. Though most of the modern Germanic languages behave the same, many other languages, like Italian and Japanese, have no such requirement; speakers can include or omit the subject of the sentence as the fancy takes them. Was Proto-Germanic like English, or like Italian or Japanese, in this respect? Doing a bit of necromancy based on the earliest Germanic written records suggests that Proto-Germanic was, like the latter, quite happy to omit the subject, at least under certain circumstances. Of course the issue is more complex than that – Italian and Japanese themselves differ with regard to the circumstances under which subjects can be omitted.
Slowly but surely, though, historical linguists are starting to add skeletons to the reanimated spectres of proto-languages.
The post Linguistic necromancy: a guide for the uninitiated appeared first on OUPblog.
There’s a lot of interesting social science research these days. Conference programs are packed, journals are flooded with submissions, and authors are looking for innovative new ways to publish their work.
This is why we have started up a new type of research publication at Political Analysis, Letters.
Research journals have a limited number of pages, and many authors struggle to fit their research into the “usual formula” for a social science submission — 25 to 30 double-spaced pages, a small handful of tables and figures, and a page or two of references. Many, and some say most, papers published in social science could be much shorter than that “usual formula.”
We have begun to accept Letters submissions, and we anticipate publishing our first Letters in Volume 24 of Political Analysis. We will continue to accept submissions for research articles, though in some cases the editors will suggest that an author edit their manuscript and resubmit it as a Letter. Soon we will have detailed instructions on how to submit a Letter, the expectations for Letters, and other information, on the journal’s website.
We have named Justin Grimmer and Jens Hainmueller, both at Stanford University, to serve as Associate Editors of Political Analysis — with their primary responsibility being Letters. Justin and Jens are accomplished political scientists and methodologists, and we are quite happy that they have agreed to join the Political Analysis team. Justin and Jens have already put in a great deal of work helping us develop the concept, and working out the logistics for how we integrate the Letters submissions into the existing workflow of the journal.
I recently asked Justin and Jens a few quick questions about Letters, to give them an opportunity to get the word out about this new and innovative way of publishing research in Political Analysis.
Political Analysis is now accepting the submission of Letters as well as Research Articles. What are the general requirements for a Letter?
Letters are short reports of original research that move the field forward. This includes, but is not limited to, new empirical findings, methodological advances, theoretical arguments, as well as comments on or extensions of previous work. Letters are peer reviewed and subjected to the same standards as Political Analysis research articles. Accepted Letters are published in the electronic and print versions of Political Analysis and are searchable and citable just like other articles in the journal. Letters should focus on a single idea and are brief — only 2-4 pages, or roughly 1,500-3,000 words.
Why is Political Analysis taking this new direction, looking for shorter submissions?
Political Analysis is taking this new direction to publish important results that do not traditionally fit in the longer format of journal articles that is currently the standard in the social sciences, but fit well with the shorter format often used in the sciences to convey important new findings. In this regard, the role models for Political Analysis Letters are the similar formats used in top general-interest science journals like Science, Nature, or PNAS, where significant findings are often reported in short reports and articles. Our hope is that these shorter papers will also facilitate an ongoing and faster-paced dialogue about research findings in the social sciences.
What is the main difference between a Letter and a Research Paper?
The most obvious difference is the length and focus. Letters are intended to be only 2-4 pages, while a standard research article might be 30 pages. The difference in length means that Letters are going to be much more focused on one important result. A Letter won’t have the long literature review that is standard in political science articles, and will have a much briefer introduction, conclusion, and motivation. This does not mean that the motivation is unimportant; it just means that the motivation has to briefly and clearly convey the general relevance of the work and how it moves the field forward. A Letter will typically have 1-3 small display items (figures, tables, or equations) that convey the main results, and these have to be well crafted to clearly communicate the main takeaways from the research.
If you had to give advice to an author considering whether to submit their work to Political Analysis as a Letter or a Research Article, what would you say?
Our first piece of advice would be to submit your work! We’re open to working with authors to help them craft their existing research into a format appropriate for letters. As scholars are thinking about their work, they should know that Letters have a very high standard. We are looking for important findings that are well substantiated and motivated. We also encourage authors to think hard about how they design their display items to clearly convey the key message of the Letter. Lastly, authors should be aware that a significant fraction of submissions might be desk rejected to minimize the burden on reviewers.
You both are Associate Editors of Political Analysis, and you are editing the Letters. Why did you decide to take on this professional responsibility?
Letters provides us an opportunity to create an outlet for important work in Political Methodology. It also gives us the opportunity to develop a new format that we hope will enhance the quality and speed of the academic debates in the social sciences.
Headline image credit: Letters, CC0 via Pixabay.
The post Political Analysis Letters: a new way to publish innovative research appeared first on OUPblog.
Checking the website for the Audio Engineering Society (AES) convention in Los Angeles, I took note of the slides promoting the event. Each heading was framed as follows: If it’s about ____________, it’s at AES. The slide show contained nine headings that are to be a part of the upcoming convention (in no particular order, because you start at whatever point in the slide show you happen to land on when you visit the site).
The list was interesting to me on many levels, but one significant one that struck me immediately was the absence of mixing and mastering (my main areas of work in audio). A relatively short time ago almost half of these categories did not exist. There was no streaming, no project studios, no networked audio and no game sound. So what is the state of affairs for the young audio engineering student or practitioner?
Interestingly, of the four new fields mentioned, three represent diminished opportunities in the field of music recording, while one offers a singular beacon of hope.
Streaming audio represents the brave new world of audio delivery systems. As these services continue to capture more of the consumer market share, they continue to diminish artists’ ability to earn a decent living (or pay an accomplished audio engineer). A friend of mine with 3 CD releases recently got his Spotify statement and saw that he had more than 60,000 streams of his music. His check was for $17. CDs don’t pay as well as vinyl records used to, downloads don’t pay as well as CDs, and streaming doesn’t pay as well as downloads (not to mention “file-sharing”, which doesn’t pay anything). Sure, there may be jobs at Pandora and Spotify for a few engineers helping with the infrastructure of audio streaming, but generally streaming is another brick in the wall that is restricting audio jobs by shrinking the earning capacity of recording artists.
Project studios now dominate most recording projects outside of reasonably well-funded major-label records, and even much of that work is done in project studios (though they might be quite elaborate facilities). Project studios rarely have spots for interns or assistant engineers, so they provide no entry-level positions for those trying to come up in the engineering ranks. Not only does that limit the available sources of income, but it also prevents the kind of mentoring that actually trains young engineers in the fine points of running sessions. Of course, almost no project studios provide regular, dependable work or any kind of benefits.
Networked audio systems provide new, faster, and more elaborate connectivity of audio using digital technology. While there may be opportunities in the tech realm for engineers designing and building digital audio networks there is, once again, a shrinking of opportunities for those aspiring to making commercial music recordings. In many instances, these networking systems allow fewer people to do more—a boon only to a small number of audio engineers working with music recordings who can now do remote recordings without having to be present and without having to employ local recording engineers and studios to complete projects with musicians in other locations.
The one bright spot here is game sound. The explosive world of video games is providing many good jobs for audio engineers who want to record music. These recordings have become more interesting and of higher quality, featuring more prominent and talented composers and musicians than virtually any other area of music production. The only reservation here is that the music is intended to be secondary to the game play (of course), and there is a preponderance of violent video games and therefore musical styles that tend to fit well into a violent atmosphere. However, this is changing, with a much broader array of game types achieving new levels of popularity (Minecraft!).
I do not fault AES for pointing to these areas of interest for audio engineers (other than the apparent absence of mixing and mastering). These are the places where significant activity, development, and change are occurring. They’re just not very encouraging for those of us who became audio engineers because of our deep love of music and our desire to be engaged in its production.
Headline Image: Sound Mixing via CC0 Public Domain via Pixabay
The post 2014 AES Convention: shrinking opportunities in music audio appeared first on OUPblog.
In 2014 Oxford University Press celebrates ten years of open access (OA) publishing. In that time open access has grown massively as a movement and an industry. Here we look back at five key moments which have marked that growth.
2004/05 – Nucleic Acids Research (NAR) converts to OA
At first glance it might seem parochial to include this here, but as Rich Roberts noted on this blog in 2012, Nucleic Acids Research’s move to open access was truly ‘momentous’. To put it in context, in 2004 NAR was OUP’s biggest owned journal and it was not at all clear that many of the elements were in place to drive the growth of OA. But in 2004/2005 NAR moved from being free to publish to free to read – with authors now supporting the journal financially by paying APCs (Article Processing Charges). No wonder Roberts adds that it was ‘with great trepidation’ that OUP and the editors made the change. Roberts needn’t have worried — NAR’s switch has been a huge success — its impact factor has increased, and submissions, which could have fallen off a cliff, have continued to climb. As with anything, there are elements of the NAR model which couldn’t be replicated now, but NAR helped show the publishing world in particular that OA could work. It’s saying something that it’s only ten years on, with the transition of Nature Communications to OA, that any journal near NAR’s size has made the switch.
2008 – National Institutes of Health (NIH) Mandate Introduced
Open access presents huge opportunities for research funders; the removal of barriers to access chimes perfectly with most funders’ aim to disseminate the fruits of their research as widely as possible. But as both the NIH and Wellcome, amongst others, have found out, author interests don’t always chime exactly with theirs. Authors have other pressures to consider – primarily career development – and that means publishing in the best journal, the journal with the highest impact factor, etc. and not necessarily the one with the best open access options. So it was that in 2008 the NIH found it was getting a very low rate of compliance with its recommended OA requirements for authors. What happened next was hugely significant for the progress of open access. As part of an Act which passed through the US legislature, it was made mandatory for all NIH-funded authors to make their works available 12 months after publication. This was transformative in two ways: it meant thousands of articles published from NIH research became available through PubMed Central (PMC), and perhaps just as importantly it legitimised government intervention in OA policy, setting a precedent for future developments in Europe and the United Kingdom.
2008 – Springer buys BioMed Central (BMC)
BioMed Central was the first for-profit open access publisher – and since its inception in 2000 it was closely watched in the industry to see if it could make OA ‘work’. When it was purchased by one of the world’s largest publishers, and when that company’s CEO declared that OA was now a ‘sustainable part of STM publishing’, it was a pretty clear sign to the rest of the industry, and all OA-watchers, that the upstart business model was now proving to be more than just an interesting sideline. It also reflected the big players in the industry starting to take OA very seriously, and has been followed by other acquisitions – for example Nature purchasing Frontiers in early 2013. The integration of BMC into Springer has happened gradually over the past five years, and has also been marked by a huge expansion of OA at the parent company. Springer was one of the first subscription publishers to embrace hybrid OA, in 2004, but since acquiring BMC it has also massively increased its fully OA publishing. It seems bizarre to think that back in 2008 there were even some who feared the purchase was aimed at moving all BMC’s journals back to subscription access.
2007 on – Growth of PLOS ONE
The Public Library of Science (PLOS) started publishing open access journals back in 2003, but while its journals quickly developed a reputation for high-quality publishing, the not-for-profit struggled to succeed financially. The advent of PLOS ONE changed all that. PLOS ONE has been transformative for several reasons, most notably its method of peer review. Typically top journals have tended to have their niche, and be selective. A journal on carcinogens would be unlikely to accept a paper about molecular biology, and it would only accept a paper on carcinogens if it was seen to be sufficiently novel and interesting. PLOS ONE changed that. It covers every scientific field, and its peer review is methodological (i.e. is the basic science sound) rather than looking for anything else. This enabled PLOS ONE to rapidly turn into the biggest journal in the world, publishing a staggering 31,500 papers in 2013 alone. PLOS ONE’s success cannot be solely attributed to its OA nature, but it was being OA which enabled PLOS ONE to become the ‘megajournal’ we know today. It would simply not be possible to bring such scale to a subscription journal. The price would balloon beyond the reach of even the biggest library budget. PLOS ONE has spawned a rash of similar journals and more than any one title it has energised the development of OA, dispelling previously-held notions of what could and couldn’t be done in journals publishing.
2012 – The ‘Finch’ Report
It’s difficult to sum up the vast impact of the Finch Report on journals publishing in the UK. The product of a group chaired by the eponymous Dame Janet Finch, the report, by way of two government investigations, catalysed a massive investment in gold open access (funded by APCs) from the UK government, crystallised by Research Councils UK’s OA policy. In setting the direction clearly towards gold OA, ‘Finch’ led to a huge number of journals changing their policies to accommodate UK researchers, and the establishment of OA policies, departments, and infrastructure at academic institutions and publishers across the UK and beyond. The wide-ranging policy implications of ‘Finch’ continue to be felt as time progresses, through 2014’s Higher Education Funding Council (HEFCE) for England policy, through research into the feasibility of OA monographs, and through deliberations in other jurisdictions over whether to follow the UK route to open access. HEFCE’s OA mandate in particular will prove incredibly influential for UK researchers – as it directly ties the assessment of a university’s funding to their success in ensuring their authors publish OA. The mainstream media attention paid to ‘Finch’ also brought OA publishing into the public eye in a way never seen before (or since).
Headline image credit: Storm of Stars in the Trifid Nebula. NASA/JPL-Caltech/UCLA
The post Five key moments in the Open Access movement in the last ten years appeared first on OUPblog.
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine: for example, epilepsy, “the sacred disease” of Hippocrates, or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law, and medicine, conveyed in cuneiform script impressed into clay tablets with reeds; this was the earliest form of writing, which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire, cuneiform writing was replaced by Aramaic and simpler alphabetic scripts, and was only deciphered by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts located in museums in London, Paris, Berlin and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression and anxiety. For example they described most of the common seizure types we know today e.g. tonic clonic, absence, focal motor, etc, as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion) and even a comprehensive account of schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders e.g. stroke or facial palsy had a physical basis requiring the attention of the physician or asû, using a plant and mineral based pharmacology. Most disorders, such as epilepsy, psychoses and depression were regarded as supernatural due to evil demons and spirits, or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias and psychopathic behaviour were viewed as a mystery, yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these entirely objective Babylonian descriptions of neuropsychiatric disorders is the absence of any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. The latter subjective phenomena only became a field of description and enquiry relatively recently, in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness”; the latter only became the province of a professional medical discipline, psychiatry, in the last 200 years.
The post Neurology and psychiatry in Babylon appeared first on OUPblog.
The 2014 International Law Weekend Annual Meeting is taking place this month at Fordham Law School, in New York City (24-25 October 2014).
The theme of this year’s meeting is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combatting corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
This year’s conference highlights include:
This year we are excited to see a number of OUP authors sitting on panels, including: Cesare Romano, editor of The Oxford Handbook of International Adjudication (with Karen J. Alter, and Yuval Shany); Ryan Goodman, author of the ASIL award winning book Socializing States: Promoting Human Rights through International Law (with Derek Jinks); August Reinisch, editor of The Privileges and Immunities of International Organizations in Domestic Courts; Jose E. Alvarez, author of The Evolving International Investment Regime (with Karl P. Sauvant); Ruti G. Teitel, author of Globalizing Transitional Justice: Contemporary Essays; Daniel H. Joyner, author of Interpreting the Nuclear Non-Proliferation Treaty; and Philip Alston, author of International Human Rights (with Ryan Goodman), to name a few.
For the full International Law Weekend 2014 schedule of events, visit ILSA and American Branch of the International Law Association websites.
Fordham Law School is located in the wonderful Lincoln Square neighborhood of New York and just around the corner from some great activities after the conference:
Of course, we hope to see you at Oxford University Press booth. We’ll be offering the chance to browse and buy our new and bestselling titles on display at a 20% conference discount, discover what’s new in Oxford Law Online, and pick up sample copies of our latest law journals.
To follow the latest updates about the ILW Conference as it happens, follow us on Twitter at @OUPIntLaw and the hashtag #ILW2014.
See you there!
Headline image credit: 2011, 62nd St by Cornerstones of New York, CC BY-NC 2.0 via Flickr.
The post Preparing for the International Law Weekend 2014 appeared first on OUPblog.
As an Africanist historian committed to reaching broader publics, I was thrilled when the research team for the BBC’s genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved the subject of some of my research, mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about shifting practices and perceptions of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but he discovered along the way that his great grandfather was a British mining accountant who worked in the Gold Coast for close to a decade. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father — Reggie’s great-great grandfather — was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious that frequently appeared in the African-owned Gold Coast press during these years. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. The latter helps to account for why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my American Historical Review article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, albeit imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection, and of the categorical uncertainty surrounding these relationships, is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love.” To “endeavor to trace its imperfections, its perversions” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. His insight surely reverberates throughout the episode.
All images courtesy of Carina Ray.
The post Race, sex, and colonialism appeared first on OUPblog.
Voting for the 2014 Atlas Place of the Year is now underway. However, you may still be curious about the nominees. What makes them so special? Each year, we put the spotlight on the top locations in the world that make us go, “wow.” For good or for bad, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to know about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December. What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
Heading image: Old Globe by Petar Milošević. CC-BY-SA-3.0 via Wikimedia Commons.
The post Place of the Year 2014: behind the longlist appeared first on OUPblog.