The recent release of The Imitation Game has revealed the important role crosswords played in the recruitment of code-breakers at Bletchley Park. In response to complaints that its crosswords were too easy, The Daily Telegraph organised a contest in which entrants attempted to solve a puzzle in less than 12 minutes. Successful competitors subsequently found themselves being approached by the War Office, and later working as cryptographers at Bletchley Park.
The birth of the crossword
The crossword was the invention of Liverpool émigré Arthur Wynne, whose first puzzle appeared in the New York World in 1913. This initial foray was christened a Word-Cross; the instruction in subsequent issues to ‘Find the missing cross words’ led to the birth of the cross-word. Although Wynne’s invention was initially greeted with scepticism, by the 1920s it had established itself as a popular pastime, entertaining and frustrating generations of solvers, solutionists, puzzle-heads, and cruciverbalists (Latin for ‘crossworders’).
Crosswords consist of a grid made up of black and white boxes, in which the answers, also known as lights, are to be written. The term light derives from the word’s wider use to refer to facts or suggestions which help to explain, or ‘cast light upon’, a problem. The puzzle consists of a series of clues, a word that derives from Old English cleowen ‘ball of thread’. Since a ball of thread could be used to help guide someone out of a maze – just as Ariadne’s thread came to Theseus’s aid in the Minotaur’s labyrinth – it developed the figurative sense of a piece of evidence leading to a solution, especially in the investigation of a crime. The spelling changed from clew to clue under the influence of French in the seventeenth century; the same shift affected words like blew, glew, rew, and trew.
Anagrams, homophones, and Spoonerisms: clues in crosswords
In the earliest crosswords the clue consisted of a straightforward synonym (Greek ‘with name’) – this type is still popular in concise or so-called quick crosswords. A later development saw the emergence of the cryptic clue (from a Greek word meaning ‘hidden’), where, in addition to a definition, another route to the answer is concealed within a form of wordplay. Wordplay devices include the anagram, from a Greek word meaning ‘transposition of letters’, and the charade, from a French word referring to a type of riddle in which each syllable of a word, or a complete word, is described, or acted out – as in the game charades. A well-known example, by prolific Guardian setter Rufus, is ‘Two girls, one on each knee’ (7). Combining two girls’ names, Pat and Ella, gives you a word for the kneecap: PATELLA.
Punning on similar-sounding words, or homophones (Greek ‘same sound’), is a common trick. A reference to Spooner requires a solver to transpose the initial sounds of two or more words; this derives from a supposed predisposition to such slips of the tongue in the speech of Reverend William Archibald Spooner (1844–1930), Warden of New College, Oxford, whose alleged Spoonerisms include a toast to ‘our queer dean’ and upbraiding a student who ‘hissed all his mystery lectures’. Other devious devices of misdirection include reversals, double definitions, containers (where all or part of a word must be placed within another), and words hidden inside others, or between two or more words. In the type known as &lit. (short for ‘& literally so’), the whole clue serves as both definition and wordplay, as in this clue by Rufus: ‘I’m a leader of Muslims’. Here the wordplay gives IMA+M (the leader, i.e. first letter, of Muslims), while the whole clue stands as the definition.
Crossword compilers and setters
Crossword compilers, or setters, traditionally remain anonymous (Greek ‘without name’), or assume pseudonyms (Greek ‘false name’). Famous exponents of the art include Torquemada and Ximenes, who assumed the names of Spanish inquisitors; Afrit, the name of a mythological Arabic demon hidden in that of the setter A. F. Ritchie; and Araucaria, the Latin name for the monkey puzzle tree. Some crosswords conceal a name or message within the grid, perhaps along the diagonal, or using the unchecked letters (or unches), which do not cross with other words in the grid. This is known as a nina, a term deriving from the practice of the American cartoonist Al Hirschfeld of hiding the name of his daughter Nina in his illustrations.
If you’re a budding code-cracker and fancy pitting your wits against the cryptographers of Bletchley Park, you can find the original Telegraph puzzle here.
But remember, you only have 12 minutes to solve it.
When patients are discharged from the intensive care unit it’s great news for everyone. However, it doesn’t necessarily mean the road to recovery is straight. As breakthroughs and new technology increase the survival rate for highly critical patients, the number of possible further complications rises, meaning life after the ICU can be complex. Joe Hitchcock from Oxford University Press’s medical publishing team spoke to Dr. Robert D. Stevens, Associate Professor at Johns Hopkins University School of Medicine, to find out more.
Can you tell us a little about your career?
As a junior doctor in the intensive care unit, I observed that prowess in resuscitation is a double-edged sword. We were getting better and better at promoting survival, but at what cost in the long term? I decided I would dedicate my career to the recovery process that follows severe illnesses and injuries. Currently, my team has several cohort studies under way in human subjects with head injury, stroke, and sepsis. We’re looking at their long-term outcomes and also imaging their brains. I have a laboratory in which we are studying a range of neurologic readouts in mice following brain injury. We’re looking at the biology of neuronal plasticity and studying stem cells as a treatment to promote recovery of function.
What is Post-ICU medicine and what does it aim to achieve?
Medicine is increasingly a victim of its own successes. People are surviving complex and terrifying illnesses, which only years ago would almost certainly have been fatal. This means there is an ever-growing population of “survivors”. Like survivors of cancer, survivors of intensive care bring with them an entirely new set of clinical problems, demanding new approaches. We propose Post-ICU Medicine as an umbrella term for this new domain of medical practice and research, which is specifically concerned with the biology, diagnosis and treatment of illnesses and disabilities resulting from critical illness.
What do you mean by the “legacy” of critical illnesses?
The “legacy” of critical illness refers to what people “carry with them” after living through a life-threatening illness in the intensive care unit (ICU). It is the sum of consequences, both physical and mental, some temporary, others permanent, which unfold in the weeks, months, and years after someone is discharged from the ICU.
In what ways might a patient’s post-ICU experience differ from public/idealized expectations?
There is a widely held perception, or perhaps an anticipation, that acute and severe illnesses, such as sepsis or respiratory failure, are a zero-sum game: You may die from this illness, but if you survive you have a good chance of recovering completely and of going on with your life as if nothing had happened. This notion has been turned on its head. We know now that the post-ICU experience presents physical and psychological challenges for a high proportion of patients. Even the most fortunate, those we might regard as having recovered successfully, often acknowledge problems months after they have left the hospital. They report that they feel weak, have difficulties concentrating, are impulsive, anxious or depressed. When tested formally, they often score below population means on tests of memory, attention, and functional status.
Have you observed patterns in the way patients recover?
I do not know that there are any easily classifiable patterns. There are countless possible trajectories of recovery which we are only beginning to characterize with some degree of scientific rigor. In reality, just as each patient is biologically unique, so too is his or her recovery. One of the main tasks of Post-ICU Medicine is to identify and validate markers (e.g. genetic variants, protein expression) that allow us to predict and track recovery patterns with a much higher level of confidence and reliability.
How do you assess and treat patients who have a multitude of Post-ICU conditions, psychological and physical?
Ideally, a single provider would be able to follow and treat patients in the post-ICU period. However, the range of different problems — neurologic, cognitive, psychological, cardiac, pulmonary, renal, musculoskeletal, digestive, nutritional, endocrine, social, economic — with which these patients present is beyond the scope of even a very knowledgeable practitioner. Some groups that specialize in post-ICU follow-up care have adopted a different approach, in which patients are evaluated by a multi-disciplinary “Recovery Team” with a wide array of minimally-overlapping knowledge and skills. The latter may include internists, specialists in rehabilitation, psychiatrists, neuropsychologists, neurologists, physical therapists, occupational therapists, orthopaedic surgeons, rheumatologists, and social workers. Patients recovering from critical illness are evaluated periodically and referred to the different members of the Recovery Team depending on clinical symptoms and signs. While evidence is mounting regarding the benefits of an integrated post-ICU Recovery Team approach, such interventions are resource-intensive and costly and are not currently available to the vast majority of recovering post-ICU patients.
Is it possible to accurately predict patient rehabilitation and recovery trajectories?
This is the “holy grail” of post-ICU medicine, and even of critical care medicine more generally. We desperately need discriminative methods to predict recovery trajectories. Current predictive approaches rely on multiple logistic regression models often using a mix of demographic and clinical severity variables. These models are terribly inaccurate, to the point of being quite useless in the clinical setting. New approaches are needed which analyse large biological datasets – patterns of gene and protein expression, changes in the microbiome, changes in carbohydrate and lipid metabolism, alterations in brain functional and metabolic activity. The great hope is that models emerging from these more sophisticated data sets will allow individualized or personalized approaches to outcome prediction and treatment.
If recovery is considered a gradated process, when is a patient “cured”?
The World Health Organization states that physical and mental well-being are a right of all human beings. It is likely that the insults and injuries suffered in the ICU can never be completely healed or cured. However, the good news is that some ICU survivors achieve astonishing levels of recovery. We need to study these individuals – the ones who do very well and surpass all expectations for recovery – as they seem to have biological or psychological characteristics (e.g. resilience factors, motivation) which set them apart. Knowing more about these characteristics may help us treat those with less favorable recovery profiles.
What might post-ICU medicine look like in the distant future?
I believe that mortality will continue to decline for a range of illnesses and injuries encountered in the ICU. The key task will be to maximize health status in those who survive. I expect that major discoveries will be made regarding organ-specific patterns of gene and protein expression and molecular signalling which drive post-injury recovery versus failure — and that this knowledge will enable novel treatment strategies. I anticipate that important advances will be made in the regeneration of tissues and organs using stem cell and tissue engineering approaches.
As anyone knows who has looked at the newspapers over the festive season, 2015 is a bumper year for anniversaries: among them Magna Carta (800 years), Agincourt (600 years), and Waterloo (200 years). But it is January which sees the first of 2015’s major commemorations, for it is fifty years since Sir Winston Churchill died (on the 24th) and received a magnificent state funeral (on the 30th). As Churchill himself had earlier predicted, he died on just the same day as his father, Lord Randolph Churchill, had done, in 1895, exactly seventy years before.
The arrangements for Churchill’s funeral, codenamed ‘Operation Hope Not’, had long been in the planning, which meant that Churchill would receive the grandest obsequies afforded to any commoner since the funerals of Nelson and Wellington. And unlike Magna Carta or Agincourt or Waterloo, there are many of us still alive who can vividly remember those sad yet stirring events of half a century ago. My generation (I was born in 1950) grew up in what were, among other things, the sunset years of Churchillian apotheosis. They may, as Lord Moran’s diary makes searingly plain, have been sad and enfeebled years for Churchill himself, but they were also years of unprecedented acclaim and veneration. During the last decade of his life, he was the most famous man alive. On his ninetieth birthday, thousands of greeting cards were sent, addressed to ‘The Greatest Man in the World, London’, and they were all delivered to Churchill’s home. During his last days, when he lay dying, there were many who found it impossible to contemplate the world without him, just as Queen Victoria had wondered, at the time of the Duke of Wellington’s death in 1852, how Britain would manage without him.
Like all such great ceremonial occasions, the funeral itself had many meanings, and for those of us who watched it on television, by turns enthralled and tearful, it has also left many memories. In one guise, it was the final act of homage to the man who had been described as ‘the saviour of his country’, and who had lived a life so full of years and achievement and honour and controversy that it was impossible to believe anyone in Britain would see his like again. But it was also, and in a rather different emotional and historical register, not only the last rites of the great man himself, but also a requiem for Britain as a great power. While Churchill might have saved his country during the Second World War, he could not preserve its global greatness thereafter. It was this sorrowful realization that had darkened his final years, just as his funeral, attended by so many world leaders and heads of state, was the last time that a British figure could command such global attention and recognition. (The turnout for Margaret Thatcher’s funeral, in 2013, was nothing like as illustrious.) These multiple meanings made the ceremonial the more moving, just as there were many episodes which made it unforgettable: the bearer party struggling and straining to carry the huge, lead-lined coffin up the steps of St Paul’s; Clement Attlee—Churchill’s former political adversary—old and frail, but determined to be there as one of the pallbearers, sitting on a chair outside the west door brought especially for him; the cranes of the London docks dipping in salute, as Churchill’s coffin was borne up the Thames from Tower Pier to Waterloo Station; and the funeral train, hauled by a steam engine of the Battle of Britain class, named Winston Churchill, steaming out of the station.
For many of us, the funeral was made the more memorable by Richard Dimbleby’s commentary. Already stricken with cancer, he must have known that this would be the last he would deliver for a great state occasion (he would, indeed, be dead before the year was out), and this awareness of his own impending mortality gave to his commentary a tone of tender resignation that he had never quite achieved before. As his son, Jonathan, would later observe in his biography of his father, ‘Richard Dimbleby’s public was Churchill’s public, and he had spoken their emotions.’
Fifty years on, the intensity of those emotions cannot be recovered, but many events have been planned to commemorate Churchill’s passing, and to ponder the nature of his legacy. Two years ago, a committee was put together, consisting of representatives of the many institutions and individuals that constitute the greater Churchill world, both in Britain and around the world, which it has been my privilege to chair. Significant events are planned for 30 January: in Parliament, where a wreath will be laid; on the River Thames, where Havengore, the ship that bore Churchill’s coffin, will retrace its journey; and at Westminster Abbey, where there will be a special evensong. It will be a moving and resonant day, and the prelude to many other events around the country and around the world. Will any other British prime minister be so vividly and gratefully remembered fifty years after his—or her—death?
Headline image credit: Franklin D. Roosevelt and Winston Churchill, New Bond Street, London. Sculpted by Lawrence Holofcener. Public domain via Wikimedia Commons.
Though he’s largely forgotten today, Walter Savage Landor was one of the major authors of his time—of both his times, in fact, for he was long-lived enough to produce major writing during both the Romantic and the Victorian eras. He kept writing and publishing promiscuously through his long life (he died in his ninetieth year), which puts him in a unique category. Maybe the problem is that he outlived his own reputation. Byron, Shelley, and Keats all died young, and this fact somehow seals in their importance as poets. Landor’s close friend Southey died at the beginning of the 1840s. Landor lived on, writing and publishing poetry, prose, and drama, in English and Latin. He now forged friendships with men like Robert Browning—who was deeply influenced by Landor’s writing—John Forster and Charles Dickens (Dickens named his second son Walter Savage Landor Dickens in his friend’s honour). His Victorian reputation was higher than his sales; and if we’re puzzled by how completely his literary reputation was eclipsed during the 20th century, that may in part simply be a function of his prolixity. Landor’s Collected Works was published between 1927 and 1936 in sixteen fat volumes; and even that capacious edition doesn’t by any means contain everything Landor published. It omits, for instance, his voluminous Latin writing—for Landor was the last English writer to produce a substantial body of work in that dead language. In late life he once said ‘I am sometimes at a loss for an English word; for a Latin—never!’
His most substantial prose writings were the Imaginary Conversations: dozens and dozens of prose dialogues between famous historical figures, and occasionally between fictionalised versions of living individuals, varying in length from a few pages each to seventy or eighty. The prose is exquisite, balanced, beautifully mannered and expressed and full of potent epigrams and apothegms on art, society, history, morals and religion. Nobody reads the Imaginary Conversations any more. Then there are the epics—his masterpiece, Gebir (1798), an heroic poem of immense ambition, was greeted by bafflement and ridicule on its initial publication. Landor’s experimental epic idiom was simply too obscure for his readers even to understand—though Lamb claimed the poem has ‘lucid interludes’, and Shelley loved it. Critic William Gifford was less kind: he called the poem ‘a jumble of incomprehensible trash; the effusion of a mad and muddy brain.’ Landor decided to address the question of the poem’s obscurity the best way he knew: by translating the entire epic into Latin (Gebirus, 1803). Ah, those were the days!
He wrote shoals of beautiful lyrics and elegies. He wrote volumes-full of plays, all cod-Shakespearian blank-verse dramas. He wrote historical novels, one of which (Pericles and Aspasia, 1836) is very good. He wrote classical idylls, pastoral poetry—he was a passionate gardener—epigrams and epitaphs in English and Latin. The sheer amount of work he produced may explain the decline in his reputation; new readers surveying the cliff-face of text to be climbed may find it off-putting.
It’s worth the ascent, though. Landor was a choleric individual, given to sudden rages, whilst also magnanimous, kind-hearted and loyal to his friends. Dickens wrote him into Bleak House as the character Boythorn; and a Boythorn-ish energy and vitality very often breaks through the classical refinement of the verse. Unhappily married (he and his wife separated in 1835), he lived through a series of towering, unrequited passions for other, married women. This hopelessness, paradoxically, gives force to some of the best poetry Landor ever wrote: love poems in which the impossibility of love only magnifies the intensity of affection. It’s an idea Landor understands better almost than any other writer: that the strongest feelings are predicated upon absence rather than presence. Here’s his short lyric ‘Dirce’ (1831):
Stand close around, ye Stygian set,
With Dirce in one boat convey’d,
Or Charon, seeing, may forget
That he is old, and she a shade.
This says that Dirce is so beautiful that, were he to see her, Charon might ‘forget himself’, and presumably ignore the obstacles of his own dotage and the fact that she is ‘a shade’ to make erotic advances. But in fact the ‘forgetting’ in this lyric involves a much more complex mode of amnesia. It’s tempting to read the poem as being about a particular affect: the melancholy, hopeless desire of an old man for the ideal of youthful female beauty. Desire haunted by the sense that, really, it would be better not to feel desire at all—that to desire is in some sense to ‘forget yourself.’ That idiom is an interesting one, actually; as if an old man feeling sexual desire is in some sense ‘forgetting’ not just that he is old, and that young girls aren’t interested in clapped-out old codgers, but more crucially forgetting that he isn’t the sort of person who feels in that way at all. Perhaps we tend to think of desire not as something to be remembered or forgotten, but as something experienced directly. In its compact way this poem suggests otherwise.
Renunciation is another of Landor’s perennial themes. One of his most famous quatrains runs:
I strove with none, for none was worth my strife;
Nature I loved; and next to Nature, Art.
I warmed both hands before the fire of life;
It sinks, and I am ready to depart.
Written in 1849, on the occasion of Landor’s 74th birthday, this has a certain clean dignity, both stylistically and in terms of what it is saying; although it takes part of its force from the knowledge that (as I mention above) Landor actually strove with people all the time, all through his life: personally, cholerically, in law courts, in print and face-to-face. The second line of the poem, by (it seems to me) rather pointedly omitting ‘people’ from the things that Landor has spent his life loving, rather reinforces this notion. One consequence of a man, particularly a large man like Landor, standing in front of the fire to warm his hands is to block off the heat from everybody else in the room. And that seems appropriate too, somehow.
Featured image credit: ‘Inscription from Walter Savage Landor (1775-1864) to Robert Browning (1812-1889)’ by Provenance Online Project. CC-BY-2.0 via Flickr
For our second blog post of 2015, we’re looking back at a great article from Katie Kuszmar in The Oral History Review (OHR), “From Boat to Throat: How Oral Histories Immerse Students in Ecoliteracy and Community Building” (OHR 41.2). In the article, Katie discussed a research trip she and her students used to record the oral histories of local fishing practices and to learn about sustainable fishing and consumption. We followed up with her over email to see what we could learn from high school oral historians, and what she has been up to since the article came out. Enjoy the article, and check out her current work at Narrability.com.
In the article, you mentioned that your students’ youthful curiosity, or lack of inhibition, helped them get answers to tough questions. Can you think of particular moments where this made a difference? Were there any difficulties you didn’t expect, working with high school oral historians?
One particular moment was at the end of the trip. Our final interview was with the Monterey Bay Aquarium’s (MBA) Seafood Watch public relations coordinator, who was kind enough to arrange the fisheries historian interviews and offered to be one of the interviewees as well. When we finally interviewed the coordinator, the most burning question the students had was whether or not Seafood Watch worked directly with fishermen. The students didn’t like her answer. She let us know that fishermen are welcome to approach Seafood Watch and that Seafood Watch is interested in fishermen, but they didn’t work directly with fishermen in setting the standards for their sustainable seafood guidelines. The students seemed to think that taking sides with fishermen was the way to react. When we left the interview they were conflicted. The Monterey Bay Aquarium is a well-respected organization for young people in the area. The aquarium itself is full of nostalgic memories for most students in the region, who visit the aquarium frequently on field trips or on vacation. How could such a beloved establishment not consider the voices of fishermen, for whom the students had just developed a newfound respect? It was a big learning moment about bureaucracy, research, empathetic listening, and the usefulness of oral history.
After the interview, when the students cooled off, we discussed how the dynamics in an interview can change when personal conflicts arise. The narrator may even change her story and tone because of the interviewer’s biases. We explored several essential questions that I would now use for discussion before interviews were to occur, for I was learning too. Some questions that we considered were: When you don’t agree with your narrator, how do you ask questions that will keep the communication safe and open?
How do you set aside your own beliefs from the narrator, and why is this important when collecting oral history? In other words, how do you take the ego out of it?
The students were given a learning opportunity from which I hoped we all could gain insight. We discussed how if you can capture in your interview the narrator’s perspective (even if different than your own or other narrators for that matter), then the audience will be able to see discrepancies in the narratives and gather the evidence they need to engage with the issues. Hearing that Seafood Watch doesn’t work with fishermen might potentially help an audience to ask questions on a larger public scale. Considering oral history’s usefulness in engaging the public, inspiring advocacy, and questioning bureaucracy might be a powerful way for students to engage in the process without worrying about trying to prove their narrators wrong or telling the audience what to think. Oral history has power in this way: voices can illuminate the issues without the need for strong editorializing. This narrative power can be studied beforehand with samples of oral history, as it can also be a great way for students to reflect metacognitively on what they have participated in and how they might want to extend their learning experiences into the real world. Voice of Witness (VOW) contends that students who engage in oral history are “history makers.” What a powerful way to learn!
How did this project start? Did you start with wanting to do oral history with your students, or were you more interested in exploring sustainability and fall into oral history as a method?
Being a fisherwoman myself and having just started commercial fishing with my husband, who is a fishmonger, I found my two worlds of fishing and teaching oral history colliding. Having taught English for ten years out of a love of storytelling, I have long been interested in creating experiential learning opportunities for students concerning where food comes from and sustainable food hubs.
Through a series of uncanny events connecting fishing and oral history, the project seemed to fall into place. I first attended an oral history for educators training through a collaborative pilot program created by VOW and Facing History and Ourselves (FHAO). After the training, I mentored ten seniors at my school to produce oral history Senior Service Learning Projects that ended in a public performance at a local art museum’s performance space. VOW was integral in my first year’s experience with oral history education. I still work with VOW and sit on their Education Advisory Board, which helps me to continue my engagement in oral history education.
In the same year as the pilot program with VOW, I attended the annual California Association of Teachers of English conference, at which the National Oceanic and Atmospheric Administration’s (NOAA) Voices of the Bay (VOB) program coordinator offered a training. The training offered curriculum strategies in marine ecology, fishing, economics, and basic oral history skill-building. To record interviews, NOAA would help arrange interviews with local fishermen in classrooms or at nearby harbors. The interviews would eventually go into a national archive called Voices from the Fisheries.
The trainer for VOB and I knew many of the same fishermen and mongers up and down the central and north (Pacific) coast. I arranged a meeting between the two educational directors of VOW and VOB, who were both eager to meet each other, as they both were just firing up their educational programs in oral history education. The meeting was very fruitful for all of us, as we brainstormed new ways to approach interdisciplinary oral history opportunities. As such, I was able to synthesize curriculum from both programs in preparing my students for the immersion trip, considering sustainability as an interdependent learning opportunity in environmental, social, and economic content. When I created the trip I didn’t have a term for what the outcome would be, except that I had hoped the students would become more aware of sustainable seafood and how to promote its values. Ecoliteracy was a term that emerged after the projects were completed, but I think it can be extremely valuable as a goal in interdisciplinary oral history education.
I believe oral history education can help to shape our students into compassionate critical thinkers, and may even inspire them to continue to interview and listen empathetically to solve problems in their personal, educational, and professional futures.
What pointers can you give to other educators interested in using oral history to engage their students?
With all the material out there, I feel that educators have ample access to help prepare for projects. In the scheme of these projects, I would advise scheduling time for thoughtful processing or metacognitive reflection. All too often, it is easy to focus on preparing for, conducting, and capturing the interviews, and then getting something tangible done with them. Perhaps it is embedded in the education world of outcome-based assessment: getting results and evidence that learning is happening. With high school students, the experience of interviewing is an extremely valuable learning tool that could easily get overlooked when we are focusing on a project.
For example, on an immersion trip to El Salvador with my high school students, we were given an opportunity to interview the daughter of the sole survivor of El Mozote, an infamous massacre that happened at the climax of the civil war. The narrator insisted on telling us her and her mother’s story, despite having undergone chemotherapy just the day before. She said that the storytelling was therapeutic for her and helped her feel that her mother, who had passed away, and all the victims of the massacre had not died in vain. This was heavy content for her and for us as her audience. We all needed to talk about it, be quiet about it, cry about it, and reflect on the value of bearing witness. In the end, it wasn’t the deliverable that became the focus of the learning; it was the experience itself. From it, compassion was built in the students, not just for Salvadoran victims and survivors, but on a broader scale for all people who face civil strife and persecution. After such an experience, statistics were not just numbers anymore; they had a human face. This, to date, has been for me the most valuable part of oral history education: the transformation that can occur during the experience of an interview, as opposed to the product produced from it. For educators, it is vital to facilitate a pointed and thoughtful discussion with the interviewer to home in on the learning and realize the transformation, if there is one. That discussion about the experience is essential to understanding the value of oral history interviewing.
Do you have plans to do similar projects in the future?
After such positive experiences with oral history education, I wanted a chance to be an active oral historian capturing narratives around issues of sustainable food sources. I have transitioned from teaching to running my own business, called Narrability, with the mission to build sustainability through community narratives. I just completed a small project in which I collected oral histories of local fishermen, titled “Long Live the King: Storytelling the Value of Salmon Fishing in the Monterey Bay.” Housed on the Monterey Bay Salmon and Trout Project (MBSTP) website, the project highlights some of the realities connected to the MBSTP’s local hatchery net pen program, which augments the natural Chinook salmon runs from rivers in the Sacramento area with fish released into the Monterey Bay. Because of drought, dams, overfishing, and urbanization, the Chinook fishery in the central coast area has been deeply affected, and the need for a net pen program is strong. In the Monterey Bay, there have been many challenges in implementing the Chinook net pen program, owing to the bureaucracy of a discouraging port commission out of the Santa Cruz harbor. Given those challenges, the oral histories I collected help to illustrate that regional Chinook salmon fishing builds environmental stewardship, family bonding, and community, and provides a healthy protein source.
Through Narrability, I have also been working on developing a large oral history program with a group of organic farming, wholesale, and certification pioneers. As many organic pioneers face retirement, the need for their history to be recorded is growing. Irene Reti sparked this realization in her project through University of California, Santa Cruz: Cultivating a Movement: An Oral History Series on Organic Farming & Sustainable Agriculture on California’s Central Coast. Through collaboration with some of the major players in organics, we aim to build a comprehensive national collection of the history of organics for the public domain.
Is there anything you couldn’t address in the article that you’d like to share here?
I know being a teacher can be time-crunched, and once interviews are recorded, students and teachers want to do something tangible with them (podcasts, narratives, documentaries). I encourage educators to build in time to reflect on the process. I wish I had done more reflective processing in this manner: interviewing as a class; discussing the experience of interviewing and the feelings elicited before, during, and after an interview; authentically analyzing how the interviews went, including considering narrator dynamics. In many cases, the skills learned and the personal growth are not the most tangible outcomes. Even so, I believe oral history education can help to shape our students into compassionate critical thinkers, and may even inspire them to continue to interview and listen empathetically to solve problems in their personal, educational, and professional futures. This might not be something we can grade or present as a deliverable; it might be a long-term effect that grows with a student’s lifelong learning.
Image Credit: Front entrance of the Aquarium. Photo by Amadscientist. CC BY-SA 3.0 via Wikimedia Commons.
The field of anaesthesia is a subtle discipline: when properly applied, the patient falls gently asleep and miraculously wakes up with one less kidney or even a whole new nose. Today, anaesthesiologists have perfected measuring the depth and risk of anaesthesia, but these breakthroughs were hard-won. The history of anaesthesia is resplendent with pus and cadavers, each new development moving one step closer to the art of the modern anaesthesiologist, who can send you to oblivion and float you safely back. This timeline marks some of the most macabre and downright bizarre events in its long history.
Heading image: Junker-type inhaler for anaesthesia, London, England, 1867-1 Wellcome L0058160. Wellcome Library, London. CC BY 4.0 via Wikimedia Commons.
Meet Utricularia. It’s a bladderwort, an aquatic carnivorous plant, and one of the fastest things on the planet. It can catch its prey in a millisecond, subjecting it to accelerations of up to 600 g.
Once caught inside, the prey suffocates, and digestive enzymes break down the unfortunate creature for its nutrients. Anything small enough to be pulled in won’t know its mistake until it’s too late. But as lethal as the trap is, it does seem to have some flaws. The traps don’t just catch animals; they catch anything that gets sucked in, and often that’s algae and pollen too.
A team at the University of Vienna led by Marianne Koller-Peroutka and Wolfram Adlassnig closely examined Utricularia and found the plants were not very efficient killers. Studying over 2000 traps showed that only about 10% of the objects sucked in were animals. Animals are great if you want nutrients like nitrogen and phosphorus, but half of the catch was algae and a third pollen.
What was more puzzling was that not all the algae entered with an animal. If a bladder is left for a long while, it will trigger anyway. No animal is needed; algae, pollen, and fungi will enter. Is this a sign that the plant is desperate for a meal, and hoping an animal is passing? Koller-Peroutka and Adlassnig found that the traps catching algae and pollen grew larger and had more biomass. Examining the bladders under a microscope showed that algae caught in the traps died and decayed. This was more evidence that it’s happy to eat other plants too. It seems that it’s not just animals that Utricularia is hunting.
Koller-Peroutka and Adlassnig say this is why Utricularia is able to live in places with comparatively few animals. Nitrogen from animals and other elements from plants mean it is happy with a balanced diet. It can grow more and bigger traps, and use these for catching animals or plants or both.
Fortunately even the big traps only catch tiny animals, so if someone has bought you one for Christmas you can leave it on the dinner table without losing your turkey and trimmings in a millisecond.
The headline reads: “Border State Governor Issues Dire Warning about Flood of Undocumented Immigrants.” And here’s the gist of the story: In a letter to national officials, the governor of a border state sounded another alarm about unchecked immigration across a porous boundary with a neighboring country. In the message, one of several from border state officials, the governor acknowledged that his/her nation had once welcomed immigrants from its neighbor, but recent events taught how unwise that policy was. He/she insisted that many of the newcomers to his/her state were armed and dangerous criminals. Even those who came to work threatened to overwhelm the state’s resources and destabilize the social order.
Indeed, unlike earlier immigrants from the neighboring nation who had adapted to their new homeland and its traditions, more recent arrivals resisted assimilation. Instead, they continued to speak in their native tongue and maintain attachments to their former nation, sometimes carrying their old flag in public demonstrations. Worse still, the governor admitted that his/her nation seemed unwilling to “arrest” the flow of these undocumented aliens. Yet, unless the “incursions” were halted, the “daring strangers,” who are “gradually outnumbering and displacing us,” would turn us into “strangers in our own land.”
Today’s headline? It could be. The governor’s fears certainly ring familiar. Indeed, the warning sounds a lot like ones issued by Governor Rick Perry of Texas or Jan Brewer of Arizona. But this particular alarm emanated from California. That might make Pete Wilson the author of this message. Back in the 1990s, he was very vocal about the dangers that illegal immigration posed to his state and the United States. As governor, Wilson championed the “Save Our State” ballot initiative that cut off illegal aliens’ access to state benefits such as subsidized health care and public education. He campaigned on behalf of the initiative (Proposition 187) and made it a centerpiece of his 1994 re-election campaign.
Wilson, however, was not the source of the letter cited above. In fact, this warning dates back to 1845, almost 150 years before Proposition 187 appeared on the scene. Its author was Pio Pico, governor of the still Mexican state of California.
The unsanctioned immigrants about whom Pico worried were from the United States. Pico had reason to be concerned, especially as he reflected on events in Texas. There, the Mexican government had opted to encourage immigration from the United States. Beginning in the 1820s and continuing into the 1830s, Americans, primarily from the southern United States, poured into Texas.
By the mid-1830s, they outnumbered Tejanos (people with Mexican roots) by almost ten to one. Demanding provincial autonomy, the Americans clashed with Mexican authorities determined to enforce the rule of the national government. In 1836, a rebellion commenced, and Texans won their war of secession. Nine years later, the United States annexed Texas. And now, claimed Pico, many officials of the United States government openly coveted California, their expansionist designs abetted by American immigrants to California.
In retrospect, the policy of promoting American immigration into northern Mexico looks as dangerous as Pico deemed it and as counterintuitive as it has seemed to subsequent generations. Why invite Americans in if a chief goal was to keep the United States out? Still, the policy did not appear so paradoxical at the time. There were, in fact, encouraging precedents. Spain had attempted something similar in the Louisiana Territory in the 1790s, though the territory’s transfer back to France and then to the United States had aborted that experiment. More enduring was what the British had done in Upper Canada (now Ontario). Americans who crossed that border proved themselves amenable to a shift in loyalties, which showed how tenuous national attachments remained in these years. From this, others could draw lessons: the keys to gaining and holding the affection of American transplants were to protect them from Indians, provide them with land on generous terms, require little from them in the way of taxes, and interfere minimally in their private pursuits.
For a variety of reasons, Mexico had trouble abiding by these guidelines, and, in response, Americans did not abide by Mexican rules. In Texas, American immigrants destabilized Mexican rule. In California, as Pico feared, the “daring strangers” overwhelmed the Mexican population, though the brunt of the American rush did not commence until after the discovery of gold in 1848. By then, Mexico had already lost its war with the United States and ceded California. Very soon, men like Pio Pico found themselves strangers in their own land.
Featured image credit: “Map of USA highlighting West”. CC-BY-SA 3.0 via Wikimedia Commons.
From the comfort of a desk, looking at a computer screen or the printed page of a newspaper, it is very easy to ignore the fact that thousands of tons of insecticide are sprayed annually.
Consider the problem of the fall armyworm in Mexico. As scientists and crop advisors, we’ve worked for the past two decades trying to curb its impact on corn yield. We’ve tested dozens of chemicals to gain some control over this pest on different crops.
A couple of years ago, during a break at a technical meeting, we were comparing information on the number of insecticide applications needed to battle this worm. Anecdotal information from other parts of the country entered the conversation. Some colleagues reported that the fall armyworm wasn’t the worst pest in a particular region of Mexico and was easy to control with a couple of insecticide applications. Others mentioned that up to six sprays were necessary elsewhere in the country. Wait a second, I said, it is completely ridiculous and tremendously expensive to use so much insecticide in maize production.
At that point we decided to contact more professionals throughout Mexico and put together a geographical and seasonal ‘map’ of the occurrence of corn pests and the insecticides used to control them. Our report was compiled using simple arithmetic, and the findings really surprised us: by a conservative estimate, 3,000 tons of insecticidal active ingredient are used against the fall armyworm alone every year in Mexico. No wonder our country has the highest use of pesticide per hectare of arable land in North America.
Mexican farmers are stuck on what has been called ‘the pesticide treadmill.’ The first insecticide application sometimes occurs at the time that maize seed is put in the ground, then a second one follows a couple of weeks later, then another, and another; this process usually involves the harshest insecticides, or those that are highly toxic for the grower and the environment, because they are the cheapest. A way of curtailing these initial applications can be achieved by genetically-modified (GM) maize that produces its own very specific and safe insecticide. Not spraying against pests in the first few weeks of maize development allows the beneficial fauna (lacewings, ladybird beetles, spiders, wasps, etc.) to build their populations and control maize pests; simply put, it enables the use of biological control. The combination of GM crops and natural enemies is an essential part of an integrated pest management program — a successful strategy employed all over the world to control pests, reducing the use of insecticides, and helping farmers to obtain more from their crop land.
We have good farmers in Mexico, a great diversity of natural enemies of the fall armyworm and other maize pests, and growers that are familiar with the benefits of using integrated pest management in other crop systems. Now we need modern technology to fortify such a program in Mexican maize.
Mexican scientists have developed GM maize to respond to some of the most pressing production needs in the country, such as lack of water. Maize hybrids developed by Mexican research institutions may be particularly useful in local environments (e.g., tolerant of drought and cold conditions). These local genetically engineered maize varieties go through the same regulatory process as those from corporate developers.
At present, maize pest control with synthetic insecticides has been pretty much the only option for Mexican growers. They use pesticides because controlling pests is necessary for obtaining a decent yield, not because they are forced to spray them by chemical corporations or because they are part of a government program. This is an urgent situation that demands solutions. There are a few methods that could prevent most of these applications, genetic engineering being one of them. Other countries have reduced their pesticide use by as much as 40% after adopting GM crops. Mexico, the birthplace of maize, produces only 70% of the maize it consumes because growers face so many environmental and pest control challenges, with heavy reliance on synthetic pesticides. Accepting the technology of GM crops, and educating farmers on better management practices, is key for Mexico to jump off the pesticide treadmill.
Image Credit: Maize diversity. Photo by Xochiquetzal Fonseca/CIMMYT. CC BY SA NC ND 2.0 via Flickr.
In late 2014, one particular video of a singer became immensely popular on Facebook. At first I thought my perception of its popularity might be skewed; I’m a singer, and have many friends who are singers, so there’s probably some selection bias in my sampling of popular posts on social media. But eventually I actually clicked on one of the many postings of the video on my feed, and with its 7.4 million views, it seemed likely that it was more than just my singer friends who had been watching it:
Overtone singing, defined in Grove Music Online as “A vocal style in which a single performer produces more than one clearly audible note simultaneously”, has been in existence for thousands of years, most famously in east central Asia. But I had never seen this much attention focused on it at once. The video is jaw-droppingly cool, in part because what’s happening doesn’t seem possible. But then, not that many people understand how singing just one note at a time actually works.
Simply trying to explain everything that happens when we breathe and phonate (i.e., make a vocal sound) requires discussion of various complex, unconscious physical phenomena. As the Grove Dictionary of Musical Instruments article “Voice” puts it:
Phonation takes place during exhalation as the respiratory system supplies air through the vibrating vocal folds, which interrupt and break the air stream into smaller units or puffs of air. The resulting sounds are filtered through a resonator system and then transmitted outside the mouth. Singing, speaking, humming, and other vocal sounds usually involve practised regulation of air pressure and breath-stream mechanics, and balanced control of the inspiratory (chiefly the diaphragm) and expiratory muscles (chiefly the abdominal and intercostal muscles).
Even after understanding all that, it’s clear that what’s happening in the video above is not a typical vocal performance. So when you hear those overtones coming from Anna-Maria Hefele, just what exactly is happening?
Fortunately for all of us, Hefele also made another video which addresses the physics of this phenomenon:
When you sing different vowels, your mouth changes shape to form those vowels. You pull your lips to the side to make an “eee” sound, and your tongue arches up in your mouth; when you make an “ooo” sound, you purse your lips and your tongue flattens out. When you do this, you’re actually changing the shape of your instrument, which in turn changes the harmonics that are stressed above the fundamental frequency (the pitch at which you’re speaking or singing). This is why the vowels sound different from one another. This is clear in Hefele’s training video, where the loudest overtones change from vowel to vowel.
Stress of different overtones is one of the ingredients of timbre, or the quality of a sound beyond its pitch and amplitude. Timbre is what allows us to distinguish between, say, a flute and an oboe playing the same pitch. They simply sound different. This is partially (no pun intended) dependent on the stress of different overtones due to the varying shapes and materials of each instrument.
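The arithmetic behind this is simple: overtones sit at whole-number multiples of the fundamental frequency, and timbre depends on how strongly each multiple is sounded. A minimal sketch of the harmonic series (the 220 Hz fundamental is an assumed example, roughly a sung A3):

```python
# Overtones (harmonics) lie at integer multiples of the fundamental.
# Assumed example: a sung A3 at 220 Hz.
fundamental = 220.0  # Hz

# First eight partials: partial 1 is the fundamental itself,
# partials 2..8 are the overtones a singer can selectively reinforce.
partials = [fundamental * n for n in range(1, 9)]

for n, freq in enumerate(partials, start=1):
    label = "fundamental" if n == 1 else f"overtone {n - 1}"
    print(f"partial {n} ({label}): {freq:.0f} Hz")
```

An overtone singer does not add new frequencies; she reshapes her vocal tract so that one of these already-present partials rings out loudly enough to be heard as a separate note.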
The neat thing about the voice is that, while we don’t usually change the material, the shape is very flexible, and we can manipulate it to change our timbre. Overtone singing like Hefele’s takes an element of vocal sound and turns it into a new sort of instrument, inverting the typical relationship between instrument and timbre.
Anyone who’s listened to master impressionists or Bobby McFerrin (beyond “Don’t worry, be happy”) can attest to the versatility of the human voice. Vocalists are the shape-shifters of the instrument world. But comparing the 52,251 views of Hefele’s visualization video with the 7.4 million views of her performance video, it seems like we also appreciate the masters of timbre-bending the same way we appreciate magicians; most of us would rather watch the trick than see it explained.
In the newly published second edition of the Grove Dictionary of Musical Instruments, the voice is called “The quintessential human instrument.” But while almost all of us have voices, very few of us understand what is happening when we use them. Every once in a while I think it’s beneficial to see something extraordinary, if only so we remember to look at what seems ordinary a little more closely.
Headline image credit: A Sennheiser Microphone. Photo by ChrisEngelsma. CC BY-SA 3.0 via Wikimedia Commons.
When we think of obsessive-compulsive disorder, or OCD for short, lots of examples spring to mind. For example, someone who won’t shake your hand, touch a door handle, or borrow your pen without being compelled to wash their hands, all because of a fear of germs. I’m sure many of us are guilty of using the phrase “you’re so OCD” to categorize our friends, family, and colleagues who have obsessive cleaning habits or use their antibacterial hand gel a few too many times a day.
Despite being a very over-simplified idea of OCD, this is based on an important and common feature for many sufferers: contact contamination fear. Contact contamination can be described as a feeling of dirtiness or discomfort felt in response to physical contact with harmful substances, disease, or dirt, which contaminate the body, most often the hands. Relief can be felt after cleansing the contaminated areas, for example through hand washing. Much of the previous academic literature has focused on contact contamination, as has the media, which surrounds us with examples of contamination fears in OCD through TV series such as Obsessive Compulsive Cleaners and Monk.
However, for some sufferers the feelings of discomfort and dirtiness can also be caused without physical contact with something that is dirty or germy. Instead, feelings of contamination can be triggered by association with a contaminated person who has betrayed or harmed the sufferer in some way, or even by their own thoughts, images or memories. This ‘mental contamination’ leads to an internal sense of dirtiness, rather than being localized to a particular body part, and therefore can’t be cleansed away by hand washing. For example, one patient, “Jenny” started feeling internally dirty after she discovered that her husband had been unfaithful and her marriage broke down. She would feel dirty and wash her hands after touching any of his possessions or speaking to him on the telephone. “Steven” also experienced severe mental contamination that was triggered by intrusive images of harming others. The source of mental contamination is not an external contaminant such as blood or dirt but human interaction. The emotional violations that can cause mental contamination include degradation, humiliation, painful criticism, and betrayal.
There is much less public knowledge of mental contamination, possibly due to a lack of focus on the topic by professionals, meaning we simply don’t recognize examples or situations in which we might feel mentally contaminated. As with the normative experience of contact contamination, there are numerous everyday examples of feeling contaminated without touching something dirty: the washing away of sins at baptism, for example, or the cleansing of the body for worship known as Wudu in Islam. Sin here refers to an internal type of uncleanliness, which can be provoked without contact, for example by having blasphemous thoughts. Another example is avoiding a song that reminds you of an ex-partner who wronged you, because it makes you feel tarnished inside. Even the phrases we use can be seen as representing a form of mental contamination, for example “dirty money”, “muck up”, and “feel like dirt”. Milder forms of mental contamination are prevalent in society, for example in the course of a bitter divorce, where a wronged person develops feelings of contamination evoked by direct contact with the violator or by indirect contacts such as memories, images, or reminders of the violation.
A lack of knowledge of mental contamination is perhaps also due to its being a harder concept to comprehend than contact contamination. We can all understand the math behind contact contamination: you touch something dirty, your hands become dirty, you wash your hands, the dirt is gone, you feel relief. The process makes logical sense, as the cause is visible. Mental contamination can be seen in the same way; it just doesn’t require a visible cause, and often the cause is associated with a previous psychological or physical violation. Without a visible cause for their problems, the true source of discomfort is often unknown to sufferers. Imagine you’re taking part in an experiment: you’re asked to try on a jumper bought from a charity shop and report your feelings. If you know the jumper is physically clean, you’d probably feel fine, no discomfort; you might even like wearing it. Now, imagine being told that the jumper belonged to a murderer, and suddenly, for no explainable reason, you aren’t okay with wearing it anymore. You have that disturbing, spine-tingling, shivery feeling, as if the jumper were made of tarantulas. Despite knowing the jumper is physically clean, there’s a cloud of dirtiness hanging over it, and you feel mentally contaminated.
Intrusive thoughts associated with mental contamination are normal, but it is the interpretation of the thoughts that is important in determining whether or not the person will then engage in compulsive washing behaviour. To you or me, these are just weird feelings which are easily forgotten, but to someone with mental contamination they are harmful, and could damage their personality in some way. Take the jumper scenario; a person suffering from mental contamination might worry that somehow they will adopt the negative traits of the murderer through their clothing.
The discovery of mental contamination has large and immediate implications for clinical treatment. Cognitive behavioural therapy can be used to effectively treat mental contamination in OCD patients, by changing the meaning or interpretation of obsessive intrusive thoughts, so that they are no longer seen as harmful. Subsequently, this also reduces the frequency of compulsive washing behaviours. For many OCD sufferers Cognitive Behavioural Therapy provides hope that a life free from the daily interference of mental contamination and compulsions is achievable.
It is becoming widely accepted that women have, historically, been underrepresented and often completely written out of work in the fields of Science, Technology, Engineering, and Mathematics (STEM). Explanations for the gender gap in STEM fields range from genetically determined interests to structural and territorial segregation, discrimination, and historic stereotypes. As well as encouraging steps toward positive change, we would also like to retrospectively honour those women whose past work has been overlooked.
From astronomer Caroline Herschel to the first female winner of the Fields Medal, Maryam Mirzakhani, you can use our interactive timeline to learn more about the women whose works in STEM fields have changed our world.
With free Oxford University Press content, we tell the stories and share the research of both famous and forgotten women.
Featured image credit: Microscope. Public Domain via Pixabay.
It is astounding how mysterious the origin of such simple words as man, wife, son, god, house, and others like them is. They are old, even ancient, and over time their form has changed very little, sometimes not at all, so that we don’t have to break through a thicket of sound laws to restitute their initial form. They have been monosyllabic for millennia, and even in the reconstructed protolanguage they were only one syllable longer (an ending or a so-called thematic vowel followed by one consonant). But two thousand years ago they would already have puzzled us as they do today. Conventional wisdom suggests that to call a man a man and a house a house, people chose some easily available language material; yet we can seldom recover it.
If we look at the etymology of such well-known words for “house” as French maison, Italian casa, and Russian dom, we will see that they once referred to covering and hiding somebody or something, or to being “put, fitted together.” Users of English dictionaries will find some information about them in the entries on mansion, case “holder,” casement, and dome. Going further, they will discover the current connection between Latin domus and Engl. timber and tame. In light of such facts, the etymology of house, recognized by most language historians, even though sometimes with an ill grace, makes sense. The oldest recorded form of house is hus, with long u (long u is the vowel we hear in Modern Engl. too), and it seems to be related to the verb hide and through it to the noun hut. Hut came to English from French, but French had it from Old High German. Therefore, the comparison is legitimate. Trouble comes from the final consonant -s, for, if hide and hut are cognates, one expects -t or -d, rather than s, at the end of house (hus). This is not a good place for disentangling phonetic niceties, the more so as they have not been disentangled in a perfectly convincing way. We have a better chance of finding out what kind of a place the speakers of Old Germanic called hus.
In the fourth-century Gothic text, which is a translation of the New Testament, hus occurred only as the second element of the compound gud-hus “(Jewish) temple.” (Gud, of course, means “god”; Germanic had several words for “pagan temple”). The word for what we call “house” was razn. It corresponded to Old Engl. ærn ~ ern, still preserved in barn (b- is all that is left of bere “barley”) and saltern “salt works.” The Old Icelandic cognate of razn was rann, and it too lingers in English as the first element of ransack, a borrowing from Scandinavian. There also were other Gothic words for “house,” namely gards and hrot (Engl. yard and quite possibly roost are related to them). No doubt, all of them referred to different structures and buildings, but we should note only one thing: the oldest Germanic family hardly lived in a place called hus.
This conclusion is borne out in a rather unexpected way. There must have been something about the function or appearance or both of the Germanic hus that distinguished it from its counterparts elsewhere, because the word for it made its way into Old Slavic. The Slavs lived in a dom. The hus served other purposes. Since the borrowing goes back to a remote past, we may assume that the word taken over from the Germanic neighbors meant in Slavic approximately or even exactly what it once meant in the lending language. The noun in question is extant practically all over the Slavic-speaking world (though more often in regional dialects than in the Standards). The present-day senses of its reflexes do sometimes mean “house” and “home,” but these senses are swamped by “earth house,” “hut” (as in obsolete Polish chyz and Russian khizhina; I have highlighted the stressed root), “the place for building a house,” “a winter shed,” “a shed in the woods,” “storehouse,” “hayloft,” “marquee,” “barn (granary),” and “closet.” Thus, we find all kinds of names for “outhouses.” Even “monastery cell” occurs in the list, and, characteristically, this meaning was ascribed to Gothic hus (allegedly, a one-room structure) in gud-hus. If originally hus denoted a place for temporary protection of people from the elements (“a hut”) or for sheltering grain and other things, the connection of hus and hide is unobjectionable. As noted, it is only the last consonant that spoils the otherwise rather neat picture.
The word is and has always been neuter. The assignment of hus to this gender might be an accident of grammar, but it might be caused by its semantics. Two circumstances made me ask why hus and, incidentally, both Gothic razn and hrot were neuter. First, the situation in Icelandic comes to mind. What was called hús in Old Icelandic (ú designates vowel length, not stress) was not a separate building but a string of “chambers” that made up the farmhouse. Next to the living quarters, often without a partition, a sheepfold was situated; in winter, sheep’s breath served as “fuel” and warmed the room. So I wondered whether perhaps the old hus looked like the medieval Icelandic farm, with the word being coined as a collective plural. Later a singular may have been formed from it. This is a common process.
Then there is the word hotel (French hôtel), with its older form ostel, from which English has ostler. Hotel is related to hospital, hospitality, hospice, and host. The medieval “hotel” first designated any building for human habitation, though the modern sense is also old. Late Latin hospitale is the neuter plural of the adjective hospitalis turned into a noun (the technical term for such a change in grammatical usage is substantivization; thus, hospitale is a substantivized adjective). Again neuter plural! There must have been something in the concept of such “enfilades” that suggested plurality.
I am not jumping to conclusions. In etymology, he who jumps and leaps perishes, and I want to live long enough to produce many more posts. But it so happens that in my work I keep encountering neuter plurals on various occasions, and in the huge literature on the word house no one seems to have asked why the word is neuter (that is, perhaps someone did, but I missed the relevant place: one can never be sure), so I thought that there would be no harm in mentioning this detail.
As could be expected, etymologists spent some time hunting for distant congeners of house. A Hittite and an Armenian word have been proposed. As far as I can judge, neither has aroused any interest, and probably for good reason. House appears to have been a local (Germanic) coinage, but whether we have discovered its etymon remains unclear. That is why the most cautious dictionaries call house a word of uncertain etymology. It will probably remain such for all eternity. The time depth we command is insufficient for getting to the bottom of things, but we need not worry: this blog was conceived expressly as a forum for discussing obscure words.
Image credits: (1) The Burning of the Houses of Parliament by Turner. Public domain via WikiArt. (2) Alla Nazimova in the 1922 film of A Doll’s House. Public domain via Wikimedia Commons.
Vladimir Ilich Ulyanov (aka Lenin) died on this day 90 years ago with cerebral vessels so calcified that, when tapped with tweezers, they sounded like stone. He was only 53. He hadn’t smoked and, in fact, had prohibited smoking in his presence. He had consumed alcohol sparingly and had exercised regularly, swimming, biking, and walking as often as his schedule allowed. And yet, when only 51 years of age, he had a first stroke, seven months later a second, and then another before suffering his final, fatal one three months shy of his 54th birthday. How could a man so young, with none of the usual risk factors for cerebrovascular disease, have had cerebral vessels with walls so thick and calcified that, in many places, their lumens were either completely obliterated or narrowed to the dimension of tiny slits?
Syphilis was one of the earliest explanations considered. It is, after all, an infection that attacks the brain, one possibly passed on to Lenin by his mistress, Inessa Armand, a self-professed advocate of free love. However, whereas Treponema pallidum, the bacterium responsible for syphilis, does invade the vessels of the brain, it typically attacks the small arteries of the meninges, the brain’s membranous envelope, not the large feeder vessels responsible for the kind of strokes Lenin had. Moreover, several Wassermann tests (blood tests for syphilis) performed on Lenin prior to his death were all allegedly negative, though it should be noted that the official reports of these tests have since mysteriously vanished.
A more likely explanation for Lenin’s premature cerebrovascular disease, one initially proposed by Dimitri Volkogonov, the first researcher to gain access to Lenin’s secret Soviet files, is that the vessels of his brain were “simply destroyed by the strains of power.” Prior to the October Revolution, Lenin had enjoyed a free and easy existence of literary activities, vacationing in the mountains, and Party squabbles in exile. This changed radically after he became the leader of the World Communist Revolution, when he was forced to work with a driving urgency that found him hardly bothering to undress before falling into an exhausted, troubled sleep. Brief naps no longer refreshed him. Every day brought some new disaster requiring his personal attention. Every day he woke with a dull headache. The tension of dealing with the ever-changing demands of State caused him to erupt in anger with frightening regularity. When his health began to fail, his physicians diagnosed “overstrain of the brain.”
In fact, numerous scientific investigations have since demonstrated a relationship between psychological stress and both cardiovascular and cerebrovascular disease, through mechanisms that have yet to be fully elucidated. Lenin was subjected to such stress in the extreme as the Supreme Soviet leader. Moreover, he was likely genetically predisposed to the adverse effects of such stress on his cerebrovascular system in that his father died at the same age with neurological complaints similar to his own. In addition, two of his brothers died of coronary artery disease and a sister of a stroke. Thus, Lenin’s genetic code likely dictated that sooner or later he would succumb to cerebrovascular disease, whereas the pressures of directing the World Communist Revolution likely caused this to transpire sooner rather than later.
Headline image credit: Vladimir Lenin speaking to a crowd. Public domain via Wikimedia Commons.
These are precisely the experiences that provide talking points for extremist groups that might otherwise be frustrated.
I interviewed a number of such Islamic extremists during full-immersion fieldwork in the Bangladeshi community of London’s East End and the Moroccan community of Southern Madrid. As part of this research, I also attended over a dozen Islamic extremists’ meetings.
In the East End, extremists from the transnational Islamist group Hizb-Ut-Tahrir competed directly against street gangs, schools, sport teams, and mosques for the attention of young Muslim men and women.
For several weeks, I attended Hizb-Ut-Tahrir gatherings that took place directly upstairs from a government-sponsored youth club, where neighborhood adolescents went to do homework, play video games, or shoot billiards. Each Thursday after school, at about 5:00pm, a Hizb-Ut-Tahrir activist went into the club downstairs to recruit attendees for the meeting upstairs. They dangled free snacks and soda, and about half of the young men would oblige.
Meetings were run like talk shows. A member would introduce a guest speaker and they would discuss issues pertaining to Islam and British public affairs. Questions came from planted members in the audience, and the young men would listen while chewing and checking their phones.
If it weren’t for grievances against the British state and society, these meetings would be more like Quranic study with halal fried chicken.
For overseas extremists, Europe and North America appear as a fortress. Advanced intelligence and passport control limit the migration of known extremists. And Western Muslims are largely integrated, law-abiding, content members of society. So it is difficult to find recruits or embed them.
Survey research shows that French Muslims are predominantly secular and far less religious than they are portrayed. A recent poll shows that British Muslims identify more closely as British than most non-Muslim Britons. American Muslims, in particular South Asians and Arabs, are among the United States’ most affluent, well educated minorities. And every year, new generations of immigrant-origin Muslims become more integrated into their societies in the West—adapting, intermarrying, having children and grandchildren.
Extremist organizations appeal to the fringes of these communities, and must seek out ways to advance their agenda and recruit supporters among the few inclined to listen to their ideology.
Terrorist attacks help, but not by triumphantly assaulting innocent people. Rather, terrorism produces an anti-Muslim backlash that frustrates and alienates Muslims over time.
And this backlash creates a sense of betrayal and disappointment among second- and third-generation Western Muslims who believe they are not receiving the same treatment and justice as the rest of their countrymen.
This backlash corners Western Muslims into a greater awareness of their Muslim-ness. They feel obligated to defend their vilified Muslim identity, when it represents but one facet of their personalities. Muslims are soccer stars and violinists, engineers and drama queens, rappers and politicians. But social scrutiny makes them one-dimensional in the public eye.
This backlash is gold for the Hizb-Ut-Tahrir activist who was previously grasping for something new to inspire the young people sitting in front of him, gnawing on halal fried chicken.
Islamophobia is inherently wrong. But if that is not persuasive enough, it is also an enormous strategic mistake in the struggle against Islamic extremism.
Image Credit: Je_suis_Charlie-18. Photo by Valentina Calà. CC by SA 2.0 via Flickr.
The analysis of gender inequality in labour market outcomes has received substantial attention from academics of various disciplines. The distinct literatures have explored, often from differing perspectives and approaches, the various forms of inequality women experience in the labour market. Moreover, the issues and challenges that the increasing participation of women in paid work poses have generated substantial interest among policy makers in many areas of policy, including taxation and benefits, health, caring, the provision of early years’ services, and school and higher education.
The gender employment rate gap has decreased by almost 30 percentage points since 1971, when data started to be recorded in the Labour Force Survey (LFS). Educational attainment gaps have not only narrowed over recent decades but girls’ education has overtaken that of boys. However, the labour market outcomes of women, both the jobs they do and the pay they receive, often do not reflect their personal qualification levels, at least relative to men, nor their improvement in recent years. There remain gender differences in pay that cannot be explained by educational attainment or other relevant factors, a sign perhaps that the labour market is failing to make the best use of women’s talents. The reasons for this inefficiency are numerous and complex.
We know that labour market inequality between men and women starts before entry into the labour market; and that, although gender gaps might not be very prominent in the early labour market years, they widen later on, with the impact of having children and the associated career break being particularly important. For example, the very distribution of where women and men work in the economy, both in terms of sectors and occupations, may not only lead to gender inequality directly, but is also inextricably linked to the subject choices boys and girls make at school. We know that segmentation in the occupations men and women do is substantial, and explains a large and increasing proportion of the gender pay gap, but that the inequality within occupations is much wider. Similarly, although the gender pay gap for those in full-time work is about 20%, the pay gap between low- and high-paid women is substantially higher. Gender interacts with other factors to create substantial inequalities. Reasons also include inequality within the household, and the constraints and barriers that an unequal distribution of labour in household production places on women’s likelihood of participating in paid work. The latter is also related to fiscal policies as well as social attitudes.
What does this all mean for policy? In devising policy approaches and solutions, I think it is important to start from where there seems to be at least some good degree of consensus on the evidence. In fact, despite differences arising from disciplinary backgrounds, philosophical and political perspectives and methodological approaches, it appears that we have some overlapping consensus amongst scholars that:
gender inequality in the labour market is the product of many factors, most notably of a structured system of institutions and norms in which gender plays a very important part. The issues are complex, manifold, and interrelated;
within-group inequalities are very large, which means we need to look at the interaction between gender and other characteristics;
gender inequality has an important life-cycle dimension, starting at school and continuing through the transition into the labour market and then motherhood.
These are of course only starting points for policy makers. However, they do lead to the following considerations. First, the complexity mentioned above might have meant that policy makers have aimed to tackle various issues with separate discrete policies, in many instances failing to see the links between the issues. I would argue instead that such complexity does not justify separate discrete policies but a more targeted approach on a limited number of key variables, about which deep knowledge of how they relate to others is essential. More specifically, this means less of a proliferation of separate, discrete interventions and more of a set of targeted interventions that aim to address the key labour market inequalities.
Secondly, I would argue that the evidence on within-group inequality and the interaction of various factors, combined with the way gender inequality in the labour market develops through the life cycle, all suggests a policy approach that is more sensitive to individual circumstances, recognises the variations around averages, and therefore focuses on targeted, individual support, moving away from aggregate targets (i.e. all women, all mothers, all school girls). This is certainly more difficult, but I think it is unavoidable if we want to ensure greater success towards gender equality.
Editor’s note: This post was written by Edward H. Kaplan before the Charlie Hebdo terrorist attacks in Paris on 7th January 2015.
How many good guys are needed to catch the bad guys? This is the staffing question faced by counterterrorism agencies the world over. While government officials are quick to proclaim “zero tolerance” for terrorism, unlimited resources are not made available to prevent terror attacks, nor should that be the case. Indeed, as with most public policy decisions, the appropriate staffing level depends upon both the benefits and costs of fielding counterterrorism agents.
The benefits derive from successfully interdicting terror attacks and averting the damage such attacks impose in deaths, injury, property and infrastructure damage, and more generally population fear and anxiety. While intensifying both covert and overt counterterror intelligence efforts does lead to greater detection, as with many other economic activities, there are diminishing returns to effort: doubling the number of agents will not lead to a doubling of the detection rate, and indeed the marginal detection rates fall rapidly as the number of counterterror agents grows.
And as the number of counterterror agents grows, so does the cost of detecting terror plots. However, unlike detection levels, the marginal cost of adding additional agents stabilizes, for all agents must be trained, outfitted, and compensated. These simple economic considerations are sufficient to suggest that there is a socially optimal counterterror staffing level, which in turn implies a socially efficient detection level for terror plots. So, while government officials contend that even one terror attack is one too many, economics suggests that there is an optimal fraction of terror attacks to prevent that equates the marginal benefits and costs of detection, and this optimal fraction could be significantly less than unity.
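The marginal-benefit/marginal-cost logic described above can be sketched numerically. The figures below (benefit per interdicted plot, plot arrival rate, agent cost, and the shape of the diminishing-returns detection curve) are invented purely for illustration and are not taken from the study:

```python
import math

# Hypothetical illustration of the marginal-benefit vs. marginal-cost
# trade-off; every number here is an assumption, not data from the study.
B = 50.0        # social benefit (in $ millions) of interdicting one plot
PLOTS = 10      # expected new plots per year
COST = 0.2      # annual cost (in $ millions) of one agent
K = 0.002       # rate constant of the saturating detection curve

def detection_fraction(agents: int) -> float:
    """Fraction of plots detected: concave and saturating in agent count."""
    return 1.0 - math.exp(-K * agents)

def optimal_staffing() -> int:
    """Add agents while the marginal benefit of the next one exceeds its cost."""
    n = 0
    while True:
        marginal_benefit = B * PLOTS * (
            detection_fraction(n + 1) - detection_fraction(n)
        )
        if marginal_benefit < COST:
            return n
        n += 1

n_star = optimal_staffing()
print(n_star, round(detection_fraction(n_star), 3))
```

With these made-up parameters, the rule "hire until the marginal benefit of the next agent drops below that agent's cost" settles on a finite staffing level well short of full detection, which is exactly the point of the argument: the socially optimal fraction of plots prevented can be significantly less than unity.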
How to operationalize the concepts described above is another matter, for unlike many production processes, it is not easy to observe the relationship between counterterror agent staffing on the one hand, and terror plot detection on the other. However, progress in this area has been made thanks to methods borrowed from queueing theory, which is applied widely to study staffing problems in situations ranging from telephone call centers to hospitals to manufacturing facilities to air traffic control. As shown in the figure below, newly hatched terror plots can be construed as “customers” who “arrive” to a service system.
Upon arrival, a new plot is undetected, and will remain so until it is detected or matures into an actual terror attack, whichever happens first. The number of counterterror agents drives the rate with which plots are detected, but of course the total number of detected plots also depends upon the actual number of plots that exist. Once a plot is detected, it can be interdicted; thus this terror queue framework provides the link between the number of counterterror agents fielded on the one hand, and the number of terror plots that are detected and interdicted on the other.
There are still details that must be specified to complete the analysis, and it is in these details that a recent ten-year study of all Jihadi terror plots in the United States provides important data. From an analysis of court records, including the testimony of undercover operatives in addition to suspect confessions or observed attack details, it was possible to approximate the starting dates for a sample of terror attacks in addition to the observed dates of actual attack or plot detection, whichever came first. From these data, an interesting hypothesis emerged regarding when a terror plot is more likely to be detected. As a plot edges closer to the moment of execution as a terror attack, there is more activity on the part of would-be attackers, and this increased level of activity provides more opportunities for counterterror agents to detect the plot. This idea can be formalized by stating that the instantaneous chance that an undetected plot is detected is proportional to the instantaneous chance this same plot executes as an attack. In language more familiar to economists and statisticians alike, the plot detection hazard is proportional to the attack hazard, which gives rise to what is known as a proportional hazards model. The Jihadi plot data mentioned above are consistent with this hypothesis, which greatly simplifies the relationship between agent staffing levels on the one hand and the fraction of terror plots that are detected on the other.
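The proportional-hazards assumption has a clean consequence that a small simulation can illustrate: if the detection hazard is a constant multiple c of the attack hazard, a plot is detected before it executes with probability c/(1+c), whatever the attack-time distribution. The exponential rates below are illustrative assumptions, not figures from the study:

```python
import random

# Competing-risks sketch of the proportional-hazards claim. With the
# detection hazard equal to C times the attack hazard, the long-run
# fraction of plots detected before execution is C / (1 + C).
random.seed(42)

ATTACK_RATE = 1.0   # illustrative hazard of a plot maturing into an attack
C = 4.0             # detection hazard / attack hazard; C = 4 gives 80%

def simulate(plots: int) -> float:
    """Monte Carlo estimate of the fraction of plots detected before attack."""
    detected = 0
    for _ in range(plots):
        time_to_attack = random.expovariate(ATTACK_RATE)
        time_to_detect = random.expovariate(C * ATTACK_RATE)
        if time_to_detect < time_to_attack:
            detected += 1
    return detected / plots

print(round(simulate(100_000), 2))   # close to C / (1 + C) = 0.80
```

Calibrating c/(1+c) to the 80% interdiction rate observed in the US sample would give c = 4; that is, at any instant an undetected plot would be four times more likely to be detected than to go off.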
With this new model in hand, what remains is a valuation step – what is the marginal benefit of preventing a terror attack, and what is the marginal cost of assigning an additional agent? Both of these quantities can be estimated from the terrorism literature. For example, data suggest the typical number of persons killed and injured in terror attacks in Europe, Israel, and the United States; well-known economic studies have estimated the value of a statistical life; and a more recent study has established that, on average, the disability adjusted life years (DALYs) lost per terrorism injury are equivalent to 0.57 of the DALYs lost due to a death from terrorism. On the cost side, the United States Federal Bureau of Investigation (FBI) provides information regarding the salaries and benefits received by FBI special agents, who comprise the principal counterterror detection force in the United States.
Applying the model to the United States leads to an interesting and perhaps counterintuitive result. The Jihadi plot data report that 80% of these plots were interdicted prior to attack. If one uses this observation to calibrate the proportional hazards relation between attack and detection discussed above, the model suggests an optimal staffing level of only 2,080 agents. It is interesting that in 2004, the FBI reported that 2,398 of 11,881 special agents were devoted to counterterrorism. As of October 2013, the FBI reported that their total number of special agents increased to 13,598, though the number allocated to counterterrorism was not stated.
There are additional analyses one can conduct using the framework developed above. For example, while most of the plots in the United States sample discussed above were “lone wolf” attempts by individuals or small groups to wreak havoc, it is well known that many terrorist organizations behave in strategic fashion and are able to adapt their behavior to counterterror policy and tactics. This leads to a game theoretic model where strategic terrorists who understand how socially efficient staffing works modify their own attempted attack rates in accord with their own benefit-cost calculus. In this game, the resulting optimal terror plot detection level depends upon the costs and benefits that terrorists assign to terror attacks, which provides yet another example of how strategic terrorists can manipulate counterterror agencies (or governments more broadly) to achieve their objectives.
Does the class come out of the person after the person comes out of the class? This question asks us to think about social class inequality in a new way. It asks us to think not only of how much inequality exists in the United States, but how long inequality affects individuals. It also asks us to think of class not just as what we have — money, wealth, an occupation, an education — but also in terms of more personal characteristics — perceptions of who we are, what we want, and how to live our everyday lives. These personal characteristics are not trivial. They are judged by employers, schools, and potential friends; they can have profound effects on our opportunities.
So does the class come out of the person after the person comes out of the class? A study of individuals with working-class roots who graduate from college, enter the professional workforce, marry a spouse who has spent his or her entire life in the middle-class, and raise a family in a middle-class community indicates that the answer is no. Despite immersion in a new class, people with working-class roots still prefer different approaches to daily life than people with middle-class roots, even when they share a class position as adults. Moreover, not only are there differences, but the differences are systematic. College-educated adults with working-class roots generally prefer a laissez-faire lifestyle — one in which they can go with the flow, live in the moment, and feel free from self-imposed constraints. College-educated adults with middle-class roots, on the other hand, tend to prefer a managerial style — they prefer to organize, plan, and oversee. These differences span many aspects of individuals’ lives, including how they want to spend money, attend to paid work, allocate housework, raise their children, engage in downtime, and express emotions.
These differences are revealing. They show that to do well in America’s schools, universities, and workplaces, assimilation to middle-class norms is not required. At the same time, there are likely opportunities that working-class-origin adults miss due to their cultural differences from the middle-class. Workplaces often have unspoken norms that valorize middle-class culture, and upwardly mobile individuals with working-class roots are at risk of being penalized for not knowing or abiding by these norms. In this way, the long arm of social class socialization can even limit the opportunities of the people who embody the very idea of the American Dream.
The unlikeliness of taking the class out of the person after taking the person out of the class also sheds light on timely political debates. Commentators such as Charles Murray and David Brooks advocate stemming social class inequality by having the rich rub shoulders with the poor. They believe that if the rich preach what they practice, the poor will change their mindsets and inequality will be alleviated. The effectiveness of such programs must be questioned if four years of college, decades of professional work, and thousands of days married to a person born into another class do not take the class out of the person after taking the person out of the class. A more effective strategy may be to follow the lead of some of the middle-class spouses married to partners with working-class roots by appreciating the diversity of approaches that come from growing up in different class conditions.
A few really disastrous mistakes have dominated Western philosophy for the past several centuries. The worst mistake of all is the idea that the universe divides into two kinds of entities, the mental and the physical (mind and body, soul and matter). A related mistake, almost as bad, is in our philosophy of perception. All of the great philosophers of the present era, beginning with Descartes, made the same mistake, and it colored their account of knowledge and indeed their account of pretty much everything. By ‘great philosophers’, I mean Locke, Berkeley, Hume, Descartes, Leibniz, Spinoza, and Kant. I am prepared to throw in Hegel and Mill if people think they are great philosophers too. I called this mistake the “Bad Argument”. Here it is: We never directly perceive objects and states of affairs in the world. All we ever perceive are the perceptual contents of our own mind. These are variously called ‘ideas’ by Descartes, Locke, and Berkeley, ‘impressions’ by Hume, ‘representations’ by Kant, and ‘sense data’ by twentieth century theorists. Most contemporary philosophers think they have avoided the mistake, but I do not think they have. It is just repeated in different versions, especially by a currently fashionable view called ‘Disjunctivism’.
But that leaves us with a more interesting problem: What is the correct account of the relation of perceptual experience and the real world? The key to understanding this relation is to understand the intentionality of perception. ‘Intentionality’ is an ugly word, but we can pretty much make clear what it means; a mental state is intentional if it represents, or is about, objects and states of affairs in the world. So beliefs, hopes, fears, desires are all intentional in this sense. ‘Intending’ in the ordinary sense just names one kind of intentionality, along with beliefs, desires, etc. Such intentional states are representations of how things are in the world or how we would like them to be, etc., and we might say therefore that they have “conditions of satisfaction” — truth conditions in the case of belief, fulfillment conditions in the case of intentions, etc.
The biologically most basic and gutsiest forms of intentionality are those where we don’t have mere representations but direct presentations of objects and states of affairs in the world, and part of intentionality is that these must be causally related to the conditions in the world that they present. Perception and intentional action are direct presentations of their conditions of satisfaction. In the case of perception, the conditions of satisfaction have to cause the perceptual experience. In the case of action, the intention in action has to cause the bodily movement. So the key to understanding perception is to see the special features of the causal presentational intentionality of perception. The tough philosophical question is to state how exactly the character of the visual experience, its phenomenology, determines the conditions of satisfaction.
How then does the intentional content fix the conditions of satisfaction? The first step in the answer is to see that perception is hierarchical. In order to see higher level features, such as that an object is my car, I have to see such basic features as color and shape. The key to understanding the intentionality of the basic perceptual experience is to see that the feature itself is defined in part by its ability to cause a certain sort of perceptual experience. Being red, for example, consists in part in the ability to cause this sort of experience. Once the intentionality of the basic perceptual features is explained, we can then ask how the presentation of the higher level features, such as seeing that it is my car or my spouse, can be explained in terms of the intentionality of the basic perceptual experiences together with collateral information.
How do we deal with the traditional problems of perception? How do we deal with skepticism? The traditional problem of skepticism arises because exactly the same type of experience can be common to both the hallucinatory and the veridical cases. How are we supposed to know which is which?
Image Credit: Marmalade Skies. Photo by Tom Raven. CC by NC-ND 2.0 via Flickr.
Modern science has introduced us to many strange ideas about the universe, but one of the strangest concerns the ultimate fate of the massive stars that reach the end of their life cycles. Having exhausted the fuel that sustained it for millions of years of shining life in the skies, such a star is no longer able to hold itself up under its own weight, and it then shrinks and collapses catastrophically under its own gravity. Modest stars like the Sun also collapse at the end of their lives, but they stabilize at a smaller size. But if a star is massive enough, with tens of times the mass of the Sun, its gravity overwhelms all the forces in nature that might possibly halt the collapse. From a size of millions of kilometers across, the star then crumples to a pinprick, smaller than even the dot on an “i”.
What would be the final fate of such massive collapsing stars? This is one of the most exciting questions in astrophysics and modern cosmology today. An amazing inter-play of the key forces of nature takes place here, including gravity and quantum forces. This phenomenon may hold the secrets to man’s search for a unified understanding of all forces of nature, with exciting implications for astronomy and high energy astrophysics. Surely, this is an outstanding unresolved mystery that excites physicists and the lay person alike.
The story of massive collapsing stars began some eight decades ago, when Subrahmanyan Chandrasekhar probed the question of the final fate of stars such as the Sun. He showed that such a star, on exhausting its internal nuclear fuel, would stabilize as a “White Dwarf,” about a thousand kilometers in size. Eminent scientists of the time, in particular Arthur Eddington, refused to accept this, asking how a star could ever become so small. Eventually Chandrasekhar left Cambridge to settle in the United States. After many years, the prediction was verified. Later, it also became known that stars of three to five times the Sun’s mass give rise to what are called neutron stars, just about ten kilometers in size, after causing a supernova explosion.
But when the star has a mass above these limits, the force of gravity is supreme and overwhelming. It overtakes all other forces that could resist the implosion, shrinking the star in a continual gravitational collapse. No stable configuration is then possible, and the star that lived for millions of years collapses catastrophically within seconds. The outcome of this collapse, as predicted by Einstein’s theory of general relativity, is a space-time singularity: an infinitely dense and extreme physical state of matter, ordinarily not encountered in any of our usual experiences of the physical world.
As the star collapses, an ‘event horizon’ of gravity can develop. This is essentially a one-way membrane that allows entry but permits no exit. If the star enters the horizon before it collapses to the singularity, the result is a ‘Black Hole’ that hides the final singularity: a permanent graveyard for the collapsing star.
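The size of this horizon is set by the Schwarzschild radius, r_s = 2GM/c^2, for a non-rotating mass M. A minimal Python sketch (illustrative numbers, not from the article):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating mass: r_s = 2GM/c^2."""
    return 2.0 * G * mass_kg / C ** 2

r_sun = schwarzschild_radius(M_SUN)            # the Sun squeezed to ~3 km
r_massive = schwarzschild_radius(30 * M_SUN)   # a 30-solar-mass star: ~89 km

print(f"1 solar mass  : {r_sun / 1e3:.1f} km")
print(f"30 solar masses: {r_massive / 1e3:.1f} km")
```

The radius grows linearly with mass, so even a star of tens of solar masses has a horizon only tens of kilometers across; matter squeezed inside that radius can no longer send light out.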
As per our current understanding of physics, it was one such singularity, the ‘Big Bang’, that created the expanding universe we see today. Such singularities will be produced again when massive stars die and collapse. This is the amazing place at the boundary of the cosmos, a region of arbitrarily large densities, billions of times the Sun’s density.
An enormous creation and destruction of particles takes place in the vicinity of the singularity. One could imagine this as a ‘cosmic interplay’ of the basic forces of nature coming together in a unified manner. Energies and all physical quantities reach their extreme values, and quantum gravity effects dominate this regime. The collapsing star may thus hold secrets vital to the search for a unified understanding of the forces of nature.
The question then arises: are such super-ultra-dense regions of collapse visible to faraway observers, or are they always hidden inside a black hole? A visible singularity is sometimes called a ‘Naked Singularity’ or a ‘Quantum Star’. Whether the super-ultra-dense fireball the star has turned into is visible or not is one of the most exciting and important questions in astrophysics and cosmology today, because when it is visible, the unification of fundamental forces taking place there becomes observable in principle.
A crucial point is that, while gravitation theory implies that singularities must form in collapse, we have no proof that the horizon must necessarily develop. The assumption was therefore made that an event horizon always forms, hiding all singularities of collapse. This is the ‘Cosmic Censorship’ conjecture, the foundation of the current theory of black holes and their modern astrophysical applications. But if the horizon did not form before the singularity, we could observe the super-dense regions that form in collapsing massive stars, and the quantum gravity effects near the naked singularity would become observable.
In recent years, a series of collapse models has been developed in which the horizon fails to form as a massive star collapses. Mathematical models of collapsing stars and numerical simulations show that such horizons do not always form. This is an exciting scenario because, with the singularity visible to external observers, they can actually see the extreme physics near such ultimate super-dense regions.
It turns out that the collapse of a massive star will give rise to either a black hole or a naked singularity, depending on the internal conditions within the star, such as its density and pressure profiles and the velocities of the collapsing shells.
When a naked singularity forms, small inhomogeneities in matter density close to the singularity could spread out and be magnified enormously, creating highly energetic shock waves. These, in turn, may be connected to extreme high-energy astrophysical phenomena, such as cosmic gamma-ray bursts, which we do not understand today.
Also, clues to constructing quantum gravity, a unified theory of forces, may emerge from observing such ultra-high-density regions. In fact, the recent science-fiction movie Interstellar refers to naked singularities in an exciting manner, suggesting that if they did not exist in the Universe it would be very difficult to construct a quantum theory of gravity, as we would have no access to experimental data on it!
Shall we be able to see this ‘Cosmic Dance’ of collapsing stars in the theater of the skies? Or will the ‘Black Hole’ curtain always close and hide it forever, even before the cosmic play has barely begun? Only future observations of massive collapsing stars in the universe will tell!
Introduction, from Michael Alvarez, co-editor of Political Analysis
Recently I asked Nathaniel Beck to write about his experiences with research replication. His essay, published on 24 August 2014 on the OUPblog, concluded with a brief discussion of his recent attempt to obtain replication data from the authors of a study published in PNAS on an experiment run on Facebook regarding social contagion. Since then the story of Neal’s efforts to obtain this replication material has taken a few interesting twists and turns, so I asked Neal to provide an update, because the lessons from his efforts to get the replication data from this PNAS study are useful for the continued discussion of research transparency in the social sciences.
After not hearing from Adam Kramer of Facebook, even after contacting PNAS, I persisted with both the editor of PNAS (Inder Verma, who was most kind) and with the NAS through “well connected” friends. (Getting replication data should not depend on knowing NAS members!) I was finally contacted by Adam Kramer, who offered that I could come out to Palo Alto to look at the replication data. Since Facebook did not offer to fly me out, I said no. I was then offered a chance to look at the replication files in the Facebook office four blocks from NYU, so I accepted. Let me stress that all dealings with Adam Kramer were highly cordial, and I assume that the delays were due to Facebook higher-ups who were dealing with the human-subjects firestorm related to the Kramer piece.
When I got to the Facebook office I was asked to sign a standard non-disclosure agreement, which I declined to do. To my surprise this was not a problem, with the only consequence being that a security officer would have had to escort me to the bathroom. I was then put in a room with a secure Facebook notebook loaded with the data and RStudio; Adam Kramer was there to answer questions, and I was also joined by a security person and an external relations person. All were quite pleasant, and the security person and I could even discuss the disastrous season being suffered by Liverpool.
I was given a replication file: a data frame with approximately 700,000 rows (one per respondent) and 7 columns containing the number of positive and negative words used by each respondent, the respondent’s total word count, percentages based on these numbers, the experimental condition, and a variable flagging respondents omitted in producing the tables. This is exactly the data frame that would have been put in an archive, since it contained all the data needed to replicate the article. I was also given the R code that produced every item in the article. I was allowed to do anything I wanted with the data, and I could copy the results into a file. That file was then checked by Facebook people, and about two weeks later I received the entire file I had created. All good, or at least as good as it is going to get.
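The actual Facebook data and R code are, of course, not public; the numbers and column layout below are invented purely for illustration. But the statistical core of the analysis, a Poisson regression of a word count on a binary experimental condition (roughly what a five-line `glm(..., family = poisson)` call in R does), is easy to sketch. With a single binary regressor, the maximum-likelihood estimates even have a closed form: the intercept is the log of the control-group mean and the slope is the log of the ratio of the group means.

```python
import math

# Toy stand-in for the per-respondent frame described above:
# (word_count, condition) pairs, condition 1 = altered news feed.
rows = [
    (12, 0), (9, 0), (15, 0), (11, 0), (13, 0),   # control group
    (14, 1), (10, 1), (17, 1), (13, 1), (16, 1),  # treated group
]

control = [y for y, c in rows if c == 0]
treated = [y for y, c in rows if c == 1]

mean_c = sum(control) / len(control)
mean_t = sum(treated) / len(treated)

# MLE for  log E[y] = b0 + b1 * condition  under a Poisson model:
b0 = math.log(mean_c)            # log of the control-group mean count
b1 = math.log(mean_t / mean_c)   # log rate ratio: the treatment effect

print(f"intercept b0 = {b0:.3f}, treatment effect b1 = {b1:.3f}")
```

On real data one would simply call the GLM routine, but the point stands: the model is small enough that publishing the call plus summary statistics of the data frame would let readers see exactly what was done.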
The data frame I played with was based on aggregating user posts so that each user had one row of data, regardless of the number of posts (and the data frame did not contain anything more than the total number of words posted). I can understand why Facebook did not want to give me the data frame, innocuous as it seemed; those who specialize in re-identifying “de-identified” private data and reverse engineering code are quite good these days, and I can surely understand Facebook’s reluctance to have this raw data out there. And I understand why they could not give me all the actual raw data, which included how feeds were changed and so forth; this is the secret sauce that they would not like reverse engineered.
I got what I wanted. I could see their code, play with density plots to get a sense of the words used, and change the number of extreme points dropped, and I could have moved to a negative binomial instead of a Poisson. Satisfied, I left after about an hour; there are only so many things one can do with one experiment on two outcomes. I felt bad that Adam Kramer had to fly to New York, but I guess this is not so horrible. Had the data been more complicated I might have felt that I could not do everything I wanted, and running a replication with three other people in a room is not ideal (especially given my typing!).
My belief is that PNAS and the authors could simply have had a different replication footnote. This would have said that the code used (about 5 lines of R, basically a call to a Poisson regression using GLM) is available at a dataverse. In addition, they could have noted that the GLM call used the data frame I described, with summary statistics for that data frame. Readers could then see what was done, and I can see no reason for such a procedure to bother Facebook (though I do not speak for them). I also note that a clear statement on a dataverse would have obviated the need for some of this discussion. Since bytes are cheap, the dataverse could also contain whatever policy statement Facebook has on replication data. This (IMHO) is much better than the “contact the authors for replication data” footnote that was published. It is obviously up to individual editors whether this is enough to satisfy replication standards, but at least it is better than the status quo.
What if I didn’t work four blocks from Astor Place? Fortunately I did not have to confront this horror. How many other offices does Facebook have? Would Adam Kramer have flown to Peoria? I batted this around, but I did most of the batting and the Facebook people mostly offered no comment. So someone else will have to test this issue. But for me, the procedure worked. Obviously lots more proprietary data is going to be analyzed, and (IMHO) this is a good thing. So Facebook et al., and journal editors and societies, have many details to work out. But, based on this one experience, it can be done. So I close with thanks to Adam Kramer (but do remind him that I have had auto-responders to email for quite a while now).
On the more trivial issue of my own dataverse, I am happy to report that almost everything that was once on a private FTP site is now on my Harvard dataverse. Some of this was already up because of various co-authors who always cared about replication. And for the stuff that was not up, I was lucky to have a co-author like Jonathan Katz, who has many skills I do not possess (and is a bug on RCS and the like, which beats my “I have a few TB and the stuff is probably hidden there somewhere”). So everything is now on the dataverse, except for one data set that we were given for our 1995 APSR piece (and which Katz never had). Interestingly, I checked the original authors’ web sites (one no longer exists, one did not go back nearly that far) and failed to make contact with either author. Twenty years is a long time! So everyone should do both themselves and all of us a favor, and build the appropriate dataverse files contemporaneously with the work. Editors will demand this, but even without that coercion, it is just good practice. I was shocked (shocked) at how bad my own practice was.
Heading image: Wikimedia Foundation Servers-8055 24 by Victorgrigas. CC BY-SA 3.0 via Wikimedia Commons.
To speak of sovereign equality today is to invite disdain, even outright dismissal. In an age that has become accustomed to compiling “indicators” of “state failure,” revalorizing nineteenth-century rhetoric about “great powers,” and circumventing established models of statehood with a nebulous “responsibility to protect,” sovereign equality seems little more than a throwback to a simpler, less complicated era.
To be sure, as a general principle, sovereign equality remains foundational to both customary and conventional international law. Article 2(1) of the UN Charter retains its nominally sacrosanct status, a foundational point of reference for a modern international law that promised to do away with the “standard of civilization”. Similarly, all the other classic articulations of independence and non-interference, especially the 1970 Friendly Relations Declaration, continue to be invoked, often with much the same spirit of solemnity.
Yet a great deal has also changed in recent decades. We have grown accustomed to hearing that borders are no longer what they once were (or what, at any rate, they were once imagined to be). Traversed by goods, services, people, and capital, not to mention information, territorial frontiers have been characterized by wave upon wave of globalization theory as “fluid” and “porous”. Likewise, conventional legal models of recognition and jurisdiction have come under intense criticism. Among other things, the colonization of large chunks of international law scholarship by political science has generated a large literature on “rogue states”.
Not surprisingly, such developments have put the very idea of sovereign equality under pressure. And this, in turn, has had significant systemic consequences for international law as a whole.
Of course, sovereign equality is not without its problems. The principle has legitimated the very injustice it is purportedly designed to combat, enshrouding real inequality in a purely notional equality. After all, in itself, a bare assertion that states are equal and endowed with the same legal personality does remarkably little to rectify actually existing inequalities. Worse still, “rights of sovereignty” have been invoked to justify all manner of abuses, typically by national elites determined to augment and consolidate their class power.
Part of the difficulty here is that far from being inherently “progressive”, sovereign equality is a concept with a rather murky pedigree. While its roots reach back centuries, the principle assumed strong doctrinal form during the nineteenth century by way of the Concert of Europe’s commitment to the European balance of power. This commitment was typically premised upon the impermissibility of intervention in “civilized” states and the permissibility of intervention in “uncivilized” and “semi-civilized” regions. That is hardly an ideal foundation for an emancipatory principle.
All of this is true. But it is also worth keeping in mind that sovereign equality has frequently furnished politically and economically weaker states with a measure of protection against aggression and intervention. As a response to de facto inequality, international lawyers instinctively prioritize de jure equality. Absent such insistence on formally equal rights and obligations, it is often assumed, the will and interests of some states would be subordinated to the will and interests of other states, with predictably dire implications for international legal order.
To underscore the significance of sovereign equality today is not to cling to an outdated mode of conceiving international relations. Nor is it to deny that sovereign power has its “dark sides”. It is simply to stress the need for greater appreciation of the fact that sovereignty may under certain circumstances provide a buffer against some of the most direct and explicit forms of inter-state violence. It is worth recalling that the history of international law is to no small degree the history of attempts to secure recognition for (one or another account of) sovereign equality. This is anything but a puerile pursuit.
Headline image credit: Map of the world. CC0 via Pixabay.
Each January, Americans commemorate the birthday of Martin Luther King, Jr., reflecting on the enduring legacy of the legendary civil rights activist. From his iconic speech at the 1963 March on Washington to his final oration in Memphis, Tennessee, King is remembered not only as a masterful rhetorician but also as a luminary for his generation and many generations to come. These quotes, compiled from the Oxford Dictionary of Quotations, demonstrate the reverberating impact of his work, particularly in a time of great social, political, and economic upheaval.
“A riot is at bottom the language of the unheard.”
Where Do We Go From Here? (1967) ch. 4
“If a man hasn’t discovered something he will die for, he isn’t fit to live.”
Speech in Detroit, 23 June 1963, in James Bishop The Days of Martin Luther King (1971) ch. 4
“Cowardice asks the question, ‘Is it safe?’ Expediency asks the question, ‘Is it politic?’ Vanity asks the question, ‘Is it popular?’ But Conscience asks the question, ‘Is it right?’”
Speech, 1967; in Autobiography of Martin Luther King Jr. (1999) ch. 30
“I have a dream that one day on the red hills of Georgia the sons of former slaves and the sons of former slave owners will be able to sit down together at the table of brotherhood…I have a dream that my four little children will one day live in a nation where they will not be judged by the colour of their skin but by the content of their character.”
Speech at Civil Rights March in Washington, 28 August 1963, in New York Times 29 August 1963; see also jackson 413:13
“Returning hate for hate multiplies hate, adding deeper darkness to a night already devoid of stars. Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.”
Strength to Love (1963) ch. 5, pt. 2
“Judicial decrees may not change the heart; but they can restrain the heartless.”
Speech in Nashville, Tennessee, 27 December 1962, in James Melvin Washington (ed.) A Testament of Hope: The Essential Writings of Martin Luther King, Jr. (1986) ch. 22
“Injustice anywhere is a threat to justice everywhere.”
Letter from Birmingham Jail, Alabama, 16 April 1963, in Atlantic Monthly August 1963
“We shall overcome because the arc of a moral universe is long, but it bends toward justice.”
Sermon at the National Cathedral, Washington, 31 March 1968, in James Melvin Washington A Testament of Hope (1991); see obama 571:3, parker 585:12
“The Negro’s great stumbling block in the stride toward freedom is not the White Citizens Councillor or the Ku Klux Klanner but the white moderate who is more devoted to order than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice.”
Letter from Birmingham Jail, Alabama, 16 April 1963, in Atlantic Monthly August 1963
“We will have to repent in this generation not merely for the hateful words and actions of the bad people, but for the appalling silence of the good people.”
Letter from Birmingham Jail, Alabama, 16 April 1963
Image Credit: Tribute to Martin Luther King, Jr. Photo by U.S. Embassy New Delhi. CC by ND 2.0 via Flickr.
Grove Music Online presents this multi-part series by Don Harrán, Artur Rubinstein Professor Emeritus of Musicology at the Hebrew University of Jerusalem, on the life of Jewish musician Salamone Rossi on the anniversary of his birth in 1570. Professor Harrán considers three major questions: Salamone Rossi as a Jew among Jews; Rossi as a Jew among Christians; and the conclusions to be drawn from both. Previous installments include “Salamone Rossi, Jewish musician in Renaissance Mantua” and “Salamone Rossi as a Jew among Jews”.
As a Jewish musician working for the Mantuan court, and competing for the favors that its Christian musicians and composers hoped to gain, it was inevitable that Rossi would be considered an intruder. His talents as composer and violinist must have been so remarkable that the dukes decided to keep him in their service over the course of almost forty years, from 1589 to 1628. In his publications he was designated an ebreo, but the very fact that he published so widely suggests that the quality of the music must have been more important than his Judaism.
Still, in Rossi’s dealings with the authorities, his Judaism was a bone of contention. For one thing, because of Jewish holidays and the Sabbath, Rossi was not always available when needed. For another, he could not be expected, when asked to do so, to write music to texts with Christian content. We know from a letter of Claudio Monteverdi that the ducal palace ran concerts of chamber music on Friday evenings, yet Rossi, who observed the Sabbath, would not have been present. We also know that of the various composers who were asked to write music for La Maddalena, a “sacred representation” about the sins and penitence of Mary Magdalen, Rossi was the only one to be assigned, at his request, a secular poem. The piece he wrote for it was “Spazziam pronte”.
Rossi appears to have had cordial relations with Duke Vincenzo I, to whom he dedicated his first two publications. In the first of them, from 1589, he refers to the duke as his “most revered patron” and to himself as the duke’s “most humble and devoted servant”; in the second, from 1600, he supplemented the phrase “most revered patron” with “my natural lord,” to whom he was indebted, he admits, for everything he knows.
Here is one madrigal from his first collection dedicated to Duke Vincenzo: “Cor mio,” originally for five voices, though also prepared as a monody for voice and chitarrone.
Vincenzo was described by his contemporaries as a person “who favored the Jews and spoke kindly to them.” He appears to have encouraged Rossi to compose and perform as a violinist. But with his death in 1612 his successors Francesco and Ferdinando were less sympathetic toward Jews. Before entering office, Francesco was known as a Jew hater—even the pope said so—and as likely to drive the Jews out of Mantua. He was responsible for erecting the Jewish ghetto. It is uncertain what Rossi’s relations were with Francesco or Ferdinando. That Rossi dedicated none of his publications to them speaks for itself.
Jews were not liked and try as he might, Rossi was subject to criticism, if not slander. He asks Duke Vincenzo to keep him “safe from the hands of detractors” by lending his “felicitous name” to his first book of madrigals (1600). “Without your support,” he writes, “his works would be torn to shreds by his critics.” Felicita Gonzaga is asked to “protect and defend” the works in his second book of madrigals (1602), for “no slanderer or detractor would ever dare to censure something that is protected and favored by a lady of such great distinction.”
Rossi was determined to make a name for himself in a non-Jewish environment. His situation appears to have been so hopeless that he grabbed at every opportunity to win a new patron. In choosing his dedicatees, he emphasized some favor he received from them. The extent of these “favors” appears to have been no more than a friendly glance, or a word of praise, or the mere presence of the dedicatee at a performance of his works.
Flattery, praise, gratitude: these were the means by which Rossi hoped to improve his situation. For Rossi, the word patrono, or patron, designated persons who, once having granted him favors, were being asked for new ones. It is difficult to know how much of his dedications were sincere and how much was fabricated. In his Hebrew collection Rossi tells us how he chose Moses Sullam as his patron. “I searched in my heart,” he writes, “for the one ruler to whom I would turn, to place on his altar the offering of this thanksgiving. Then I lifted my eyes and saw that it would be better for me to show my affection to you, honored and important in Israel, than to anyone else.” The tone seems to be genuine. Yet when Rossi dedicates his four-voice madrigals to Prince Alfonso d’Este, he speaks in another language, artificial, rhetorical:
My mind, particularly disposed to serving Your Highness forever, and your infinite kindness and sublimity have given me the courage not only to dedicate to you these few efforts of mine but also to make me hope, at the same time, to be able to see them, by means of your most felicitous name, consecrated to the immortality of your fame, resting assured that you will not disapprove of my receiving this favor of your kindness, which is to reveal to the world, with my meager demonstrations, the most ardent signs of my reverent devotion to Your Highness, whom I, in all humility, beseech, with deepest affection, to accept these trifling notes of mine, assuring that every wearisome undertaking is bound to become the lightest load for me, inasmuch as I am stirred by an immense desire to serve Your Highness.
Rossi did not do the one thing he could have done to solve his problems: convert to Catholicism. The pressure to do so must have been tremendous, but it is doubtful it would have improved his lot. Mahieu le Juif, the thirteenth-century trouvère who composed various songs, tells us that he converted to please a certain lady, for whose love he “abandoned his religion and his faith in God.” Little did it help him, though, for she did not “reciprocate” his “love”; her heart was like “steel”; she “betrayed” him; and she “made a fool” of him. The thirteenth-century minnesinger Süsskind of Trimberg, who also wrote various songs, converted as well, but suffered from poverty (“the rich man has flour,” he said, “the poor man has ashes”). In the end, his patrons “separated him from their estate,” whence he “fled the courts,” perhaps only to return to his faith, though now “with a beard” and “gray hair,” after the “life style of an old Jew,” as he is in fact depicted in an illustration.
Headline image credit: Opening of Salomone de Rossi’s Madrigaletti, Venice, 1628. Photo of Exhibit at the Diaspora Museum, Tel Aviv. Public domain via Wikimedia Commons.
Commercial law experienced an eventful year in 2014, but what were the most significant cases? Read our run-down of some of the biggest cases from the past 12 months to see if you agree with us:
1. Apple Inc. wins decade-long anti-trust class action
In December 2014, Apple won a long-running class action that was brought against it in 2005. The company was accused of monopolizing the digital music market and violating U.S. anti-trust statutes by reconfiguring its DRM system, which prevented mp3 compatibility with competitors. After nearly a decade without a judgement, and a recorded video statement from the late Steve Jobs, a jury ruled in Apple’s favour.
2. Russian oligarchs in mining row
A dispute between Russian aluminium businessman, Vasily Anisimov and the late Badri Patarkatsishvili’s family was settled in March 2014. The family alleged that they were entitled to 20% of Mr Anisimov’s mining company, claiming that the two businessmen agreed Mr Anisimov would invest in mining company Metalloinvest’s forerunner, Mikhailovsky. A deal was reached over the $1.8bn case just days before it was to go to trial.
3. Burwell vs. Hobby Lobby
A landmark decision made by the U.S. Supreme Court has allowed for-profit corporations to be exempt from certain laws on the grounds of religious beliefs held by company owners. The lawsuit was filed by Hobby Lobby owners, David and Barbara Green, who objected to having to provide contraceptives to employees through a health insurance plan, which they felt contravened their religious beliefs. The court ruled in their favour in June. This is the first time a court has recognised a for-profit corporation’s claim of religious beliefs.
4. Accolade Wines in construction strife
In what was a huge £170m case in the Technology and Construction Court, Accolade Wines claimed against the company that built its bottling plant in 2010 for property damage and business interruption. Accolade Wines sued contractor VolkerFitzpatrick after finding problems with the floor slabs in their Bristol warehouse, which is the biggest wine warehouse in Europe. VolkerFitzpatrick denied the defects were due to their work.
5. Oracle Corp vs. Google
In May 2014 the Federal Circuit reversed a 2012 decision which held that Application Programming Interfaces (APIs), in this case the Java APIs used in Google’s Android operating system, are not copyrightable. The 2012 ruling had reasoned that if APIs were subject to copyright, a single company could gain control over “a utilitarian and functional set of symbols”, which could in turn prevent innovation within the technology industry. The Federal Circuit, however, decided that Java’s APIs are copyrightable, and Google’s case has gone back to trial.
6. America Broadcasting Companies vs. Aereo
Industrious start-up Aereo came up with a unique business opportunity by streaming broadcast network television programming online for a fee. The business was sued by a group of broadcasters and the U.S. Supreme Court ruled that their service violated copyright laws. The decision ultimately, of course, put Aereo out of business.
7. Tyre-d of price-fixing
A group of tyre manufacturers claimed damages of over £170m against the Dow Chemical Company for price-fixing on polyurethane chemical products. Dow appealed the verdict in October 2014, but the appeal was denied by the 10th Circuit in the U.S. in one of the most significant rulings of last year. The European Commission fined 10 companies, including Shell and Bayer as well as Dow, more than £396m in this price-fixing case.
8. Bancroft vs. Weil Gotshal & Manges
In what was said to be the first time a U.S. law firm defended itself in a London court, private equity group Bancroft sued the American firm Weil Gotshal & Manges for negligence in a claim worth an estimated £10m. The claim was that, during Weil Gotshal & Manges’ advice on Bancroft’s purchase of a 94% stake in ice cream company Frost, it was not explained that the group would not have voting control in the new company. The case was settled at £3m.
9. The National Grid takes on a cartel
In June 2014 a group of companies were taken to trial in London after the European Commission identified a cartel relating to Gas Insulated Switchgear (GIS). Companies involved were fined €750m by the Commission while National Grid sought £360m in damages.
10. Mineworker pensioners take on RBS
RBS is currently in the firing line in one of the most significant post-recession pieces of litigation, as 77 claimants take the bank to task. The bank is accused of issuing “mis-statements and omissions” in its prospectus for the RBS April 2008 rights issue, as well as portraying themselves as being in a good financial position despite this not being the case. The claimants include pension scheme trustees, local authorities and investment funds. The total amount the bank is being sued for is estimated at over £3bn.
Featured image credit: UK Festival of Fireworks, by David Carter. CC-BY-2.0 via Flickr