There’s something about the idea of ‘original pronunciation’ (OP) that gets the pulse racing. I’ve been amazed by the public interest shown in this unusual application of a little-known branch of linguistics — historical phonology, a subject that explores how the sounds of a language change over time. I little expected, when I was approached by Shakespeare’s Globe in 2004 to help them mount a production of Romeo and Juliet in OP, that ten years on the approach would become a thriving linguistic industry. Nor could I have predicted that a short documentary recording about OP for the Open University (which I made with actor son Ben in 2011) would for no apparent reason go viral towards the end of 2013, with 1.5 million hits in recent months.
A dozen Shakespeare plays have now been produced in original pronunciation, including A Midsummer Night’s Dream at Kansas University in 2010 and Hamlet at the University of Nevada (Reno) in 2011. This year a group from the University of Texas (Houston) brought an OP production of Julius Caesar to the Edinburgh Fringe. Next January, Ben Crystal and his OP ensemble are presenting Pericles in Stockholm as part of an Interplay series along with the Swedish Radio Symphony Orchestra. More productions are in the pipeline.
But it isn’t just Shakespeare. The interest in him tops the list, but it is a long list, in which the work of any dramatist from the period can be treated in this way. And not just drama. Poems and prose too. My recording of the Sonnets is available on the website associated with the book Pronouncing Shakespeare. An OP recording by Ben of one of John Donne’s long sermons can now be heard as part of the Virtual St Paul’s Cross project.
Donne takes us forward in time to the 1620s. Going backwards in time, the British Library wanted an original pronunciation recording of William Tyndale to accompany the publication of its facsimile of the Tyndale Gospels. They chose the Matthew Gospel, and I recorded this for them in 2013. That takes us back to 1525. There are earlier recordings in the BL archive, made for the Evolving English winter exhibition in 2011-12, including extracts from Beowulf, Chaucer, Caxton, and Paston. The British Library also commissioned a CD of Shakespeare extracts from Ben and his ensemble: Shakespeare’s Original Pronunciation.
But the interest extends well beyond literature. Notably present in the talkback sessions after the first original pronunciation productions at the Globe were people interested in early music. And since then there have been many explorations into the kind of pronunciation used by Purcell (late 17th century), Dowland, and other composers. As with their literary counterparts, musicologists have been struck by the fact that so many of the rhymes in songs, madrigals, and operatic texts simply don’t work in modern English, and they want to hear them as they would have been. They note the way many of the vowels and consonants would have had different values in those days, and they want to explore how the texts would sound with those old values articulated. The result is a very different auditory experience, and — by all accounts — an exciting one.
Finally there are the heritage people. It’s all well and good establishing a historical centre where an old period is recreated, and people dress up in old clothes and walk around — but how should they speak? The occasional ‘verily’ and ‘forsooth’ isn’t enough. Here too we see an interest in recreating styles of speech that would have been used in those days.
Add all these constituencies together and you can see why the original pronunciation experiment has become something of an OP movement, with more and more people wanting to learn about OP, to hear it in practice, and to explore its application in texts that so far have received no study. Every new text brings to light something new — such as a previously unnoticed pun, or a fresh way of speaking a line. At university level, people are beginning to write dissertations on the subject. Ben, as I write, is exploring ways for his ensemble to cope with new OP commitments. There’s plenty to do. With only a dozen Shakespeare plays explored so far, that leaves a couple of dozen more awaiting investigation.
The consequence is an urgent need to provide materials to help people take original pronunciation activities forward. Paul Meier already has some tutorial material on his website, and his Dream production is available both as an audio recording and on a DVD. Several articles have now been written answering the usual questions people ask (such as ‘how do you know?’). And I am hard at work on an OP Shakespeare dictionary, which will enable people to make transcripts for themselves. I have paused, in the middle of letter N, to write this post. But with luck and a good following wind, I should have it finished in time for the great anniversary in 2016. And it will be published, of course, by Oxford University Press.
Forty years ago, President Richard M. Nixon faced certain impeachment by the Congress for the Watergate scandal. He resigned the presidency, expressing a sort of conditional regret:
I regret deeply any injuries that may have been done in the course of the events that led to this decision. I would say only that if some of my judgments were wrong, and some were wrong, they were made in what I believed at the time to be the best interest of the Nation.
Nixon is not apologizing here so much as offering what sociologist Erving Goffman calls an account—a verbal reframing of his actions aimed at reducing their offensiveness. Nixon treats himself as a victim of his own mistakes and treats his mistakes as managerial, not criminal. His language is loaded with hedging words such as “any,” “may,” “would,” and “if,” and with circumlocutions like “in the course of the events that led to this decision” and “what I believed at the time to be the best interest of the Nation.” Nixon offers regret, but there is no unconditional apology, and there never was.
I sometimes wonder how Nixon’s attitudes toward Watergate and his resignation were shaped by the 1952 presidential campaign, and the events that led to his so-called “Checkers” speech.
It was the home stretch of the 1952 campaign, in which the Republican ticket of Dwight Eisenhower and then-Senator Nixon was pitted against Democrats Adlai Stevenson II and John J. Sparkman to succeed President Harry Truman. Truman’s popularity was at a low point, and Eisenhower and Nixon were optimistic about their chances. Then, in mid-September, the press began reporting stories of a secret expense fund established in 1950 by Nixon supporters. The New York Post offered the sensational headline “Secret Rich Men’s Trust Fund Keeps Nixon in Style Far Beyond His Salary.” As the story developed, many Democrats (and, less publicly, some Republicans) called for Nixon to be dropped from the ticket. News editorials disapproved of Nixon’s actions two-to-one. Even the Washington Post, which had endorsed the Republican ticket, called for Nixon to withdraw from the race.
The issue took some of the optimism out of the Eisenhower campaign. Eisenhower defended his running mate publicly, but also promised that there would be a full reporting of the facts by independent auditors. The 39-year-old Nixon offered his account in a half-hour television address broadcast from the El Capitan Theatre in Hollywood on 23 September 1952.
“I want to tell you my side of the case,” he began, and in a speech that ran just over 4,500 words, Nixon used a series of rhetorical questions to guide his audience through his version of events. He used the strategy that rhetoricians call differentiation, claiming that the fund issue was not what it seemed to be. Nixon said that there was no moral wrong because none of the money—about $18,000—was for Senatorial expenses, and that none of the contributors received special favors. He asserted his own good character by explaining why he needed the money: because he was not a rich man and he didn’t feel the taxpayers should pay his expenses.
Nixon bolstered his character further with his biography—explaining his modest background and finances, giving details down to the amount of his life insurance, his mortgages, and the material of his wife’s coat: not mink but “a respectable Republican cloth coat,” adding, “And I always tell her that she’d look good in anything.”
He added another rhetorical turn in the second half of his speech: “Why do I feel so deeply? Why do I feel that in spite of the smears, the misunderstandings, the necessity for a man to come up here and bare his soul as I have?” Nixon’s answer was “Because, you see, I love my country. And I think my country is in danger.” Here Nixon implies that he is motivated by a greater good and he pivots to an attack on his political opponents and his avowal that Eisenhower was “the man that can clean up the mess in Washington.”
The speech was the first ever use of television by a national candidate to speak directly to the nation and to defend himself against accusations of wrong-doing. And the public was impressed. For many, the most memorable part was when Nixon told the viewers about a black and white cocker spaniel puppy that a supporter from Texas had given his daughters. One of them named it Checkers, and Nixon defiantly asserted that, “regardless of what they say about it, we’re gonna keep it.” The speech thus became known as “The Checkers Speech.”
Nixon finished with a call to action, asking his listeners to write to the Republican National Committee to show their support. His broadcast was seen by an estimated 60 million viewers, and letters and telegrams to the Republican National Committee were overwhelmingly supportive. Eisenhower kept him on the ticket and a few weeks later the Eisenhower-Nixon ticket carried the day with over 55% of the popular vote and 442 electoral votes.
Nixon accomplished three key verbal self-defense strategies in the “Checkers” speech. He argued that the fund was not what it seemed to be. He argued that he was a good steward of public funds and exposed his personal finances. He implied that he was serving a higher good because he supported General Eisenhower and opposed Communism.
But by 1974, things were different. Nixon was in trouble again, much worse trouble of his own making, and there was no “Checkers” speech, no way of reframing his situation that would save his presidency. He resigned, but he never apologized. Three years after resigning, in interviews with journalist David Frost, Nixon was unequivocally defiant:
When I resigned, people didn’t think it was enough to admit mistakes; fine. If they want me to get down and grovel on the floor, no. Never. Because I don’t believe I should.
Perhaps he was thinking about the “Checkers” speech.
Headline image credit: President Richard Nixon delivers remarks to the White House staff on his final day in office. From left to right are David Eisenhower, Julie Nixon Eisenhower, the president, First Lady Pat Nixon, Tricia Nixon Cox, and Ed Cox. 9 August 1974. White House photo, Courtesy Richard Nixon Presidential Library. Public domain via Wikimedia Commons.
When the first production of On the Town in 1944 featured the Japanese American ballerina Sono Osato as its star, as part of a cast that also included whites and blacks, it aimed for a realistic depiction of the diversity among US citizens during World War II. It did so at a time when African Americans were expressing affinity with Nisei – that is, with second-generation children of Japanese nationals who had immigrated to other countries. The two communities shared the struggle of discrimination by the majority culture.
In 1942, the Office of War Information conducted a survey in Harlem, trying to gain an African-American perspective on the war, and opinions about the Japanese emerged in the process. Many Harlemites communicated a feeling that “these Japanese are colored people.” That quotation comes from a letter written by William Pickens, an African-American journalist who worked for the US Department of Treasury during World War II. When asked “Would you be better off if America or the Axis won the war?” most blacks in the survey stated they “would be treated either the same or better under Japanese rule, although a large majority responded that conditions would be worse under the Germans.”
Yet relationships between these two marginalized communities were not always easy, and On the Town became a flash point for racial distress. A striking case appeared in the memoir Long Old Road (Trident Press, 1965), written by Horace R. Cayton, Jr. An African American sociologist from Chicago, Cayton attended On the Town soon after he heard about the bombing of Hiroshima, which occurred on 6 August 1945. He articulated a shared mission between Nisei and African Americans, yet he did so with considerable agitation. “Our seats were good, and the theater was cool after the heat of New York,” wrote Cayton. He responded positively to the opening number, “New York, New York,” then launched into an assessment of the racial and political complexities posed by Osato’s appearance on stage at that particular moment in time. He perceived her as racially accommodating.
“It was a catchy tune with cute lyrics, but when the beautiful Sono Osato, who is of Japanese descent, appeared and frolicked with the American sailors, I was filled with anger and disgust,” wrote Cayton. “I care more about your people than you do, I thought, as I sat through the rest of the first act looking at the floor and wondering how soon I could escape to the bar next door.”
Cayton’s “anger and disgust” came from watching Osato engage directly and uncritically with white actors playing the role of sailors. At intermission, Cayton’s wife June, who was white, said to him: “This is the first good musical I’ve seen in years. Isn’t Sono Osato wonderful?” Cayton then recounted a tense conversation between the two of them:
“If I were half-Japanese I wouldn’t be dancing with three American sailors at a time like this,” I [Cayton] commented sourly.
“Why shouldn’t she? She’s as American as you or I.” June began to warm to her subject. “She was born in this country. She’s one hundred per cent American, doesn’t even understand Japanese.”
[Cayton replied:] “She’s a Jap, I’m a nigger, and you’re a white girl. Let none of us forget what we are.”
Cayton’s outburst comes across as a racial polemic. But there was deep complexity to his reaction, as he expressed solidarity with other non-white races as they confronted the hegemonic power of Caucasians. Even though his language is disturbing, it is extraordinarily frank, acknowledging the era’s venomous racism against the Japanese and the degree to which African Americans felt themselves to be backed against a wall during World War II. Cayton continued:
“I’m torn a dozen ways. I didn’t want the Japanese to win; after all, I am an American. But the mighty white man was being humiliated, and by the little yellow bastards he had nothing but contempt for. It gave me a sense of satisfaction, a feeling that white wasn’t always right, not always able to enforce its will on everyone who was colored. All those fine white liberals rejoicing because we dropped a bomb killing or maiming seventy-eight thousand helpless civilians. Why couldn’t we have dropped it on the Germans—because they were white? No, save it for the yellow bastards.”
Those multi-layered thoughts were unleashed by watching Sono Osato on stage, dancing an identity that was intended to portray her as “All-American” yet could not avoid the realities of her mixed-race heritage at a harrowing historical moment.
Headline Image: Sono Osato modeling a dress by Pattullo Modes, early 1940s. Dance Clipping Files, New York Public Library at Lincoln Center, Astor, Lenox, and Tilden Foundations.
The death rattle of the gender binary has been ringing for decades now, leaving us to wonder when it will take its last gasp. In this third decade of third wave feminism and the queer critique, dismantling the binary remains a critical task in the gender revolution. Language is among the most socially pervasive tools through which culture is negotiated, but in a language like English, with its minimal linguistic marking of gender, it can be difficult to find concrete signs that linguistic structures are changing to reflect new ways of thinking about the gender binary rather than simply repackaging old ideas.
One direction we might look, though, is toward the gendering of third person pronouns, which is what led me to write this post about pronouns on Facebook. Yes, Facebook. The social media giant may not be your first thought when it comes to feminist language activism, but this year’s shift in the way Facebook categorizes gender is among the most widely-felt signs of a sea change in institutional attitudes about gendered third person pronouns. Although Facebook does not have the same force as the educational system, governments, or traditional print media, it carries its own linguistic cachet established through its corporate authority, its place in the cultural negotiation of coolness and social connection, and its near inescapable presence in everyday life.
In response to long-standing calls from transgender and gender non-conforming users to broaden its approach to gender, Facebook announced earlier this year that it would offer a new set of options. Rather than limiting members of the site to the selection of female or male, an extensive list of gender identities is offered, along with the option of a custom entry, including labels like agender, bigender, gender fluid, gender non-conforming, trans person, two-spirit, transgender (wo)man and cisgender (i.e. non-transgender) (wo)man.
With all of the potential complexity afforded by these categories, Facebook couldn’t rely on a simple algorithm of assigning gendered pronouns for those occasions on which the website generates a third person reference to the user (e.g. “Wish ___ a happy birthday!”). Instead, it asks which set of pronouns a user prefers among three options: he/him/his, she/her/hers, or they/them/theirs. As a result, there are two important ways that Facebook’s reconsideration of its gender classification system goes beyond the listing of additional gender categories. The first is the more obvious of the two: offering singular they as an option for those who prefer gender neutral reference forms. The other is simply the practice of asking for a pronoun preference rather than deriving it from gender or sex.
Sanctioning the use of singular they as a gender neutral pronoun counters the centuries-old grammarian’s complaint that they can only be used in reference to plural third person referents. Proponents of singular they, however, point out that the pronoun has been used by some of the English-speaking world’s finest writers and that it was in widespread use even before blatantly misogynistic language policies determined that he should be the gender-neutral pronoun in official texts of the British government. More recently, an additional source of support for singular they has arisen: for those who do not wish to be slotted into one side of the gender binary or the other, they is perhaps the most intuitive way to avoid gendered third-person pronouns because of its already familiar presence in most dialects of English. (Other options include innovative pronouns like ze/hir/hirs or ey/em/em’s.) In this case, a speaker must choose between upholding grammatical conventions and affirming someone’s identity.
But wait, you might ask – don’t we need a distinction between singular and plural they? How are we supposed to know when someone is talking about a single person and when they’re talking about a group? Though my post isn’t necessarily meant to defend the use of singular they in reference to specific individuals (an argument others have made quite extensively), this point is worth addressing briefly if only to dispel the notion that the standard pronoun system is logical while deviations are somehow logically flawed. As the pronoun charts included here illustrate, there is already a major gap in the standard English pronoun system when compared to many other languages: a distinction between singular and plural you. Somehow we get by, however, relying on context and sometimes asking for clarification. Could we do the same with they?
The second pronoun-related change Facebook has made – asking for preferred pronouns rather than determining them based on gender category – is a more fundamental challenge to the normative take on assigning pronouns. According to conventional wisdom, a speaker will select whether to use she or he based on certain types of information about the person being referred to: how their bodily sex is perceived, how they present their gender, and in some cases other contextual factors like their name. To be uncertain about which gendered pronoun to use can be a source of great anxiety, exemplified by cultural artifacts like Saturday Night Live’s androgynous character from the 1990s known only as Pat. No one ever asks Pat about their gender because to do so would presumably be a grave insult, as Pat apparently has no idea that they have an androgynous appearance (were you able to follow me, despite the singular they’s?).
But transgender and queer communities are increasingly turning this logic on its head. Rather than risk being “mis-pronouned,” as community members sometimes call it, it is becoming the norm for introductions in many trans and queer contexts to include pronoun preferences along with names. For instance, my name is Lal and I prefer he/him/his pronouns. (Even the custom of calling these “male” pronouns has been critiqued on the basis that one needn’t identify as male in order to prefer he/him/his pronouns.) The goal behind this move is to remove the tension of uncertainty and to avoid potential offense or embarrassment before it takes place. But this is not just a practice for transgender and gender non-conforming people; the ideal is that no one’s pronoun preferences be taken for granted. Instead of determining pronouns according to appearance, they become a matter of open negotiation in which one can demonstrate an interest in using language that feels maximally respectful to others.
Facebook’s adoption of this new approach to pronouns, despite prescriptive grammarians’ objections, suggests that the acceptance and use of singular they is expanding. More than that, it furthers the normalization of self-selected pronouns since even those who are totally unfamiliar with the use of singular they as a preferred pronoun, or the very idea of pronoun preferences, may be faced with unexpected pronouns in their daily newsfeeds.
For those of us at academic institutions with sizable transgender and gender non-conforming communities, the practices discussed here may already be underway on campus. During my time teaching at Reed College, for instance, I found students to be enthusiastic about including pronoun preferences in our beginning-of-semester introductions even in classes where everyone’s pronoun preferences aligned with normative expectations.
My goal here isn’t to argue that the gender binary is dissolving in the face of new pronoun practices. Indeed, linguistic negotiations of gender and sexual binaries are far too complex to draw such a simple conclusion. However, what I do want to suggest is that we are in the midst of some kind of shift in the way pronouns are used and understood among speakers of English. Describing a fully completed change of this sort, linguistic anthropologist Michael Silverstein has explained how religious and political ideology among speakers of Early Modern English resulted in a collapse of the second person pronouns thou (singular, informal) and you (plural, formal). In the present case, rapidly changing ideologies about the gender binary may be pushing us toward a different organization of third person pronouns of the sort illustrated by the non-binary pronoun chart above.
The effect of Facebook on linguistic practice more broadly has yet to be fully uncovered, but its capital-driven flexibility and omnipresence in contemporary social life suggest that it may be a powerful tool in ideologically-driven language change.
On 22 September 1692 eight more victims of the Salem witch trials were executed on Gallows Hill. After watching the executions of Martha Cory, Margaret Scott, Mary Easty, Alice Parker, Ann Pudeator, Willmott Redd, Samuel Wardwell, and Mary Parker, Salem’s junior minister Nicholas Noyes exclaimed “What a sad thing to see eight firebrands of hell hanging there.” These would be the last of the executions, for the trials were facing increasing opposition amid a growing dissatisfaction with the political and spiritual leadership of the colony. Symbolic of that displeasure, less than two months later Noyes’s cousin, Sarah Noyes Hale, the wife of Beverly’s Reverend John Hale, would stand accused of witchcraft.
The Court of Oyer and Terminer, created by Governor Sir William Phips to deal with the witchcraft crisis, increasingly mimicked the arbitrary rule of the former governor Sir Edmond Andros and his hated Dominion of New England. Andros restricted rights and controlled the legal system through his appointment of judges, officials and “packed and picked” juries that did his bidding. In 1687 when several Essex County towns rose up in a tax revolt, protesting what they saw as Andros’s arbitrary and illegal tax law, Sir Edmond acted quickly to try and convict the leaders before a specially established Court of Oyer and Terminer. One of the judges on that panel was William Stoughton, a former minister.
Now, five years later, under a new government and royal charter that had supposedly restored English liberties to Massachusetts, William Stoughton headed another Court of Oyer and Terminer that was again making quick and arbitrary decisions. This time people were losing their lives. In a two-week session in early September, the court heard 15 cases and convicted 15 people of witchcraft. It was a rush to judgment, especially when the evidence was not as strong as in earlier prosecutions. Judges increasingly relied on dubious spectral evidence, and many observers must have been taken aback by the treatment of Giles Cory. He had been pressed to death on 19 September for standing mute when asked if he would accept a trial by jury. Worse, no one who confessed to being a witch had been executed – with the exception of Samuel Wardwell, who recanted his confession. Only those who refused to confess met death.
The trials were but one failure of a weak government that continued to mismanage a war that had damaged the colony’s economy and threatened its very existence. The conflict against the French Catholics of Canada and their Native allies was also symbolic of the ongoing spiritual struggle in Massachusetts. Religious and political leaders had long called for a campaign for moral reformation to end the perceived decline of Puritan faith. The many accusations of witchcraft against the religious and political elite and their families show the extreme level of discontent at the failure of these policy makers.
A total of 20 (11%) of the 172 people formally accused or informally cried out upon for witchcraft in 1692 were ministers or their close relatives. The number grows to 50 if one includes extended kin and in-laws of ministers – fully 30% of those accused in 1692. In all, five ministers, four ministers’ wives, three daughters, a son, two brothers, and five grandchildren of ministers were cried out upon. Warrants were issued for only five of the twenty, and only two – George Burroughs and Abigail Dane Faulkner (daughter of Andover’s Reverend Francis Dane) – would face the Court of Oyer and Terminer.
Burroughs’s story is well known, but historians have given little attention to Samuel Willard, Francis Dane, John Busse, and Jeremiah Shepard, for none were ever formally charged. Yet they form an important part of an overlooked pattern of accusations against ministers and their families. Virtually all of the ministers who were accused, or who had family accused, preached in New England churches that had accepted the Halfway Covenant – a controversial compromise that conservatives saw as a threat to Puritan orthodoxy.
These ministerial families were allied to each other by marriage, as can be seen in the example of Sarah Noyes Hale, who was related to eight ministers. Her brother James would later be one of the seven ministers who founded Yale University. These families also married into the leading political families of the colony, so the accusations were a critique of the political and military leadership as well, including the witchcraft judges. And the accusations went to the very top. Both Lady Mary Spencer Phips and Maria Cotton Mather were cried out upon. Clearly they served as stand-ins for their husbands – Governor Phips and his chief confidant, Reverend Increase Mather.
Maria Mather was the lynchpin connecting the two most important families of Puritan divines in Massachusetts. Her husband Increase was the President of Harvard College and the son of the prominent Reverend Richard Mather, while her father John Cotton was perhaps the leading Puritan theologian to join the Great Migration. Maria was also the sister of two ministers, sister-in-law of four more, and mother of Reverends Cotton and Samuel Mather. Increase and Cotton were both longstanding advocates of the Halfway Covenant but their conservative North Church had refused to accept it. During the trials, the Mathers were in the final stages of a campaign to get the North Church to adopt the Halfway Covenant. One of the few stalwart church members who stood in the way was Oyer and Terminer Judge John Richards.
The executions of 22 September were clearly the last straw for many observers of the witch trials. They generated opposition to the proceedings and the government, as well as accusations against the colony’s elite. It is notable that soon after his wife was cried out upon, Sir William Phips finally brought the Court of Oyer and Terminer to an end.
Headline image credit: Photo courtesy of Emerson “Tad” Baker.
Strong, stable relationships are essential for both individuals and societies to flourish, but, from transportation policy to the criminal justice system, and from divorce rules to the child welfare system, the legal system makes it harder for parents to provide children with these kinds of relationships.
In her book Failure to Flourish: How Law Undermines Family Relationships, Clare Huntington argues that the legal regulation of families stands fundamentally at odds with the needs of families. We interviewed Professor Huntington about the connection between families and inequality. In the clips below, she explains policies and misconceptions that prevent us from helping families during the crucial first years of a child’s life, provides examples of supportive family law and good neighborhood development, and describes how helping families plays a role in fighting poverty.
Family law and how it affects families
Politics and policy in family law
The role of families in fighting poverty
How did you get into family law?
Headline image credit: family traffic sign. Public domain via Pixabay.
When the UN General Assembly endorsed the Responsibility to Protect (R2P) in 2005, the members of the United Nations recognized the responsibility of states to protect the basic human and humanitarian rights of the world’s citizens. In fact, R2P articulates concentric circles of responsibility, starting with the individual state’s obligation to ensure the well-being of its own people; nested within the collective responsibility of the community of nations to assist individual states in meeting those obligations; in turn encircled by the responsibility of the United Nations to respond if necessary to ensure the basic rights of civilians, with military means only contemplated as a last resort, and only with the consent of the Security Council.
The Responsibility to Protect is a response to war crimes, genocide, and other crimes against humanity. But R2P is also a response to pattern and practice human rights abuses that include entrenched poverty, widespread hunger and malnutrition, and endemic disease and denials of basic health care — all socio-economic conditions which themselves feed and exacerbate armed conflict. In fact, socio-economic development is a powerful mechanism for guaranteeing the full panoply of human rights, just as the Millennium Development Goals are a means of fulfilling the Responsibility to Protect.
While Responsibility to Protect is often misconstrued as a mandate for military action, it is more intrinsically a call to social action, and the embodiment of the joint and several responsibilities of the community of nations to seek a coordinated global response to life-threatening conditions of armed conflict, repression, and socio-economic misery. While diplomats and public servants debate the legality and prudence of military responses to criminal uses of military force against civilians, we must not neglect the legality, prudence, and urgency of non-military responses to public health and poverty emergencies throughout the world.
The United States has put out a call to like-minded nations to join forces, literally and figuratively, in the degradation and destruction of the criminal militancy of the so-called Islamic State [ISIS or ISIL]. Despite concerns that the 2003-2011 US war in Iraq itself may have led to the inception and flourishing of ISIS, and despite warnings that the training, arming, and assisting of Iraqi forces, Shia militias in Iraq, and non-ISIS Sunni militants in Syria may inflame sectarian violence and threaten civilians in both countries, the United States is contemplating another open-ended military intervention in the Levant.
A military intervention against ISIS is not justified by the principles of Responsibility to Protect. Without the authorization of the Security Council or the consent of the Syrian government, military intervention is unlawful in Syria, offending both the UN Charter and the tenets of R2P. In either Syria or Iraq a military intervention, even with the permission of the responsible governments, is unlawful if it is likely to lead to further outrages against civilians. Military action that predictably causes the suffering of civilians disproportionate to any legitimate military objectives violates the principles of humanitarian law and the Geneva Conventions, as well as the UN Charter and R2P.
Alongside the criminal militancy of ISIS we face the existential threat of the Ebola virus in West Africa, endangering the people of Guinea, Liberia, Sierra Leone, and their neighbors. Over the past two months, approximately 5000 people have been infected by this hemorrhagic disease, and around 2500 have died, over 150 of them health care workers. At current rates of infection, with new cases doubling every three weeks, the virus could sicken 10,000 by the end of September, 40,000 by mid-November, and 120,000 by the New Year.
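The projections above are simple constant-doubling arithmetic. As a minimal sketch (the function name and figures are ours, for illustration only, and real epidemics rarely hold a constant doubling time), the growth model behind them looks like:

```python
def project_cases(current_cases, doubling_time_days, horizon_days):
    """Project a cumulative case count forward under a constant doubling time."""
    return current_cases * 2 ** (horizon_days / doubling_time_days)

# Starting from roughly 5,000 cases with a three-week doubling time:
print(project_cases(5000, 21, 21))  # one doubling period ahead -> 10000.0
print(project_cases(5000, 21, 63))  # three doubling periods ahead -> 40000.0
```

This reproduces only the back-of-envelope arithmetic in the text; actual case counts depend on interventions, reporting, and saturation effects.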
Ebola can be contained through basic public health responses: quarantining the sick, tracing exposure in families and communities, safely recovering the bodies of the deceased, regular hand-washing and sanitation, and the all-important rebuilding of trust among affected community members, health care workers, and government officials. But the very countries impacted have fragile health care systems and too few hospital beds, and their dedicated Red Cross workers, doctors, and nurses are nearly besieged by the number of sick people needing care. By funding and supporting more health care and humanitarian relief workers at the international and local levels, more Ebola field hospitals and clinics, and more food, rehydration fluids, and safe blood supplies for transfusions, fewer people will fall sick and more of the infected will be treated and cured. At the same time, the fragile economies and political systems of the affected countries will be strengthened and the threat of regional insecurity addressed. Ebola in West Africa calls out for a coordinated global public health intervention, which would serve our Responsibility to Protect at the local level while furthering our collective security at the global level.
As the US Congress debates the funding of so-called moderate rebels in Syria in the pursuit of containing the criminal militancy of ISIS, we should turn our national attention to funding Ebola emergency relief in Guinea, Liberia, and Sierra Leone. Such action is consistent with our enlightened self-interest, and required by our humanitarian principles and obligations.
Have you ever thought that your body movements could be transformed into learning stimuli and help you deal with abstract concepts? Subjects in natural science contain plenty of abstract concepts that are difficult to understand through reading-based materials, particularly for younger learners who are still developing their cognitive abilities. For example, elementary school students find it hard to distinguish between similar concepts of fundamental optics, such as concave versus convex lens imaging. By performing a simulated exercise in person, learners can comprehend concepts more easily because of the content-related actions involved in the process of learning natural science.
As far as commonly adopted virtual simulations of natural science experiments are concerned, the learning approach with keyboard and mouse lacks a comprehensive design. To make the learning design more comprehensive, we suggested that learners be provided with a holistic learning context based on embodied cognition, which views mental simulations in the brain, bodily states, environment, and situated actions as integral parts of cognition. In light of recent development in learning technologies, motion-sensing devices have the potential to be incorporated into a learning-by-doing activity for enhancing the learning of abstract concepts.
When younger learners study natural science, their body movements, together with external perceptions, can positively contribute to knowledge construction while they perform simulated exercises. Using a keyboard and mouse for simulated exercises can convey procedural information to learners, but it merely reproduces physical experimental procedures on a computer. For example, when younger learners use conventional controllers to perform fundamental optics simulation exercises, they might not benefit from such controller-based interaction because the operations are routine-like. If environmental factors, namely bodily states and situated actions, are well designed as external information, this additional input can further help learners grasp the concepts through meaningful and educational body participation.
Based on the aforementioned idea, we designed an embodiment-based learning strategy to help younger learners perform optics simulation exercises and learn fundamental optics better. With this learning strategy enabled by the motion-sensing technologies, younger learners can interact with digital learning content directly through their gestures. Instead of routine-like operations, the gestures are designed as content-related actions for performing optics simulation exercises. Younger learners can then construct fundamental optics knowledge in a holistic learning context.
One of the learning goals is to acquire knowledge. We therefore conducted a quasi-experiment to evaluate the embodiment-based learning strategy, comparing the learning performance of an embodiment-based learning group with that of a keyboard-mouse learning group. The result shows that the embodiment-based learning group significantly outperformed the keyboard-mouse learning group. Further analysis found no significant difference in cognitive load between the two groups, even though applying new technologies in learning can increase the demands on learners’ cognitive resources. As it turned out, the embodiment-based learning strategy is an effective learning design for helping younger learners comprehend abstract concepts of fundamental optics.
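The group comparison described here is a standard independent-samples test. As a purely hypothetical sketch (the scores below are invented for illustration, not the study’s data, and the function name is ours), Welch’s t statistic for two groups with unequal variances can be computed as:

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(group_a), len(group_b)
    standard_error = (variance(group_a) / na + variance(group_b) / nb) ** 0.5
    return (mean(group_a) - mean(group_b)) / standard_error

# Invented post-test scores, for illustration only:
embodiment = [82, 88, 91, 85, 90, 87]
keyboard_mouse = [74, 70, 78, 72, 75, 71]
t = welch_t(embodiment, keyboard_mouse)  # a positive t favors the first group
```

In practice the statistic would be compared against a t distribution (with Welch-adjusted degrees of freedom) to obtain the significance level reported in the study.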
For natural science learning, the learning content and the process of physically experimenting are both important for learners’ cognition and thinking. The operational process conveys implicit knowledge about how something works. In the lens-imaging experiments, the position of the virtual light source and the type of virtual lens together determine the attributes of the virtual image. By synchronizing gestures with the virtual light source, a learner not only concentrates on the simulated experimental process but also attends to the details of the external perception. Accordingly, learners can better understand how movements of the virtual light source and the type of virtual lens change the virtual image, and so learn fundamental optics more deeply.
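The relation described here between light-source position, lens type, and image attributes is governed by the standard thin-lens equation, 1/f = 1/d_o + 1/d_i. A minimal sketch (function and variable names are ours; this is how a simulation of the kind described might compute the image):

```python
def lens_image(focal_length, object_distance):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for the image.

    focal_length > 0 models a convex (converging) lens,
    focal_length < 0 a concave (diverging) lens.
    Returns (image_distance, magnification, kind); a positive image
    distance means a real image, a negative one a virtual image.
    """
    if object_distance == focal_length:
        return None  # rays emerge parallel: no image forms
    d_i = focal_length * object_distance / (object_distance - focal_length)
    magnification = -d_i / object_distance
    kind = "real" if d_i > 0 else "virtual"
    return d_i, magnification, kind

# A convex lens (f = 10) with the object beyond the focal point gives a real image:
print(lens_image(10, 30))   # (15.0, -0.5, 'real')
# A concave lens (f = -10) always gives an upright virtual image:
print(lens_image(-10, 30))  # (-7.5, 0.25, 'virtual')
```

Mapping a learner’s hand position onto `object_distance` is one plausible way such gesture-driven simulations could tie body movement to the concept being learned.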
Our body movements have the potential to improve our learning if adequate learning strategies and designs are applied. Although motion-sensing technologies are now available to the general public, widespread educational adoption will depend on affordable devices and evidence-based instructional approaches. The embodiment-based design opens a new direction and, we hope, will continue to shed light on improving future learning.
A few years ago a friend of mine and I were intent on learning German. We were both taking an adult beginning German class together and were trying to make sense of what the teacher was telling us. As time progressed I began to use CDs in my car to practice the language everyday. I could repeat a lot of the phrases and slowly built up my ability to speak.
From 2007 to 2009, I had the good fortune of spending three summers in Berlin doing research thanks to a fellowship from the Alexander von Humboldt Foundation. Over time I was able to speak more and more German, and my approach of spending lots of time listening and speaking was paying great dividends. My friend, whose wife was from Germany, also spent a significant amount of time in Germany every summer. A few years later, I was finally able to carry on a conversation with his wife in German. But my friend was still struggling. He was slow to pick up words and, although he worked hard, seemed to lag behind.
A similar thing has happened to me in another domain. I am a recreational tennis player and enjoy learning more about the game and how to improve. Recently, I received an email invitation to improve my doubles game. I don’t play doubles a lot but I went along anyway. I entered my selection by indicating that I had trouble poaching: crossing to the other side of the court while at the net to intercept a ball and end the point quickly. After I entered my response I received my own personalized video tip. Basically, the Bryan Brothers, the most successful doubles team of all time, suggested that I see the ball before it was there. There was no time to react to the ball. So I needed to simply imagine where the ball would be and move to hit the imaginary ball before it was in that place. The video ends by saying that I heard it from the most successful doubles players on the planet. Who could be better at teaching me to poach well?
Suffice it to say that it was not that simple. Sometimes I would imagine where the ball was going and I could get to it. But other times I would miss the ball completely because it did not go where I expected it to go. Like my German-learning friend I seemed to be lagging behind the Bryan Brothers in my poaching ability.
The differences between experts and novices have been a topic of discussion for many years. Adriaan de Groot was the first to test this in the realm of chess. He found that chess experts outperformed less skilled players on tests of memory in real game situations. Follow-up work found that experts also showed different patterns of eye movements. K. Anders Ericsson has extended this seminal work by identifying the factors that play a role in the development of expertise. He has established that it takes roughly 10,000 hours of practice to become an expert. But it is not simple exposure that matters. Experts engage in deliberate practice, during which they are given feedback about their performance. This feedback helps experts fine-tune their skills over time so that they become automatic.
The role of deliberate practice can explain, to some extent, the differences between the Bryan Brothers and me. During their childhood, the Bryan Brothers spent thousands of hours playing with tennis balls. Their father, Wayne Bryan, is a tennis coach and played tennis games with his sons from a very young age. As they grew older, they played doubles together. They probably missed hundreds of balls and made many errors. Now in their 30s, the Bryan Brothers are able, in effect, to see the ball before someone hits it. Like the chess experts tested by de Groot, they can anticipate what is going to happen because of a large database of experience.
A similar situation exists for my friend and me. When he asked me what he could do to improve his language skills, I suggested that he listen to German CDs and just develop his ear for the language over time. Eventually, he would learn to anticipate what was coming because of his experience of hearing many sentences.
Suffice it to say that my suggestion did not work so well. The problem was that I had not anticipated the difference between us. Like the Bryan Brothers, I grew up with two parents who had extensive experience with language and language learning. My mother taught English as a foreign language for 30 years in the public school system, has an M.A. in comparative literature, and has written poetry and prose in Spanish and English. My father was a professor of Spanish and Portuguese, learned Arabic in graduate school, and would listen to the Portuguese hour for fun on the radio. As a child I was exposed to five different languages to varying degrees, eventually gaining proficiency in two. At the age of 20 I lived in Brazil, where I gained extensive experience in a third language as an adult.
What I had neglected to account for was all the hours I had spent learning languages in some form. Like the Bryan Brothers I took for granted how much this experience had sharpened my learning abilities. Practicing with a CD in my car had a very different effect on me than it had on my friend.
So the next time a “pro” promises to teach their secrets over the internet in a few weeks, run in the other direction. There is no substitute for the number of hours required to gain expertise in a skill or ability. It doesn’t mean that learning cannot happen over time. But it requires patience and time that a “pro” often neglects to mention.
With turmoil in the Middle East, from Egypt’s changing government to the emergence of the Islamic State, we recently sat down with Shadi Hamid, author of Temptations of Power: Islamists and Illiberal Democracy in a New Middle East, to discuss his research before and during the Arab Spring, working with Islamists across the Middle East, and his thoughts on the future of the region.
In your recent New York Times essay “The Brotherhood Will Be Back,” you argue that there is still support for the mixing of religion and politics, despite the Muslim Brotherhood’s recent failure in power. So do you see a way for Egypt to achieve stability in the years ahead? Can they look toward their neighbors (Jordan, Tunisia?) for a positive example?
Cultural attitudes toward religion do not change overnight, particularly when they’ve been entrenched for decades. Even if a growing number of Egyptians are disillusioned with the way Islam is “used” for political gain, this does not necessarily translate into support for “secularism,” a word that is still anathema in Egyptian public discourse. One of my book’s arguments is that democratization not only pushes Islamists toward greater conservatism but also skews the entire political spectrum rightwards.
In Chapter 3, for instance, I look at the Arab world’s “forgotten decade,” when there were several intriguing but ultimately short-lived democratic experiments. Here, the ostensibly secular Wafd party, sensing the shift in the country toward greater piety, opted to Islamize its political program, something that was all too apparent (perhaps even a bit too obvious) in its 1984 program. It devoted an entire section to the application of Islamic law, in which the Wafd stated that Islam was both “religion and state.” The program also called for combating moral “deviation” in society and purifying the media of anything contradicting the sharia and general morals. The Wafd also supported the ambitious effort of Anwar Sadat’s supposedly secular regime, in the late 1970s and early 1980s, to reconcile Egyptian law with Islamic law. Led by speaker of parliament and close Sadat confidant Sufi Abu Talib, the initiative wasn’t mere rhetoric; Abu Talib’s committees painstakingly produced hundreds of pages of detailed legislation, covering civil transactions, tort reform, and criminal punishments, as well as the maritime code.
The point here is that the Islamization of society (itself pushed ahead by Islamists) doesn’t just affect Islamists. Even Egypt’s president, former general Abdel Fattah al-Sissi, cannot escape these deeply embedded social realities.
Egypt is de-democratizing right now, but the Sissi regime, unlike Mubarak’s, is a popular autocracy in which the brutal suppression of one particular group — the Muslim Brotherhood and other Islamists — is cheered on by millions of Egyptians. Sissi, then, is not immune from mass sentiment. A populist in the classic vein, Sissi seems to understand this and, like the Brotherhood, instrumentalizes religion for partisan ends. In many ways, Sissi’s efforts surpass those of the Islamists before him: he has asserted great control over al-Azhar, the premier seat of Sunni scholarship in the region, and used the clerical establishment to shore up his regime’s legitimacy. Sissi has said that it is the president’s role to promote a “correct understanding” of Islam. His regime has also been politically ostentatious with religion in its crackdown on the gay community.
Religion is a powerful tool in a deeply religious society and Sissi, whatever his personal inclinations, can’t escape that basic fact, particularly with a mobilized citizenry.
Looking at the region more broadly, there are really no successful models of reconciling democracy with Islamism, at least not yet, and this failure is likely to have long-term consequences for the region’s trajectory. Turkish Islamists had to effectively concede who they were and become something else — “conservative democrats” — in order to be fully incorporated in Turkish politics. In Tunisia, the Islamist Ennahda party, threatened with Egypt-style mass protests and with the secular opposition calling for the dissolution of parliament and government, opted to step down from power. The true test for Tunisia, then, is still to come: what happens if Ennahda wins the next scheduled elections, and the elections after that, and feels the need to be more responsive to its conservative base? Will this lead, again, to a breakdown in political order, with secular parties unwilling to live with greater “Islamization”?
You began your research on Islamist movements before the start of the Arab Spring. How did your project change after the unrest in 2011? What book did you think you would write when you began living in the region — and what did it become after the revolutions?
I began my research on Islamist movements in 2004-5, when I was living in Jordan as a Fulbright fellow. These were movements that displayed an ambivalence toward power, to the extent that they even lost elections on purpose (an odd phenomenon that was particularly evident in Jordan). Power, and its responsibilities, were dangerous. After the Islamic Salvation Front dominated the first round of the 1991 Algerian elections, and with the military preparing to intervene, the Algerian Islamist Abdelkader Hachani warned a crowd of supporters: “Victory is more dangerous than defeat.” In a sense, then, I was lucky to be able to expand the book’s scope to cover the tumultuous events of 2011-13, allowing me to explore evolving, and increasingly contradictory, attitudes toward power. Because if power was dangerous, it was also tempting, and so this became a recurring theme in the book: the potentially corrupting effects of political power, a problem which was particularly pronounced with groups that claimed a kind of religious purity that transcended politics. The book became about these two phases in the Islamist narrative: in opposition and under repression, on one hand, and during democratic openings, on the other. And then, of course, back again. The military coup of 3 July 2013, and then the Rabaa massacre of 14 August — a dark, tragic blot on Egypt’s history — provided the appropriate bookend. The Brotherhood had returned to its original, purer state of opposition.
The Arab Spring also provided an opportunity to think more seriously and carefully about the effects of democratization. Would democratization have a moderating effect on mainstream Islamist movements, as the academic and conventional wisdom would suggest? Or was there a darker undercurrent, with democratization unleashing ideological polarization and pushing Islamists further to the right? I wanted to challenge a kind of cultural essentialism in reverse: that Islamists, like their ideological counterparts in Latin America or Western Europe, would be no match for “liberal democracy,” history’s apparent end state. Any kind of determinism, even the liberal variety, would prove problematic, especially for us as Americans with our tendency to believe that the process of history would overwhelm the whims of ideology. In a way, I wanted to believe it too, and for many years I did. As someone who has long been a proponent of supporting democracy in the Middle East, I find this puts me in a bit of a bind: is democracy in the Middle East simply less attractive? Yes. And now, since the book has come out, I’ve been challenged along these very lines: “Maybe democracy isn’t so good after all… Maybe the dictators were right.” Well, in a sense, they were right. But this is only a problem if we conceive of democracy as some sort of panacea or short-term fix. Democracy is supposed to be difficult, and this is perhaps where the comparisons to the third-wave democracies of the 1980s and 1990s were misleading. The divides of Arab countries were “foundational,” meaning that they weren’t primarily “policy” problems; they were the more basic problems of the State, its meaning, its purpose, and, of course, the role of religion in public life, which inevitably brings us back to the identity of the State. What kind of conception of the Good should the Egyptian or Tunisian states be promoting? Should the state be neutral or should it be a state with a moral or religious mission?
These are raw, existential divides that hearken back more to 1848 than 1989.
You conducted many interviews to research Temptations of Power. How did the interviews craft your argument — whether you were speaking with political leaders, activists, students, or citizens? Feel free to mention some examples.
Spending so much time with Islamist activists and leaders over the course of a decade, some of whom I got to know quite well, was absolutely critical. And this book — and pretty much everything I know and think about Islamist movements — has been informed and shaped by those discussions. I guess I’m a bit old-fashioned that way: to understand Islamists, you have to sit with them, talk to them, and get to know them as individuals with their own fears and aspirations. This is where I think it’s important for scholars of political Islam to cordon off their own beliefs and political commitments. Just because I’m an American and a small-l liberal (and those two, in my case, are intertwined), doesn’t mean that Egyptians or Jordanians should be subject to my ideological preferences. If you go into the study of Islamism trying to compare Islamists to some liberal ideal, then that’s distorting. Islamists, after all, are products of their own political context, and not ours. So that’s the first thing.
Second, as a political scientist, my tendency has always been to focus on political structures, and the first half of my book does quite a bit of that. In other words, context takes precedence: Islamists — or, for that matter, Islam — are best understood as products of various political variables. This is true, but only up to a point, and I worry that we as academics have gone too far in this direction, perhaps over-correcting for what, decades ago, was a seeming obsession with belief and doctrine.
When religion is less relevant in our own lives, it can be difficult to make that jump — not just to understand, but to relate to, its meaning and power for believers, and for those, in particular, who believe they have a cause beyond this life. But I think that outsiders have to make an extra effort to close that gap. And that, in some ways, is the most challenging, and ultimately rewarding, aspect of my work: to be exposed to something fundamentally different. I think, at this point, I have a good grasp on how mainstream Islamists see the world around them. What I still struggle with is the willingness to die. If I were at a sit-in and the army were coming in with live fire, I’d run for the hills. And that’s why my time interviewing Brotherhood members in Rabaa — before the worst massacre in modern Egyptian history — was so fascinating and forced me to at least try to transcend my own limitations as an analyst. Gehad al-Haddad — who had given up a successful business career in England to return to Egypt — told me he was “very much at peace.” He was ready to die, and I knew that he, and so many others, weren’t just saying it. Because many of them — more than 600 — did, in fact, die.
Where does this willingness to die come from? I found myself pondering this same question just a few weeks ago when I was in London. One Brotherhood activist, now unable to return to Egypt, relayed the story of a protester standing at the front line when the military moved in to “disperse” the sit-in. A bullet grazed his shoulder. Behind him, a man fell to the ground. He had been shot to death. The protester looked over and began to cry. He could have died a martyr. He knew the man behind him had gone to heaven, in God’s great glory. This is what he longed for. As I heard this story, it couldn’t have been any clearer: this wasn’t politics in any normal sense. Purity, absolution: this was the language of religion, the language of certainties. Politics, in a sense, is about accepting, or at least coming to terms with, the impossibility of purity.
Are you working on any new publications at the moment?
I’m hoping to build on the main arguments in my book and look more closely at how the inherent tensions between religion and mundane politics are expressed in various contexts. This, I think, is at least part of what makes Islamists so important to our understanding of the Middle East. Because their story is, in some ways, the story of a region that is breaking apart because of the inability to answer the fundamental questions of identity, religion, God, citizenship, and State-ness. One project will look at how various Islamist movements have responded to a defining moment in the Islamist narrative — the military coup of July 3, 2013, which has quickly replaced the Algerian coup of 1992 as the thing that inevitably comes up when you talk to an Islamist. In some ways, I suspect it will prove even more defining in the long run. Algeria, as devastating as it was, was still somehow remote (and, ironically enough, the Muslim Brotherhood’s Algerian offshoot allowed itself to be co-opted by the military government throughout most of Algeria’s “black decade”).
This time around, there are any number of lessons to be learned. One response among Islamists is that the Brotherhood should have been more confrontational, moving more aggressively against the “deep state” instead of seeking temporary accommodation. Others fault the Brotherhood for not being inclusive enough and for alienating the very allies who had helped bring it to power. But, of course, these two “lessons” are not mutually exclusive: many believe the Brotherhood — although it’s not entirely clear how this would work in practice — should have been both more aggressive and more inclusive.
You recently went on a US tour to promote and discuss Temptations of Power — any recent discussion items, comments or questions which supported your conclusions or refined your thinking that you would like to share?
During the tour, I’ve really enjoyed the opportunity to discuss the more philosophical aspects of the book, including the “nature” of Islam, liberalism, and democracy. These are contested terms; Islam, for instance, can mean very different things to different people. A number of people asked about Narendra Modi, India’s democratically elected prime minister and somewhat notorious Hindu nationalist. Here is someone who, in addition to being illiberal, was complicit in genocidal acts against the Muslim minority in Gujarat. But an overwhelming number of Indians voted for him in a free, democratic process. There’s something inspiring about accepting electoral outcomes that might very well be personally threatening to you. Another allied country, Israel, is a democracy with strong (and seemingly strengthening) illiberal tendencies.
In some sense, the tensions between liberalism and democracy are universal, and trying to find the right balance is an ongoing struggle (although it’s more pronounced and more difficult to address in the Middle Eastern context). So it makes little sense to expect a given Arab country to become anything resembling a liberal democracy in two or three years, when, even in our own history as Americans, our liberalism as well as our democracy were very much in doubt at any number of key points. (I just read an excellent Peter Beinart piece on our descent into popularly backed illiberalism during World War I. Cincinnati actually banned pretzels.)
At the same time, looking at other cases has helped me better grasp what, exactly, makes the Middle East different. For example, as illiberal as Modi and the BJP might be, the ideological distance between them and the Congress Party isn’t as great as we might think. In part, this is because the Hindu tradition, to use Michael Cook’s framing, is simply less relevant to modern politics. As Cook writes, “Christians have no law to restore while Hindus do have one but show little interest in restoring it.” Islamists, on the other hand, do have a law, and it’s a law that’s taken seriously by large majorities in much of the region. The distinctive nature of “law” — and its continued relevance — in today’s Middle East does add a layer of complexity to the problem of pluralism. This gets us into some uncomfortable territory, but I think ignoring it would be a mistake. Islam is distinctive in how it relates to modern politics, at least relative to other major religions. This isn’t bad or good. It just is, and I think it is worth grappling with. As the region plunges into ever greater violence, with questions of religion at the fore, we will need to be more honest about this, even if it’s uncomfortable. This, sometimes, can be as simple as taking religion, and “Islam” in particular, more seriously in an age of secularism. I’m reminded of one of my favorite quotes, which I cite in the book, from the great historian of the Muslim Brotherhood, Richard Mitchell. The Islamic movement, he said, “would not be a serious movement worthy of our attention were it not, above all, an idea and a personal commitment honestly felt.”
Heading image: Protesters raise their fists toward Pearl Roundabout. By Bahrain in pictures, CC-BY-SA-3.0 via Wikimedia Commons.
From their remotest origins, treaties have fulfilled numerous different functions. Their contents are as diverse as the substance of human contacts across borders themselves. From pre-classical Antiquity to the present, they have not only been used to govern relations between governments, but also to regulate the position of foreigners or to organise relations between citizens of different polities.
The backbones of the ‘classical law of nations’ or the jus publicum Europaeum of the late 17th and 18th centuries were the networks of bilateral treaties between the princes and republics of Europe, as well as the common principles, values, and customary rules of law that could be induced from the shared practices that were employed in diplomacy in general and in treaty-making in particular. Some treaties, particularly the sets of peace treaties that were made at multiparty peace conferences — such as those of Westphalia (1648, from 1 CTS 1), Nijmegen [Nimeguen] (1678/79, from 14 CTS 365), Rijswijk [Ryswick] (1697, from 21 CTS 347), Utrecht (1713, from 27 CTS 475), Aachen [Aix-la-Chapelle] (1748, 38 CTS 297) or Paris/Hubertusburg (1763, 42 CTS 279 and from 42 CTS 347) — gained special significance and were considered foundational to the general political and legal order of Europe.
This interactive map shows a selection of significant peace treaties that were signed from 1648 to 1919. All of the treaties mapped here include citations to their respective entries in the Consolidated Treaty Series, edited and annotated by Clive Parry (1917-1982). (Please note that this map is not intended to be an exhaustive representation of the most important peace treaties from this period.)
Traveling through Scotland, one is struck by the number of memorials devoted to those who lost their lives in World War I. Nearly every town seems to have at least one memorial listing the names of local boys and men killed in the Great War (St. Andrews, where I am spending the year, has more than one).
Many who served in World War I undoubtedly suffered from what some contemporary psychologists and psychiatrists have labeled ‘moral injury’, a psychological affliction that occurs when one acts in a way that runs contrary to one’s most deeply-held moral convictions. Journalist David Wood characterizes moral injury as ‘the pain that results from damage to a person’s moral foundation’ and declares that it is ‘the signature wound of [the current] generation of veterans.’
By definition, one cannot suffer from moral injury unless one has deeply-held moral convictions. At the same time that some psychologists have been studying moral injury and how best to treat those afflicted by it, other psychologists have been uncovering the cognitive mechanisms that are responsible for our moral convictions. Among the central findings of that research are that our emotions often influence our moral judgments in significant ways and that such judgments are often produced by quick, automatic, behind-the-scenes cognition to which we lack conscious access.
Thus, it is a familiar phenomenon of human moral life that we find ourselves simply feeling strongly that something is right or wrong without having consciously reasoned our way to a moral conclusion. The hidden nature of much of our moral cognition probably helps to explain the doubt on the part of some philosophers that there really is such a thing as moral knowledge at all.
In 1977, philosopher John Mackie famously pointed out that defenders of the reality of objective moral values were at a loss when it came to explaining how human beings might acquire knowledge of such values. He declared that believers in objective values would be forced in the end to appeal to ‘a special sort of intuition’ — an appeal that he bluntly characterized as ‘lame’. It turns out that ‘intuition’ is indeed a good label for the way many of our moral judgments are formed. In this way, it might appear that contemporary psychology vindicates Mackie’s skepticism and casts doubt on the existence of human moral knowledge.
Not so fast. In addition to discovering that non-conscious cognition has an important role to play in generating our moral beliefs, psychologists have discovered that such cognition also has an important role to play in generating a great many of our beliefs outside of the moral realm.
According to psychologist Daniel Kahneman, quick, automatic, non-conscious processing (which he has labeled ‘System 1’ processing) is both ubiquitous and an important source of knowledge of all kinds:
‘We marvel at the story of the firefighter who has a sudden urge to escape a burning house just before it collapses, because the firefighter knows the danger intuitively, ‘without knowing how he knows.’ However, we also do not know how we immediately know that a person we see as we enter a room is our friend Peter. … [T]he mystery of knowing without knowing … is the norm of mental life.’
This should provide some consolation for friends of moral knowledge. If the processes that produce our moral convictions are of roughly the same sort that enable us to recognize a friend’s face, detect anger in the first word of a telephone call (another of Kahneman’s examples), or distinguish grammatical and ungrammatical sentences, then maybe we shouldn’t be so suspicious of our moral convictions after all.
In all of these cases, we are often at a loss to explain how we know, yet it is clear enough that we know. Perhaps the same is true of moral knowledge.
Still, there is more work to be done here, by both psychologists and philosophers. Ironically, some propose a worry that runs in the opposite direction of Mackie’s: that uncovering the details of how the human moral sense works might provide support for skepticism about at least some of our moral convictions.
Psychologist and philosopher Joshua Greene puts the worry this way:
‘I view science as offering a ‘behind the scenes’ look at human morality. Just as a well-researched biography can, depending on what it reveals, boost or deflate one’s esteem for its subject, the scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it. … Understanding where our moral instincts come from and how they work can … lead us to doubt that our moral convictions stem from perceptions of moral truth rather than projections of moral attitudes.’
The challenge advanced by Greene and others should motivate philosophers who believe in moral knowledge to pay attention to findings in empirical moral psychology. The good news is that hope for the reality of moral knowledge remains.
And if there is moral knowledge, there can be increased moral wisdom and progress, which in turn makes room for hope that someday we can solve the problem of war-related moral injury not by finding an effective way of treating it but rather by finding a way of avoiding the tragedy of war altogether. Reflection on ‘the war to end war’ may yet enable it to live up to its name.
The Roosevelts: two exceptionally influential Presidents of the United States, fifth cousins from two different political parties, and key players in the United States’ involvement in both World Wars. Theodore Roosevelt negotiated an end to the Russo-Japanese War and won the 1906 Nobel Peace Prize. He also campaigned for America’s entry into the First World War. Almost 25 years later, Franklin Delano Roosevelt came into office in the depths of the Great Depression, yet during his 12-year presidency he presided over a drop in unemployment rates from 24% when he first took office to a mere 2% by 1945. Furthermore, First Lady Eleanor Roosevelt championed women’s rights, aid for World War II refugees, and the civil rights of Asian and African Americans, continuing well after her husband’s presidency and death. Witness the lives of these illustrious figures through this slideshow, and take a look at the first half of twentieth-century American history through the lives of the Roosevelts.
“[Theodore] Roosevelt used his bully pulpit to shape public opinion on many subjects. Conservation of natural resources received special emphasis…. Earlier presidents had done little to protect scenic places and national parks against the wasteful exploitation of the environment…. The president achieved much, creating five national parks, four national game preserves, fifty-one bird reservations, and one hundred and fifty national forests” (Lewis L. Gould, Theodore Roosevelt, 43). Public domain via the Library of Congress
In 1909 and 1910, after finishing his second term as president, Roosevelt traveled to Africa on safari. While abroad, the American public grew increasingly fascinated with Roosevelt and “to satisfy popular demand, [Theodore Roosevelt] recruited a friendly reporter, Warrington Dawson, to recount the progress of the hunt for the press corps. When Roosevelt returned first to Europe and then home in the spring of 1910, it was to intense popular acclaim everywhere.” (Lewis L. Gould, Theodore Roosevelt, 52). TR (center, facing sideways) on safari, 1910. Public domain via the Library of Congress.
Theodore Roosevelt and William Howard Taft
“Taft was a first-class lieutenant; but he is only fit to act under orders; and for three years and a half the orders given him have been wrong. Now he has lost his temper and is behaving like a blackguard.” (Theodore Roosevelt to Arthur Lee, dated May 1912, from the Papers of Lord Lee of Fareham.) After leaving office in 1909, Theodore Roosevelt’s relationship with his personally selected successor, William Howard Taft, soured due to policy differences. Theodore Roosevelt decided to run for an unprecedented third term against President Taft in 1912 as a third-party candidate. Theodore Roosevelt and his newly founded Progressive Party were ultimately defeated by Democratic candidate Woodrow Wilson in the general election. Theodore Roosevelt and William H. Taft, c. 1909. Public domain via the Library of Congress.
Franklin Delano Roosevelt with his mother, Sara
“Franklin grew up in a remarkably cosseted environment, insulated from the normal experiences of most American boys, both by his family’s wealth and by their intense and at times almost suffocating love…. It was a world of extraordinary comfort, security, and serenity, but also one of reticence and reserve.” (Alan Brinkley, Franklin Delano Roosevelt, 4). Franklin Delano Roosevelt with his mother, Sara, 1887. Public domain via Wikimedia Commons.
FDR at Harvard
“Entering Harvard College in 1900, [FDR] set out to make up for what he considered his social failures [as a boarding school student at] Groton. He worked hard at making friends, ran for class office, and became president of the school newspaper, the Harvard Crimson, a post that was more a social distinction at the time than a journalistic one. (His own contributions to the newspaper consisted largely of banal editorials calling for greater school spirit.)” (Alan Brinkley, Franklin Delano Roosevelt, 5). FDR as president of the Harvard Crimson, with its Senior Board in 1904. Public domain via the Franklin Delano Roosevelt Library.
FDR and Polio
In August of 1921, Roosevelt fell ill after being exposed to the poliomyelitis virus. “He learned to disguise it for public purposes by wearing heavy leg braces; supporting himself, first with crutches and later with a cane and the arm of a companion; and using his hips to swing his inert legs forward… So effective was the deception that few Americans knew that Roosevelt could not walk” (Brinkley, Franklin Delano Roosevelt, 18-19). Franklin D. Roosevelt, Fala, and Ruthie Bie at Hill Top Cottage in Hyde Park, N.Y. Franklin Delano Roosevelt Library.
FDR and the Great Depression
Depression breadlines. In the absence of substantial government relief programs during 1932, free food was distributed with private funds in some urban centers to large numbers of the unemployed, February 1932. Franklin D. Roosevelt Presidential Library & Museum, Photo 69146. Public domain.
FDR and the New Deal
“When Franklin Delano Roosevelt took the oath of office as president for the first time on March 4, 1933, every moving part in the machinery of the American economy had evidently broken…. Roosevelt right away began working to repair finance, agriculture, and manufacturing…. The Roosevelt agenda grew by experiment: the parts that worked stuck, no matter their origin. Indeed, the program got its name by just that process: Roosevelt used the phrase “new deal” when accepting the Democratic nomination for president, and the press liked it. The “New Deal” said Roosevelt offered a fresh start, but it promised nothing specific: it worked, so it stuck.” (Rauchway, The Great Depression and the New Deal: A Very Short Introduction, 56). Franklin Roosevelt at desk in Oval Office with group, Washington, D.C., 1933. Library of Congress, Harris & Ewing Collection. Wikimedia Commons.
FDR and the New Deal
At the beginning of his presidency, Roosevelt proposed a “New Deal.” Over time, it “created state institutions that significantly and permanently expanded the role of federal government in American life, providing at least minimal assistance to the elderly, the poor, and the unemployed; protecting the rights of labor unions; stabilizing the banking system; building low-income housing; regulating financial markets; subsidizing agricultural production…As a result, American political and economic life became much more competitive, with workers, farmers, consumers, and others now able to press their demands upon the government in ways that in the past had usually been available only to the corporate world” (Brinkley, Franklin Delano Roosevelt, 61). “CCC boys at work–Prince George Co., Virginia.” Franklin D. Roosevelt Presidential Library & Museum
FDR and the Social Security Act
President Roosevelt signed the Social Security Act, at approximately 3:30 pm EST on August 14th, 1935. Standing with Roosevelt are Rep. Robert Doughton (D-NC); Sen. Robert Wagner (D-NY); Rep. John Dingell (D-MI); Rep. Joshua Twing Brooks (D-PA); the Secretary of Labor, Frances Perkins; Sen. Pat Harrison (D-MS); and Rep. David Lewis (D-MD). Library of Congress. Wikimedia Commons.
FDR and the Social Security Act
One of the most important pieces of social legislation in American history was the Social Security Act of 1935. The Act was part of Roosevelt’s Second New Deal (1935-38). The Social Security Act set up several important programs, including unemployment compensation (funded by employers) and old-age pensions (funded by a Social Security tax paid jointly by employers and employees). It also provided assistance to the disabled (primarily the blind) and the elderly poor (people presumably too old to work). Furthermore, it established Aid to Dependent Children (later called Aid to Families with Dependent Children, or AFDC), which created the model for what most Americans considered “welfare” for over sixty years (Brinkley, Franklin Delano Roosevelt, 51-52). Roosevelt said, “No one can guarantee this country against the dangers of future depressions, but we can reduce those dangers” (Kennedy, Freedom from Fear, 270). This is a poster publicizing Social Security benefits. Public Domain via Franklin D. Roosevelt Library.
FDR and the Second World War
When war finally broke out in Europe in September 1939, Roosevelt continued to insist that the conflict would not involve the United States. Roosevelt declared, “This nation will remain a neutral nation, but I cannot ask that every American remain neutral in thought as well.” Then, on December 7th, 1941, a wave of Japanese bombers struck the American naval base at Pearl Harbor, Hawaii, killing more than 2,000 American servicemen and damaging or destroying dozens of ships and airplanes. Roosevelt called it “a date which will live in infamy” (Brinkley, Franklin Delano Roosevelt, 68). View looking up “Battleship Row” on 7 December 1941, after the Japanese attack on Pearl Harbor. The battleship USS Arizona (BB-39) is in the center, burning furiously. To the left of her are USS Tennessee (BB-43) and the sunken USS West Virginia (BB-48). Official U.S. Navy Photograph. Wikimedia Commons.
FDR and the declaration of war
“The Senate and House voted for a declaration of war—the Senate unanimously, and the House by a vote of 388 to 1. Three days later, Germany and Italy, Japan’s European allies, declared war on the United States, and the American Congress quickly and unanimously reciprocated” (Brinkley, Franklin Delano Roosevelt, 75-76). United States President Franklin D. Roosevelt signing the declaration of war against Japan, in the wake of the attack on Pearl Harbor. US National Parks Service via Wikimedia Commons
The Big Three
Shown here are ‘The Big Three’: Stalin, U.S. President Franklin D. Roosevelt, and British Prime Minister Winston Churchill at the Tehran Conference, November 1943. At this time, the war in eastern Europe had turned decisively in favor of the Soviet Union, which meant that Roosevelt and Churchill now had little leverage over Stalin. Even so, Stalin agreed to enter the Pacific war after the fighting in Europe came to an end. Roosevelt and Churchill promised to launch the long-delayed invasion of France in the spring of 1944 (Brinkley, Franklin Delano Roosevelt, 83). US Signal Corps public domain photo.
Eleanor Roosevelt and the Second World War
An outspoken and publicly active First Lady, Eleanor Roosevelt was engaged both on the home front and overseas. Her visits drew crowds of people who welcomed her warmly, generating positive press for the Roosevelts in both the United States and Britain. Eleanor Roosevelt visiting troops in the Galapagos Islands. US National Archives and Records Administration
The Roosevelt Family
Franklin D. Roosevelt and Eleanor Roosevelt with their 13 grandchildren in Washington, D.C. in January of 1945 (Archivist note: This photograph was taken at FDR’s fourth inauguration. This is one of the last family photographs taken before FDR’s death.) Franklin D. Roosevelt Presidential Library & Museum.
Franklin Delano Roosevelt died of a stroke on 12 April 1945. In the decades since his death, his stature as one of the most important leaders of the twentieth century has not diminished. “History will honor this man for many things, however wide the disagreement of many of his countrymen with some of his policies and actions,” the New York Times wrote the day after his death. “It will honor him above all else because he had the vision to see clearly the supreme crisis of our times and the courage to meet that crisis boldly. Men will thank God on their knees, a hundred years from now, that Franklin D. Roosevelt was in the White House” (The New York Times, 13 April 1945). Roosevelt’s funeral procession in Washington in 1945, watched by 300,000 spectators. Library of Congress.
In the 17 years she lived after her husband’s death, Eleanor Roosevelt carried on her humanitarian work and maintained the integrity of the Roosevelt name. President Harry Truman appointed her as a delegate to the United Nations General Assembly, and less than a year later she became the first chairperson of the preliminary United Nations Commission on Human Rights. She also chaired the John F. Kennedy administration’s Presidential Commission on the Status of Women. To this day, she is quoted and remembered with great respect and admiration for her work in human rights and politics. Roosevelt speaking at the United Nations in July 1947. Franklin D. Roosevelt Presidential Library and Museum.
False: Generally speaking, college is still worth the money in the long run. According to the latest figures from the College Board, the median earnings for a person with a bachelor’s degree were 65% greater than those for someone with just a high-school diploma over a 40-year working career. Those with associate degrees, typically earned in community or technical colleges, had earnings that were 27% higher. What’s more, the job market of the future will continue to offer more opportunities to those with post-secondary education. By 2020, experts predict, two-thirds of jobs will require at least some education and training beyond the high school level. Forty years ago, only about 28% of jobs required that higher level of education.
It costs hundreds of thousands of dollars to go to college.
False: While there are colleges that charge upwards of $50,000 a year for tuition, room, and board (at least 175 of them, counting the half-dozen or so public universities that charge their out-of-state students that much), most colleges cost a lot less. Last year half of all four-year public-college students attended an institution where the annual in-state tuition rate was below $9,011. Some 85 percent of them attended a college where tuition charges were below $15,000. Private colleges charge more, but with student aid from the federal and state governments and the colleges themselves, the price students actually pay is often substantially lower than the “sticker price.” Last year the average “net price” at a four-year private college was $12,460. And the average tuition at community colleges, where about four out of ten undergraduates now attend college, was about $3,300 a year.
Student debt is unmanageable.
True (and False): About 40 million Americans now carry student-loan debt, and for many of them, particularly recent graduates struggling to get established in a tough job market, student-debt burdens are a real challenge. That’s evidenced by the rising rate of defaults on student loans. But according to the latest data from the Project on Student Debt, among students graduating from college with debt, those who attended four-year public colleges had an average debt burden of $25,500. For comparison’s sake, a new Ford Focus automobile costs anywhere from about $17,000 to $35,000, depending on the options. The average debt level for graduates from four-year private colleges was $32,300. About 40% of student debt is for balances smaller than $10,000, according to the College Board.
Of all the factors that have propelled college prices up faster than the costs of most other goods and services over the past 40 years, the cost of all those tenured professors isn’t one of them.
True: Actually, while college costs have been rising, the proportion of faculty members who are tenured professors, or on track to be considered for tenure, has shrunk precipitously during the same period. In the mid-1970s, according to the American Association of University Professors, about 45% of all faculty members were tenured or on the tenure track; today only about one-quarter of them are. Full-time professors are well paid, but colleges now increasingly rely on faculty members whom they hire annually, adjunct professors whom they pay only about $2,700 per course, on average, and graduate teaching assistants. Meanwhile, factors that do seem to more directly drive up costs and prices include growing numbers of administrators, new facilities, major reductions in state support, and the costs of student aid.
Online education takes place primarily at for-profit colleges like the University of Phoenix and DeVry University.
False: For-profit colleges like those were among the first to use distance-education technologies to expand their enrollments, but online education is now increasingly commonplace in more traditional public and private colleges. According to the latest available data, more than five million students — about a quarter of the student population — took at least one course that was fully or partly online in fall 2012. About half of them took a class that was exclusively online. The medium, however, still seems more popular for certain fields of study. For both graduate and undergraduate education, the most common courses and degrees offered via distance education are in business, marketing, computer- and information-technologies, and health-related fields. In the future, students can expect to see more and more classes that use distance-education technology in a hybrid format, mixing face-to-face instruction with online components.
Headline image credit: Graduation By Tulane Public Relations, CC-BY-2.0 via Wikimedia Commons
From time to time, we try to give you a glimpse into work in our office around the globe, so we are excited to bring you an interview with Gemma Barratt, Marketing Manager for clinical medical journals. We spoke to Gemma about her life here at Oxford University Press.
When did you start working at OUP?
I started working at OUP five years ago in the Online Products department as a Marketing Assistant. I worked on everything from Oxford Scholarship Online and the Oxford English Dictionary, to the Oxford Dictionary of National Biography and Oxford Reference. I moved to become a Marketing Manager in the Journals End User Marketing team about a year ago and I now work on some of our major Clinical Medicine society titles.
What was your background before you started working at OUP?
I did my undergraduate degree in English literature and then a master’s in gender and culture. I originally planned on becoming an early years teacher, but was encouraged to do the MA instead and never went back! After my master’s I volunteered for a number of arts festivals, including the Cheltenham Literature Festival and Larmer Tree Festival, and ended up doing a six-month marketing internship with Salisbury International Arts Festival.
What drew you to work for OUP in the first place? What do you think about that now?
Following my internship I knew I wanted to work in marketing and I was attracted to OUP because of the size and reputation of the organization, and that’s still true. The work ethos of OUP is something that I really value and if you like working with passionate and driven people this is certainly a good company to be in.
What is your typical day like at OUP?
My typical day is busy and challenging. It can include anything from recruiting new members of staff and troubleshooting issues raised by societies to working on new bids and training — it’s very broad and varied.
What’s the most enjoyable part of your day?
I enjoy being busy and there is always plenty to do. I attend a lot of meetings and for the most part this is one of the things I most enjoy. They are opportunities to troubleshoot issues, share new ideas, and work collaboratively with colleagues.
What are the biggest challenges of working in the Journals End User Marketing team?
One of the biggest challenges is also one of the biggest draws to being part of this team — it’s incredibly busy and there are a lot of people to work with. The work is varied and challenging and you need to be on the ball all the time to make sure that deadlines are met and the societies we work with are happy.
What do you see as the key skills for a marketing team in journals publishing?
To be robust, creative, and not to be afraid to question the way things are done to find better ways of working. Also to be able to juggle and prioritize tasks. There are always new things coming in so it’s important to be flexible. I also think it’s very important to be personable and friendly, as managing relationships within the department, OUP more widely, and externally is a huge part of a marketing team’s role.
What is the most exciting project you have been part of while working for the team?
Probably working on new bids — we work collaboratively with the editorial team and it’s really a chance to showcase what we can do and demonstrate our creative ideas and results.
If you didn’t work in publishing, what would you be doing?
I would probably be doing a PhD — my MA focused on remembrance of World War I through contemporary fiction, so perhaps an extension of that?
It is a well known fact that the Christian church has, in the course of its 2,000-year long history, often been torn with controversy over how to understand those four simple words, ‘This is my body.’
The Orthodox have never been entirely comfortable with the label ‘transubstantiation,’ and at the outset of the Reformation, the Catholic understanding of the Mass was one of the prime issues that provoked Luther to decry the ‘Babylonian captivity’ of the church.
Luther, of course, went on to denounce Zwingli’s view of the Eucharist as vehemently as he had the Catholic one, and slightly later the Reformed followers of Calvin decided that they disagreed with both Luther and Zwingli. The intensity of these debates is understandable in light of the fact that all involved assumed that a correct understanding of the Eucharist had a direct bearing upon the manner in which Jesus was present to his followers.
Was Jesus still here, bringing salvation to his church, or had he departed and left them to get by as well as they could on their own? Defining the nature of this ritual was intrinsically tied to understanding the purpose of this community. Although this story is one often told, the parallels it presents to Christian views on the Bible have often gone overlooked. For the sources of Christian communal identity for the past two millennia include not only a ritual meal but also a written book.
At first this assertion strikes the reader as so obvious it hardly merits mentioning. However, recognizing the importance of this principle accounts for some of the disconnect modern readers of the Bible experience when they attempt to read accounts of scriptural interpretation from late antiquity.
As recounted in Michael Legaspi’s The Death of Scripture and the Rise of Biblical Studies, in the past five hundred years the Bible in the West has undergone a transformation as it was abstracted from its previous home in a unified Christian church and resituated in the context of modern academia.
Such a move would have appeared quite foreign to Christians of an earlier age who assumed that the Bible could not be understood properly apart from grasping its place in the divine plan of salvation centered upon the person of Jesus Christ.
For example, Cyril of Alexandria, the fifth-century bishop of the city that served as the intellectual capital of the Roman world, liked to use a metaphor to explain the Bible’s purpose to his Christian hearers.
In his sermons and writings, he explained the presence of the Bible in the church by stating that Jesus had given this book to his followers, like a shepherd providing his flock with green grass for their nourishment.
Cyril, of course, knew that the Bible was written by countless persons over a vast span of time, and he tried, using the best tools available to him, to attend to that sort of historical detail. But what was most important, in his view, was the fact that when the Bible was read, Jesus himself was present to save, in a manner akin to his presence in the Eucharist.
Whether it was the words of Moses or of the evangelist Mark, when Christians sitting in the basilica in late antique Alexandria heard the scriptures, what they experienced was Jesus himself speaking to them through that myriad of human voices.
And in making this assumption they were following a trajectory already begun in the New Testament itself. Had not the Apostle Paul declared that Christ was speaking in him (2 Cor. 13.3), and did not Jesus himself say that his words were ‘Spirit and life’ (John 6.63)?
For most twentieth-century historians, early Christian exegesis was regarded as unworthy of historical attention due to its failure to attain the standards of modern hermeneutical method.
Imagine the absurd parallel of modern scientists rejecting medieval views on the Eucharist on the basis that those benighted premoderns did not properly understand the chemical composition of bread and wine. Such a dismissal hardly grapples seriously with the way Christians tried to articulate the function of the ritual.
Late antique readers fare somewhat better when seen in their own context. If the Bible is viewed as the written and living voice of Jesus, then the task of interpretation comes to mirror this assumption.
Just as Jesus speaks through the human authors of the Bible, so interpretation must be a process of finding Jesus in those same words, so as to provide spiritual nourishment for Christians seeking to grow in virtue and understanding.
In this way, what Cyril and his contemporaries believed about the Bible determined the way in which they read the Bible as a community, and the consistency of their approach is laudable.
The Bible is open to a great many interpretive approaches, and the plausibility of those methods will always be a product of the community in which the reader is situated. Late antique Christians, who assumed that scripture functioned analogously to the Eucharist, at least managed to find an interpretive method that accorded with their communal experience of this book.
Dearest readers, I am sorry to say that the time has come for me to say goodbye. I have had a wonderful time meeting you all, not to mention learning more than I ever thought I would know about the fantastic field of oral history. However, grant applications and comprehensive examinations are calling my name, so I must take a step back from tweeting, Facebooking, tumbling and Google plusing (sure, why not).
Fear not, we have found another to take my place: the estimable and often bow-tie-wearing Andrew Shaffer. I chatted with him earlier this week and I already think he’ll make a wonderful Caitlin 2.0. (For instance, Andrew originally wanted to introduce himself with the lyrics from the Fresh Prince of Bel-Air theme song. A+.)
* * * * *
So, Andrew, tell us a bit about yourself.
Well, Caitlin, I am a first year PhD student at the University of Wisconsin-Madison, studying gender and sexuality history in a modern US context. I’m originally from Illinois, but lived in San Francisco for three years before coming to Madison. There I received an MA in International Studies and worked at a non-profit that provides legal resources and policy analysis to immigrants and immigration advocates.
Do you have any interests outside of school?
Honestly? Not really… But when I’m not thinking about school, I sometimes read, go on walks, or explore all the exciting things Madison has to offer.
That’s a little sad. But since you love school so much, I bet you have exceptionally exciting research interests?
I’m really interested in the ways LGBT activists have responded to political and social changes, and how their efforts have impacted the everyday lives of LGBT communities. Because of the incredible diversity among LGBT communities, I use intersectional approaches to better understand how various segments of our community are affected, or even created by these changes.
Oh, awesome! Do you use oral history or interviews in your research?
Absolutely! I had the good fortune to take a class on oral history methods in college, and I fell in love with it right away. Since then, I’ve been involved with multiple oral history projects, and I think it is one of the best tools available to preserve a community’s memories. Because I study the very recent past, I’m lucky to be able to use interviews and oral histories extensively in my research.
You’ll fit in just fine here then — perhaps even better than I did. Speaking of, what are you looking forward to about this position?
Thanks for noticing! I (and Troy) have worked hard to keep up with the latest trends in the field and to shine a spotlight on all the great work oral historians have been doing. Any concerns about taking over?
Definitely! Like most academic types, I find it easier to write 30 pages than 140 characters, but hopefully I’ll learn some brevity. You’ve done a really great job of preparing and sharing high quality posts through Oral History Review’s social media outlets, and I hope I can continue to provide an enjoyable experience for all of our followers!
I’m sure you’ll do great. Best of luck!
* * * * *
Andrew has already taken over all the social media platforms, so you should feel free to bombard him with questions at @oralhistreview, in the comments below or via the other 3 million social media accounts he now runs. He and I will also be at the upcoming annual meeting in October, so be sure to say hi — and goodbye.
Most of what we hear and read about twelfth-century hottie Rosamund Clifford, aka “Fair Rosamund,” just wasn’t so. True, she was Henry II’s mistress. But that’s about it. Like so many other medieval myths, Rosamund’s legendary life and death are a later invention. Herewith, the best of (untrue) Rosamund:
Myth 1: She went to school at, lived at, had assignations with the king at, retired to, died at, or in any way hung out at Godstow Abbey.
Sadly, Rosamund never entered Godstow until she was a fair corpse. She died around the year 1176, in the midst of her affair with the king, and was buried at Godstow, probably because her mother was already buried there. Contrary to what you will read in various places, there is no evidence that the king paid for her tomb. Her tomb was placed in front of the high altar, and the king did show particular favor to the monastery because of it. Fifteen years later, Bishop Hugh of Lincoln made the nuns move the tomb out of the church because it was inappropriate for a “whore” to be buried there.
Myth 2: She and Henry went drinking at the Trout. Or the Perch.
I read this about the pubs near Godstow in a student handbook when I was doing my postgraduate work at Oxford, and I wanted to believe it. So did visiting relatives. Alas, not true. See myth 1 above: no hanging out at Godstow. But my visitors and I did enjoy some pleasant pints at both the aforementioned hostelries.
Myth 3: She lived in a maze at Woodstock.
Of course this is a later embellishment, related to the next two myths. But a fairly elaborate pleasure garden does seem to have been incorporated into the royal residence at Woodstock in this period, adjacent to a room that just a generation later was known as “Rosamund’s Chamber.” So the maze story may have evolved from a real trysting place in a complex garden.
Myth 4: The queen found her in the maze by means of a silken thread.
See previous myth. But there is, just barely, a silken thread in Rosamund’s true story. After her burial at Godstow, King Henry wanted a special relationship with her burial place, so the nunnery’s patron deeded his patronal rights in Godstow to the king. In the ceremony he used a silk cloth that was later described as “a silken thread.”
Myth 5: She was murdered by Queen Eleanor of Aquitaine.
The earliest version of this story, from the fourteenth century, has Eleanor stabbing Rosamund; in Renaissance versions the queen makes Rosamund choose between stabbing and poison. Interestingly, even the Victorians made a sympathetic victim of poor Rosamund (the fornicating mistress) and turned Eleanor (the wronged wife) into a murderous monster. Needless to say, there’s no truth to the murder stories, which arose long after Rosamund died.
Myth 6: She was the mother of Henry II’s illegitimate son Geoffrey Plantagenet, archbishop of York, and/or his illegitimate son William Longespee, earl of Salisbury.
Rosamund was too young to be the mother of Geoffrey, whose mother was apparently a woman named Ikeni. William Longespee was the son of Ida de Tosny.
Myth 7: Latin bell inscriptions all over England make reference to her.
These inscriptions read, “I who am struck am called Maria [or Katherine], the rose of the world.” Rosamund was a rare, possibly unique, name for a woman in twelfth-century England, but the phrases rosa munda (pure rose) and rosa mundi (rose of the world) were epithets for the Virgin Mary. It’s likely that Rosamund Clifford was named (creatively and, as it turned out, ironically) in honor of the Virgin, and that the bell inscriptions came from the same general cultural source.
Myth 8: Roses were spread over her tomb.
No, just a silken pall and candles, as far as we know. It’s possible, however, that the Gallica rose ‘Rosa Mundi’ was named for her, as her legend grew in the later Middle Ages. Perhaps the rose, like the bells, was named for the Virgin Mary, but the name of the rose is one bit of Rosamund lore that seems plausible.
Like every other custom in life, kissing has been studied from the historical, cultural, anthropological, and linguistic points of view. Most people care more for the thing than for the word, but mine is an etymological blog, so don’t expect a disquisition on the erotic aspects of kissing, even though a few lines below will lead us in that direction. Did the ancient Indo-Europeans, the semi-mythic people who lived no one knows exactly when and where, kiss? And if they did, what was their method of performing this “gesture”? Did they rub one another’s noses, the way many people do? Did they kiss their children before putting them to their nomadic beds? Did they kiss goodbye to lost objects, blow a kiss to a friend, or kiss the hand of the woman whose affections they hoped to gain? Alas, we will never know. Even a common Indo-European word for “head” does not exist, and if there is no head, how does one kiss in a truly Proto-Indo-European way? Our records, beginning with Ancient Egypt, the Old Testament, and Vedic texts, are quite old but not old enough.
In 1897 Kristoffer Nyrop (1858-1931), a distinguished student of Romance linguistics and semantic change, wrote a book called Kysset og dets historie (The Kiss and Its History; being a nineteenth-century Dane, he stuck to the reactionary habit of writing his works in Danish, but the book was translated into English almost immediately and is still available). The 190-page study reads like a novel. A week after its publication, all the copies were sold out, and Nyrop was asked to prepare a second edition and do so in a wild hurry, to be ready for Christmas sales. As could be expected, he complied. Regrettably, he said nothing about the origin of the word. Yet the literature on the etymology of kiss is huge.
As usual, I’ll begin with Germanic. The ancestors of the Modern Germans, Dutch, Frisians, Scandinavians, and English had almost the same word for “kiss,” approximately koss (coss). Part of the New Testament in Gothic has come down to us. Gothic is a Germanic language, recorded in the fourth century, and its verb for “kiss” is kukjan. As early as 1861, Dutch dialectal kukken surfaced in a scholarly work, and somewhat later an almost identical East Frisian form was set in linguistic circulation. It became clear that at one time Germanic speakers had two forms—one with -ss-, the other with -kk-. Their relation has never been explained to everybody’s satisfaction.
Solomon in The Song of Songs mentions passionate kisses on the mouth, and Judas must also have kissed Jesus on the mouth. At least, such was the general perception in the Middle Ages (for example, this is how Giotto and Fra Angelico, but more explicitly Giotto, represented the scene), so the Hebrews and the Romans kissed as we do, and Wulfila, the translator of the Gothic Bible, probably had a similar image before his eyes while working with the Greek text. So the speakers of the Germanic languages called “kiss” a kuss- (the vowels might differ slightly) or a kukk-.
Whenever the ritual of kissing came into being, some kisses were used to show respect and in other situations served a purpose comparable to shaking hands (think of a handshake sealing a bargain). Kissing the foot of a king or the Pope belongs here too. Dutch zoenen has the root of a verb meaning “reconcile” (a cognate of German versöhnen). Consequently, people kissed to mark the end of hostilities. Later the Dutch verb broadened its meaning and began to denote any kiss. Something similar happened in Russian, in which the verb for “kiss” is akin to the adjective for “whole”: tselovat’ (stress on the last syllable), from tsel. A kiss must have been a gesture signifying “be healthy, gesundheit.” Another Dutch verb for “kiss” (this time, dialectal), with a close analog in dialectal German, is poenen ~ puunen and seems to have meant “push, plunge, thrust; come into contact.” Here the emphasis was obviously on the movement in the direction of another person. Then there is Engl. smack, believed to be sound-imitative: apparently, when one kisses someone, smack is heard. Onomatopoeia is always hard to prove, but compare Russian chmok, which means exactly the same as smack. Latin savium, of obscure origin, designated an erotic kiss, while osculum goes back to the word for “mouth” (os). Neither is sound-imitative.
Where then does Old Germanic kuss- ~ kukk- belong? Many researchers have suggested that it is sound-imitative, like smack. Perhaps we really hear or think we hear smack, chmok, kuss, and kukk when we kiss. However, even an onomatopoeic word can have a protoform. Reconstructing any protoform is pure algebra. For example, the Gothic for come is qiman (pronounced as kwiman). Its indisputable Latin cognate is venire. To make the two belong together, we should posit an ancestor beginning with gw-. In Latin, g was lost, and in Germanic it yielded k, according to the law of the consonant shift (b, d, g to p, t, k). Did the ancestors of Latin speakers ever say gwenire? Most likely, they did.
In the same way, kiss was tentatively connected with Latin gustare “to taste,” on the assumption that at one time the sought-for form began with gw-. Although this suggestion can be found in one of the best Germanic etymological dictionaries, it now has few, if any, supporters. More instructive is the fact that the Hittite for “kiss” was kuwaszi, and it resembles Sanskrit ṡvaṡiti “to blow; snort” (k- and s- alternate according to a certain rule, while u and w are variants of the same phonetic entity). Add to them Greek kuneo “kiss,” in whose conjugation -s- appears with great regularity: the future was kuso and the aorist ekusa, earlier ekussa. On the basis of this evidence, several authoritative modern dictionaries posit a Proto-Indo-European form of kiss. Can we imagine that three or so thousand years ago there was a common verb for kiss that has come down to our time? Possibly, if “kiss” designated something very common and important, that is, if, for example, it existed as a religious term, something like “worship an idol by touching the image with one’s lips.”
Other hypotheses also exist. Kiss was compared with the verb for “speak,” from which English has the antiquated preterit quoth; Engl. choose and chew; Swedish kuk “penis,” Low (= Northern) German kukkuk “whore; vulva,” Irish bel “lip,” and especially often with Latin basium “kiss” (noun) ~ basiare “kiss” (verb), recognizable today from its cognates: French baiser, Italian baciare, and Spanish besar. All those conjectures should probably be dismissed as unprofitable. The origin of basiare is unknown, and nothing good ever comes from explaining one obscure word by referring it to another equally obscure one.
We are left with two choices. Perhaps there indeed once existed a proto-verb for kiss sounding approximately like it, but who kissed whom or what and in what way remains undiscovered. Or, while kissing, different people heard a sound that resembles either kuss or kukk. Neither solution inspires too much confidence, but, in any case, the long consonant (-ss and -kk) points to the affective nature of the verb. Perhaps an ancient expressive verb belonging to the religious sphere had near universal currency, with Hittite, Sanskrit, and Germanic still having its reflexes. If so, the main question will be about the application of that verb. The sex-related look-alikes (“penis,” “vulva,” and the rest) should, almost certainly, be ascribed to coincidence.
To prevent the Indo-European imagination from running wild, one should remember that alongside kiss, Engl. buss exists. Although it sounds like Middle Engl. bass (the same meaning), bass could not become buss, and it is anybody’s guess whether bass is of French or Latin origin. Swedish dialectal puss corresponds to German Bavarian buss, which is remembered because Luther used it. French, Spanish, Portuguese, Lithuanian, Persian, Turkic, and Hindi have almost identical forms (Spanish is sometimes said to have borrowed its word from Arabic), while Scottish Gaelic and Welsh bus means “lip; mouth.” Even Engl. ba “to kiss” has been recorded. This array of b-words seems to tip the scale toward the onomatopoeic solution, the more so because, to pronounce b, we have to open the lips. For millennia people have kussed (no pun intended), kossed, kissed, kukked, bassed, and bussed, to show affection and respect, to conclude peace, and just for the fun of it, without paying too much attention to origins. This is not giving a kiss of death to etymological research: it is rather a warning that some things are hard to investigate.
Nowadays the question “where does a certain sentence occur?” has lost its edge. Google will immediately provide the answer. So find out who wrote: “‘A gentleman insulted me today’, she said, ‘he hugged me around the waist and kissed me’.” Then read, laugh, and weep with the heroine.
Image credits: (1) “The prince awakened Sleeping Beauty.” From Kinder und Hausmarchen, von Jakob L. und Wilhelm K. Grimm; illus. von Hermann Vogel. Dritte Auflage), 1893. NYPL Digital Gallery. Digital ID: 1698628. New York Public Library (2) The Kiss. Gustav Klimt. 1907-1908. Austrian Gallery Belvedere. Public domain via Wikimedia Commons.
Harriet Ross Tubman’s heroic rescue effort on behalf of slaves before and during the Civil War was a lifetime fight against social injustice and oppression.
Most people are aware of her role as what historian John Hope Franklin considered the greatest conductor for the Underground Railroad. However, her rescue effort also included her work as a cook, nurse, scout, spy, and soldier for the Union Army. As a nurse, she cared for black soldiers by working with Clara Barton, founder of the American Red Cross, who was in charge of front line hospitals. Over 700 slaves were rescued in the Tubman-led raid against the Confederates at the Combahee River in South Carolina. She became the only woman in U.S. history to plan and lead both white and black soldiers in such a military coup.
It is the latter activity that caused black feminists in Roxbury, Massachusetts, to organize themselves during the seventies as the Combahee River Collective. When Tubman died, she was given a military burial with honors. It is also Tubman’s work as an abolitionist, advocate for women’s suffrage, and carer for the elderly that informs black feminist thought. It is only fitting that we remember the life of this prominent nineteenth-century militant social reformer on the 165th anniversary of her escape from slavery on 17 September 1849.
Tubman was born into slavery around 1820 to Benjamin and Harriet Ross and given the name Araminta. She later took her mother’s name, Harriet. As a slave child, she worked first in the household and then was assigned to work in the fields. Her early years as a slave on the Eastern Shore of Maryland were traumatic, and she was sickly. An overseer threw an object that accidentally hit Tubman in the head, and the injury she sustained caused her to have seizures and blackouts all her life. She even had visions, and this, combined with her religiosity, caused her to believe that she was called by God to lead slaves to freedom. It is believed that her work in the fields gave her the physical stamina to make her rescues. She was married in 1844 to John Tubman, a free black man, but her anxiety about being sold caused her to run away to Philadelphia and leave John behind. Runaways were rare among slave women, but prevalent among slave men.
Between 1846 and 1860, Tubman successfully rescued close to 300 family members and other slaves. She became part of a network of prominent abolitionists who created escape havens for passage from the South to Northern cities and then on to Canada. The recent award-winning film Twelve Years a Slave reminds us that even free blacks were subject to being turned in as runaways after passage of the Fugitive Slave Law of 1850. Tubman was troubled by this new law and was eager to go directly to Canada, where she herself resided for a time. She made anywhere from 11 to 19 rescue trips. The exact count is unclear because such records were not kept in this clandestine social movement. Maryland plantation owners put a $40,000 bounty on Tubman’s head. She was never caught and she never lost a passenger. Her motto, like Patrick Henry’s, was “give me liberty or give me death.” She carried a pistol with her and threatened to shoot any slave who tried to turn back. The exodus from slavery was so successful that the slaves she led to freedom called her Moses. She was such a master of disguise and subterfuge that these skills were used after she joined the Union Army. It has also been reported that the skills she developed were so useful to the military that her scouting and spy strategies were taught at West Point. She purchased a home in Auburn, New York, where she resided after the Civil War. Her husband, John Tubman, died after the war, and she married Nelson Davis, another Civil War veteran. From her home in Auburn, she continued to help former slaves.
The Social Reformer
Historian Gerda Lerner once described Tubman as a revolutionist who continued her organizing activities in later life. Tubman supported women’s suffrage, gave speeches at organizing events for both black and white women, and was involved in the organizing efforts of the National Federation of Afro-American Women. After a three decade delay, Tubman was given $20 a month by the government for her military service. Tubman lived in poverty, but her mutual aid activities continued. She used her pension and money from fundraising activities to provide continued aid to freed slaves and military families. She died in 1913 in the home she established for the elderly and poor, the Harriet Tubman Home for Aged and Indigent Colored People, now a National Historic Monument.
Harriet Ross Tubman escaped from slavery, but remembered those she left behind. She was truly an historic champion for civil rights and social justice.
Heading image: Underground Railway Map. Compiled from “The Underground Railroad from Slavery to Freedom” by Wilbur H. Siebert, The Macmillan Company, 1898. Public Domain via Wikimedia Commons.
How do you survive as a psychology student? It might be a daunting prospect, but we here at OUP are here to give you a helping hand through three years of cognitive overload. Here are our top tips:
1. Do some essential reading before you start your degree! Psychology is a very broad subject, so build some strong foundations with a wide reading base, especially if you’re new to the subject. Check out our Essential Book List to get you started (and recommendations welcome in the comments below).
2. Stay up-to-date with current affairs. Psychology is a continually evolving subject, with new ideas and perspectives emerging all the time. Read blogs, journals, and magazines; watch TED talks; listen to podcasts; and scan newspapers for psychology-themed stories.
3. Always keep your eyes and ears open. University is your chance to learn beyond the classroom. Pay attention to life – just watching your favourite TV programme can give you an insight into how a theoretical concept might actually work. Use everyday events and interactions to deepen your understanding of psychological ideas.
4. Learn from everyone around you. Psychology asks questions about how we as humans think – so go and think together with some other humans! Compare and contrast different ideas and approaches, and make the most of group learning or other opportunities, like taking part in other people’s surveys or experiments. Joining your university psychology society is a great way to learn from your peers and to balance work with play.
5. Learn how to study independently. This is your chance to learn what you want, not what you have to. You will have much greater academic freedom than ever before. Wherever you choose to study, you will have to take on your own independent research, and if you see yourself building a career in psychology, then independent investigation is crucial.
6. Hone your note-taking / diagram-making skills. On your laptop, tablet, smartphone — or with paper and pens — you’ll be writing a lot of notes over the course of your degree. Referencing and formatting might not seem like the most exciting aspects of your degree, but good preparation and organisation will make them more bearable (and quicker!). Get to know how best you learn, remember and process information.
7. Get enough sleep. Sitting up late staring at textbooks and computer screens is easy, but it’s not the healthiest habit to get into. Studying well is less about the number of hours you put in, than how effectively you spend those hours. Keep up a balanced diet, stay hydrated, do regular exercise, and find someone to talk to if you’re feeling stressed.
8. Don’t be afraid to admit to your own weaknesses. Psychology is a demanding subject, and questions are more common than neat answers.
9. Try to enjoy your studies. There are many ideas to explore, from behaviour to dreams, memory to psychoanalysis. Keep looking at different topics that interest you to stay motivated. When it does get too much, don’t be afraid to step back and take a break.
10. Finally, remember what psychology is about. You can get lost in surveys and experiments, theories and concepts, but try to always keep in mind what drew you to psychology in the first place. In studying psychology you’re taking part in a great tradition of questioning how the human mind works and behaves – be proud of that.
Heading Image: Student. Photo by CollegeDegrees360, CC BY-SA 2.0 via Flickr
September 2014 marks the tenth anniversary of the publication of the Oxford Dictionary of National Biography. Over the next month a series of blog posts will consider aspects of the ODNB’s online evolution in the decade since 2004. Here the literary historian David Hill Radcliffe considers how the ODNB online is shaping new research in the humanities.
The publication of the Oxford Dictionary of National Biography in September 2004 was a milestone in the history of scholarship, not least for crossing from print to digital publication. Prior to this moment a small army of biographers, myself among them, had worked almost entirely from paper sources, including the stately volumes of the first, Victorian ‘DNB’ and its 20th-century print supplement volumes. But the Oxford DNB of 2004 was conceived from the outset as a database and published online as web pages, not paper pages reproduced in facsimile. In doing away with the page image as a means of structuring digital information, the online ODNB made an important step which scholarly monographs and articles might do well to emulate.
Database design has seen dramatic changes since 2004—shifting from the relational model of columns and rows, to semi-structured data used with XML technologies, to the unstructured forms used for linking data across repositories. The implications of these developments for the future of the ODNB remain to be seen, but there is every reason to believe that its content will be increasingly accessed in ways other than the format of the traditional biographical essay. Essays are not going away, of course. But they will be supplemented by the arrays of tables, charts, maps, and graphs made possible by linked data. Indeed, the ODNB has been moving in this direction since 2004 with the addition of thousands of curated links between individuals (recorded in biographical essays) and the social hierarchies and networks to which they belonged (presented in thematic list and group entries)—and then on to content by or about a person held in archives, museums or galleries worldwide.
Online the ODNB offers scholars the opportunity to select, group, and parse information not just at the level of the article, but also in more detailed ways—and this is where computational matters get interesting. I currently use the ODNB online as a resource for a digital prosopography attached to a collection of documents called ‘Lord Byron and his Times’, tracking relationships among more than 12,000 Byron-contemporaries mentioned in nineteenth-century letters and memoirs; of these people a remarkable 5000 have entries in the ODNB. The traditional object of prosopography was to collect small amounts of information about large numbers of persons, using patterns to draw inferences about slenderly documented lives. But when computation is involved, a prosopography can be used with linked data to parse large amounts of information about large numbers of persons. As a result, one can attend to particularities, treating individuals as members of a group or social network without reducing them to the uniformity of a class identity. Digital prosopography thus returns us to something like the nineteenth-century liberalism that inspired Sir Leslie Stephen’s original DNB (1885-1900).
The key to finding patterns in large collections of lives and documents, the evolution of technology suggests, is to atomize the data. As a writer of biographies I would select from documentary sources, collecting the facts of a life, and translating them into the form of an ODNB essay. Creating a record in a prosopography involves a similar kind of abstraction: working from (say) an ODNB entry, I abstract facts from the prose, encoding names and titles and dates in a semi-structured XML template that can then be used to query my archive, comprising data from previous ODNB abstractions and other sources. For instance: ‘find relationships among persons who corresponded with Byron (or Harrow School classmates, or persons born in Nottinghamshire, etc.) mentioned in the Quarterly Review.’ An XML prosopography is but a step towards recasting the information as flexible, concise, and extensible semantic data.
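The kind of semi-structured record and query described above can be sketched in a few lines using Python’s standard library. This is purely illustrative: the element names and the sample data are hypothetical, not the actual schema of ‘Lord Byron and his Times’ or the ODNB.

```python
import xml.etree.ElementTree as ET

# Hypothetical records abstracted from prose essays into XML.
RECORDS = """
<prosopography>
  <person id="p1">
    <name>Lord Byron</name>
    <school>Harrow School</school>
    <mentionedIn>Quarterly Review</mentionedIn>
  </person>
  <person id="p2">
    <name>John Cam Hobhouse</name>
    <school>Westminster School</school>
    <mentionedIn>Quarterly Review</mentionedIn>
  </person>
</prosopography>
"""

root = ET.fromstring(RECORDS)

# 'Find Harrow School men mentioned in the Quarterly Review.'
matches = [
    p.findtext("name")
    for p in root.findall("person")
    if p.findtext("school") == "Harrow School"
    and p.findtext("mentionedIn") == "Quarterly Review"
]
print(matches)  # ['Lord Byron']
```

Once facts are encoded this way, the same filter can be rephrased for any combination of fields (birthplace, correspondents, and so on) without rewriting the underlying records.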
While human readers can easily distinguish the character-string ‘Oxford’ as referring to the place, the university, or the press, this is a challenge for computation—like distinguishing ‘Byron’ the poet from ‘Byron’ the admiral. One can attack this problem by using algorithms to compare adjacent strings, or one can encode strings by hand to disambiguate them, or use a combination of both. Digital ODNB essays are good candidates for semantic analysis since their structure is predictable and they are dense with significant names of persons, places, events, and relationships that can be used for data-linking. One translates character-strings into semantic references, groups the references into relationships, and expresses the relationships in machine-readable form.
A popular model for expressing semantic data is the ‘triple’: a statement in the form subject / property / object, which describes a relationship between the subject and the object: the tree / is in / the quad. The model is powerful because it can describe anything, and its statements can be yoked together to create new statements. For example: ‘Lord Byron wrote Childe Harold’ and ‘John Murray published Childe Harold’ are both triples. Once the three components are translated into semantically disambiguated machine-readable URIs (Uniform Resource Identifiers), computation can infer that ‘John Murray published Lord Byron.’
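The yoking together of triples can be made concrete with a toy example. The sketch below is not any real linked-data tooling; the literal strings stand in for the URIs a production system would use, and the single inference rule is invented for illustration.

```python
# Each statement is a (subject, property, object) tuple.
triples = {
    ("Lord Byron", "wrote", "Childe Harold"),
    ("John Murray", "published", "Childe Harold"),
}

def infer_published_authors(store):
    """If X wrote W and Y published W, infer that Y published X."""
    inferred = set()
    for s1, p1, o1 in store:
        for s2, p2, o2 in store:
            if p1 == "wrote" and p2 == "published" and o1 == o2:
                inferred.add((s2, "published", s1))
    return inferred

# Joining the two statements on their shared object yields:
# ('John Murray', 'published', 'Lord Byron')
print(infer_published_authors(triples))
```

A real triple store generalizes this join to billions of statements and arbitrary rules, which is what makes the ‘several billion statements’ scenario below computationally plausible.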
Now imagine the contents of the ODNB expressed not as 60,000 biographical essays but as several billion such statements. In fact, this is far from unthinkable, given the nature of the material and progress being made in information technology. The result is a wonderful back-to-the-future moment with Leslie Stephen’s Victorian DNB wedded to Charles Babbage’s calculating machine: the simplicity of the triple and the power of finding relations embedded within them. Will the fantasies of positivist historians finally be realized? Not likely; while computation is good at questions of ‘who’, ‘what’, ‘where’, and ‘when’, it is not so good at ‘why’ and ‘how’. Biographers and historians are unlikely to find themselves out of a job anytime soon. On the contrary, once works like the ODNB are rendered machine-readable and cross-query-able, scholars will find more work on their hands than they know what to do with.
So the publication of the ODNB online in September 2004 will be fondly remembered as a liminal moment when humanities scholarship crossed from paper to digital. The labour of centuries of research was carried across that important threshold, recast in a medium enabling new kinds of investigation the likes of which—ten years on—we are only beginning to contemplate.
Uncertainty is everywhere. There can hardly be a person alive who has not experienced it at some time. Indeed, as Shakespeare reminds us in Timon of Athens (Act 5), we are all subject to “life’s uncertain voyage.” We may well find ourselves asking “What shall I do?” or “How should I react?”, familiar questions as we continue our voyage.
This common factor in human experience is heightened when the circumstances involve serious illness, whether for the patient or for those who care for them. Living with uncertainty affects all at the bedside. The patient longs for normality and yearns for safety. The family has to face unexpected disruption bringing new routines, responsibilities, and many new people into their lives. A whole new world seems to open up. A client once said, “It is like having a new job,” referring to all the new things she had learnt following her husband’s terminal diagnosis.
The professional or volunteer carer, too, has to adjust to uncertainty. The progression of the disease is endlessly variable. There are no certainties in medicine, only likelihoods. This may place the carer under pressure to say something that will give patients and their families a sense of having a handle on their lives, regardless of the seriousness of the condition. There are also the practical issues, often difficult and complex, about, for instance, discharge arrangements and future support. Working alongside the families, the carer must hold an appropriate balance between hope for the future and realism about what is or could be involved.
Challenges and choices in life-threatening illnesses create a spectrum of strong feelings among those experiencing them. The patient may well ask “Will I ever be well again? What are they going to do to me? Can I cope with the noise and bustle of a hospital ward? Why has this happened to me?” Fear, anger, grief and helplessness are all present in some degree. Even time itself seems to drag amidst the pain and weakness, loss of ability and responsibility. The notion of self-worth can be seriously challenged. The present and the future may look bleak and insecure as compared with “normal” life. Many of the same feelings will be felt also by families, including anxiety about whether they will be able to cope with the new circumstances and the inevitable increase in financial costs.
The radical changes in circumstances can prompt the reasonable question “Why me or us?” Disease is often understood to be a form of judgement, and where the patient has done their best, and in their own view lived a “good life,” the question arises out of what is felt to be an unjust judgement and cruel sentence. People can feel punished, unjustly, by the disease, even if in some ways they have unconsciously contributed to its onset by excessive working, smoking, or drinking. The disease can also arise out of the environment in which the patient lives, or their genetic make-up, over which they have no control. The illness therefore becomes an unfair threat and obstruction in the minds of those involved, whether patient or family member.
Major disease can not only radically change a person’s circumstances, but also their judgement, attitude, and mood. They can be changed as people. Medical experience can be overwhelming, distorting judgements and decisions, undermining relationships, and creating a deep sense of vulnerability. “Why me?” becomes a cry from the heart; a cry for help; a cry out of hopelessness. But it need not be.
We are all vulnerable. There is a fault in creation, just as there is wonder and genius. Both facets can be seen within scientific fact as well as religious and moral recognition. Disease can be judged as part of nature just as death is part of life. Such reality challenges the patient just as it does the doctor and researcher.
Such natural faults need to be accepted and worked with. They confront but they also inspire. Our uncertain voyage can involve major illness and its concomitants. A constructive but very difficult response can be to accept, remain positive and be grateful to those who are helping by their skills, support, and encouragement. Disease and disorder are part of the underbelly of creation of which we are all a part. “Why me?” can be changed to “Why not me?” The change in the question can bring about change in outlook and peace.
As Shakespeare reminds us of “life’s uncertain voyage”, we wrestle with uncertainty. Often, we hope, we may find resources which help us along the road. Close supportive relationships, a commitment to an ideal or an allegiance to a faith which inspires, even those quiet times of reflection and self-realization can prove invaluable. They all have a part to play in helping us to cope with the unknown. Self-confidence or lack of it can be instrumental in how we manage uncertainty, but neither can assure us that our thoughts and actions are right. Subsequent experience is often the only measure of that.
We can learn from experience — we can learn to live life fully, whatever the circumstances, even when we are uncertain as to what they may be or lead to. We will never know everything, and perhaps it requires a sense of peace to live with such uncertainty — a tough challenge, but one with a great reward.
If your morning commute involves crowded public transportation, you definitely want to find yourself standing next to someone who is saying something like, “I know he’s stabbed people, but has he ever killed one?” It’s of course best to enjoy moments like this in the wild, but I am not above patrolling Overheard in London for its little gems (“Shall I give you a ring when my penguins are available?”), or, on an especially desperate day, going all the way back to the London-Lund Corpus of Spoken English, a treasury of oddly informative conversations (many secretly recorded) from the 1960s and 1970s. Speaker 1: “When I worked on the railways these many years ago, I was working the claims department, at Pretona Station Warmington as office boy for a short time, and one noticed that the tremendous number of claims against the railway companies were people whose fingers had been caught in doors as the porters had slammed them.” Speaker 2: “Really. Oh my goodness.” (Speaker 1 then reports that the railway found it cheaper to pay claims for lost fingers than to install safety trim on the doors.)
If you ever need a good cover story for your eavesdropping, you are welcome to use mine: as an epistemologist, I study the line that divides knowing from merely thinking that something is the case, a line we are constantly marking in everyday conversation. There it was, in the first quotation: “I know he’s stabbed people.” How, exactly, was this known, one wonders, and why was knowledge of this fact reported? There’s no shortage of data: knowledge, as it turns out, is reported heavily. In spoken English (as measured most authoritatively by the 450-million-word Corpus of Contemporary American English), ‘know’ and ‘think’ figure as the sixth and seventh most commonly used verbs, muscling out what might seem to be more obvious contenders like ‘get’ and ‘make’. Spoken English is deeply invested in knowing, easily outshining other genres on this score. In academic writing, for example, ‘know’ and ‘think’ are only the 17th- and 22nd-most popular verbs, well behind the scholar’s pallid friends ‘should’ and ‘could’. To be fair, some of the conversational traffic in ‘know’ is coming from fixed phrases, like — you know — invitations to conversational partners to make some inference, or — I know — indications that you are accepting what conversational partners are saying. But even after we strip out those formulaic uses, the database’s randomly sampled conversations remain thickly larded with genuine references to knowing and thinking. Meanwhile, similar results are found in the 100-million-word British National Corpus; this is not just an American thing.
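The procedure described here, ranking verbs by raw frequency after stripping out formulaic phrases, can be illustrated with a toy sketch. This is not the COCA methodology (which involves part-of-speech tagging and lemmatization); the tiny sample text and the crude phrase-stripping are purely illustrative.

```python
# Toy illustration: count occurrences of 'know'/'think' forms in a
# small conversational sample, after removing two fixed phrases.
from collections import Counter

sample = ("you know I think he knows the answer "
          "I know I think they think we know you know")

# Strip the formulaic uses 'you know' and 'I know' before counting.
# (A real corpus study would use tagging, not string replacement.)
for phrase in ("you know", "I know"):
    sample = sample.replace(phrase, "")

counts = Counter(w for w in sample.split()
                 if w in {"know", "knows", "think", "thinks"})
print(counts.most_common())
```

Even this crude filter shows how the formulaic uses inflate the raw count of ‘know’, which is exactly why the corpus analysts strip them out before drawing conclusions.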
It’s perhaps a basic human thing: conversations naturally slide towards the social. When we are not using language to do something artificial (like academic writing), we relate topics to ourselves. Field research in English pubs, cafeterias, and trains convinced British psychologist Robin Dunbar that most of our casual conversation time is taken up with ‘social topics’: personal relationships, personal experiences, and social plans. Anthropologist John Haviland apparently found similar patterns among the Zinacantan people in the remote highlands of Mexico. We talk about what people think, like, and want, constantly linking conversational topics back to human perspectives and feelings.
There’s an extreme philosophical theory about this tendency, advanced in Ancient Greece by Protagoras, and in our day by the best-known living American philosopher, Kanye West. Protagoras’s ideas reach us only in fragments transmitted through the reports of others, so I’ll give you Kanye’s formulation, transmitted through Twitter: “Feelings are the only facts”. Against the notion that the realm of the subjective is unreal, this theory maintains that reality can never be anything other than subjective. Here (as elsewhere) Kanye goes too far. The mental state verbs we use to link conversational topics back to humanity fall into two families, with interestingly different levels of subjectivity, divided along a line which has to do with the status of claims as fact. The first family is labeled factive, and includes such expressions as realizes, notices, is aware that, and sees that; the mother of all factive verbs is knows (and according to Oxford philosopher Timothy Williamson, knowledge is what unites the whole factive family). Non-factives make up the second family, whose members include thinks, suspects, believes and is sure. Factive verbs, rather predictably, properly attach themselves only to facts: you can know that Jack has stabbed someone only if he really has. Non-factive verbs are less informative: Jane might think that Edwin is following her even if he isn’t. In saying that Jane suspects Edwin has been stabbing people, I leave it an open question whether her suspicions are right: I report her feelings while remaining neutral on the relevant facts. Even when they mark strong degrees of subjective conviction — “Edwin is sure that Jane likes him” — non-factive expressions do not, unfortunately for Edwin in this case, necessarily attach themselves to facts. Feelings and facts can come apart.
Factives like ‘know’, meanwhile, allow us to report facts and feelings together at a single stroke. If I say that Lucy knows that the train is delayed, I’m simultaneously sharing news about the train and about Lucy’s attitude. Sometimes we use factives to reveal our attitudes to facts already known to the audience (“I know what you did last summer”), but most conversational uses of factives are bringing fresh facts into the picture. That last finding is from the work of linguist Jennifer Spenader, whose analysis of the dialogue about railway claims pulled me into the London-Lund Corpus in the first place (my goodness, so many fresh facts with those factives). Spenader and I both struggle with some deep theoretical problems about the line between knowing and thinking, but it nevertheless remains a line whose basic significance can be felt instinctively and without special training, even in casual conversation. No, wait, we have more than a feeling for this. We know something about it.
As the British government holds its first public inquiry into the conditions and nature of immigration detention, it is a good time to take stock of what we know about these controversial institutions. Unlike prisons, about which there is a lengthy and robust tradition of critical academic scholarship, immigration removal centres (IRCs) have attracted surprisingly little academic writing about everyday life inside them. Details can be gleaned from parliamentary debates, governmental and non-governmental organizations, and the occasional media report. Researchers also interview former detainees in the community. First-hand accounts can be found on websites, particularly those critical of detention. For the most part, however, academic debate over the purpose, justification, impact, and nature of detention (and its corollary, deportation) has developed independently from sustained engagement with the lived experience of those within these institutions.
There are a number of reasons for the state of academic research in this field. On the one hand, immigration removal centres are relatively recent institutions, so there is no reason to expect a similar scale of research. Whereas prisons in some form or another have been around for centuries, the first institution to house foreign arrivals denied permission to land who were appealing their immigration case, Harmondsworth Immigration Detention Unit, opened near Heathrow airport, on the site of today’s IRC Colnbrook, only in 1970. From that point, first slowly and then, under the premiership of Tony Blair, more rapidly, the UK government began to establish the national system we have today. Today’s immigration estate, in other words, largely dates to the past 15 years.
It is not just that immigration removal centres are relatively recent, but also that the numbers held under Immigration Act powers are low: 3,000 women and men on any given day are confined in 10 IRCs scattered throughout the country. This figure starts to swell if we include the 1,000 or so who remain in prison post-sentence (or who are sent there from IRCs), held under Immigration Act powers, and the small number of families in the ‘pre-departure accommodation’ at Cedars. Another hundred or so sit in short-term holding facilities at ports and airports within the UK and across the Channel in Calais and Dunkirk. Still more are held in police cells, hospitals, and Home Office reporting centres, and a few hundred have recently been placed in HMP The Verne. In comparison to other forms of custody, as well as to estimates of the number of undocumented migrants in the community, these figures may be easily overlooked.
Complicating matters, immigration removal centres do not fall all that easily into any particular discipline. While scholars in migration studies, political science, geography, and anthropology have been studying issues to do with migration control and citizenship for some time, those fields are not particularly familiar with custodial institutions. At the same time, in my own field of criminology, we have been slow to include IRCs, since, despite many overlaps and intersections, they do not fall within the criminal justice system.
Finally, of course, there is the highly politicized nature of these sites. Governments around the world have been reluctant to allow in researchers, a short-sighted policy decision that contributes to widespread concerns over their conditions and legitimacy. Managed via the terms of confidential commercial contracts with private custodial firms and the prison service, IRCs are, indeed, difficult sites to penetrate.
Once within their walls, the challenges do not stop. Wherever they are held, detainees are drawn from across the globe. Although in the UK they tend to come from former British colonies, and thus most speak some English, few are entirely fluent. Cultural, religious, and linguistic diversity is breathtaking, making communication difficult. Most people are distressed, with some estimates placing rates of depression at above 80%. The population is also highly fluid; while a small proportion get ‘stuck’ in the system, staying for six months or longer, the majority remain for less than two months. Some are held for only a matter of days. The lack of an upper limit to detention – which has been heavily criticized in the recent Parliamentary hearings – makes it difficult not only for detainees and staff to plan their days, but, more prosaically, for researchers to interview and understand. Plans to meet up fall through, and those who agree to participate may be removed or released. Others who remain become progressively more anxious, and may, as a result, drop out of the study.
Researchers need to be aware both of the political limits surrounding their work, and of the vulnerabilities of those whom they interview. Many within these institutions, from detainees to all levels and all kinds of staff, express considerable reservation about the system. There are a lot of problems. There are also some examples of good practice, many attempts at compassion, some moments of shared humanity.
Going inside illuminates parts of detention that we simply cannot otherwise see, filling in gaps in our knowledge. It challenges easy assumptions about the exercise of power, its effect, effectiveness, and legitimacy, by considering how such matters are made concrete in everyday interactions and experiences. First hand accounts remind us of our shared humanity and, in so doing, provide an important counter to the powerful rhetoric of securitization and criminalization that characterizes border control. Testimonies are moving. They reveal similarities and shared aspirations as well as differences of opinion. They are messy and confusing. They might also provide the basis for more creative thinking, a goal that I believe everyone involved in detention would welcome.
Headline image credit: Art and Craft room at IRC Colnbrook, Mary Bosworth