The annual Foreign Direct Investment (FDI) International Arbitration Moot gathers academics and practitioners from around the world to discuss developments and gain a greater understanding of growing international investment, the creation of international investment treaties, domestic legislation, and international investment contracts.
The FDI Moot unfolds over the course of six months, beginning with regional rounds (held this year in August in New Delhi, Seoul, and Buenos Aires) and concluding with the global finals. Global finals venues rotate each year among Frankfurt, Malibu, Boston, and London.
The 2014 final hearing will be held 24–26 October at Pepperdine University School of Law in Malibu, California. In this phase, 48 teams from the South Asia, Asia-Pacific, Latin America, Africa, North America, Europe, and Middle East regions will compete in the global oral argument preliminary rounds, followed by the quarterfinal, semifinal, and final rounds.
Established practitioners and academics in the international arbitration, investment regulation, construction law, and international economic law fields act as arbitrators or memorandum judges throughout the competition. The arbitrators facilitate hearings during the oral arguments while the memorandum judges assess and score memorials one month before the oral arguments. Oxford University Press will be awarding prizes for the best memorial and counter memorial.
With three days of oral arguments, this year’s FDI Moot promises to be a busy and exciting weekend. In addition, Malibu, often described as “27 miles of scenic beauty,” is surrounded by the Pacific Ocean and Santa Monica Mountains, so don’t forget to take some time to check out area attractions.
Late October, with an average high temperature of 69°F/21°C, is perfect for exploring one of Malibu’s many beaches. Check out the famous Surfrider Beach and the nearby Malibu Pier.
If you’re interested in taking a hike, plan an excursion to Point Mugu State Park, which has more than 70 miles of trails in the Santa Monica Mountains.
If you’ll be joining us in Malibu, stop by the Oxford University Press booth where you can browse our journals collection and take advantage of the 20% conference discount on all books. We’re also offering one month of free access to our collection of online law products for all attendees. Looking to brush up on the Vienna Convention on the Law of Treaties in BIT arbitrations in time for the Moot? Check out the recording of our recent Investment Claims Webinar session and accompanying slides.
To follow the latest updates about the 2014 FDI Moot, follow us on Twitter @OUPIntLaw and at the hashtags #FDI14 #FDIMOOT14, and don’t forget to like the FDI Moot Facebook page.
See you in Malibu!
Heading image: Willem C. Vis pre-moot at Palacky University of Olomouc by Cimmerian praetor. CC-BY-SA-3.0 via Wikimedia Commons.
We’re getting ready for Halloween this month by reading the classic horror stories that set the stage for the creepy movies and books we love today. Every Friday this October we’ve unveiled a part of Fitz-James O’Brien’s tale of an unusual entity, “What Was It?”, a story from the spine-tingling collection Horror Stories: Classic Tales from Hoffmann to Hodgson, edited by Darryl Jones. When we last left off, the narrator, Harry, was trying to fight off a mysterious creature attacking him in his bed. His friend Hammond had just come to his rescue.
Hammond stood holding the ends of the cord that bound the Invisible, twisted round his hand, while before him, self-supporting as it were, he beheld a rope laced and interlaced, and stretching tightly around a vacant space. I never saw a man look so thoroughly stricken with awe. Nevertheless his face expressed all the courage and determination which I knew him to possess. His lips, although white, were set firmly, and one could perceive at a glance that, although stricken with fear, he was not daunted.
The confusion that ensued among the guests of the house who were witnesses of this extraordinary scene between Hammond and myself, — who beheld the pantomime of binding this struggling Something, — who beheld me almost sinking from physical exhaustion when my task of jailer was over, — the confusion and terror that took possession of the bystanders, when they saw all this, was beyond description. The weaker ones fled from the apartment. The few who remained clustered near the door and could not be induced to approach Hammond and his Charge. Still incredulity broke out through their terror. They had not the courage to satisfy themselves, and yet they doubted. It was in vain that I begged of some of the men to come near and convince themselves by touch of the existence in that room of a living being which was invisible. They were incredulous, but did not dare to undeceive themselves. How could a solid, living, breathing body be invisible, they asked. My reply was this. I gave a sign to Hammond, and both of us — conquering our fearful repugnance to touch the invisible creature — lifted it from the ground, manacled as it was, and took it to my bed. Its weight was about that of a boy of fourteen.
‘Now, my friends,’ I said, as Hammond and myself held the creature suspended over the bed, ‘I can give you self-evident proof that here is a solid, ponderable body, which, nevertheless, you cannot see. Be good enough to watch the surface of the bed attentively.’
I was astonished at my own courage in treating this strange event so calmly; but I had recovered from my first terror, and felt a sort of scientific pride in the affair, which dominated every other feeling.
The eyes of the bystanders were immediately fixed on my bed. At a given signal Hammond and I let the creature fall. There was the dull sound of a heavy body alighting on a soft mass. The timbers of the bed creaked. A deep impression marked itself distinctly on the pillow, and on the bed itself. The crowd who witnessed this gave a low cry, and rushed from the room. Hammond and I were left alone with our Mystery.
We remained silent for some time, listening to the low, irregular breathing of the creature on the bed, and watching the rustle of the bed-clothes as it impotently struggled to free itself from confinement. Then Hammond spoke.
‘Harry, this is awful.’
‘But not unaccountable.’
‘Not unaccountable! What do you mean? Such a thing has never occurred since the birth of the world. I know not what to think, Hammond. God grant that I am not mad, and that this is not an insane fantasy!’
‘Let us reason a little, Harry. Here is a solid body which we touch, but which we cannot see. The fact is so unusual that it strikes us with terror. Is there no parallel, though, for such a phenomenon? Take a piece of pure glass. It is tangible and transparent. A certain chemical coarseness is all that prevents its being so entirely transparent as to be totally invisible. It is not theoretically impossible, mind you, to make a glass which shall not reflect a single ray of light, — a glass so pure and homogeneous in its atoms that the rays from the sun will pass through it as they do through the air, refracted but not reflected. We do not see the air, and yet we feel it.’
‘That’s all very well, Hammond, but these are inanimate substances. Glass does not breathe, air does not breathe. This thing has a heart that palpitates, — a will that moves it, — lungs that play, and inspire and respire.’
‘You forget the phenomena of which we have so often heard of late,’ answered the Doctor, gravely. ‘At the meetings called “spirit circles,” invisible hands have been thrust into the hands of those persons round the table, — warm, fleshly hands that seemed to pulsate with mortal life.’
‘What? Do you think, then, that this thing is — ’
‘I don’t know what it is,’ was the solemn reply; ‘but please the gods I will, with your assistance, thoroughly investigate it.’
Check back next Friday, 31 October, for the final installment. Missed a part of the story? Catch up with parts 1, 2, and 3.
When an old friend told me he had saved the former Edward Everett Hale house in Matunuck, Rhode Island, from demolition and gifted it to a local historical society with an endowment fund for its restoration, I remembered that there was a significant collection of E. E. Hale letters at the Library of Congress that might throw light on the house. How could I have guessed this would lead me to uncover the revered minister’s decades-long love affair with a forgotten, much younger, and truly remarkable woman named Harriet E. Freeman?
First I had to unlock the code the writers used in passages throughout some 3,000 surviving letters. As I transcribed the letters, I recognized the “code” as a defunct shorthand, which I traced to its inventor, Thomas Towndrow. Hale taught himself this shorthand while a student at Harvard, and Towndrow’s 1832 textbook became my “Rosetta Stone” to unlocking an intimate, sometimes passionate, and mutually supportive relationship — the nature of which was concealed by the two of them, their families, and generations of Hale biographers.
Hale’s public life and career are well documented, but who was this Harriet Freeman? As I discovered from reminiscences in the letters, Hale’s special relationship with Freeman had its origins in his close friendship with the wealthy Freeman family, his parishioners since her teenage years. In her early twenties, Freeman began working as a volunteer in Hale’s church, the South Congregational Church in Boston’s South End, just a block away from the Freemans’ town house. Soon, she became his favorite literary amanuensis, to whom he dictated more than half of his sermons and a significant number of his fifty books and countless articles. Their coded expressions of devotion to each other in the letters, which begin in 1884, when Hale, married with six surviving children, was 62 and Freeman 37, often seem “over-the-top” in typical Victorian fashion, but the longhand portions of the letters are rich in evidence of their shared intellectual and activist interests and love of the outdoors. Quite simply, they were soul mates.
Far from being just an adjunct to an older man’s life, Freeman fashioned a full and useful life of her own. She had a passion for botany and geology, which she studied at the Teacher’s School of Science (a venture of the Boston Society of Natural History and Boston Tech, later MIT) and then as a special student at Boston Tech, when she participated in multiple field trips in North America. Active in leadership roles in a number of the women’s clubs and organizations that pursued philanthropy and reform in women’s higher education and human rights, she also became a member of the Appalachian Mountain Club once women were allowed to join in 1879. Spending her summers in the White Mountains of New Hampshire, where Hale joined her for the month of August and other shorter visits, she was an activist for preserving the severely threatened forests of the region, persuading Hale to lend his authority to the cause when he became chaplain to the US Senate in 1904.
The story of Harriet Freeman and Edward Hale is valuable for two reasons: it sheds new light on the already celebrated E. E. Hale, and it comprehensively documents the life of a truly remarkable woman. I began by thinking that “Hattie” could only be overshadowed by the overpowering legend and charismatic personality of Edward Everett Hale. Instead, I found multiple reasons why he felt she transformed his life. At last, 84 years after her death, the formerly obscure Harriet Freeman is recognized with a profile in American National Biography Online.
Today is United Nations Day, celebrating the day the UN Charter came into force in 1945. We thought it would be an excellent time to share thoughts from one of the organization’s former High Commissioners to highlight the work it undertakes. The following is an edited extract by Navanethem Pillay, former United Nations High Commissioner for Human Rights, from International Human Rights Law, Second Edition.
I was born a non-white in apartheid South Africa. My ancestors were sugarcane cutters. My father was a bus driver. We were poor.
At age 16 I wrote an essay which dealt with the role of South African women in educating children on human rights and which, as it turned out, was indeed fateful. After the essay was published, my community raised funds in order to send this promising, but impecunious, young woman to university.
Despite their efforts and goodwill, I almost did not make it as a lawyer, because when I entered university during the apartheid regime everything and everyone was segregated. However, I persevered. After my graduation I sought an internship, which was mandatory under the law; it was a black lawyer who agreed to take me on board, but he first made me promise that I would not become pregnant. And when I started a law practice on my own, it was not out of choice but because no one would employ a black woman lawyer.
Yet, in the course of my life, I had the privilege to see and experience a complete transformation in my country. Against this background it is no surprise that when I read or recite Article 1 of the Universal Declaration of Human Rights, I intimately and profoundly feel its truth. The article states: ‘All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood’.
The power of rights made it possible for an ever-expanding number of people, people like myself, to claim freedom, equality, justice, and well-being.
Human rights underpin the aspiration to a world in which every man, woman, and child lives free from hunger and protected from oppression, violence, and discrimination, with the benefits of housing, healthcare, education, and opportunity.
Yet for too many people in the world, human rights remain an unfulfilled promise. We live in a world where crimes against humanity are ongoing, and where the most basic economic rights critical to survival are not realized and often not even accorded the high priority they warrant.
The years to come are crucial for sowing the seeds of an improved international partnership that, by drawing on individual and collective resourcefulness and strengths, can meet the global challenges of poverty, discrimination, conflict, scarcity of natural resources, recession, and climate change.
In 2005, world leaders at their summit created the UN Human Rights Council, an intergovernmental body that replaced the much-criticized UN Commission on Human Rights, with the mandate of promoting ‘universal respect for the protection of all human rights and fundamental freedoms for all’. The Council began its operations in June 2006. Since then, it has equipped itself with its own institutional architecture and has been engaged in an innovative process known as the Universal Periodic Review (UPR). The UPR is the Council’s assessment, at regular intervals, of the human rights record of every UN member state.
In addition, at each session of the Council several country-situations are brought to the fore in addresses and documents delivered by member states, independent experts, and the Office of the High Commissioner for Human Rights.
Today, the Office of the High Commissioner is in a unique position to assist governments and civil society in their efforts to protect and promote human rights. The expansion of its field offices and its presence in more than 50 countries, as well as its increasing and deepening interaction with UN agencies and other crucial partners in government, international organizations, and civil society, are important steps in this direction. With these steps we can more readily strive for practical cooperation leading to the creation of national systems which promote human rights and provide protection and recourse for victims of human rights violations.
In the final instance, however, it is the duty of states, regardless of their political, economic, and cultural systems, to promote and protect all human rights and fundamental freedoms. Our collective responsibility is to assist states to fulfil their obligations and to hold them to account when they do not.
The reemergence of the Ebola epidemic provokes the kind of primal fear that has always gripped humans in the face of contagious disease, even though we now know more about how viruses work than ever before. Viruses, like all living organisms, are constantly evolving. This ensures that new viruses and their diseases will always be with us.
For thousands of years, people knew little about the “plagues” that afflicted them and, though unable to identify their causes, made many attempts to explain how they happened.
Thucydides, writing in the History of the Peloponnesian War about the plague of Athens (430 BCE), observed that
“no pestilence of such extent nor any scourge so destructive of human lives is on record anywhere. For neither were physicians able to cope with the disease, since they at first had to treat it without knowing its nature, the mortality among them being greatest because they were most exposed to it, … And the supplications made at sanctuaries, or appeals to oracles and the like, were futile, and at last men desisted from them, overcome by the calamity.”
Even two thousand years later, scientists were at a loss to explain the workings of contagion. William Harvey, who described the circulation of blood in humans and is quoted in The Works of William Harvey (translated by Robert Willis), wrote in 1653,
“So do I hold it scarcely less difficult to conceive how pestilence or leprosy should be communicated to a distance by contagion, by (an)…element contained in woolen or linen things, household furniture, even the walls of a house … How, I ask, can contagion, long lurking in such things … after a long lapse of time, produce its like nature in another body? Nor in one or two only, but in many, without respect of strength, sex, age, temperament, or mode of life, and with such violence that the evil can by no art be stayed or mitigated.”
In the absence of information, humankind resorted to any number of explanations for the origins of disease. Physicians, natural philosophers, and religious figures hypothesized causes of contagious diseases based on their view of the way the world worked. Disease theories became part of the discourse about the causes of events such as earthquakes, lightning, and the movement of the planets.
Viruses are a fascinating group of entities that infect humans, other animals, plants, and bacteria. Their presence was anticipated when, on 1 April 1717, Lady Mary Montagu, the wife of the British Ambassador to Turkey, wrote to a friend in England about smallpox. She was delighted to report that the disease did so little mischief. Why? Because an old woman would come with a “nutshell full of the matter of the best sort of small-pox” (fluid derived from poxes) and immunize the children. The children would suffer some slight fever but soon recover, possibly never to contract the disease. What remained unknown was the content of the fluid used by the old woman to inoculate the children.
Near the close of the 19th century, scientists had come to understand that many plant diseases were caused by fungi, while a number of human diseases, such as tuberculosis, were caused by bacteria. But viruses remained a mystery.
That changed from the late 1880s to 1917, as the result of the discovery of contagious diseases whose causes could not be isolated or observed with ordinary microscopes. These included a contagion of tobacco plants, called mosaic disease, a disease of cattle (foot-and-mouth disease), yellow fever in humans, and another disease that attacked bacteria. It turned out they were all caused by viruses.
But the study of viruses posed unique challenges. Viruses are not cells like pathogenic bacteria or fungi which can multiply independently in their hosts or on artificial media. The agent that caused flu could not be grown in culture, and there was no experimental animal that could be infected. It was also impossible for researchers to visualize the agent of disease. After the great flu epidemic of 1918, scientists made numerous attempts to isolate the agent, but it was not until 1933 that three British investigators discovered that ferrets could be infected by nasal washings from patients with the disease. Thus they proved that an entity contained in nasal material could transmit the disease.
The mysteries of viruses were largely revealed by investigators working with those that infect bacteria. These viruses attracted the attention of researchers who speculated that they might lead to discoveries in the field of genetics. They worked with a virus that infects E. coli, a bacterium that lives in the intestinal tracts of humans; after taking over the machinery of the bacterial cell, the virus causes the bacteria to blow open, releasing hundreds of viral particles. Chemical analysis revealed their composition to be DNA and proteins. These studies contributed significantly to the conclusion that DNA is the genetic material of cellular life.
We now know that viruses that infect humans have their origin in animal populations that are in close contact with humans. Many of the flu viruses originate in Southeast Asia, where bird and swine populations live in close proximity to humans. The viruses undergo mutations so that humans must be immunized each year against new strains. The rapid production of astronomical numbers of Ebola virus particles ensures that new strains will be constantly produced.
We also know that all viruses are composed of DNA or RNA and proteins. Ebola, influenza, polio, and AIDS are caused by RNA viruses. The virus that infects tobacco plants also is an RNA virus. Because we know how they work, we have had some success in interfering with the disease process with various drugs.
All of these modern procedures contribute to understanding the cause of disease. Humankind has long believed that understanding would lead to cures. As Hippocrates stated 2,500 years ago, “To know the cause of a disease and to understand the use of the various methods by which disease may be prevented amounts to the same thing in effect as being able to cure.”
And yet, as we have seen with Ebola, understanding the cause is not always the same as curing. We have arrived at a point in the 21st century where we can mitigate some contagious diseases and prevent other catastrophic diseases such as smallpox. But others will be with us now and in the future, for contagion is a general biological phenomenon, a natural phenomenon. Contagious agents evolve like all living organisms and constantly challenge us to understand their origin, spread and pathology.
Headline Image: Ebola virus. Centers for Disease Control and Prevention’s Public Health Image Library. Public domain via Wikimedia Commons.
In 1958, Henry Cabot Lodge Jr., the US ambassador to the United Nations, summarized the role of the world organization: “The primary, the fundamental, the essential purpose of the United Nations is to keep peace. Everything which does not further that goal, either directly or indirectly, is at best superfluous.” Some 30 years later another ambassador expressed a different view. “In the developing countries the United Nations… means environmental sanitation, agricultural production, telecommunications, the fight against illiteracy, the great struggle against poverty, ignorance and disease,” remarked Miguel Albornoz of Ecuador in 1985.
These two quotations sum up the basic dilemma of the United Nations. It has always been burdened by high expectations: to keep peace, fix economic injustices, improve educational standards, and combat various epidemics and pandemics. But inflated hopes have been tempered by harsh realities. There may not have been a World War III, but neither has there been a day’s worth of peace on this quarrelsome globe since 1945. Despite all the efforts of the various UN agencies (such as the United Nations Development Programme) and related organizations (like the World Bank), there exists a ‘bottom billion’ that survives on less than one dollar a day. The average lifespan in some countries barely exceeds thirty. According to UNESCO, 774 million adults around the world lacked basic literacy skills in 2011.
Given such a seemingly dismal record, it is worth asking whether the UN has outlived its usefulness. After all, the organization turns 69 today (24 October 2014), an age at which many citizens in the industrialized world exchange the stress of daily jobs for leisurely retirement. Has the UN not had enough of a chance to keep peace and fix the world’s problems? Isn’t the obvious conclusion that the organization is a failure and the sooner it is scrapped the better?
The answer is no. The UN may not have made the world a perfect place, but it has improved it immensely. The UN provides no definite guarantees of peace, but it has been – and remains – instrumental in pacifying conflicts and enabling mediation between adversaries. Its humanitarian work is indispensable and saves lives every day. In simple terms: if the UN – or the various subsidiary organizations that make up the UN – suddenly disappeared, lives would be lost and livelihoods would be endangered.
In fact, the real question is not whether the UN has outlived its usefulness, but how it can perform better in addressing the many tasks it has been charged with.
The answer is twofold. First, the UN needs to be empowered to do what it does best. Today, for example, one of the most pressing global challenges is the potential spread of the Ebola virus. Driven by irrational fear, politicians in a number of countries suggest closing borders in order to safeguard their populations. But the only realistic way of addressing a virus that does not know national borders is surely international collaboration. In practical terms this means additional support for the World Health Organization (WHO), the only truly global organization equipped to deal with infectious diseases. But the WHO, much like the UN itself, is essentially a shoestring operation with a global mandate. Its budget in 2013 was just under 4 billion dollars. The US military spent that amount of money in two days.
Second, the UN must become better at ‘selling’ itself. Too much of what the UN and its specialized agencies do around the world is simply covered in fog. What about child survival and development (UNICEF)? Environmental protection (UNEP) and alleviation of poverty (UNDP)? Peaceful uses of atomic energy (IAEA)? Why do we hear so little about the UN’s (or the International Labour Organization’s) role in improving workers’ rights? Does anyone know that the UNHCR has been awarded the Nobel Peace Prize twice (out of a total of 11 Nobel Peace Prizes awarded to the UN, its specialized agencies, related agencies, and staff)? It’s not a bad CV!
We tend to hear, ad nauseam, that the 21st century is a globalized one, filled with global problems but apparently lacking in global solutions. What we tend to forget is the simple fact that there exists an organization that has been addressing such global challenges – with limited resources and without fanfare – for almost seven decades.
Indeed, it seems that in today’s world the UN is more relevant than ever before. At 69 it is certainly not ripe for retirement.
Featured image credit: United Nations Flags, by Tom Page. CC-BY-SA-2.0 via Wikimedia Commons
We invite you to explore the biography of New Zealand painter Rita Angus, as it is presented in Grove Art Online.
New Zealand painter. Angus studied at the Canterbury School of Art, Christchurch (1927–33). In 1930 she married the artist Alfred Cook (1907–70) and used the signature Rita Cook until 1946; they had separated in 1934. Her painting Cass (1936; Christchurch, NZ, A.G.) is representative of the regionalist school that emerged in Canterbury during the late 1920s, with the small railway station visualizing both the isolation and the sense of human progress in rural New Zealand. The impact of North American Regionalism is evident in Angus’s work of the 1930s and 1940s. However, Angus was a highly personal painter, not easily affiliated to specific movements or styles. Her style involved a simplified but fastidious rendering of form, with firm contours and seamless tonal gradations (e.g. Central Otago). Her paintings were invested with symbolic overtones, often enigmatic and individual in nature. The portrait of Betty Curnow (1942; Auckland, A.G.) has generated a range of interpretations relating both to the sitter, wife of the poet Allen Curnow, and its social context.
In her self-portraits, Angus pictured a complex array of often contradictory identities. In Self-portrait (1936–7; Dunedin, NZ, Pub. A.G.) she played the part of the urban, sophisticated and assertive ‘New Woman’. Amongst her most candid works are a series of nude self-portrait pencil drawings, while her watercolours also constitute an important body of work, ranging from portraits and landscapes to painstaking but striking botanical studies such as Passionflower (1943; Wellington, Mus. NZ, Te Papa Tongarewa). The watercolour Tree (1943; Wellington, Mus. NZ, Te Papa Tongarewa) carries a sense of mystery, with its surreal stillness and emptiness. Angus’s ‘Goddess’ paintings are equally mysterious. A Goddess of Mercy (1945–7; Christchurch, NZ, A.G.) is an image of peace and harmony. Angus was a pacifist and a conscientious objector during World War II. In Rutu (1951; Wellington, Mus. NZ, Te Papa Tongarewa), she modelled the goddess on her own features, but created a composite figure, half Maori, half European, which suggests an ideal condition of bicultural harmony. The lotus flower held by Rutu reflects Angus’s interest in Buddhism. She thought the Goddess paintings were her most important, and it is on the basis of these works that Angus was hailed as a feminist by subsequent artists and writers.
Angus aimed to evoke transcendental states of being, or a vision beyond mundane reality. In this respect her work connects to European modernism more through shared aims than through any stylistic affinities. Nonetheless, Angus absorbed some of modernism’s formal innovations, notably degrees of simplification and flattening of form. Towards the end of her career, while she retained motifs based on observation, these were schematic and assembled into composite images, such as Flight (1968–9; Wellington, Mus. NZ, Te Papa Tongarewa). Her Fog, Hawke’s Bay (1966–8; Auckland, A.G.) manifests elements of the faceting and multiple viewpoints of Cubism. Angus’s hard-edged style influenced a younger generation of New Zealand painters, including Don Binney (b 1940) and Robin White.
Docking: Two Hundred Years of New Zealand Painting(Wellington, 1970), p. 146
Rita Angus (exh. cat. by J. Paul and others, Wellington, NZ, N.A.G., 1982)
Rita Angus (exh. cat., ed. L. Bieringa; Wellington, N.A.G., 1983)
Rita Angus: Live to Paint, Paint to Live (exh. cat. by V. Cochran and J. Trevvelyan; Auckland, C.A.G., 2001)
V. Cochran and J. Trevelyan: Rita Angus: Live to Paint, Paint to Live(Auckland, 2001)
M. Dunn: New Zealand Painting: A Concise History(Auckland, 2003), pp. 85–8
P. Simpson: ‘Here’s Looking At You: The Cambridge Terrace Years of of Leo Bensemann and Rita Angus’,Journal of New Zealand Art History, xxv (2004), pp. 23–32
J. Trevelyan: Rita Angus: An Artist’s Life (Wellington, 2008)
Rita Angus: Life and Vision (exh. cat., ed. W. McAloon and J. Trevelyan; Wellington, Mus. NZ, Te Papa Tongarewa, 2008)
A time-traveler, visiting from 1970s Britain, would be surprised by pretty much everything on the modern high street. While prestige brands such as Rolls Royce and Berry Bros. & Rudd have formed part of a much older landscape, the discriminating buyer of the Wilson and Heath eras would be astounded by the topsy-like growth of the modern luxury market. No longer the preserve of a privileged elite, myriad luxury brands now reach out to everyone. Specialist glossy magazines abound, for every interest, from hi-tech snowboards to waterproof smartphones — and the iPhone, in its regular updates, feeds a mass market appetite for the desirable luxury good. Apple has also spent years developing a related personal accessory — the iWatch — but for one group of discerning luxury buyers, this has proved to be a potentially disturbing phenomenon.
The market for luxury (or “high-end”) watches, in a world that naturally goes by a French title — haute horlogerie — has never been stronger. That market’s history has unfolded remarkably differently for participants in different countries. For the most tech-savvy punters — with funds enough for an iPhone — the rhetorical question posed of a fine Patek Philippe watch might be “why would I want such a single-function device?” Yet modern high-end brands can sell their entire output, and are coveted by collectors who own many valuable watches, yet perhaps leave them unworn, locked away safely, housed in boxes with motorized compartments that will keep the rotors of the automatic models turning and the lubrication properly distributed. The cult of the prestige watch has never been stronger, and one of its English bibles is QP, a magazine added by The Telegraph to its stable of luxury publications in 2013. And if there are bibles, so there is a temple, in that the Saatchi Gallery plays host each year to SalonQP, the showcase for haute horlogerie.
The phenomenon of the modern watch collector reflects the triumphant survival, rebirth, and stunning success of the Swiss watchmaking industry in particular, and is often credited to the intervention of one man — Nicolas Hayek — who rescued many heavily over-indebted brands from oblivion in the 1980s, and thereby saved a group of Swiss bankers’ bacon. The emergence of the Swatch at the popular end of the market underpinned the capacity of the high-end to recover its equilibrium and to forge forwards.
No such luck for the British market of the same period. Our time-traveler would recall the 1970s witnessing the last gasp of a small British watchmaking industry, born just twenty-five years earlier, in the wake of the Second World War. Preparation in the 1930s had seen firms such as S. Smith & Sons of Cricklewood forge alliances with Jaeger and LeCoultre, and these were vital at the outbreak of war, in securing deliveries from Switzerland of tools, complete watches, and a huge range of parts, including millions of tiny synthetic jewels, needed in every fine instrument across the cockpit of the Spitfire and Hurricane.
As such supplies continued throughout the war, generally through diplomatic smuggling, the realization crystallized that Britain required a larger capacity in light and fine engineering, leading to ambitious plans for the development of a post-war watch manufacturing capacity. With the backing of Stafford Cripps and Hugh Dalton, Attlee’s post-war Labour administration did indeed back the creation of a new industry, centred around Smiths, with factories in development areas of Wales and Scotland, offering much needed employment and generating vital foreign exchange.
But technology, as well as political backdrops and personalities, moved on. The demand for guided missile technology waxed as the need for mechanical fuzes for weapons waned. The watchmakers recalled Cripps committing to their support in the Commons, and believed a covenant had been established between government and the horological industry, in which tariffs and quantitative restrictions on foreign imports would remain in place and at effective levels, ad infinitum. They were wrong.
The affirmation of free-trade doctrine under the Conservative government of the 1950s saw the dismantling of the protectionism that had characterized the post-war Labour government. A long and painful demise of the newborn British watchmaking industry resulted. When Heath replaced Wilson, Smiths was already winding down its watch businesses, returning briefly to the historic practice of importing Swiss mechanisms, and simply adding its name to the dial.
Remarkably, however, a dream of establishing high-end watchmaking in the UK was nevertheless kept alive, not by industry, and not supported by government subsidy, tariffs, and restrictions. The principal guardian of the flame was the determined and eccentric genius George Daniels (1926–2011), now considered one of the finest watchmakers ever. His methods involved a significant element of handcraft and he made nearly every part of his limited series of watches, which have risen colossally in value since his death. His successor and acolyte, Roger Smith, has in turn found huge support from a continuing market that demands the finest and most exclusive hand-crafted products. Another arrival in this rarefied atmosphere, Frodshams — a distinguished British clockmaking name of old — expect to produce watches in the coming years that will out-Daniels Daniels.
Thus the determined British private sector appears to have forged a new and successful small corner of a wider market, and the memory of the UK’s brief dalliance with a state-supported and subsidized watch industry will gradually fade away in the slipstream. Corporate survival requires anticipation, commitment, investment, risk-taking, and many other qualities. Smiths risked and lost much in its involvement in watches (despite success in other industries), and the temptation is to imagine the Swiss watch industry revival merely preserved the natural order. In truth, the degree of sponsorship and backing for that industry in the last sixty years has been colossal.
If the British government’s support for an upstart post-war infant industry failed in short order, owing to overwhelming foreign competition, it will be interesting to see if the Swiss state and its horological industry, after the scare of the 1980s, have this time looked far enough forward, to support any repositioning necessary. Can the luxury ‘single-function device’ continue to thrive? We live in interesting times.
Headline image credit: 1970 by Noodlefish. CC BY-NC 2.0 via flickr.
We have all experienced the effect music can have on our emotions and state of mind. We have felt our spirit lift when a happy song comes on the radio, or a pinging sense of nostalgia when we hear the songs of our childhood. While this link between music and emotion has long been a part of human life, only in recent decades have we had the technology and foundational knowledge to understand music’s effect on our brains in concrete terms. This knowledge has enabled trained professionals to use music therapy to help people with symptoms of depression, addiction, autism, and more.
I recently worked with a group called Clarity Way to put together an informative infographic all about modern music therapy and how it works. The infographic shows everything from how the field is growing (over 70 colleges offer a degree in music therapy) to how it can be applied to patients such as children with speech impediments.
While the concept of music therapy may be quite old, new techniques and applications are being discovered and developed all the time. We may even start to see more and more hospitals and medical institutions employ full time music therapists as part of their staff in the next few years. It will be interesting to see just how powerful music can be.
Headline image credit: Baby Bloo taking a dip. Photo by Marcus Quigmire. CC BY 2.0 via Wikimedia Commons.
I’m writing from Palermo where I’ve been teaching a course on the legacy of Troy. Myths and fairy tales lie on all sides in this old island. It’s a landscape of stories and the past here runs a live wire into the present day. Within the same hour, I saw an amulet from Egypt from nearly 3000 years ago, and passed a young, passionate balladeer giving full voice in the street to a ballad about a young woman – la baronessa Laura di Carini – who was killed by her father in 1538. He and her husband had come upon her alone with a man whom they suspected to be her lover. As she fell under her father’s stabbing, she clung to the wall, and her hand made a bloody print that can still be seen in the castle at Carini – or so I was told. The cantastorie – the ballad singer – was giving the song his all. He was sincere and funny at the same time as he knelt and frowned, mimed and lamented.
The eye of Horus, or Wadjet, was found in a Carthaginian’s grave in the city and it is still painted on the prows of fishing boats, and worn as a charm all over the Mediterranean and the Middle East, in order to ward off dangers. This function is, I believe, one of the deepest reasons for telling stories in general, and fairy tales in particular: the fantasy of hope conjures an antidote to the pain the plots remember. The street singer was young, curly haired, and had spent some time in Liverpool, he told me later, but he was back home now, and his song was raising money for a street theatre called Ditirammu (dialect for Dithyramb), that performs on a tiny stage in the stables of an old palazzo in the district called the Kalsa. Using a mixture of puppetry, song, dance, and mime, the troupe give local saints’ legends, traditional tales of crusader paladins versus dastardly Moors, and pastiches of Pinocchio, Snow White, and Alice in Wonderland.
Their work captures the way fairy tales spread through different media and can be played, danced or painted and still remain recognisable: there are individual stories which keep shape-shifting across time, and there is also a fairytale quality which suffuses different forms of expression (even recent fashion designs have drawn on fairytale imagery and motifs). The Palermo theatre’s repertoire also reveals the kinship between some history and fairy tale: the hard facts enclosed and memorialised in the stories. Although the happy ending is a distinguishing feature of fairy tales, many of them remember the way things were – Bluebeard testifies to the kinds of marriages that killed Laura di Carini.
A few days after coming across the cantastorie in the street, I was taken to see the country villa on the crest of Capo d’Orlando overlooking the sea, where Casimiro Piccolo lived with his brother and sister. The Piccolo siblings were rich Sicilian landowners, peculiar survivals of a mixture of luxurious feudalism and austere monasticism. A dilettante and dabbler in the occult, Casimiro believed in fairies. He went out to see them at twilight, the hour recommended by experts such as William Blake, who reported he had seen a fairy funeral, and the Revd. Robert Kirk, who had the information on good authority from his parishioners in the Highlands, where fairy abductions, second sight, and changelings were a regular occurrence in the seventeenth century.
Casimiro’s elder brother, Lucio, a poet who had a brief flash of fame in the Fifties, was as solitary, odd-looking, and idiosyncratic as himself, and the siblings lived alone with their twenty servants, in the midst of a park with rare shrubs and cacti from all over the world, their beautiful summer villa filled with a vast library of science, art, and literature, and marvellous things. They slept in beds as narrow as a discalced Carmelite’s, and never married. They loved their dogs, and gave them names that are mostly monosyllables, often sort of orientalised in a troubling way. They range from ‘Aladdin’ to ‘Mameluk’ to ‘Book’ and the brothers built them a cemetery of their own in the garden.
Casimiro was a follower of Paracelsus, who had distinguished the elemental beings as animating matter: gnomes, undines, sylphs and salamanders. Salamanders, in the form of darting, wriggling lizards, are plentiful on the baked stones of the south, but the others are the cousins of imps and elves, sprites and sirens, and they’re not so common. The journal Psychic News, to which Casimiro subscribed, inspired him to try to take photographs of the apparitions he saw in the park of exotic plants around the house. He also ordered various publications of the Society for Psychical Research and other bodies who tried to tap immaterial presences and energies. He was hoping for images like the famous Cottingley images of fairies sunbathing or dancing which Conan Doyle so admired. But he had no success. Instead, he painted: a fairy punt poled by a hobgoblin through the lily pads, a fairy doctor with a bag full of shining golden instruments taking the pulse of a turkey, four old gnomes consulting a huge grimoire held up by imps, etiolated genies, turbaned potentates, and eastern sages. He rarely left Sicily, or indeed, his family home, and he went on painting his sightings in soft, rich watercolour from 1943 to 1970 when he died.
His work looks like Victorian or Edwardian fairy paintings. Had this reclusive Sicilian seen the crazed visions of Richard Dadd, or illustrations by Arthur Rackham or John Anster Fitzgerald? Or even Disney? Disney was looking very carefully at picture books when he formed the famous characters and stamped them with his own jokiness. Casimiro doesn’t seem to be in earnest, and the long-nosed dwarfs look a little bit like self-mockery. It is impossible to know what he meant, if he meant what he said, or what he believed. But the fact remains, for a grown man to believe in fairies strikes us now as pretty silly.
The Piccolo family’s cousin, close friend and regular visitor was Giuseppe Tomasi di Lampedusa, the author of The Leopard, and he wrote a mysterious and memorable short story about a classics professor who once spent a passionate summer with a mermaid. But tales of fairies, goblins, and gnomes seem to belong to an altogether different degree of absurdity from a classics professor meeting a siren.
And yet, the Piccolo brothers communicated with Yeats, who held all kinds of beliefs. He smelted his wonderful poems from a chaotic rubble of fairy lore, psychic theories, dream interpretation, divinatory methods, and Christian symbolism: “Out of the quarrel with others we make rhetoric; out of the quarrel with ourselves we make poetry.”
Featured image credit: Capo d’Orlando, by Chtamina. CC-BY-SA-2.5 via Wikimedia Commons
At the Conference on Open Access Scholarly Publishing in Paris last month, Claudio Aspesi, Senior Analyst at Sanford C. Bernstein, raised an uncomfortable question. Did the continuing financial health of traditional publishers like Elsevier indicate that open access had “failed”? According to Aspesi, “Expectations that OA will address the serial costs crisis are fading away.”
Is Aspesi right? Has open access failed? I certainly don’t think so – but that doesn’t mean the job is done…
When we launched BioMed Central in 2000, the goal was a simple and positive one – it wasn’t to fix library budgets or to destroy the businesses of existing publishers, but to introduce a new form of publishing that would help researchers communicate their findings more effectively by challenging the established notion that published results “belonged” to the publisher.
The success of the open access movement is demonstrated most clearly by the extent to which it is simply no longer acceptable to the world’s largest science funders for the results of their funding to end up trapped indefinitely behind publisher paywalls.
Of course, existing publishers have found ways to adapt to funders’ expectations of open access, and even to grow their businesses thanks to the additional funding made available to cover “Gold” open access fees. This isn’t such a bad thing; traditional publishers are responsible for many good journals. As more of them convert to a fully open access model (Nature Communications being one recent example), we are seeing a ratchet mechanism at work which is progressively shifting an ever higher fraction of the scientific literature to being open access immediately on publication and in authoritative final form.
While some bemoan the fees associated with “Gold” open access publishing, the model has the powerful advantage of providing a funding mechanism which scales with the increasing volume of research funding, unlike library budgets. By making the costs of publishing visible to authors, it also has the potential to eventually save costs by creating a more efficient market for publishing services (though this does depend on authors showing at least some degree of price sensitivity).
As for the perceived failure of open access to knock the incumbents from their perch, this seems a curious metric of success. We don’t regard Airbnb as a failure because hotel chains still exist, and nor is the continued existence of national airline carriers seen as a fundamental failure of the budget airline model. In both cases, the landscape has been profoundly transformed by the new model, and existing players are having to adapt, improve, and refocus their offering to compete.
Perhaps the most important success of the open access movement is not what it has already achieved, but the foundation it provides for further improvements to scientific communication.
Open access to open data
Establishment of open access to research articles as a basic norm has paved the way for the EU pilot of Open Research Data within H2020, seeking to ensure that not only the published article but the underlying data resulting from scientific research should be routinely made available in a form that facilitates reuse and further analysis.
The need for improved access to research data is not a new idea. Funders such as the Wellcome Trust and NSF already require grantees to specify Data Management Plans to indicate how the results of funded research will be made accessible. Unfortunately these plans often aren’t worth the (virtual) paper they are printed on. With suitable data standards and data management infrastructure either absent or not widely used, obtaining a copy of the underlying data from a particular research project can still be tortuous or even impossible.
Even if the data can be obtained, often the metadata and experimental details available are insufficient to make the experimental results reliably reproducible. With luck, the lab member who carried out the experiment may still be around and may be able (with considerable effort) to dig out such information, but the longer that has elapsed since publication, the more likely it is that the exact details of what was done will have vanished forever.
To address this problem, it is not enough to change the way science is published; we also need to look upstream at how scientific experiments are carried out, and how the results are analysed and prepared for publication.
Improving the scientific method
In computational research, significant steps have been made towards improving the situation. The Galaxy platform allows researchers to share genomic analysis pipelines in a form which can be readily reused by other researchers, and the journal GigaScience (published by BioMed Central in collaboration with the BGI) has embraced this approach by making available a public Galaxy server to ensure that data and analyses associated with the articles it publishes are readily accessible. More generally, there is strong enthusiasm amongst computational researchers for the use of new tools such as Docker to allow arbitrary ‘experimental setups’ for in silico research to be shared efficiently.
At the lab bench, sharing full experimental details and data descriptions is more challenging due to the wide range of data types and experimental descriptions which need to be represented. The NIH’s Big Data to Knowledge (BD2K) initiative, which recently announced its first round of funding awards, shows that this challenge is now receiving serious attention, and the use by journals such as Scientific Data and GigaScience of ISA-Tab (Investigation, Study, Assay Tabular format) as a general high-level metadata standard shows promise, though there is some way to go before we have tools in place to make preparing data for publication in such a format a pleasure rather than a chore.
At Riffyn, we are working with academic and industrial partners to develop cloud-based tools which will help researchers design their experiments up front using a friendly modern user interface, and to capture data from those experiments in a way which retains a connection to the experimental design and context, with the goal of making results more reliable and reproducible, and helping to rapidly distinguish genuine insights in the data from artefacts and noise. The long term goal is to ensure that making data well-described and reusable isn’t an afterthought or a tedious additional step, but is at the heart of the experimental process.
Any attempt to improve the way science is done will only succeed through collective effort and widespread adoption of shared standards. We hope to see a whole ecosystem of tools and standards emerge which will support the smooth flow of data and accompanying descriptive metadata all the way from experimental design, through data capture, to analysis, visualization, authoring and publication, retaining as much as possible of the provenance information and structure at every stage.
This will not be easy, but the building blocks do seem to be falling into place. The Force11 conference (spawned from earlier Beyond the PDF workshops), which next takes place in Oxford in January 2015, has created a useful framework for such collaboration. This weekend (25-26 October 2014) at the Mozilla Festival in London, the science track will offer a data-driven authoring workshop. Tools providers including Authorea, WriteLaTeX, Papers, Pensoft and F1000 will run demos and tutorials, whilst discussing how best such tools can be made to work together and integrate smoothly with lab data systems and publisher platforms, so that ultimately data can flow freely and in meaningful form through the entire process. If you can, please join us there to work with us on the next chapter of the open access success story.
The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.
If you ask many people about nurses, they will tell you how caring and kind nurses are. The word “angel” might even appear. Nursing consistently tops the annual Gallup poll comparing the ethics and honesty of different professions.
But it’s worth exploring the extent to which society really values nursing. In recent decades, a global nursing shortage has often meant too few nurses to fill open positions, woefully inadequate nurse staffing levels, and not enough funds for nursing education. Many nurses have migrated across the globe, easing shortages in developed nations but exacerbating them in the developing world, where health systems are already under great stress. In a world where funds for health care are limited, nursing does not seem to be getting the love we profess to have for it.
This all starts with how decision-makers and members of the public view nursing. In reality, nurses are autonomous, college-educated science professionals who save lives and improve patient outcomes, in settings that range from war zones to high-tech ICUs. But nursing remains subject to a set of toxic gender-related stereotypes, which the mass media both reflects and reinforces, undermining the profession’s claims to scarce resources. Research in the field of health communication confirms that what the public sees on television and in films has a significant effect on health-related views and actions.
Consider some recent examples. The Ebola outbreak has attracted great interest from the global news media. There have been stories about the work of nurses to fight the disease, such as a strong piece in the Guardian earlier this month in which nurses described in great detail what they were doing to care for patients in West Africa. Some reports about the recent infections of the nurses in Dallas have highlighted allegations by nurses at Texas Health Presbyterian about infection control failures, and a few of the pieces have even consulted nurse experts.
But most Ebola pieces consult only physicians for expert comment, and many suggest that physicians are the only health workers who really matter, with nurses as their faithful assistants. One long article from August 2014 in the New York Times described the shortage of local and international physicians in Liberia and suggested that this shortage was the main health staffing problem. In fact, nurses provide far more hands-on care to patients suffering from debilitating infectious diseases, and so they tend to face greater associated burdens and risks, as the first infections in the United States have now made clear. Indeed, it’s likely that those first cases were nurses because nurses provide most skilled in-patient care, not because they are unusually poor at infection control. Yet even many reports that have mentioned nurses and other health workers have used “doctors” as shorthand for everyone, reinforcing the message that even if physicians aren’t the only ones involved, they are the only ones who count.
Much Ebola coverage has involved the prominent aid group Doctors Without Borders/Médecins Sans Frontières (MSF), whose name has long suggested that it’s an organization comprised solely of physicians. In reality, nurses outnumber physicians among MSF health professionals. But each time MSF’s name is mentioned, physicians receive all the credit for its work.
Hollywood’s vision of nursing remains largely caught in a time warp of unskilled handmaidens, pathetic losers, and prickly battleaxes, despite the gifted (if flawed) Nurse Jackie and a few other good portrayals. ABC’s Grey’s Anatomy has spent a decade telling the world that nursing consists of chirping “Right away, doctor!” and handing things to the brilliant, pretty surgeons who provide all meaningful health care. Fox’s The Mindy Project is mostly about the romantic interactions of quirky New York City OB-GYN physicians, but it also includes three stooges, uh, nurses, who make lots of ludicrous remarks yet seem to know almost nothing about health care. Fox’s new Red Band Society features a senior nurse (a.k.a. “Scary Bitch”) who is often portrayed as a battle-axe. But the nurse shows little expertise. She once asked a physician colleague about a recent operation, in which he had chosen not to amputate a cancer patient’s leg, this way: “So, getting to keep his leg was a bad thing…?”
The naughty nurse remains an advertising staple worldwide. In one current US television ad, a young woman suggests that colleagues should eat food from the sandwich chain Subway so they’ll be able to fit into skimpy Halloween outfits. She proceeds to demonstrate with a very skimpy hot nurse costume, among others. Another recent ad campaign presents the new Klondike Kandy Bar as the love child of what Adweek accurately describes as “an illicit tryst between a [male] Klondike bar and a tall, striking, chocolaty [female] candy-bar nurse,” who seems to have seduced her surprised ice cream patient. Yes, those images are “just jokes.” But research shows that jokes have great influence over attitudes, and of course jokes remain a primary vehicle for delivering prejudice.
Nurses must take the lead in improving understanding of their profession, but we can all do something. We urge people to pay close attention when interacting with nurses and to ask if what they observe bears any relation to what they often see in the media. Was it just a “caring angel,” or did the nurse also save a life by educating a patient or detecting a subtle change in condition? Even the language we use matters. For instance, hospitals are not so much “medical centers” as they are “nursing centers,” since patients wouldn’t stay there unless they needed 24/7 nursing care. And calling physicians “doctors” wrongly implies that they are the only health professionals who earn doctorates. Yet nurses and others get doctorates as well. We hope the public can see past traditional assumptions about nursing—because they threaten our lives.
If you have read the previous parts of this “study,” you may remember that brown is defined as a color between orange and black, but lexicographical sources often abstain from definitions and refer to the color of familiar objects. They say that brown is the color of mud, dirt, coffee, chocolate, hazel, or chestnut. Sometimes compounds like golden-brown turn up. Despite such differences, most people associate brown with a dark hue. The standard Latin gloss of Engl. brown is fuscus “dark” (compare the verb obfuscate). In using descriptive adjectives for “brown,” language follows the ancient trend (green is the color of vegetation, red is the color of ore, and so forth). Modern Greek has lost the ancient names of this color and uses a word having the root of the noun chestnut, while Russian speakers refer to cinnamon, an unexpectedly exotic product (koritsa—korichnevyi). The more “genuine” Russian word for “brown” (buryi, stress on the first syllable) is probably a borrowing: it seems to have been taken over along with brown horses. Romance speakers went the same way, but their lender was Germanic.
If we consider that brown is an intermediate color, we may perhaps understand the uses of the epithets mentioned last week. Dante’s “brown [that is, clotted] blood” is about the blood that lost its glow and no longer looked red. Red is likewise a broad term: we call crimson and scarlet objects red, but Tyrian purple and especially Tuscan red are almost black. Russian words beginning with ryab- are applied to speckled creatures and objects. Quite properly, the hazel-grouse is called ryabchik (-chik is a diminutive suffix), but the Russian for “rowan tree” (“mountain ash”) is ryabina (stress on the second syllable), though the bright berries of the mountain ash look shining red, especially in winter.
The standard epithet Homer applied to the sea was wine-colored, and today it causes surprise. However, there is probably no riddle. The colors of the wines the Ancient Greeks produced varied from inky black to nearly clear. Since waves reflect the color of the sky, we find their surface blue or black. A look at the pictures by realistic painters shows that under the surface tossing waves often appeared green to them. The question is not whether sea waves can be wine-colored, because, considering how many brands of wine there are, they certainly can, but why Homer associated them with wine rather than with some other dark liquid. Can we conclude that in his time dark wines were especially popular? The brown waves of Old English poetry mean simply “dark waves.” If we are puzzled, it happens because, as pointed out, today the color brown evokes the image of chestnuts, hazel, and chocolate rather than sea water. In a modern poem, chestnut-colored, chocolate-colored, or wine-colored waves would sound either incongruous or pretentious. Yet only usage, not color perception, has changed since the days of the Odyssey and Beowulf. The idea that “primitive cultures” had an indistinct idea of colors should be ruled out by definition. In similar fashion, we wear neither chiton nor toga, but at one time people wore both and would have reacted to our expensive torn jeans with amazement and justifiable horror.
We are in more trouble with brown meaning “violet” (see Part 2 of the present essay). Recent etymological dictionaries trace this sense of brown (or rather brun ~ braun, because only German is involved) to Latin prunum “plum.” If we were dealing with poetic diction, we could accept the idea of Latin influence, but the word has wide currency in dialects, and one wonders to what extent Latin was instrumental in the emergence of brown “violet.” I would again like to cite a parallel from Slavic. The Russian for plum is sliva, a cognate of Latin lividus “livid” (some of our readers may remember what I once wrote about movable s, or s-mobile). It is instructive to follow the transformation of this color name. Although dictionaries differ when it comes to details, all of them say approximately the same about livid. I found the glosses “ashen or pallid,” “bluish gray,” “purplish,” “dull blue,” “grayish blue,” and “black-and-blue.” They agree that the color of a bruise is livid and that livid, when used in everyday speech, is understood as “furiously angry.” But many people think that livid means “red,” and it is easy to understand them, because red is the color of the face flushed with anger.
Another instructive case is lurid (from Latin luridus “wan or yellowish”). In English, the word surfaced only in the eighteenth century but since that time has changed from “sallow, sickly pale” to “shining with a red glare; yellow-brown.” Shelley begins his narrative poem Queen Mab so:
“How wonderful is Death,
Death and his brother Sleep!
One, pale as yonder waning moon
With lips of lurid blue;
The other rosy as the morn….”
In Shelley’s language, lurid meant, as it did in Latin, “deadly pale,” but don’t miss “yellow-brown” and especially “shining with a red glare” above. Note also how the contextual use of livid (“angry”) affected its main sense. I have no explanation for German dialectal braun “violet,” but, before we jump to conclusions and refer to borrowing and homonymy, it may be useful to remember how unpredictable the interplay of color names sometimes is.
It remains for me to say something about brown study. To be in a brown study means “to be in intense reverie.” Agatha Christie’s Hercule Poirot is often described in this state of mind. Why brown? Some people thought of “corruption” and traced brown in this idiom to barren or to some German word. The OED dismissed the reference to German as untenable and suggested that brown here means “gloomy” (the relevant entry was first published in 1888). As far as I can judge, this little problem of English phraseology has attracted little or no attention, so it might be useful to quote an anonymous reviewer (The Nation 48, 1889, p. 288).
The journal’s review of the OED’s first volume (A and B) is laudatory, but its author wrote:
“In only one instance, so far as we have observed, have they [the editors] laid themselves open to criticism…; and the variation from their usual practice is noteworthy enough to merit special comment. It occurs in the case of the somewhat peculiar expression brown study. The adjective here has assuredly the general idea of ‘deep,’ ‘profound,’ ‘abstracted.’ It is hard to fix upon the phrase the sense of ‘gloomy meditation’, by which Johnson defined it; and the particular meaning given to it in this dictionary of ‘an idle and purposeless reverie’ is certainly not common. But it is in the explanation of its origin that conjecture appears here for once to have triumphed over judgment. The meaning is declared to have apparently come in the first place from brown in the sense of ‘gloomy,’ but that this sense of the adjective has to a great extent been forgotten. When, however, we turn to the adjective itself, we find that so far from such a sense having been forgotten, there is not the slightest record that it ever existed…. …the origin of the meaning of brown study remains in the dark as ever. It may be added that stiff seems formerly to have been an equivalent expression. In the romance of William of Palerne one of the characters is represented as having fallen ‘into a styf studie’….”
Who could have written this well-informed review? Richard Bailey (in Mugglestone, 2000) identified the author as Thomas R. Lounsbury, though the review was indeed originally printed anonymously, like all reviews in The Nation.
So it goes: brown “red,” brown “violet,” brown blood (as opposed to blue blood!), brown waves, brown nights, in a brown study, brownie, Father Brown, and Good Mrs. Brown.
Headline image credit: Mountain Ash (Rowan Tree) in the winter. Photo by Hella Delicious. CC BY-NC-SA 2.0 via Flickr.
A ten-year anniversary seems an opportune time to take stock. Much has been said already about Oxford Scholarship Online (OSO) as it moves into its second decade, so let’s cast the net a bit wider and focus not on OSO per se, but on what the academic publishing industry has gotten right and what we’ve missed since OSO was in its infancy.
The biggest change, of which OSO has been a central component at Oxford University Press, has of course been the transition from a print-centric, manufacturing-based industry to a print-and-online, service-oriented industry.
With that general context in mind, below are two lists: (1) what publishers have learned in this age of online publishing, and (2) what we took too long to learn or didn’t see coming.
10 things we’ve learned
1. The Long Tail. While the long tail is a familiar concept in statistical circles, it became a cultural conceit owing to Chris Anderson’s influential 2004 article in Wired magazine, which subsequently became a bestselling book. The lesson publishers took from this article/book was that, rather than endlessly pursuing new and different audiences and markets, you need to make sure you’re doing a good job of providing your goods — no matter how old — to the people — no matter how few in number — who have already demonstrated a desire for them. Too many publishers were terrible at keeping their books in stock, owing to the constraints imposed by the economies of scale associated with traditional offset printing. The ability to print books digitally and in very small batches or even one at a time in response to demand had a revolutionary effect on academic publishing, the very definition of a long-tail industry.
2. The E-book Revolution. E-books have taught us a great deal about consumer behavior, specifically what prompts people to make the decision to buy. E-books have proven to be an effective way to draw readers to overlooked books or to reinvigorate proven backlist titles. And, to the surprise of many, the massive glut of self-published work has had relatively little impact on the high-quality vetted non-fiction published by university presses.
3. Discoverability. Invest in the digital infrastructure necessary to draw attention to your offerings. Period.
4. Get the Basics Right. Populated by people who are better with metaphors than spreadsheets, publishing has historically been an inefficient industry, with more attention devoted to ideas than to execution. The valuable skills of demand planners, of project managers, of efficiency experts can, when properly incorporated into a publishing environment, greatly enhance the ability of a press to fulfill its mission.
5. Mistrust the Theologians. Whether it’s open access evangelists, Wikipedia detractors, print-will-never-die bibliophiles, or print-is-dead technophiles, there’s generally little other than provocative headlines in absolutist prognostications about the future. The truth almost always lies in the messy, complicated middle. We are powerfully drawn to binaries and oppositional dichotomies, but that’s rarely the way things play out.
6. Authors still want to be edited and advised. That’s really all there is to say. Even as many things about publishing have changed, the intellectual bond between author and editor — and often publicist or marketer — is paramount. Furthermore, in a world where the lingua franca of the academy and of business is now irrefutably English, and where one in three people is either Indian or Chinese, there are a lot of researchers and scholars who could use our editing and translational skills.
7. We’re a hardier industry than we give ourselves credit for. What seemed very scary just a few short years ago seems a bit less scary now. We’ve weathered the decline of independent stores, the rise of the chains, the demise of Borders, the rise of Amazon and Apple, the arrival of e-readers, and the unexpected flattening of e-book sales once the initial “load-up-your-e-reader” euphoria had subsided. The ubiquity of the web, and the transition from an information-scarcity economy to an information-glutted world in under 20 years, has highlighted the need for filters, which is, after all, a defining characteristic of publishers.
8. University presses are hard to run, but they’re even harder to kill. “Hard to kill” isn’t a business model but it does speak to the value that our regional communities place on the work we do. Numerous attempts to shutter or trim university presses have been met with howls of protest, often resulting in a reversal or tempering of the original decision.
9. The difference between extractive research and immersive reading. The web has highlighted the different ways in which people use books. Most bluntly, people read fiction but consume much non-fiction in an extractive, “dip-in-and-dip-out” manner. We may have presumed as much ten years ago but we can now trace the “user journey” of researchers and readers forensically via usage reporting.
10. The value of the physical book. A great many articles and books have appeared in the last decade extolling the virtues and utility and durability of the printed book. The emotional intensity of these affirmations of the physical book has surprised even some publishers. And, as we now know in a way we only presumed a decade ago, the physical book has an appeal that can happily run side-by-side with its digital and online cousins.
5 things we took too long to learn or didn’t see coming
1. POD and digital printing changed the world while we were nattering on about e-books, long before the market for them matured. See #1 above. We prognosticate endlessly about the future because we don’t want to seem anachronistic or Luddite (a historian recently said to me that she thought a great deal of innovation in business was driven by middle-aged people not wanting to appear old), but too often we focus too much on what will happen, rather than when it will happen, which is frequently the real question.
2. Things tend to happen more slowly than we think, and then they happen suddenly and fast. Exhibit A: E-book sales. Exhibit B: Social media relevance. If there is one tendency with which publishers have become very familiar since the onset of online publishing, it’s the phenomenon known in business-speak as the Gartner Hype Cycle. In a sentence, new technologies are often met with wildly inflated expectations in the early days, resulting in inevitable disappointment, and then gradually gain a foothold and become established. Others, of course, never emerge from the “Trough of Disillusionment.”
Examples of technology-driven phenomena that have spent some time on the hype cycle, or remain there, are:
Print on Demand book manufacturing (which took years to become viable, both from a cost and a quality standpoint)
E-books (in the early days)
Apps (especially apps that aren’t free)
Websites for individual books (not useful now, if they ever were)
Book trailers for all but a very few books
Augmented e-book rights (about which there was a great fuss a few years ago)
Sales of individual book chapters
College textbook e-readers
And, perhaps most conspicuously, MOOCs
3. Back-office technology. Many publishers have underestimated the back-office technology challenges presented by the fragmenting media and sales landscape: the changing revenue streams, the multi-format sales models, the double running costs.
4. The potential perils of “secular stagnation”. The early years of digital publishing were typified by “retroconversion” efforts, in other words the digitizing of the legacy backlist. Is there a risk that the one-time bumps granted to publishers by the maturation of e-books, by POD efficiencies, and by sales of journal back issues have concealed a slowdown or even a hollowing out of core markets? This is an unanswerable question when posed broadly, but it’s a question all publishers should be asking themselves as they weigh their futures.
5. Access, access, access. Discussions about online publishing often focus on all the new and different things authors, educators, and publishers can do in an online environment. What has been relatively overlooked is the great promise of online publishing when it comes to access. Online delivery has the potential — already being realized in a number of ways — to enable authors and publishers to reach more people more efficiently and quickly, and at a lower cost, than ever before.
Throughout history and across cultures, marriage has often been accompanied by substantial exchange of wealth. However, the practice has mostly died out in western societies, which is perhaps why the meanings of these marital transfers are often not well understood. In anthropological terms, a dowry can be seen as a form of pre-mortem bequest to the bride from her parents, while bride-price or groom-price is a transaction between the kin of the groom and the kin of the bride. The former is an intergenerational transfer within the bride’s family, even though the groom and his family can also benefit when it is used to help set up the conjugal household, while the latter is a transfer between affinal families.
These conceptually distinct transfers are supposed to serve different functions. Recent economic research suggests that the bride-price or groom-price is a means of distributing gains generated by the marriage in accordance with overall social/cultural/economic conditions as well as personal attributes of the bride and the groom. Dowry, on the other hand, can be interpreted as a gift from altruistic parents to the bride to help establish her position in the new household. This is important since virilocality, whereby the bride leaves her natal family to join her husband’s household, is the norm in most traditional cultures. Empirical studies, using data from medieval Tuscany to modern-day China and Taiwan, appear to support these interpretations, particularly with regard to the beneficial effect of dowry on the welfare of women.
The picture is, however, quite different in South Asia, where dowry has taken on extravagant proportions in recent years and is often considered a social evil, morally offensive, and legally banned. Stories abound of Indian women abused or even murdered by their husbands or in-laws when the latter’s demands for dowries are not met. There is no shortage of commentaries on this issue, but relatively few rigorous empirical studies in this region. Nevertheless, the South Asian experience does raise a valid question: is dowry really good for the bride?
The answer to this question depends partly on what is meant by dowry, and in India today, it is synonymous with groom-price rather than a bequest to the bride. However, there is some evidence that, even in India, a “dowry” can include both components. Since these transfers are supposed to serve different purposes, we must first decompose “dowry” into these components before we can ascertain their effects on the welfare of women. This is what I did in a recent study using an Indian data set. By taking into account the control that various individuals (the wife, the husband, or other family members) exercise over the use of different types of marital transfers (land, cash, jewelry, etc.) as reported by the wife, I estimate the amounts of the bequest and the groom-price in a dowry, and find that a larger bequest does allow Indian women a greater say in a broad range of household decisions, from whom to invite to dinner, to child-rearing decisions, to making small independent purchases.
There can be alternative interpretations of this result. It is possible that a resourceful wife can secure a larger bequest from her parents as well as more decision power in her conjugal family, causing a spurious causal relationship between the two. However, even after I have controlled for this “endogeneity problem” using econometric methods, the positive effects of the bequest component of a dowry remain. Yet another possibility is that, in a patriarchal society such as India, any marital transfer will inevitably be appropriated by the husband and his family, regardless of the original intent of the bride’s parents. But since the Indian researchers who collected the data are confident that the women’s responses on the amount and control over marital transfers in my sample are reliable, we would have to take the data at face value, at least until better data become available.
This finding leads to the sensitive question of whether dowry should be banned in the first place: if a dowry enhances the status of the wife, wouldn’t banning it deprive parents of a channel through which they can help their daughter? One can perhaps reject dowry simply because it is demeaning to women if marriage has to be consecrated by marital payments, although this objection lies more with the groom-price component of a “dowry” than with the bequest component. Even if the ban were to apply only to groom-price, it would be a case of trying to fix the symptom without getting to the root of the problem. If parents voluntarily enter into a marriage agreement for their daughter even though it involves an exorbitant amount of groom-price, it must be because they believe the high price would secure a “good” marriage for their daughter. They would stop paying high groom-prices if they could be convinced that higher caste status is not worth the extra cash, or that a young man with a civil service career does not necessarily promise a better life for their daughter.
Alternatively, the government can ban dowry and impose heavy penalties on offenders. Neither measure seems to have worked in India so far: dowry remains prevalent almost fifty years after the first dowry prohibition act was passed. What has changed is that marital transfers cannot be contracted upon because they are illegal. Enforcement of a prior agreement depends on the good faith of the parties involved and their concern for their reputation. When this mechanism fails, there is no legal recourse for the aggrieved party. The bride and her family are particularly vulnerable because she can literally be held hostage in ex post bargaining. If dowry is decriminalized and marital transfers can be negotiated, contracted, and accorded legal status, then not only can women enjoy the benefits of their dowry and better protection for their property rights, but opportunistic behavior by grooms and their families to extract further concessions can also be discouraged. What looks like a step backward in history may bring positive dividends for women in the region.
A human eyeball shoots out of its socket, and rolls into a gutter. A child returns from the dead and tears the beating heart from his tormentor’s chest. A young man has horrifying visions of his mother’s decomposing corpse. A baby is ripped from its living mother’s womb. A mother tears her son to pieces, and parades around with his head on a stick… These are scenes from the notorious, banned ‘video nasty’ films Eaten Alive, Zombie Flesh Eaters, I Spit on Your Grave, Anthropophagous: The Beast, and Cannibal Holocaust.
Well, no. They could be – but they’re not. All these scenes and images can be found safely inside the respectable covers of Oxford World’s Classics, in the works of Edgar Allan Poe, M.R. James, James Joyce, William Shakespeare, and Euripides. Only the first two of these are avowedly writers of horror, and none of these books comes with any kind of public health warning or age-suitability guideline. What does this mean?
Euripides’s The Bacchae, first performed around 400 BC, is one of the foundational works of the Western literary canon. In describing graphically the actions of Agave and her Maenads, dismembering King Pentheus and putting his head on a pole, it also sets the bar very high for artistic representations of violence and gore. The episode of the baby ripped from the mother’s womb to which I alluded in the first paragraph is from Macbeth, of course – it’s Macduff’s account of his own birth. And Macbeth, though certainly no slouch in the mayhem department, isn’t even Shakespeare’s most violent play. That would be Titus Andronicus, whose opening scene makes the connections between civilization and horror very clear, as Tamora, Queen of the Goths, sees her son brutally killed by the conquering Romans:
See, lord and father, how we have performed
Our Roman rites: Alarbus’ limbs are lopp’d,
And entrails feed the sacrificing fire,
Whose smoke, like incense, doth perfume the sky.
What follows is well known: further mutilation, rape, cannibalism. Shocking, yes; surprising, no. After all, the greater part of the Western literary tradition follows, or celebrates, a faith whose own sacrificial rites have at their heart symbolic representations of torture and cannibalism, the cross and the host. A case could plausibly be made that the Western literary tradition is a tradition of horror. This may be an overstatement, but it’s an argument with which any honest thinker has to engage.
The classic argument adduced in defence of the brutality of tragedy (a form which I have come to think of as highbrow horror) is the Aristotelian concept of catharsis, according to which the act of witnessing artistic representations of cruelty and monstrosity, pity and fear, purges the audience of these emotions, leaving them psychologically healthier. Horror is good for you! I confess I have always had difficulty accepting this hypothesis (though I recognize that many people far more learned and brilliant than me have had no trouble accepting it). It seems to me to be a classic example of an intellectual’s gambit, a theory offered without recourse to any evidence. And yet catharsis seems to me to be far preferable to another, more common, response to horror: the urge to censor or ban extreme documents and images in the name of public morality. If catharsis is Aristotelian, then this hypothesis is Pavlovian: horror conditions our responses; a tendency to watch violent acts leads inexorably to a tendency to commit violent acts. For many people, this seems to make intuitive sense (on more than one occasion, I’ve noticed people backing away from me when I tell them I work on horror), and it’s the impetus behind the framing of the Video Recordings Act of 1984, after which Cannibal Holocaust and all those other video nasties were banned. As a number of commentators and critics have noted, there’s no evidence for this Pavlovian hypothesis, either. Worse than that, there’s a distinct class animus behind such thinking. You and I, cultured, literate, educated, middle class folks that we are, are perfectly safe: when we watch Cannibal Holocaust (which I do, even if you don’t) we know what we are seeing, we can contextualize the film, interpret it, recognize it for what it is. 
The problem, the argument implicitly goes, is not us, it is them, those festering, semi-bestial proletarians whose extant propensity for violence (always simmering beneath the surface) can only be stoked by watching these films. That’s why no-one seriously considers banning The Bacchae or Titus Andronicus – why any suggestion that we do so would be treated as an act of appalling philistinism. They are horror for the educated classes.
Horror is, unquestionably, an extreme art form. Like all avant-garde art, I would suggest, its real purpose is to force its audiences to confront the limits of their own tolerance – including, emphatically, their own tolerance for what is or is not art. Commonly, when hitting these limits, we respond with fear, frustration, and even rage. Even today, this is not an unusual reaction on first reading Finnegans Wake, for example: I see it occasionally in my students, who are (a) voluntarily students of literature, and (b) usually Irish, not to say actual Dubliners. So we shouldn’t be surprised that audiences respond to horror with – well, with horror. But we need to recognize that the reasons for doing this are complex, and are deeply bound up with the meaning and function of art, and of civilization.
Headline image: Pentheus torn apart by Agave and Ino. Attic red-figure lekanis (cosmetics bowl) lid, ca. 450-425 BC, public domain, via Wikimedia Commons
Preparing a new edition of an oral history manual, a decade after the last appeared, highlighted dramatic changes that have swept through the field. Technological development made previous references to equipment sound quaint. The use of oral history for exhibits and heritage touring, for instance, leaped from cassettes and compact discs to QR codes and smartphone apps. As oral historians grew more comfortable with new equipment, they expanded into video and discovered the endless possibilities of posting interviews, transcripts, and recordings on the Internet. Having found a way to get oral history off the archival shelves and into the community, interviewers also had to consider the ethical and legal issues of exposing interviewees to worldwide scrutiny.
Over the last decade, the Internet left no excuses for parochialism. As the practice of oral history grew more international, a manual could neither address a single nation nor ignore the rest of the world. Wherever social, political, or economic turmoil has occurred, oral histories have recorded the change — because state archives tend to reflect the old regimes. War, terrorism, hurricanes, floods, fires, pandemics, and other natural and human-made disasters spurred interviews with those who endured trauma and tragedy, and required interviewers to adjust their approaches. Issues of empathy for those suffering emotional distress increasingly became part of the discourse among oral historians. At the same time, the use of interviewing grew more interdisciplinary, with historians examining the fieldwork techniques and needs of social scientists. Sociologists, anthropologists, and ethnographers have long employed interviewing, usually through participant observation. Many have gradually shifted from quantitative to qualitative analysis, raising questions about identifying their sources rather than rendering them anonymous, and bringing their methods closer to oral history protocols.
New theoretical interests developed, particularly around memory studies. Oral historians became more concerned not only with what people remember, but also with what they forget, and how they express these memories. Weighing the relationship between language and thought, and suggesting that outward behavior reflects underlying signs, narrative theory has challenged the notion of objective history. It treats the past, as recalled and recounted, as a construction, shaped by the way it is told. Memory theories have dealt with the way suggestive questions can reshape memories, and the way recent experiences can block out memories of earlier ones. These theories suggest that people reconstruct memories of past experiences rather than mentally retrieve exact copies of them.
An increasingly litigious culture raised other concerns for oral historians. Lawsuits have alleged that some online interviews are defamatory. A court case with international implications arose when the United States supported British police efforts to subpoena closed interviews that might shed light on a murder case in Northern Ireland, exposing the vulnerability of oral history to judicial intervention. Although the courts treated closed interviews seriously and limited the amount of material to be opened, the case reminded oral historians that they could not promise absolute confidentiality when dealing with sensitive and possibly criminal issues.
It has been breathtaking to document the scope of change in oral history over the last two decades, and sobering to see how dated it made much of the past information and even some of the language. Looking back over the past decade also provided some reassurance about continuity. While it sometimes seems that everything about the practice of oral history has changed, the personal dynamics of conducting an interview have remained very much intact. Whether sitting down face-to-face or using some means of electronic communication, the human interaction of the interview has stayed the same. So have the basic steps: the interviewer’s need for prior research; for knowing how to operate the equipment; for crafting thoughtful, open-ended questions; for establishing rapport; for listening carefully and following up with further questions; and for doing everything possible to elicit candid and substantive responses.
I was glad to see so many of these new trends prominently displayed at the Oral History Association’s recent meeting in Madison, Wisconsin (October 8-12), where sessions focused on oral history “in motion.” Motion aptly describes the forward-looking nature of oral history, with its expanding methodology and embrace of the latest technology, as well as its eagerness to confront established narratives with alternative voices.
The Salem Witch Trials of 1692-1693 were by far the largest and most lethal outbreak of witchcraft in American history. Yet Salem was just one of many incidents during the Great Age of Witch Hunts which took place throughout Europe and her colonies over many centuries. Indeed, by European standards, Salem was not even a large outbreak. But what exactly were the factors that made Salem stand out?
In A Storm of Witchcraft: The Salem Trials and the American Experience, Emerson Baker places the Salem trials in their broader context and reveals why they have left such an enduring legacy. He explains why the Salem crisis marked a turning point in colonial history from Puritan communalism to Yankee independence, from faith in collective conscience to skepticism toward moral governance. Below is an infographic detailing some of the numbers involved in Salem and other witch hunts.
The theme of this year’s International Law Weekend (ILW) is “International Law in a Time of Chaos”, exploring the role of international law in conflict mitigation. Panel discussions will examine various aspects of both public international law and private international law, including trade, investment, arbitration, intellectual property, combating corruption, labor standards in the global supply chain, and human rights, as well as issues of international organizations and international security.
ILW is sponsored and organized by the American Branch of the International Law Association (ABILA) and the International Law Students Association (ILSA). Every year more than one thousand practitioners, academics, diplomats, members of the governmental and nongovernmental sectors, and students attend this conference.
This year’s conference highlights include:
This year’s keynote, “Democratization of Foreign Policy and International Law, 1914-2014,” will be delivered by Lori Damrosch, Hamilton Fish Professor of International Law and Diplomacy, Columbia Law School, and President of the American Society of International Law. Friday, 1:30PM (Room 2-02A)
Top practitioners in the field discuss “International Investment Arbitration and the Rule of Law”, Friday 4:45PM (Room 2-02A). (Sign up for our Free Investment Claims Webinar on October 20th to brush up on VCLT in BIT arbitrations in time for this panel.)
Looking for career advice? Attend the roundtable discussion “Careers in International Human Rights, International Development, and International Rule of Law,” Saturday, 3:30PM (Room 2-02B)
How rapidly does medical knowledge advance? Very quickly if you read modern newspapers, but rather slowly if you study history. Nowhere is this more true than in the fields of neurology and psychiatry.
It was long believed that the study of common disorders of the nervous system began with Greco-Roman medicine, for example epilepsy, “the sacred disease” (Hippocrates), or “melancholia”, now called depression. Our studies have now revealed remarkable Babylonian descriptions of common neuropsychiatric disorders a millennium earlier.
There were several Babylonian Dynasties with their capital at Babylon on the River Euphrates. Best known is the Neo-Babylonian Dynasty (626-539 BC) associated with King Nebuchadnezzar II (604-562 BC) and the capture of Jerusalem (586 BC). But the neuropsychiatric sources we have studied nearly all derive from the Old Babylonian Dynasty of the first half of the second millennium BC, united under King Hammurabi (1792-1750 BC).
The Babylonians made important contributions to mathematics, astronomy, law, and medicine, conveyed in the cuneiform script: impressed into clay tablets with reeds, it is the earliest form of writing, which began in Mesopotamia in the late 4th millennium BC. When Babylon was absorbed into the Persian Empire, cuneiform was replaced by Aramaic and simpler alphabetic scripts, and was only deciphered by European scholars in the 19th century AD.
The Babylonians were remarkably acute and objective observers of medical disorders and human behaviour. In texts held in museums in London, Paris, Berlin, and Istanbul we have studied surprisingly detailed accounts of what we recognise today as epilepsy, stroke, psychoses, obsessive compulsive disorder (OCD), psychopathic behaviour, depression, and anxiety. For example, they described most of the common seizure types we know today (e.g. tonic-clonic, absence, and focal motor seizures), as well as auras, post-ictal phenomena, provocative factors (such as sleep or emotion), and even a comprehensive account of the schizophrenia-like psychoses of epilepsy.
Early attempts at prognosis included a recognition that numerous seizures in one day (i.e. status epilepticus) could lead to death. They recognised the unilateral nature of stroke involving limbs, face, speech and consciousness, and distinguished the facial weakness of stroke from the isolated facial paralysis we call Bell’s palsy. The modern psychiatrist will recognise an accurate description of an agitated depression, with biological features including insomnia, anorexia, weakness, impaired concentration and memory. The obsessive behaviour described by the Babylonians included such modern categories as contamination, orderliness of objects, aggression, sex, and religion. Accounts of psychopathic behaviour include the liar, the thief, the troublemaker, the sexual offender, the immature delinquent and social misfit, the violent, and the murderer.
The Babylonians had only a superficial knowledge of anatomy and no knowledge of brain, spinal cord or psychological function. They had no systematic classifications of their own and would not have understood our modern diagnostic categories. Some neuropsychiatric disorders e.g. stroke or facial palsy had a physical basis requiring the attention of the physician or asû, using a plant and mineral based pharmacology. Most disorders, such as epilepsy, psychoses and depression were regarded as supernatural due to evil demons and spirits, or the anger of personal gods, and thus required the intervention of the priest or ašipu. Other disorders, such as OCD, phobias and psychopathic behaviour were viewed as a mystery, yet to be resolved, revealing a surprisingly open-minded approach.
From the perspective of a modern neurologist or psychiatrist these ancient descriptions of neuropsychiatric phenomenology suggest that the Babylonians were observing many of the common neurological and psychiatric disorders that we recognise today. There is nothing comparable in the ancient Egyptian medical writings and the Babylonians therefore were the first to describe the clinical foundations of modern neurology and psychiatry.
A major and intriguing omission from these entirely objective Babylonian descriptions of neuropsychiatric disorders is any account of subjective thoughts or feelings, such as obsessional thoughts or ruminations in OCD, or suicidal thoughts or sadness in depression. Such subjective phenomena only became a field of description and enquiry in the 17th and 18th centuries AD. This raises interesting questions about the possibly slow evolution of human self-awareness, which is central to the concept of “mental illness”; mental illness itself only became the province of a professional medical discipline, i.e. psychiatry, in the last 200 years.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection, and of categorical uncertainty, is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love.” Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer was to “endeavor[] to trace its imperfections, its perversions.” While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
Voting for the 2014 Atlas Place of the Year is now underway. However, you may still be curious about the nominees. What makes them so special? Each year, we put the spotlight on the top locations in the world that make us go, “wow”. For good or for bad, this year’s longlist is quite the round-up.
Just hover over the place-markers on the map to learn a bit more about this year’s nominations.
Make sure to vote for your Place of the Year below. If you have another Place of the Year that you would like to nominate, we’d love to know about it in the comments section. Follow along with #POTY2014 until our announcement on 1 December.

What do you think Place of the Year 2014 should be?
Image Credits: Ferguson: “Cops Kill Kids”. Photo by Shawn Semmler. CC BY 2.0 via Flickr. Liberia: Ebola Virus Particles. Photo by NIAID. CC BY 2.0 via Flickr. Ukraine: Euromaiden in Kiev 2014-02-19 10-22. Photo by Amakuha. CC BY-SA 3.0 via Wikimedia Commons. Colorado: Grow House 105. Photo by Coleen Whitfield. CC BY-SA 2.0 via Flickr. Nauru: In front of the Menen. Photo by Sean Kelleher. CC BY-SA 2.0 via Flickr. Sochi: Olympic Park Flags (2). Photo by american_rugbler. CC BY-SA 2.0 via Flickr. Mount Sinjar: Sinjar Karst. Photo by Cpl. Dean Davis. Public Domain via Wikimedia Commons. Gaza: The home of the Kware family after it was bombed by the military. Photo by B’Tselem. CC BY 4.0 via Wikimedia Commons. Scotland: Vandalised no thanks sign. Photo by kay roxby. CC BY 2.0 via Flickr. Brazil: World Cup stuff, Rio de Janeiro, Brazil (15). Photo by Jorge in Brazil. CC BY 2.0 via Flickr.
Beginning in the early 1920s, and continuing through the mid 1940s, record companies separated vernacular music of the American South into two categories, divided along racial lines: the “race” series, aimed at a black audience, and the “hillbilly” series, aimed at a white audience. These series were the precursors to the also racially separated Rhythm & Blues and Country & Western charts, and arguably the source of the frequent racial divisions of today’s recording industry. But a closer examination reveals that the two populations rely heavily on many of the same musical resources, and that early blues and country music exhibit thorough interpenetration.
Many admirers of early blues and country music observe that black and white musicians from the 1920s to the 1940s share much with respect to repertoire and genre, and that the separation of the two on commercial recordings grew out of the prejudices of record companies. It becomes even more apparent how deeply intertwined the two traditions are when we examine blues and country musicians’ shared stock of schemes. Schemes are preexisting harmonic grounds and melodic structures that are common resources for the creation of songs. A scheme generates multiple distinct songs, with different lyrics and titles. Many schemes generated songs in both blues and country music.
There are several different types of blues and country schemes. One type is a harmonic progression that combines with one particular tune. The “Trouble In Mind” scheme, for example, generates both Bertha Chippie Hill’s “Trouble in Mind” (1) and the Hackberry Ramblers’ “Fais Pas Ça” (2). Both use the same harmonic progression, and the two melodies have relatively slight variation. Hill recorded for the “race” series, and the Hackberry Ramblers for the “hillbilly” series.
1. Bertha “Chippie” Hill, “Trouble in Mind” (Bertha “Chippie” Hill—Document Records)
2. Hackberry Ramblers, “Fais Pas Ça” (Jolie Blonde—Arhoolie Productions)
A second type of scheme is a preexisting harmonic progression that musicians associate primarily with a specific tune, which they set to lyrics about various subjects, but which they also use to support original melodies. In the “Frankie and Johnny” scheme, the same melody combines with lyrics about Frankie’s shooting of Johnny (or Albert) (3), the Boll Weevil infestation at the turn of the twentieth century (4), and the gambler Stack O’Lee, who shot and killed fellow gambler Billy Lyons (5). Singers also use the harmonic progression to support original melodies, with lyrics about Frankie (6), Stack O’Lee (7), or another subject (8).
In all of the examples, the same correspondence between lyrics and harmony is evident in the harmonic shift that accompanies the completion of the opening rhyming couplet, on the words “above” (3), “your home” (4), “road” (5), “beer” (6), the first “Stack O’Lee” (7), and “that line” (8), and in the harmonic shifts that accompany emphasized words in the refrain, on the words “man” and “wrong” (3, 5, and 6), “no home” and “no home” (4), “bad man” and “Stack O’Lee” (7), and “bad” and “bad” (8). Four of the recordings given here are from the “race” labels, and two are from the “hillbilly” labels, but the same scheme generates all of them.
3. Jimmie Rodgers, “Frankie and Johnny” (The Essential Jimmie Rodgers—Sony)
4. W. A. Lindsey, “Boll Weevil” (People Take Warning—Tomkins Square)
5. Ma Rainey, “Stack O’Lee Blues” (Ma Rainey’s Black Bottom—Yazoo)
7. Mississippi John Hurt, “Stack O’Lee” (Before the Blues—Yazoo)
8. Henry Thomas, “Bob McKinney” (Texas Worried Blues—Document Records)
A third type of scheme is a preexisting harmonic progression that musicians use primarily to support original melodies. This type of scheme is the most productive, and often supports countless melodies. The most well-known and productive of this type is the standard twelve-bar blues scheme. All seven of the following recordings (9–15)—four from the “race” series and three from the “hillbilly” series—contain original melodies combined with the standard twelve-bar blues harmonic progression, and all demonstrate the AAB poetic form that typically combines with the scheme, in which singers state the opening A line of a couplet twice and follow it with one statement of the rhyming B line.
9. Ida Cox, “Lonesome Blues” (Ida Cox Complete Recorded Works—Document Records)
10. Charley Patton, “Moon Going Down” (Charlie Patton Founder of the Delta Blues—Mastercopy Pty Ltd)
11. Jesse “Babyface” Thomas, “Down in Texas Blues” (The Stuff that Dreams are Made Of)
12. Lonnie Johnson, “Mr. Johnson’s Blues No. 2” (A Smithsonian Collection of Classic Blues Singers—Sony/Smithsonian)
13. W. Lee O’Daniel & His Hillbilly Boys, “Dirty Hangover Blues” (White Country Blues—Sony)
14. Jesse “Babyface” Thomas, “Down in Texas Blues” (White Country Blues—Sony)
15. Carlisle & Ball, “Guitar Blues” (White Country Blues—Sony)
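As an illustration (the chord functions below are the common textbook realization of the scheme, not transcriptions of these specific recordings), the standard twelve-bar progression and its AAB poetic form can be sketched in a few lines of Python:

```python
# The standard twelve-bar blues scheme: one harmony per bar, expressed as
# Roman-numeral chord functions. This is the common textbook realization;
# many variants exist (e.g. a IV in bar 2, or a V instead of IV in bar 10).
TWELVE_BAR = ["I", "I", "I", "I",
              "IV", "IV", "I", "I",
              "V", "IV", "I", "I"]

def aab(line_a, line_b):
    """Pair the AAB lyric pattern (A line stated twice, then the rhyming
    B line) with the three four-bar phrases of the twelve-bar scheme."""
    lyric_lines = [line_a, line_a, line_b]
    phrases = [TWELVE_BAR[i * 4:(i + 1) * 4] for i in range(3)]
    return list(zip(lyric_lines, phrases))

# Hypothetical lyric couplet, invented here purely for illustration.
form = aab("Woke up this morning, blues all around my bed",
           "Didn't have nobody to hold my aching head")
for lyric, phrase in form:
    print(phrase, "|", lyric)
```

Each of recordings 9-15 above sets a different original melody and lyric over this same harmonic ground, which is what makes the scheme, rather than any one song, the shared resource.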
A fourth type of scheme is a preexisting melodic structure whose harmonizations display considerable variety while still meeting certain requirements. The following four examples, two by black musicians and two by white musicians, are all realizations of the “Sitting on Top of the World” scheme, and use the same melodic structure. Their harmonizations are in some ways quite similar: for example, all four harmonize the beginning of the second, rhyming line with the same harmony, and accelerate the rate of harmonic change going into the cadence. But the harmonizations vary more than the melodic structure does.
16. Tampa Red, “Things ‘Bout Coming My Way No. 2” (Tampa Red the Guitar Wizard—Sony)
17. Bill Broonzy, “Worrying You Off My Mind” (Big Bill Broonzy Good Time Tonight—Sony)
18. Bob Wills & His Texas Playboys, “Sittin’ on Top of the World” (Bob Wills & His Texas Playboys Anthology—Puzzle Productions)
19. The Carter Family, “I’m Sitting on Top of the World” (On Border Radio—Arhoolie)
Finally, a fifth type of scheme is a preexisting melodic structure for which performers have little shared conception of the harmonic progression. The last four examples—one by a black musician and three by white musicians—are all realizations of the “John Henry” scheme, and use the same melodic structure, but very different harmonic progressions. Riley Puckett, in his instrumental version, uses only one harmony throughout (20). Woody Guthrie uses two harmonies (21). The Williamson Brothers & Curry also use two harmonies, but arrive at a much different harmonization than Guthrie (22). Leadbelly uses three harmonies (23).
20. Riley Puckett, “A Darkey’s Wail” (White Country Blues—Sony)
Record companies presented American vernacular music in the context of a racial divide, but examining the common stock of schemes helps to reveal how extensively black and white musical traditions are intertwined. There are stylistic differences between blues and country music, but many differences lie on the surface, while on a deeper level the two populations frequently rely on the same musical foundations.
The business press and general media often lament that firm executives exhibit “short-termism”, succumbing to pressure from stock market investors to maximize quarterly earnings while sacrificing long-term investments and innovation. In our new article in the Socio-Economic Review, we suggest that this complaint is partly accurate, but partly not.
What seems accurate is that the maximization of short-term earnings by firms and their executives has become somewhat more prevalent in recent years, and that some of the roots of this phenomenon lead back to stock market investors. What is inaccurate, though, is the assumption that investors, even if they were “short-term traders”, would inherently attend to short-term quarterly earnings when making trading decisions. Even “short-term trading” (i.e. buying stocks with the aim of selling them after a few minutes, days, or months) does not equal or necessitate a “short-term earnings focus”, i.e. making trading decisions based on short-term earnings (let alone based on short-term earnings only). This means that when the media observe, or executives perceive, that firms are pressured by stock market investors to focus on short-term earnings, that pressure is partly illusory.
The illusion, in turn, is based on the phenomenon of the “vociferous minority”: a minority of stock investors may focus on short-term earnings, causing a weak correlation between short-term earnings and stock price jumps or drops. The illusion is born when this gets interpreted as if most or all investors (i.e. the majority) were focusing on short-term earnings only. Such an interpretation may, in dynamic markets, lead to a self-fulfilling prophecy, whereby an increasing number of investors join the vociferous minority and focus increasingly on short-term earnings (even if the majority of investors still do not focus on short-term earnings only). More importantly, or more unfortunately, firm executives may also start to maximize short-term earnings, due to the inaccurate impression that the majority of investors prefer it.
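The dynamic sketched above can be made concrete with a toy simulation (every number here is invented for illustration; none is an estimate from the article): a small short-term-focused minority, whose size the market over-perceives, gradually recruits the rest of the investors.

```python
def simulate(initial_share=0.1, imitation_rate=0.05,
             amplification=2.0, periods=20):
    """Toy model of the self-fulfilling prophecy. All parameters are
    hypothetical. `share` is the fraction of investors focused on
    short-term earnings; each period, media coverage makes the perceived
    share larger than the actual one, and some long-term investors
    switch in proportion to that perception."""
    share = initial_share
    history = [share]
    for _ in range(periods):
        # Media amplification: perceived share exceeds the actual share.
        perceived = min(1.0, amplification * share)
        # A fraction of the remaining long-term investors imitates
        # what it believes the majority is doing.
        share += imitation_rate * perceived * (1 - share)
        history.append(share)
    return history

h = simulate()
print(f"minority share: {h[0]:.2f} -> {h[-1]:.2f} after {len(h) - 1} periods")
```

The point of the sketch is only qualitative: even though the true minority starts small, over-perception plus imitation makes its share grow monotonically, without ever requiring the majority to have held a short-term focus at the outset.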
A final paradox is the role of the media. Of course, the media have good intentions in lamenting short-termism in the markets, trying to draw attention to an unsatisfactory state of affairs. However, such lamenting stories may actually contribute to the emergence of the self-fulfilling prophecy. Despite their lamenting tone, the articles still emphasize that market participants focus just on short-term earnings. This reinforces the illusion that all investors focus on short-term earnings only, which in turn may lead an even bigger share of investors and firms to join the minority’s bandwagon, in the belief that everyone else is doing the same.
Should the media do something different, then? We suggest that in this case the media should report more “positive stories”: cases in which firms have managed to create great innovations with a patient, longer-term focus. The media could also report on the increasing number of investors looking at alternative, long-term measures (such as patents or innovation rates) instead of short-term earnings.
So, more stories like this one about Rolls-Royce, but without claiming or lamenting that most investors just want “quick results” (i.e. without portraying cases like Rolls-Royce as rare exceptions). Such positive stories could, in the best scenario, contribute to a reverse self-fulfilling prophecy, whereby more and more investors, and thereafter firm executives, would shed some of the excessive focus on short-term earnings they might currently have.
Open access (OA) publishing stands at something of a crossroads. OA is now part of the mainstream. But with increasing success and increasing volume come increasing complexity, scrutiny, and demand. There are many facets of OA which will prove to be significant challenges for publishers over the next few years. Here I’m going to focus on one — licensing — and discuss how the arguments seen over licensing in recent months shine a light on the difference between OA as a movement, and OA as a reality.
Today’s authors face a number of conflicting pressures. Publish in a high impact journal. Publish in a journal with the correct OA options as mandated by your funder. Publish in a journal with the correct OA options as mandated by your institution. Publish your article in a way which complies with government requirements on research excellence. They are then met by a wide array of options, and it’s no wonder we at OUP sometimes receive queries from authors confused as to which OA option they should choose.
One of the most interesting aspects of the various surveys Taylor & Francis (T&F) have conducted on open access over the past year or two has been the divergence between what authors say they want and what their funders and governments mandate. The T&F findings imply that, whilst there is a generally shared consensus as to what is meant by accessible, funders and researchers hold divergent positions and preferences as to what constitutes reasonable reuse. T&F’s surveys consistently find the most restrictive licences in the Creative Commons (CC) suite, such as Creative Commons Attribution Non-Commercial No-Derivs (CC BY-NC-ND), to be the most popular, with the liberal Creative Commons Attribution (CC BY) licence coming in last. This squares neither with the mandates of funders, which are usually, but not always, pro CC BY, nor with author behaviour at OUP, where CC BY-NC-ND usually comes in a resounding third behind CC BY and CC BY-NC where it’s available. It’s not a dramatic logical step to think that proliferation may lead to confusion, but given the conflicting evidence and demand, and the potential for change, it’s logical for publishers to offer myriad options. At the same time, elsewhere in the OA space we have a recent example of pressure to remove choice.
In July 2014, the International Association of Scientific, Technical and Medical Publishers (STM) released their ‘model licences’ for open access. These were at their core a series of alternatives for, and extensions to, the terms of the established CC licences. STM’s new addition did not go down well in OA circles, and a ‘Global Coalition’ subsequently called for the licences’ withdrawal. One of the interesting elements of the Coalition’s call was that, in amongst some very valid points about interoperability and the like, it fell back on the kind of language more commonly associated with a sermon to make the STM actions seem incompatible with some fundamental precepts about the practice of science: “let us work together in a world where the whole sum of human knowledge… is accessible, usable, reusable, and interoperable.” At root, the Coalition could be read as using positive terminology to frame an essentially negative action: barring a new entry to the market. Personally, I don’t have a strong opinion on the new STM licences. We don’t have any plans to adopt them at OUP (we use CC). But it was odd and striking that rather than letting a competitor to the CC status quo exist, and in all likelihood fail, some serious OA players felt the need to call for that competitor’s withdrawal.
This illustrates one of the central challenges of the dichotomy of OA. On one hand you have OA as a political movement seeking to replace commercial interests with self-organized and self-governed communities of interest: a bottom-up aspiration for the common good, often applied in quite restricted ways, usually adhering to the Berlin, Budapest, and Bethesda declarations. On the other you have OA as a top-down pragmatic means to an end, aiming to improve the flow of research and, by extension, economic performance. The OA pragmatist might suggest that it’s fine for an author to be given the choice of liberal or less liberal OA licences, as long as they meet the basic criteria of being free to read and easy to re-use. The OA dogmatist might only be satisfied with the most liberal licence, and with OA on the terms they’ve come to believe are the correct interpretation of its core precepts. The danger of this approach is that there is a ‘right’ and a ‘wrong’, and, as can be seen from the language of the Global Coalition in responding to the STM licences, that can very easily translate into: “If you’re not with us, you’re against us.”
Against this backdrop, publishers find themselves in a thorny position. Do you (a) respect author choice, but possibly at some expense of simplicity, or do you (b) offer fewer options, but potentially leave members of the scholarly community feeling dissatisfied or disenfranchised by your standard option?
Oxford University Press at the moment chooses option (a), as we feel this is the more inclusive way to proceed. To me at least it feels right to give your customers choice. But there is an argument for streamlining processes, avoiding confusion, and giving users consistent knowledge of what to expect. Nature Publishing Group (NPG), for example, recently announced that as part of their move to full OA for Nature Communications they would make CC BY their default, allowing other options only on request. This is notable inasmuch as it’s a very strong steer in a particular direction while not ruling out everything else. NPG has done more than most to examine the choice issue: changing the order of their licences to see what authors select, sometimes varying charges, and so on. Empirical evidence such as this is essential for a viable and credible resolution to the future of OA licensing. Perhaps the Global Coalition should have given a more considered and less emotional response to the STM licences. Was repudiation necessary in a broad OA community which should be able to recognise and accept different variants of OA? It would be a shame if all the positive impacts of open access for the consumer came hand in hand with a diminution of scholarly freedom for the producer.
The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.