In the first autumn of World War I, a German infantryman from the 25th Reserve Division sent this pithy greeting to his children in Schwarzenberg, Saxony.
11 November 1914
My dear little children!
How are you doing? Listen to your mother and grandmother and mind your manners.
Heartfelt greetings to all of you!
Your loving Papa
He scrawled the message in looping script on the back of a Feldpostkarte, or field postcard, one that had been designed for the Bahlsen cookie company by the German artist and illustrator Änne Koken. On the front side of the postcard, four smiling German soldiers share a box of Leibniz butter cookies as they stand on a grassy, sun-stippled outpost. The warm yellow pigment of the rectangular sweets seems to emanate from the opened care package, flushing the cheeks of the assembled soldiers with a rosy tint.
German citizens posted an average of nearly 10 million pieces of mail to the front during each day of World War I, and German service members sent over 6 million pieces in return; postcards comprised well over half of these items of correspondence. For active duty soldiers, postage was free of charge. Postcards thus formed a central and portable component of wartime visual culture, a network of images in which patriotic, sentimental, and nationalistic cards supplied the dominant narrative — with key moments of resistance dispatched from artists and amateurs serving at the front.
The first postcards were permitted by the Austrian postal service in 1869 and in Germany one year later. (The Post Office Act of 1870 allowed for the first postcards to be sold in Great Britain; the United States followed suit in 1873.) Over the next four decades, Germany emerged as a leader in the design and printing of colorful picture postcards, which ranged from picturesque landscapes to tinted photographs of famous monuments and landmarks. Many of the earliest propaganda postcards, at the turn of the twentieth century, reproduced cartoons and caricatures from popular German humor magazines such as Simplicissimus, a politically progressive journal that moved toward an increasingly reactionary position during and after World War I. Indeed, the majority of postcards produced and exchanged between 1914 and 1918 adopted a sentimental style that matched the so-called “hurrah kitsch” of German official propaganda.
Beginning in 1914, the German artist and Karlsruhe Academy professor Walter Georgi produced 24 patriotic Feldpostkarten for the Bahlsen cookie company in Hannover. In a postcard titled Engineers Building a Bridge (1915), a pair of strong-armed sappers set to work on a wooden trestle while a packet of Leibniz butter cookies dangles conspicuously alongside their work boots.
These engineering troops prepared the German military for the more static form of combat that followed the “Race to the Sea” in the fall of 1914; they dug and fortified trenches and bunkers, built bridges, and developed and tested new weapons — from mines and hand grenades to flamethrowers and, eventually, poison gas.
Georgi’s postcard designs for the Bahlsen company deploy the elegant color lithography he had practiced as a frequent contributor to the Munich Art Nouveau journal Jugend (see Die Scholle). In another Bahlsen postcard titled “Hold Out in the Roaring Storm” (1914), Georgi depicted a group of soldiers wearing the distinctive spiked helmets of the Prussian Army. Their leader calls out to his comrades with an open mouth, a rifle slung over his shoulder, and a square package of Leibniz Keks looped through his pinkie finger. In a curious touch that is typical of First World War German patriotic postcards, both the long-barreled rifles and the soldiers’ helmets are festooned with puffy pink and carmine flowers.
These lavishly illustrated field postcards, designed by artists and produced for private industry, could be purchased throughout Germany and mailed, traded, or collected in albums to express solidarity with loved ones on active duty. The German government also issued non-pictorial Feldpostkarten to its soldiers as an alternative and officially sanctioned means of communication. For artists serving at the front, these 4” x 6” blank cards provided a cheap and ready testing ground at a time when sketchbooks and other materials were in short supply. The German painter Otto Schubert dispatched scores of elegant watercolor sketches from sites along the Western Front; Otto Dix, likewise, sent hundreds of illustrated field postcards to Helene Jakob, the Dresden telephone operator he referred to as his “like-minded companion,” between June 1915 and September 1918. These sketches (see Rüdiger, Ulrike, ed., Grüsse aus dem Krieg: die Feldpostkarten der Otto-Dix-Sammlung in der Kunstgalerie Gera. Kunstgalerie Gera, 1991) convey details both minute and panoramic, from the crowded trenches to the ruined fields and landmarks of France and Belgium. Often, their flip sides contain short greetings or cryptic lines of poetry written in both German and Esperanto.
Dix enlisted for service in 1914 and saw front-line action during the Battle of the Somme in August 1916, one of the largest and costliest offensives of World War I, spanning nearly five months and resulting in more than one million casualties. By September of 1918, the artist had been promoted to staff sergeant and was recovering from injuries at a field hospital near the Western Front. He sent one of his final postcard greetings to Helene Jakob on the reverse side of a self-portrait photograph, in which he stands with visibly bandaged legs and one hand resting on his hip. Dix begins the greeting in Esperanto, but quickly shifts to German to report on his condition: “I’ve been released from the hospital but remain here until the 28th on a course of duty. I’m sending you a photograph, though not an especially good one. Heartfelt greetings, your Dix.” Just two months later, the First World War ended in German defeat.
For most language learners and lovers, translation is a hot topic. Should I translate new vocabulary into my first language? How can I say x in Japanese? Is this translated novel as good as the original? I’ve lost count of the number of times I’ve been told that Pushkin isn’t Pushkin unless he’s read in Russian, and I have definitely chastised my own students for anxiously writing out lengthy bilingual wordlists: Paola, you’ll only remember trifle if you learn it in context!
Context-based learning aside, I’m all for translation: without it, we wouldn’t understand each other. However, I remain unconvinced that untranslatable words really exist. In fact, I wrote a blog post on some of my favorite Russian words that touched on this very topic. Looking at the responses it received both here and in the Twitterverse, I decided to set out on my own linguistic odyssey: could I wrap my head around ‘untranslatable’ once and for all?
It’s all Greek to me!
Many lovely people of the internet are in agreement: untranslatable words are out there, and they’re fascinating. A quick Google brings up articles, listicles, and even entire blogs on the matter. Goya, jayus, dépaysement — all wonderful words that neatly convey familiar concepts, but also “untranslatable” words that appear accompanied by an English definition. This English definition may well be longer and more complex than the foreign-language word itself (Oxford translates dépaysement as both “change of scenery” and “disorientation,” for example), but it is arguably a translation nonetheless. A lot of the coffee-break reads popping up on the internet don’t contain untranslatable words, but rather language lacking a word-for-word English equivalent. Is a translation only a translation if it is eloquent and succinct?
Translation vs. definition
When moving from one language to another, what’s a translation and what’s a definition — and is there a difference? Brevity seems to matter: the longer the translation, the more likely it is to be considered a definition. Does this make it any less of a translation? When we translate, we “express sense”; when we define, we “state or describe exactly the nature, scope, or meaning.” If I say that toska (Russian) means misery, boredom, yearning, and anguish, is that a definition or a translation? Or even both? It is arguably a definition — yet any of the nouns above could, depending on context, be used as the best translation.
If we are to talk about what is translatable and what isn’t, we need to start talking about language, rather than words. The Spanish word duende often features in lists of untranslatable words: it refers to the mystical power by which an artist or artwork captivates its audience. Have I just defined duende, or translated it? I for one am not so sure anymore, but I do know that in context, its meaning is clear: un cantante que tiene duende becomes “a singer who has a certain magic about him.” The same goes for the French word dépaysement. By itself, dépaysement can mean many things, but in the phrase les touristes anglais recherchent le dépaysement dans les voyages dans les îles tropicales, it’s clear from context that the sense required is “change of scene” (“English tourists look for a change of scene on holidays to tropical islands”). Does this mean that all words are translatable, as long as they are in context?
Saying no to stereotypes
One of my biggest beefs with untranslatable word memes is the suggestion that these linguistic treasure troves are loaded with cultural inferences. Most of the time they’re twee, rather than offensive: for example, the German word Waldeinsamkeit means “the feeling of being alone in the woods.” Gosh, how typical of those woodland-loving Germans, wandering around the Black Forest enjoying oneness with nature! The existence of an “untranslatable” word hints at some kind of cultural mystery that is beyond our comprehension — but does the lack of a word-for-word translation of Waldeinsamkeit mean that no English speaker (or French speaker, or Mandarin speaker) can understand the concept of being alone in the woods? Of course not! However, these misinterpretations of Waldeinsamkeit, Schadenfreude, Backpfeifengesicht et al. make me think: what about those words that really do have a particular cultural resonance? Can we really translate them?
Excuse me, can I borrow your word?
Specialized translation throws up its own variety of “untranslatable” words. For example, if you are translating a text about the Russian banya into a language where steam baths are not the norm, how do you go about translating nouns such as venik (веник)? A venik is a broom, but in the context of the banya it is a collection of leafy twigs (rather than dried twigs) that is used to beat those enjoying the restorative steam. Translating venik as “broom” here would be wildly inaccurate (and probably generate some amusing mental images). The existence of a word-for-word translation doesn’t provide the whole answer if cultural context is missing. We can find examples of “untranslatable” words in relation to almost any culture-specific event, be it American Thanksgiving, Spanish bullfighting, or Balinese Nyepi. If I were to translate an article about bullfighting and retain tienta rather than use “trial” (significantly less specific), does that mean that tienta in this context is really untranslatable?
So what has all this research taught me about translation? Individual words may not be translatable, but language is. And as for the accuracy of the translation? That often depends on how we, as speakers of a particular language, attribute our own meaning. Sometimes, the “translation” just has to be Schadenfreude.
On August 23rd the United Nations observes the International Day for the Remembrance of the Slave Trade and its Abolition. In honor of this day, we examine the history of slavery and its abolition, and shed light on contemporary slavery practices.
When it comes to assessing someone’s sincerity, we pay close attention to what people say and how they say it. This is because the emotion-based elements of communication are understood as partially controllable and partially uncontrollable. The words that people use tend to be viewed as relatively controllable; in contrast, rate of speech, tone of voice, hesitations, and gestures (paralinguistic elements) tend to be viewed as less controllable. Because speakers are perceived to have less control over them, the meanings conveyed via paralinguistic channels have tended to be understood as more reliable evidence of a speaker’s inner state.
Paradoxically, the very elements that are viewed as so reliable are consistent with multiple meanings. Furthermore, people often believe that their reading of another person’s demeanor is the correct one. Many studies have shown that people – judges included – are notoriously bad at assessing the meaning of another person’s affective display. Moreover, some research suggests that people are worse at this when the ethnic background of the speaker differs from their own – not an uncommon situation when defendants address federal judges, even in 2014.
The element of defendants’ demeanor is not only problematic for judges; it is also problematic for the record of the proceedings. This is due to courtroom reporters’ practice of reporting the words that are spoken and excluding input from paralinguistic channels.
I observed one case in which this practice had the potential for undermining the integrity of the sentencing hearing transcript. In this case, the defendant lost her composure while making her statement to the court. The short, sob-filled “sorry” she produced mid-way through her statement was (from my perspective) clearly intended to refer to her preceding tears and the delays in her speech. The official transcript, however, made no reference to the defendant’s outburst of emotion, thereby making her “sorry” difficult to understand. Without the clarifying information about what was going on at the time – namely, the defendant’s crying — her “sorry” could conceivably be read as part of her apology to the court for her crime of robbing a bank.
Failing to distinguish between apologies for the crime and apologies for a problem with the delivery of one’s statement matters in the context of a sentencing hearing, because apologies for crimes are understood as admissions of guilt. If the defendant had not already apologized earlier, the ambiguity of her words could have significant legal ramifications if she sought to appeal her sentence or to claim that her guilty plea was invalid.
As the above example illustrates, the exclusion of meaning that comes from paralinguistic channels can result in misleading and inaccurate transcripts. (This is one reason why more and more police departments are video-recording confessions and witness statements.) If a written record is to be made of a proceeding, it should preserve the significant paralinguistic elements of communication. (Following the approach advocated by Du Bois 2006, one can do this with varying amounts of detail. For example, the beginning and ending of crying-while-talking can be indicated with double angled brackets, e.g., <<sorry>>.) Relatedly, if a judge is going to use elements of a defendant’s demeanor in court to increase a sentence, the judge should be prepared to defend this decision and cite the evidence that was employed. Just as a judge’s decision based on the facts of the case can be challenged, a decision based on demeanor evidence deserves the same scrutiny.
In August 2014, OxfordDictionaries.com added numerous new words and definitions to their database, and we invited a few experts to comment on the new entries. Below, Janet Gilsdorf, President-elect of the Pediatric Infectious Diseases Society, discusses anti-vax and anti-vaxxer. The views expressed do not necessarily reflect the opinions or positions of Oxford Dictionaries or Oxford University Press.
It’s beautiful, our English language — fluid and expressive, colorful and lively. And it’s changeable. New words appear all the time. Consider “selfie” (a noun), “problematical” (an adjective), and “Google” (a noun that turned into a verb). Now we have two more: “anti-vax” and “anti-vaxxer.” (Typical of our flexible vernacular, “anti-vaxxer” is sometimes spelled with just one “x.”) I guess inventing these words was inevitable; a specific, snappy short-cut was needed when speaking about something as powerful and almost cult-like as the anti-vaccine movement and its disciples.
When we string our words together, either new ones or the old reliables, we find avenues for telling others of our joys and disappointments, our loves and hates, our passions and indifferences, our trusts and distrusts, and our fears. The words we choose are windows into our minds. Searching for the best terms to use helps us refine our thinking, decide what, exactly, we are contemplating, and what we intend to say.
Embedded in the force of the new words “anti-vax” and “anti-vaxxer” are many of the tales we like to tell: our joy in our children, our disappointment with the world; our love of independence and autonomy, our hate of things that hurt us or those important to us; our passion for coming together in groups, our indifference to the worries of strangers; our trust, fueled by hope rather than evidence, in whatever nutty things may soothe our anxieties, our distrust in our sometimes hard-to-understand scientific, medical, and public health systems; and, of course, our fears.
Fear is usually a one-sided view. It is blinding, so that in the heat of the moment we aren’t distracted by nonsense (the muddy footprints on the floor, the lawn that needs mowing) and can focus on the crisis at hand. Unfortunately, fear may also prevent us from seeing useful things just beyond the most immediate (the helping hands that may look like claws, the alternatives that, in the end, are better).
For the anti-vax group, fear is the gripping terror that awful things will happen from a jab (aka shot, stick, poke). Of course, it isn’t the jab that’s the problem. Needles through the skin, after all, deliver medicines to cure all manner of illnesses. For anti-vaxxers, the fear is about the immunization materials delivered by the jab. They dread the vaccine antigens, the molecules (i.e., pieces of microbes made safe) that cause our bodies to think we have encountered a bad germ so that we will mount a strong immune response designed to neutralize it. What happens after a person receives a vaccine is, in effect, identical to what happens after we recover from a cold or the flu — or anthrax, smallpox, or possibly Ebola (if they don’t kill us first). Our blood is subsequently armed with protective immune cells and antibodies so we don’t get infected with that specific virus or bacterium again. The same goes for measles, polio, and chickenpox. Whether we get those diseases (which can be bad) or the vaccines that prevent them (which is good), our immune system can effectively combat these viruses in future encounters and prevent infections.
So what should we do with our new words? We can use them to express our thoughts about people who haven’t yet seen the value of vaccines. Hopefully, these new words will lead to constructive dialogues rather than attacks. Besides being incredibly valuable, words are among the most vicious weapons we have and we must find ways to use them responsibly.
If you share my jealousy of Peter Capaldi and his new guise as the Doctor, then read on to discover how you could become the next Time Lord with a fondness for Earth. However, be warned: you can’t just pick up Matt Smith’s bow-tie from the floor, don Tom Baker’s scarf, and expect to save planet Earth every Saturday at peak viewing time. You’re going to need training. This is where Oxford’s online products can help you. Think of us as your very own Companion guiding you through the dimensions of time, only with a bit more sass. So jump aboard (yes it’s bigger on the inside), press that button over there, pull that lever thingy, and let’s journey through the five things you need to know to become the Doctor.
(1) Regeneration
Being called two-faced may not initially appeal to you. How about twelve-faced? No wait, don’t leave, come back! Part of the appeal of the Doctor is his ability to regenerate and assume many faces. Perhaps the most striking example of regeneration we have on our planet is the Hydra, a freshwater polyp that is able to completely re-grow a severed head. Even more striking is its ability to grow more than one head if a small incision is made on its body. I don’t think it’s likely the BBC will commission a Doctor with two heads though, so best not to go down that route. Another example of an animal capable of regeneration is Porifera, the sponges commonly seen on underwater rocks. These sponge-like creatures are able to regenerate an entire limb, which is certainly impressive, but they are not quite as attractive as the David Tennants or Matt Smiths of this world.
(2) Fighting aliens
Although alien invasion narratives only crossed over to mainstream fiction after World War II, the Doctor has been fighting off alien invasions since the Dalek War and the subsequent destruction of Gallifrey. Alien invasion narratives are tied together by one salient issue: conquer or be conquered. Whether you are battling Weeping Angels or Cybermen, you must first make sure what you are battling is indeed an alien. Yes, that strange-smelling lady you meet every day at the bus stop may appear to be from another dimension, but it’s always better to be sure before you whip out your sonic screwdriver.
(3) Visiting unknown galaxies
The Hubble Ultra Deep Field image captures a patch of sky that represents one thirteen-millionth of the area of the whole sky we see from Earth, and this tiny patch of the Universe contains over 10,000 galaxies. One thirteen-millionth of the sky is equivalent to the area covered by a grain of sand held at arm’s length against the sky. When we look at a galaxy ten billion light years away, we are actually only seeing it by the light that left it ten billion years ago. Therefore, telescopes are akin to time machines.
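A rough back-of-the-envelope extrapolation (my own, and it assumes galaxies are spread more or less uniformly across the sky, which the article does not claim) suggests what that grain-of-sand patch implies for the whole sky:

\[
N_{\text{galaxies}} \approx 10{,}000 \times 13{,}000{,}000 = 1.3 \times 10^{11},
\]

that is, on the order of a hundred billion galaxies, which is of the same order as astronomers’ estimates for the observable Universe.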
The sheer vastness and mystery of the universe has baffled us for centuries. Doctor Who acts as a gatekeeper to the unknown, helping us imagine fantastical creatures such as the Daleks, all from the comfort of our living rooms.
(4) Operating the T.A.R.D.I.S.
The majority of time-travel narratives avoid the use of a physical time-machine. However, the Tardis, a blue police telephone box, journeys through time dimensions and is as important to the plot of Doctor Who as upgrades are to Cybermen. Although it looks like a plain old police telephone box, it has been known to withstand meteorite bombardment, shield itself from laser gun fire and traverse the time vortex all in one episode. The Tardis’s most striking characteristic, that it is “much bigger on the inside”, is explained by the Fourth Doctor, Tom Baker, by using the analogy of the tesseract.
(5) Looking good
It’s all very well saving the Universe every week but what use is that without a signature look? Tom Baker had the scarf, Peter Davison had the pin-stripes, John Hurt even had the brooding frown, so what will your dress-sense say about you? Perhaps you could be the Doctor with a cravat or the time-traveller with a toupee? Whatever your choice, I’m sure you’ll pull it off, you handsome devil you.
Don’t forget a good sense of humour to complement your dashing visage. When Doctor Who was created by Donald Wilson and C.E. Webber in November 1963, the target audience of the show was eight-to-thirteen-year-olds watching as part of a family group on Saturday afternoons. In 2014, it has a worldwide general audience of all ages, claiming over 77 million viewers in the UK, Australia, and the United States. This is largely due to the Doctor’s quick quips and mix of adult and childish humour.
You’ve done it! You’ve conquered the Cybermen, exterminated the Daleks, and saved Earth (we’re eternally grateful, of course). Why not take the Tardis for another spin and adventure through more of Oxford’s online products?
Image credit: Doctor Who poster, by Doctor Who Spoilers. CC-BY-SA-2.0 via Flickr.
Egyptian mummies continue to fascinate us due to the remarkable insights they provide into ancient civilizations. Flinders Petrie, holder of the first UK chair in Egyptology, did not have the luxury of X-ray techniques in his era of archaeological analysis in the late nineteenth century. However, twentieth-century Egyptologists have benefited from Roentgen’s legacy. Sir Grafton Elliot Smith, along with Howard Carter, did early work on plain X-ray analysis of mummies when they X-rayed the mummy of Tuthmosis IV in 1904. Numerous X-ray analyses were subsequently performed on mummies in the Cairo Museum using portable X-ray equipment.
Since then, many studies have been done worldwide, especially with the development of more sophisticated imaging techniques such as CT scanning, invented by Hounsfield in the UK in the 1970s. With this, it became easier to visualize the interiors of mummies, revealing the mysteries hidden beneath their linen-wrapped bodies and elaborate face masks, which had perplexed researchers for centuries. Harwood-Nash performed one of the earliest head scans of a mummy in Canada in 1977, and Isherwood’s team, along with Professor David, performed some of the earliest scans of mummies in Manchester.
A fascinating new summer exhibition has recently opened at the British Museum, presenting eight mummies, all from different periods and Egyptian dynasties, that have been studied with the latest dual-energy CT scanners. These scanners acquire 3D volumetric images that reveal the internal secrets of the mummies. Mummies of babies and young children are included, as well as adults. There have already been some interesting discoveries: for example, dental abscesses were prevalent, as were calcified plaques in peripheral arteries, suggesting vascular disease was present in a population that lived over 3,000 years ago. More detailed analysis of bones, including the pelvis, has been made possible by the scanned images, enabling more accurate estimation of age at death.
Although embalmers took their craft seriously, mistakes did occur, as evidenced by one of the mummy exhibits: Padiamenet’s head became detached from his body during the embalming process and was subsequently stabilized with metal rods. Padiamenet was a temple doorkeeper who died around 700 BC. Mummies had their brains removed, but the heart was preserved, as it was considered the seat of the soul. Internal organs such as the stomach and liver were often removed; bodies were also buried with a range of amulets.
The exhibit provides a fascinating introduction to mummies and early Egyptian life more than 3,000 years ago and includes new insights gleaned from cutting-edge twenty-first-century imaging technology.
At a time when the press and broadcast media are overwhelmed by accounts and images of humankind’s violence and stupidity, the fact that our race survives purely as a consequence of Nature’s consent may seem irrelevant. Indeed, if we think about this at all, it might be to conclude that our world would likely be a nicer place all round should a geophysical cull, in some form or other, consign humanity to evolution’s dustbin, along with the dinosaurs and countless other life forms that are no longer with us. While toying with such a drastic notion, however, we should be careful what we wish for, even during these difficult times when it is easy to question whether our race deserves to persist. This is partly because, alongside its sometimes unimaginable cruelty, humankind also has an enormous capacity for good, but mainly because Nature could – at this very moment – be cooking up something nasty that, if it doesn’t wipe us all out, will certainly give us a very unpleasant shock.
After all, nature’s shock troops are still out there. Economy-busting megaquakes are biding their time beneath Tokyo and Los Angeles; volcanoes are swelling to bursting point across the globe; and killer asteroids are searching for a likely planet upon which to end their lives in spectacular fashion. Meanwhile, climate change grinds on remorselessly, spawning biblical floods, increasingly powerful storms, and baking heatwave and drought conditions. Nonetheless, it often seems – in our security-obsessed, tech-driven society – as if the only horrors we are likely to face in the future are manufactured by us: nuclear terrorism, the march of the robots, out-of-control nanotechnology, high-energy physics experiments gone wrong. It is almost as if the future is nature-free, wholly and completely within humankind’s thrall. The truth is, however, that these are threats that need not materialise, in the sense that whether or not we allow their realisation is entirely within our hands.
The same does not apply, however, to the worst that nature can throw at us. We can’t predict earthquakes and may never be able to, and there is nothing at all we can do if we spot a 10-km diameter comet heading our way. As for encouraging an impending super-eruption to ‘let off steam’ by drilling a borehole, this would – as I have said before – have the same effect as sticking a drawing pin in an elephant’s bum: none at all.
The bottom line is that while the human race may find itself, at some point in the future, in dire straits as a consequence of its own arrogance, aggression, or plain stupidity, this is by no means guaranteed. On the contrary, we can be 100 percent certain that at some point we will need to face the awful consequences of an exploding super-volcano or of a chunk of rock, missed by our telescopes, barreling into our world. Just because such events are very rare does not mean that we should not start thinking now about how we might prepare for them and cope with the aftermath. It does seem, however, that while it is OK to speculate at length upon the theoretical threat presented by robots and artificial intelligence, the global economic impact of the imminent quake beneath Tokyo, to cite one example of forthcoming catastrophe, is regarded as small beer.
Our apparent obsession with technological threats is also doing us no favours in relation to how we view the coming climate cataclysm. While underpinned by humankind’s polluting activities, nature’s disruptive and detrimental response is driven largely by the atmosphere and the oceans, through increasingly wild weather, remorselessly rising temperatures, and climbing sea levels. With no sign of greenhouse gas emissions falling, and with concentrations of carbon dioxide in the atmosphere having crossed the emblematic 400 parts per million mark in 2013, there seems little chance now of avoiding a 2°C rise in global average temperature that will bring dangerous, all-pervasive climate change to us all.
The hope is that we come to our collective senses and stop things getting much worse. But what if we don’t? A paper published last year in the Royal Society’s Philosophical Transactions, written by the lauded NASA climate scientist James Hansen and colleagues, paints a terrifying picture of what our world would be like if we burn all available fossil fuels. The global average temperature, currently a little under 15°C, would more than double to around 30°C, transforming most of our planet into a wasteland too hot for humans to inhabit. Even if not an extinction-level event as such, it would likely leave few of us to scrabble out some sort of existence in this hothouse hell.
So, by all means carry on worrying about what happens if terrorists get hold of ‘the bomb’ or if robots turn on their masters, but always be aware that the only future global threats we can be certain of are those in nature’s armoury. Most of all, consider the fact that in relation to climate change, the greatest danger our world has ever faced, it is not terrorists or robots – or even experimental physicists – that are to blame, but ultimately, every one of us.
One day in 1668, the English diarist Samuel Pepys went shopping for a book to give his young French-speaking wife. He saw a book he thought she might enjoy, L’École des filles, or The School of Girls, “but when I came to look into it, it is the most bawdy, lewd book that ever I saw,” he wrote, “so that I was ashamed of reading in it.” Not so ashamed, however, that he didn’t return to buy it for himself three weeks later — but “in plain binding…because I resolve, as soon as I have read it, to burn it, that it may not stand in the list of books, nor among them, to disgrace them if it should be found.” The next night he stole off to his room to read it, judging it to be “a lewd book, but what doth me no wrong to read for information sake (but it did hazer my prick para stand all the while, and una vez to decharger); and after I had done it, I burned it, that it might not be among my books to my shame.” Pepys’s coy detours into mock-Spanish or Franglais fail to conceal the orgasmic effect the lewd book had on him, and his is the earliest and most candid report we have of one reader’s bodily response to the reading of pornography. But what is “pornography”? What is its history? Was there even such a thing as “pornography” before the word was coined in the nineteenth century?
The announcement, in early 2013, of the establishment of a new academic journal to be called Porn Studies led to a minor flurry of media reports and set off, predictably, responses ranging from interest to outrage by way of derision. One group, self-titled Stop Porn Culture, circulated a petition denouncing the project, echoing the “porn wars” of the 1970s and 80s which pitted anti-censorship against anti-pornography activists. Those years saw an eruption of heated, if not always illuminating, debate over the meanings and effects of sexual representations; and if the anti-censorship side may seem to have “won” the war, in that sexual representations seem to be inescapable in the age of the internet and social media, the anti-pornography credo that such representations cause cultural, psychological, and physical harm is now so widespread as almost to be taken for granted in the mainstream press.
The brave new world of “sexting” and content-sharing apps may have fueled anxieties about the apparent sexualization of popular culture, and especially of young people, but these anxieties are anything but new; they may, in fact, be as old as culture itself. At the very least, they go back to a period when new print technologies and rising literacy rates first put sexual representations within reach of a wide popular audience in England and elsewhere in Western Europe: the late seventeenth and early eighteenth centuries. Most readers did not leave diaries, but Pepys was probably typical in the mixture of shame and excitement he felt when erotic works like L’École des filles began to appear in London bookshops from the 1680s on. Yet as long as such works could only be found in the original French or Italian, British censors took little interest in them, for their readership was limited to a linguistic elite. It was only when translation made such texts available to less privileged readers — women, tradesmen, apprentices, servants — that the agents of the law came to view them as a threat to what the Attorney General, Sir Philip Yorke, in an important 1728 obscenity trial, called the “public order which is morality.” The pornographic or obscene work is one whose sexual representations violate cultural taboos and norms of decency. In doing so it may lend itself to social and political critique, as happened in France in the 1780s and 90s, when obscene texts were used to critique the corruptions of the ancien régime; but the pornographic can also be used as a vehicle of debasement and violence, notably against women — which is one historical reality behind the US porn wars of the 1970s.
Pornography’s critics in the late twentieth or early twenty-first centuries have had less interest in the written word than in visual media; but recurrent campaigns to ban books by such authors as Judy Blume which aim to engage candidly with younger readers on sexual concerns suggest that literature can still be a battleground, as it was in the seventeenth and eighteenth centuries. Take, for example, the words of the British attorney general Dudley Ryder in the 1749 obscenity trial of Thomas Cannon’s Ancient and Modern Pederasty Investigated and Exemplify’d, a paean to male same-sex desire masquerading as an attack. Cannon, Ryder declared, aimed to “Debauch Poison and Infect the Minds of all the Youth of this Kingdom and to Raise Excite and Create in the Minds of all the said Youth most Shocking and Abominable Ideas and Sentiments”; and in so doing, Ryder contends, Cannon aimed to draw readers “into the Love and Practice of that unnatural detestable and odious crime of Sodomy.” Two and a half centuries ago, Ryder set the terms of our ongoing porn wars. Denouncing the recent profusion of sexual representations, he insists that such works create dangerous new desires and inspire their readers to commit sexual crimes of their own.
Then as now, attitudes towards sexuality and sexual representations were almost unbridgeably polarized. A surge in the popularity of pornographic texts was countered by increasingly severe campaigns to suppress them. Ironically, however, those very attempts at suppression could actually bring the offending work to a wider audience by exciting readers’ curiosity. No copies of Cannon’s “shocking and abominable” work survive in their original form; but the text has been preserved for us to read in the indictment that Ryder prepared for the trial against it. Eighty years earlier, after his encounter with L’École des filles, Pepys guiltily burned the book, but at the same time immortalized the sensual, shameful experience of reading it. Of such contradictions is the long history of porn wars made.
Meet the woman behind Grove Music Online, Anna-Lise Santella. We snagged a bit of Anna-Lise’s time to sit down with her and find out more about her own musical passions and research.
Do you play any musical instruments? Which ones?
My main instrument is violin, which I’ve played since I was eight. I play both classical and Irish fiddle and am currently trying to learn bluegrass. In a previous life I played a lot of pit band for musical theater. I’ve also worked as a singer and choral conductor. These days, though, you’re more likely to find a mandolin or guitar in my hands.
Do you specialize in any particular area or genre of music?
My research interests are pretty broad, which is why I enjoy working in reference so much. Currently I’m working on a history of women’s symphony orchestras in the United States between 1871 and 1945. They were a key route for women seeking admission into formerly all-male orchestras like the Chicago Symphony. After that, I’m hoping to work on a history of the Three Arts Clubs, a network of residential clubs that housed women artists in cities in the US and abroad. The clubs allowed female performers to safely tour or study away from their families by giving them secure places to live while on the road, places to rehearse and practice, and a community of like-minded people to support them. In general, I’m interested in the ways public institutions have affected and responded to women as performers.
What artist do you have on repeat at the moment?
I tend to have my listening on shuffle. I like not being sure what’s coming next. That said, I’ve been listening to the latest album by Tune-Yards (a.k.a. Merrill Garbus) an awful lot lately. Neko Case with the New Pornographers and guitarist/songwriter/storyteller extraordinaire Jim White are also in regular rotation.
What was the last concert/gig you went to?
I’m lucky to live not far from the bandshell in Prospect Park and I try to catch as many of the summer concerts there as I can. The last one I attended was Neutral Milk Hotel, although I didn’t stay for the whole thing. I’m looking forward to the upcoming Nickel Creek concert. I love watching Chris Thile play, although he makes me feel totally inadequate as a mandolinist.
How do you listen to most of the music you listen to? On your phone/mp3 player/computer/radio/car radio/CDs?
Mostly on headphones. I’m constantly plugged in, which makes me not a very good citizen, I think. I’m trying to get better about spending some time just listening to the city. But there’s something about the delivery system of headphones to ears that I like – music transmitted straight to your head makes you feel like your life has a soundtrack. I especially like listening on the subway. I’ll often be playing pieces I’m trying to learn on violin or guitar and trying to work out fingerings, which I’m pretty sure makes me look like an insane person. Fortunately insane people are a dime a dozen on the subway.
Do you find that listening to music helps you concentrate while you work, or do you prefer silence?
I like listening while I work, but it has to be music I find fairly innocuous, or I’ll start thinking about it and analyzing it and get distracted from what I’m trying to do. Something beat driven with no vocals is best. My usual office soundtrack is a Pandora station of EDM.
Has there been any recent music research or scholarship on a topic that has caught your eye or that you’ve found particularly innovative?
In general I’m attracted to interdisciplinary work, as I like what happens when ideologies from one field get applied to subject matter of another – it tends to make you reevaluate your methods, to shake you out of the routine of your thinking. Right now I’ve become really interested in the way in which we categorize music vs. noise and am reading everything I can on the subject from all kinds of perspectives – music cognition, acoustics, cultural theory. It’s where neuroscience, anthropology, philosophy and musicology all come together, which, come to think of it, sounds like a pretty dangerous intersection. Currently I’m in the middle of The Oxford Handbook of Sound Studies (2012) edited by Trevor Pinch and Karin Bijsterveld. At the same time, I’m rereading Jacques Attali’s landmark work Noise: The Political Economy of Music (1977). We have a small music/neuroscience book group made up of several editors who work in music and psychology who have an interest in this area. We’ll be discussing the Attali next month.
Who are a few of your favorite music critics/writers?
There are so many – I’m a bit of a criticism junkie. I work a lot with period music journalism in my own research and I love reading music criticism from the early 20th century. It’s so beautifully candid — at times sexy, cruel, completely inappropriate — in a way that’s rare in contemporary criticism. A lot of the reviews were unsigned or pseudonymous, so I’m not sure I have a favorite I can name. There’s a great book by Mark N. Grant on the history of American music criticism called Maestros of the Pen that I highly recommend as an introduction. For rock criticism, Ellen Willis’s columns from the Village Voice are still the benchmark for me, I think. Of people writing currently, I like Mark Gresham (classical) and Sasha Frere-Jones (pop). And I like to argue with Alex Ross and John von Rhein.
I also like reading more literary approaches to musical writing. Geoff Dyer’s But Beautiful is a poetic, semi-fictional look at jazz, mixing stories about legendary musicians like Duke Ellington and Lester Young with an analytical look at the music. And some of my favorite writing about music is found in fiction. Three of my favorite novels use music to tell the story. Richard Powers’ The Time of Our Singing uses Marian Anderson’s 1939 concert at the Lincoln Memorial as the focal point of a story that alternates between a musical mixed-race family and the story of the Civil Rights movement itself. In The Fortress of Solitude, Jonathan Lethem writes beautifully about the music of the 1970s, in a narrative that mediates between nearly journalistic detail of Brooklyn and magical realism. And Kathryn Davis’s The Girl Who Trod on a Loaf contains some of the best descriptions of compositional process that I’ve come across in fiction. It’s a challenge to evoke sound in prose – it’s an act of translation – and I admire those who can do it well.
About half a century ago, an MIT professor set up a summer project for students to write a computer programme that could “see”, or interpret, objects in photographs. Why not! After all, seeing must be some smart manipulation of image data that can be implemented in an algorithm, and so should be good practice for smart students. Decades have passed; we still have not fully reached the aim of that summer student project, and a worldwide computer vision community has been born.
We think of being “smart” as including the intellectual ability to do advanced mathematics, complex computer programming, and similar feats. It was shocking to realise that this is often insufficient for recognising objects such as those in the following image.
Image credit: Fig 5.51 from Li Zhaoping, Understanding Vision: Theory, Models, and Data.
Can you devise a computer code to “see” the apple from the black-and-white pixel values? A pre-school child could of course see the apple easily with her brain (using her eyes as cameras), despite lacking advanced maths or programming skills. It turns out that one of the most difficult issues is a chicken-and-egg problem: to see the apple it helps to first pick out the image pixels for this apple, and to pick out these pixels it helps to see the apple first.
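To get a feel for why this is hard, here is a minimal sketch in Python (my own toy construction, not from the book; the synthetic image, the threshold, and all the numbers are assumptions). It applies the simplest pixel rule, “apple pixels are the bright ones”, to an image whose apple and background intensities overlap:

```python
import numpy as np

# Toy image: an "apple" (a disc of brighter pixels) on a cluttered
# background whose intensity range overlaps the apple's.
rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(64, 64))                 # background clutter
yy, xx = np.mgrid[0:64, 0:64]
apple = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2          # ground-truth apple pixels
img[apple] = rng.uniform(0.5, 1.0, size=int(apple.sum()))  # overlapping intensities

# Naive rule: call every bright pixel "apple".
segmented = img > 0.6

hit = (segmented & apple).sum() / apple.sum()              # apple pixels found
false_alarm = (segmented & ~apple).sum() / (~apple).sum()  # background wrongly kept
print(f"apple pixels found: {hit:.0%}; background mislabelled: {false_alarm:.0%}")
```

No threshold cleanly separates the two regions: the rule both misses apple pixels and sweeps in background clutter. To choose a better rule you would already need to know which pixels belong to the apple — which is exactly the chicken-and-egg problem described above.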
A more recent shocking discovery about vision in our brain is that we are blind to almost everything in front of us. “What? I see things crystal-clearly in front of my eyes!” you may protest. However, can you quickly tell the difference between the following two images?
Image credit: Alyssa Dayan, 2013. Fig 1.6 from Li Zhaoping, Understanding Vision: Theory, Models, and Data. Used with permission.
It takes most people more than several seconds to see the (big) difference – but why so long? Our brain gives us the impression that we “have seen everything clearly”, and this impression is consistent with our ignorance of what we do not see. This makes us blind to our own blindness! How we survive in our world given our near-blindness is a long, and as yet incomplete, story, with a cast including powerful mechanisms of attention.
Being “smart” also includes the ability to use our conscious brain to reason and make logical deductions, using familiar rules and past experience. But what if most brain mechanisms for vision are subconscious and do not follow the rules or conform to the experience known to our conscious parts of the brain? Indeed, in humans, most of the brain areas responsible for visual processing are among the furthest from the frontal brain areas most responsible for our conscious thoughts and reasoning. No wonder the two examples above are so counter-intuitive! This explains why the most obvious near-blindness was discovered only a decade ago despite centuries of scientific investigation of vision.
Another counter-intuitive finding, discovered only six years ago, is that our attention or gaze can be attracted by something we are blind to. In our experience, only objects that appear highly distinctive from their surroundings attract our gaze automatically. For example, a lone red flower in a field of green leaves does so, unless we are colour-blind. Our impression that gaze capture occurs only for highly distinctive features turns out to be wrong. In the following figure, a viewer perceives an image which is a superposition of two images, one shown to each of the two eyes using the equivalent of spectacles for watching 3D movies.
Image credit: Fig 5.9 from Li Zhaoping, Understanding Vision: Theory, Models, and Data.
To the viewer, it is as if the perceived image (containing only the bars but not the arrows) is shown simultaneously to both eyes. The uniquely tilted bar appears most distinctive from the background. In contrast, the ocular singleton appears identical to all the other background bars, i.e., we are blind to its distinctiveness. Nevertheless, the ocular singleton often attracts attention more strongly than the orientation singleton (so that the first gaze shift is more frequently directed to the ocular rather than the orientation singleton), even when the viewer is told to find the latter as soon as possible and ignore all distractions. It is as if the ocular singleton were uniquely coloured and distracting, like the lone red flower in a green field, except that we are “colour-blind” to it. Many vision scientists find this hard to believe without experiencing it themselves.
Are these counter-intuitive visual phenomena too alien to our “smart”, intuitive, and conscious brain to comprehend? In studying vision, are we like Earthlings trying to comprehend Martians? Landing on Mars rather than glimpsing it from afar can help the Earthlings. However, are the conscious parts of our brain too “smart” and too partial to “dumb” down suitably to the less conscious parts of our brain? Are we ill-equipped to understand vision because we are such “smart” visual animals possessing too many conscious pre-conceptions about vision? (At least we will be impartial in studying, say, electric sensing in electric fish.) Being aware of our difficulties is the first step to overcoming them – then we can truly be smart rather than smarting at our incompetence.
The anniversaries of conflicts seem to be more likely to capture the public’s attention than any other significant commemorations. When I first began researching the nurses of the First World War in 2004, I was vaguely aware of an increase in media attention: now, ten years on, as my third book leaves the press, I find myself astonished by the level of interest in the subject. The Centenary of the First World War is becoming a significant cultural event. This time, though, much of the attention is focussed on the role of women, and, in particular, of nurses. The recent publication of several nurses’ diaries has increased the public’s fascination for the subject. A number of television programmes have already been aired. Most of these trace journeys of discovery by celebrity presenters, and are, therefore, somewhat quirky – if not rather random – in their content. The BBC’s project, World War One at Home, has aired numerous stories. I have been involved in some of these – as I have, also, in local projects, such as the impressive recreation of the ‘Stamford Military Hospital’ at Dunham Massey Hall, Cheshire. Many local radio stories have brought to light the work of individuals whose extraordinary experiences and contributions would otherwise have remained hidden – women such as Kate Luard, sister-in-charge of a casualty clearing station during the Battle of Passchendaele; Margaret Maule, who nursed German prisoners-of-war in Dartford; and Elsie Knocker, a fully-trained nurse who established an aid post on the Belgian front lines. One radio story is particularly poignant: that of Clementina Addison, a British nurse, who served with the French Flag Nursing Corps – a unit of fully trained professionals working in French military field hospitals. Clementina cared for hundreds of wounded French ‘poilus’, and died of an unnamed infectious disease as a direct result of her work.
The BBC drama The Crimson Field was just one of a number of television programmes designed to capture the interest of viewers. I was one of the historical advisers to the series. I came ‘on board’ quite late in the process, and discovered just how difficult it is to transform real, historical events into engaging drama. Most of my work took place in the safety of my own office, where I commented on scripts. But I did spend one highly memorable – and pretty terrifying – week in a field in Wiltshire working with the team producing the first two episodes. Providing ‘authentic background detail’ while, at the same time, creating atmosphere and constructing characters who are both credible and interesting is fraught with difficulty for producers and directors. Since its release this spring, The Crimson Field has become quite controversial: while many people appear to have loved it, others complained vociferously about its lack of authentic detail. Of course, it is hard to reconcile the realities of history with the demands of popular drama.
I give talks about the nurses of the First World War, and often people come up to me to ask about The Crimson Field. Surprisingly often, their one objection is to the fact that the hospital and the nurses were ‘just too clean’. This makes me smile. In these days of contract-cleaners and hospital-acquired infection, we have forgotten the meticulous attention to detail the nurses of the past gave to the cleanliness of their wards. The depiction of cleanliness in the drama was, in fact, one of its authentic details.
One of the things I remember most clearly about my work on set with The Crimson Field is the remarkable commitment of the director, David Evans, and the leading actor, Hermione Norris, in recreating a scene in which Matron Grace Carter enters a ward that is in chaos because a patient has become psychotic and is attacking a padre. The matron takes a sedative injection from a nurse, checks the medication, and administers the drug with impeccable professionalism – and this all happens in the space of about three minutes. I remember the intensity of the discussions about how this scene would work, and how many times it was ‘shot’ on the day of filming. But I also remember with some chagrin how, the night after filming, I realised that the injection technique had not been performed entirely correctly. I had to tell David Evans that I had watched the whole sequence six times without noticing that a mistake had been made. Some historical adviser! The entire scene had to be re-filmed. The end result, though, is an impressive piece of hospital drama. Norris looks as though she has been giving intramuscular injections all her life. I shall never forget the professionalism of the director and actors on that set – nor their patience with the absent-minded professor who was their adviser for the week.
In a centenary year, it can be difficult to distinguish between myths and realities. We all want to know the ‘facts’ or the ‘truths’ about the First World War, but we also want to hear good stories – and it is all the better if those elide facts and enhance the drama of events – because, as human beings, we want to be entertained as well. The important thing, for me, is to fully realise what it is we are commemorating: the significance of the contributions and the enormity of the sacrifices made by our ancestors. Being honest to their memories is the only thing that really matters – the thing that makes all centenary commemoration projects worthwhile.
Image credit: Ministry of Information First World War Collection, from Imperial War Museum Archive. IWM Non Commercial Licence via Wikimedia Commons.
As can be guessed from the title above, my subject today is the derivation of the word road. The history of road has some interest not only because a word that looks so easy to analyze has an involved and, one can say, unsolved etymology, but also because it shows how the best scholars walk in circles, return to the same conclusions, find drawbacks in what were believed to be solid arguments, and end up saying: “Origin unknown (uncertain).” The public should know about the effort it takes to recover the past of the words we use. I am acutely aware of the knots language historians have to untie and of most people’s ignorance of the labor this task entails. In a grant application submitted to a central agency ten or so years ago, I promised to elucidate (rather than solve!) the etymology of several hundred English words. One of the referees divided the requested number of dollars by the number of words and wrote an indignant comment about the burden I expected taxpayers to carry (in financial matters, suffering taxpayers are always invoked: they are the equivalent of the women and children in descriptions of war; those who don’t pay taxes, like the men, do not really matter). Needless to say, my application was rejected, the taxpayers escaped with a whole skin, and the light remained under the bushel I keep in my office. My critic probably had something to do with linguistics, for otherwise he would not have been invited to the panel. In light of that information I am happy to report that today’s post will cost taxpayers absolutely nothing.
According to the original idea, road developed from Old Engl. rad “riding.” Its vowel was long, that is, similar to a in Modern Engl. spa. Rad belonged with ridan “to ride,” whose long i (a vowel like ee in Modern Engl. fee) alternated with long a by a rule. In the past, roads existed for riding on horseback, and people distinguished between “a road” and “a footpath.” But this seemingly self-evident etymology has to overcome a formidable obstacle: in Standard English, the noun road acquired its present-day meaning late (one can say very late). It was new or perhaps unknown even to Shakespeare. A Shakespeare glossary lists the following senses of road in his plays: “journey on horseback,” “hostile incursion, raid,” “roadstead,” and “highway” (“roadstead,” that is, “harbor,” needn’t surprise us, for ships were said to ride at anchor.) “Highway” appears as the last of the four senses because it is the rarest, but, as we will see, there is a string attached even to such a cautious statement. Raid is the Scots version of road (“long a,” mentioned above, developed differently in the south and the north; hence the doublets). In sum, road used to mean “raid” and “riding.” When English speakers needed to refer to a road, they said way, as, for example, in the Authorized Version of the Bible.
No disquisition, however learned, will answer in a fully convincing manner why, about 250 years ago, road partly replaced way. But there have been attempts to overthrow even the basic statement. Perhaps, it was proposed, road does not go back to Old Engl. rad, with its long vowel! This heretical suggestion was first put forward in 1888 by the Oxford Professor of Anglo-Saxon John Earle. In his opinion, the story began with rod “clearing.” The word has not made it into the Standard, but we still rid our houses of vermin and get rid of old junk. Rid is related to Old Engl. rod.
Earle’s command of Old English was excellent, but he did not care much about phonetic niceties. In his opinion, if meanings show that certain words are allied, phoneticians should explain why something has gone wrong in their domain rather than dismissing an otherwise persuasive conclusion as invalid. This type of reasoning cut no ice with the etymologists of the last quarter of the nineteenth century. Nor does it thrill modern researchers, even though at all times there have been serious scholars who refused to bow to the tyranny of so-called phonetic laws. Such mavericks face a great difficulty, for, if we allow ourselves to be guided by similarity of meaning in disregard of established sound correspondences, we may return to the fantasies of medieval etymology. Earle posited a long o in rod, not because he had proof of its length but because he needed it to be long. A. L. Mayhew, whom I mentioned in the post on qualm, and Skeat dismissed the rod-road etymology as not worthy of discussion. Surprisingly, it was revived ten years ago (without reference to Earle), now buttressed by phonetic arguments. It appears that rod with a long vowel did exist, but, more probably, its length was due to a later process. In any case, Earle would have been thrilled. I have said more than once that etymology is a myth of eternal return.
Whatever the origin of road, we still wonder why its modern sense emerged so late. In 1934, this question was the subject of a lively exchange in the pages of The Times Literary Supplement. In response to that discussion the German scholar Max Deutschbein showed that Shakespeare never used road “way” without making it clear what he meant. Once he used the compound roadway. Elsewhere some road is followed by as common as the way between…. We read about the even road of a blank verse, easy roads (for riding), and a thievish living on the common road. The word way helps us understand what is meant in You know the very road (= “journey”: OED) into his kindness, / and cannot lose your way (Coriolanus). Deutschbein concluded that Shakespeare hardly knew our sense of road.
This sense had become universally understood only by the sixteen-seventies (Shakespeare died in 1616), and Milton (1608-1674) used it “unapologetically.” So how did it arise? Extraneous influences—Scottish and Irish—have often been considered; the arguments for their role are thin. The anonymous initiator of the discussion in The Times Literary Supplement (I am sure the author’s name is known) spun a wonderful yarn about how Shakespeare met a group of Scotsmen, learned something about the Scots, and picked up a new word. The story is clever but not particularly trustworthy. The Irish connection is even less likely. Deutschbein noted that, according to the OED, the compound roadway reached the peak of its popularity in the seventeenth century and disappeared once road established itself. Is it possible that this is where we should look for the solution of the riddle? Etymological riddles are always hard, while solutions are usually simple, and the simpler they are, the higher the chance that they are correct.
No citations for the noun roadway antedating 1600 have been found. We don’t know how early in the sixteenth century it arose, but in this case an exact date is of little consequence. The OED suggests that the earliest meaning of roadway was “riding way,” and so it must have been. At some time, speakers probably reinterpreted this noun as a tautological compound (which it was not), a word like pathway (apparently a sixteenth-century coinage) and many others like it. Words with this meaning are prone to be made up of two near-synonyms (way-way, road-road, path-path); see my old post on such compounds. Roadway could have continued its existence for centuries, but at some point the second element was dropped as superfluous. For a relatively short period road coexisted with way as its equal partner, but then they divided their spheres of influence: road began to refer to physical reality and way to more abstract situations. We speak of impassable roads and road maps, as opposed to the way of all flesh and ways and means committees. Extraneous influences were not needed for such a process to happen.
I often complain that the scholarly literature on some words is meager. By contrast, the literature on road is extensive. A long paper devoted to it was published as recently as a year ago, whence an extremely detailed etymological introduction to the entry road in the OED online. Even if I failed to discern the complexity of the problem and untie or cut the knot, my intentions were good.
The discovery of the periodic system of the elements and the associated periodic table is generally attributed to the great Russian chemist Dmitri Mendeleev. Many authors have indulged in the game of debating just how much credit should be attributed to Mendeleev and how much to the other discoverers of this unifying theme of modern chemistry.
In fact the discovery of the periodic table represents one of a multitude of multiple discoveries which most accounts of science try to explain away. Multiple discovery is actually the rule rather than the exception and it is one of the many hints that point to the interconnected, almost organic nature of how science really develops. Many, including myself, have explored this theme by considering examples from the history of atomic physics and chemistry.
But today I am writing about a subaltern who discovered the periodic table well before Mendeleev and whose most significant contribution was published on 20 August 1864, or precisely 150 years ago. John Reina Newlands was an English chemist who never held a university position and yet went further than any of his contemporary professional chemists in discovering the all-important repeating pattern among the elements which he described in a number of articles.
Newlands came from Southwark, a suburb of London. After studying at the Royal College of Chemistry he became the chief chemist at the Royal Agricultural Society of Great Britain. In 1860, when the leading European chemists were attending the Karlsruhe conference to discuss such concepts as atoms, molecules, and atomic weights, Newlands was busy volunteering to fight in the Italian revolutionary war under Garibaldi. This is explained by the fact that his mother was of Italian descent, which also explains his having the middle name Reina. In any case he survived the fighting and, on his return to London to become a sugar chemist, set about thinking about the elements.
In 1863 Newlands published a list of elements which he arranged into 11 groups. The elements within each of his groups had analogous properties and displayed weights that differed by eight units or some multiple of eight. But no table yet!
Nevertheless, he even predicted the existence of a new element, which he believed should have an atomic weight of 163 and should fall between iridium and rhodium. Unfortunately for Newlands, neither this element nor a few others he predicted ever materialized, but it does show that the prediction of elements from a system of elements is not something that only Mendeleev invented.
In the first of three articles of 1864, Newlands published his first periodic table (five years before Mendeleev's, incidentally). This arrangement benefited from the revised atomic weights that had been announced at the Karlsruhe conference he had missed, and it showed that many elements had weights differing by 16 units. But it contained only 12 elements, ranging from lithium, the lightest, to chlorine, the heaviest.
Then came another article, on 20 August 1864, with a slightly expanded range of elements, in which he dropped the use of atomic weights and replaced them with an ordinal number for each element. Historians and philosophers have amused themselves over the years by debating whether this represents an anticipation of the modern concept of atomic number, but that’s another story.
More importantly, Newlands now suggested that he had a system: a repeating, periodic pattern of elements, or a periodic law. Another innovation was Newlands’ willingness to reverse pairs of elements if their chemical properties demanded this change, as in the case of tellurium and iodine. Even though tellurium has a higher atomic weight than iodine, it must be placed before iodine so that each element falls into the appropriate column according to chemical similarities.
The following year, Newlands had the opportunity to present his findings in a lecture to the London Chemical Society, but the result was public ridicule. One member of the audience mockingly asked Newlands whether he had considered arranging the elements alphabetically, since this might have produced an even better chemical grouping of the elements. The society declined to publish Newlands’ article, although he was able to publish it in another journal.
In 1869 and 1870 two more prominent chemists who held university positions published more elaborate periodic systems: the German Julius Lothar Meyer and the Russian Dmitri Mendeleev. They essentially rediscovered what Newlands had found and made some improvements. Mendeleev in particular made a point of denying Newlands’ priority, claiming that Newlands had not regarded his discovery as representing a scientific law. These two chemists were awarded the lion’s share of the credit, and Newlands was reduced to arguing for his priority for several years afterwards. In the end he did gain some recognition when the Davy Medal, the equivalent of the Nobel Prize for chemistry at the time, which had already been jointly awarded to Lothar Meyer and Mendeleev, was finally accorded to Newlands in 1887, twenty-three years after his article of August 1864.
But there is a final word to be said on this subject. In 1862, two years before Newlands, a French geologist, Emile Béguyer de Chancourtois, had already published a periodic system that he arranged in a three-dimensional fashion on the surface of a metal cylinder. He called this the “telluric screw,” from tellus, the Latin word for the Earth, since he was a geologist and since he was classifying the elements of the earth.
Dmitri Mendeleev believed he was a great scientist and indeed he was. He was not actually recognized as such until his periodic table achieved worldwide diffusion and began to appear in textbooks of general chemistry and in other major publications. When Mendeleev died in February 1907, the periodic table was established well enough to stand on its own and perpetuate his name for upcoming generations of chemists.
The man died, but the myth was born.
Mendeleev's stature as a legendary figure grew with time, aided by his own well-organized promotion of his discovery. Well versed in foreign languages and driven by a sort of overwhelming desire to escape his tsar-dominated homeland, he traveled the length and breadth of Europe, attending many conferences in England, Germany, Italy, and central Europe, his only luggage seemingly his periodic table.
Mendeleev had succeeded in creating a new tool that chemists could use as a springboard to new and fascinating discoveries in the fields of theoretical, mineral, and general chemistry. But every coin has two faces, even the periodic table. On the one hand, it lighted the path to the discovery of still-missing elements; on the other, it led some unfortunate individuals into the fatal error of announcing the discovery of false or spurious new elements. Even Mendeleev, who considered himself the Newton of the chemical sciences, fell into this trap, announcing the discovery of imaginary elements that we now know to have been mere self-deception or illusion.
It is probably not well known that Mendeleev predicted the existence of a large number of elements, actually more than ten. These predictions were sometimes the result of lucky guesses (the famous cases of gallium, germanium, and scandium), and at other times they were erroneous. Historiography has kindly passed over the latter, forgetting about the long line of imaginary elements that Mendeleev proposed, among which were two with atomic weights lower than that of hydrogen: newtonium (atomic weight = 0.17) and coronium (atomic weight = 0.4). He also proposed the existence of six new elements between hydrogen and lithium, none of which could possibly exist.
Mendeleev represented a sort of tormented genius who believed in the universality of his creation and dreaded the possibility that it could be eclipsed by other discoveries. He did not live long enough to see the seed he had planted become a mighty tree. He fought with equally fierce indignation both the priority claims of others and the advent of new discoveries that appeared to menace his own.
In the end, his table was enduring enough to accommodate atomic number, isotopes, radioisotopes, the noble gases, the rare earth elements, the actinides, and the quantum mechanics that endowed it with a theoretical framework, allowing it to appear fresh and modern even after a scientific journey of 145 years.
Image: Nursery of new stars by NASA, Hui Yang University of Illinois. Public domain via Wikimedia Commons.
Martin Partington discussed a range of careers in his podcasts yesterday. Today, he tackles how new legal issues and developments in the professional environment have in turn changed organizational structures, rules and regulations, and aspects of legal education.
Co-operative Legal Services: An interview with Christina Blacklaws
Co-operative Legal Services was the first large organisation to be authorised by the Solicitors Regulation Authority as an Alternative Business Structure. In this podcast, Martin talks to Christina Blacklaws, Head of Policy of Co-operative Legal Services.
The role of chartered legal executives: An interview with Diane Burleigh
The Chartered Institute of Legal Executives sets standards for and regulates the activities of legal executives, who play an important role in the delivery of legal services. In this podcast Martin talks with Diane Burleigh, the Chief Executive of CILEX, about the challenges facing the legal profession and the opportunities provided for Legal Executives in the rapidly developing legal world.
Educating Judges and the Judicial College: An interview with Lady Justice Hallett
The Judicial College was created by bringing together separate arrangements that had previously existed for training judicial office-holders in the courts (the Judicial Studies Board) and Tribunals Service (through the Tribunals Judicial Training Group). In this podcast Martin talks to its Chairman, Lady Justice Hallett, about the reasons for the change and ways in which the College is developing new ideas about judicial education.
In 1985, Nobel Laureate Gary Becker observed that the gap in employment between mothers and fathers of young children had been shrinking since the 1960s in OECD countries. This led Becker to predict that such sex differences “may only be a legacy of powerful forces from the past and may disappear or be greatly attenuated in the near future.” In the 1990s, however, the shrinking of the mother-father gap stalled before Becker’s prediction could be realized. In today’s economy, how big is this mother-father employment gap, what forces underlie it, and are there any policies which could close it further?
A simple way to characterize the mother-father employment gap is to sum up how much more work is done by fathers than by mothers of children from ages 0 to 10. In 2010, fathers in the United States worked on average 3.1 more years than mothers over this range. In the United Kingdom, the comparable number is 3.8 years; in Canada, 2.9; and in Germany, 4.5. The figure below traces the evolution of this mother-father employment gap for all four countries.
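To make the arithmetic behind this summary number concrete, here is a minimal sketch of the calculation, using invented employment rates by child age (the numbers below are illustrative placeholders, not the data underlying the figures above):

```python
# Toy computation of the mother-father employment gap summed over child
# ages 0-10. All employment rates here are invented for illustration.
father_rate = {age: 0.92 for age in range(11)}   # share of fathers employed
mother_rate = {0: 0.55, 1: 0.62, 2: 0.66, 3: 0.68, 4: 0.70,
               5: 0.72, 6: 0.74, 7: 0.75, 8: 0.76, 9: 0.77, 10: 0.78}

# Each age contributes (father rate - mother rate) of a year of work;
# summing across ages 0-10 expresses the gap in years of work.
gap_years = sum(father_rate[a] - mother_rate[a] for a in range(11))
print(f"Mother-father employment gap: {gap_years:.1f} years")  # ~2.4 here
```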
Becker’s theorizing about the family can help us understand the development of this mother-father employment gap. Becker’s theoretical models suggest that if there are even slight differences between the productivity of mothers and fathers in the home vs. the workplace, spouses will tend to specialize completely in either in-home or out-of-home work. These kinds of productivity differences could arise because of cultural conditioning, as society pushes certain roles and expectations on women and men. Biology could also be important: women bear a heavier physical burden during pregnancy, and after the birth of a child women have an advantage in breastfeeding. It is possible that the initial impact of these unique biological roles for mothers lingers as their children age. Biology is not destiny, but it should be acknowledged as a potential barrier that contributes to the origins of the mother-father work gap.
Will today’s differences in mother-father work patterns persist into the future? To some extent that may depend on how cultural attitudes evolve. But there’s also the possibility that family-friendly policy can move things along more quickly. Both parental leave and subsidized childcare are options to consider.
Analysis of data across the four countries suggests that these kinds of policies can make some difference, but the impact is limited.
Parental leave makes a very big difference when the child is age zero and the parent is actually taking the leave—but because mothers take much more parental leave than fathers, this increases the mother-father employment gap rather than shrinking it. After age 0, when most parents return to work, there does not seem to be any lasting impact of having taken maternity leave on mothers’ employment patterns when their children are ages 1 to 10.
Another policy that might matter is childcare. In the Canadian province of Quebec, a subsidized childcare program was put in place in 1997 that required parents to pay only $5 per day for childcare. This program not only increased mothers’ work at pre-school ages but also seems to have had a lasting impact as their children reached older ages: mothers’ employment in Quebec increased at all child ages from 0 to 10. When summed over these ages, Quebec’s subsidized childcare closed the mother-father employment gap by about half a year of work.
Gary Becker’s prediction about the disappearance of mother-father work gaps hasn’t come true – yet. Evidence from Canada, Germany, the United States, and the United Kingdom suggests that policy can contribute to a shrinking of the mother-father employment gap. However, the analysis makes clear that policy alone may not be enough to overcome the combination of strong cultural attitudes and any persistence of intrinsic biological differences between mothers and fathers.
Kleptoplasty describes a special type of endosymbiosis in which a host organism retains photosynthetic organelles from its algal prey. Kleptoplasty is widespread in ciliates and foraminifera; within the Metazoa (animals whose bodies are composed of cells differentiated into tissues and organs, usually with a digestive cavity lined with specialized cells), however, sacoglossan sea slugs are the only animals known to harbour functional plastids. This characteristic gives these sea slugs their very special feature.
The “stolen” chloroplasts are acquired through the ingestion of macroalgal tissue and the retention of undigested functional chloroplasts in special cells of the slugs’ gut. These “stolen” chloroplasts (hereafter called kleptoplasts) continue to photosynthesize for varied periods of time, in some cases up to one year.
In our study, we analyzed the pigment profile of Elysia viridis in order to evaluate appropriate measures of photosynthetic activity.
The pigments siphonaxanthin, trans- and cis-neoxanthin, violaxanthin, siphonaxanthin dodecenoate, chlorophyll (Chl) a and Chl b, ε,ε- and β,ε-carotenes, and an unidentified carotenoid were observed in all Elysia viridis specimens. With the exception of the unidentified carotenoid, the same pigment profile was recorded for the macroalga Codium tomentosum (its algal prey).
In general, carotenoids found in animals are either directly accumulated from food or partially modified through metabolic reactions. Therefore, the unidentified carotenoid was most likely a product modified by the sea slugs since it was not present in their food source.
Pigments characteristic of other macroalgae present in the sampling locations were not detected in the sea slugs. These results suggest that these Elysia viridis retained chloroplasts exclusively from C. tomentosum.
In general, the carotenoid-to-Chl a ratios were significantly higher in Elysia viridis than in C. tomentosum. Further analysis using starved individuals suggests that carotenoids are retained over chlorophylls during the digestion of kleptoplasts. It is important to note that, despite a loss of 80% of Chl a in Elysia viridis starved for two weeks, measurements of maximum photosynthetic capacity indicated a decrease of only 5% in the kleptoplasts that remained functional.
This result clearly illustrates that measurement of photosynthetic activity using this approach can be misleading when evaluating the importance of kleptoplasts for the overall nutrition of the animal.
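A back-of-the-envelope calculation shows why. If, purely for illustration, total photosynthetic output is assumed to scale with both the amount of functional chlorophyll and the measured per-unit capacity (our simplifying assumption, not a claim made in the study), the two ways of reading the data diverge sharply:

```python
# Toy illustration: a capacity-style measurement can overstate the
# photosynthetic contribution of kleptoplasts after starvation.
# Assumption (ours): total output ~ remaining Chl a * per-unit capacity.
chl_a_remaining = 0.20      # 80% of Chl a lost after two weeks of starvation
capacity_remaining = 0.95   # measured maximum capacity fell by only 5%

total_output = chl_a_remaining * capacity_remaining
print(f"Capacity metric alone: {capacity_remaining:.0%} of original")
print(f"Estimated total output: {total_output:.0%} of original")  # ~19%
```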
Finally, concentrations of violaxanthin were low in C. tomentosum and Elysia viridis and no detectable levels of antheraxanthin or zeaxanthin were observed in either organism. Therefore, the occurrence of a xanthophyll cycle as a photoregulatory mechanism, crucial for most photosynthetic organisms, seems unlikely to occur in C. tomentosum and Elysia viridis but requires further research.
In the 1990s, policing in major US cities was transformed. Some cities embraced the strategy of “community policing” under which officers developed working relationships with members of their local communities on the belief that doing so would change the neighborhood conditions that give rise to crime. Other cities pursued a strategy of “order maintenance” in which officers strictly enforced minor offenses on the theory that restoring public order would avert more serious crimes. Numerous scholars have examined and debated the efficacy of these approaches.
A companion concept, called “community prosecution,” seeks to transform the work of local district attorneys in ways analogous to how community policing changed the work of big-city cops. Prosecutors in numerous jurisdictions have embraced the strategy. Indeed, Attorney General Eric Holder was an early adopter of the strategy when he was US Attorney for the District of Columbia in the mid-1990s. Yet, community prosecution has not received the level of public attention or academic scrutiny that community policing has.
A possible reason for community prosecution’s lower profile is the difficulty of defining it. Community prosecution contrasts with the traditional model of a local prosecutor, which is sometimes called the “case processor” approach. In the traditional model, police provide a continuous flow of cases to the prosecutor, and she prioritizes some cases for prosecution and declines others. The prosecutor secures guilty pleas in most of the pursued cases, often through plea bargains, and trials are rare. The signature feature of the traditional prosecutor’s work is quickly resolving or processing a large volume of cases.
Community prosecution breaks with the traditional paradigm and changes the work of prosecutors in several ways. It removes prosecutors from the central courthouse and relocates them to a small office in a neighborhood, often in a retail storefront. This permits the prosecutor to develop relationships with community groups and individual residents, even allowing residents to walk into the prosecutor’s office and express concerns. It frees the prosecutors from responsibility for managing the flow of cases supplied by police and allows them to undertake two main tasks. The first is that prosecutors partner with community members to identify the sources of crime within the neighborhood and formulate solutions that will prevent crime before it occurs. The second is that when community prosecutors seek to impose criminal punishments, they develop their own cases rather than rely on those presented by police, and they typically focus on the cases they anticipate will have the greatest positive impact on the local community.
In the past fifteen years, Chicago, Illinois, has had a unique experience with community prosecution that allowed the first examination of its impact on crime rates. The State’s Attorney in Cook County (in which Chicago is located) opened four community prosecution offices between 1998 and 2000. Each of these offices had responsibility for applying the community prosecution approach to a target neighborhood in Chicago; collectively, about 38% of Chicago’s population resided in a target neighborhood. Other parts of the city received no community prosecution intervention. The efforts continued until early 2007, when a budget crisis compelled the closure of these offices and the cessation of the county’s community prosecution program. For more than two years, Chicago had no community prosecution program. In 2009, a new State’s Attorney re-launched the program, and during the next three years, the four community prosecution offices were re-opened.
This sequence of events provided an opportunity to evaluate the impact of community prosecution on crime. The first adoption of community prosecution in the late 1990s lent itself to differences-in-differences estimation. The application of community prosecution to four sets of neighborhoods, each beginning at a different date, enabled comparisons of crime rates before and after the program’s implementation within those neighborhoods. The fact that other neighborhoods received no intervention permitted these comparisons to be drawn relative to the crime rates in a control group. Furthermore, Chicago’s singular experience with community prosecution – its launch, cancellation, and re-launch – furnished a sequence of three policy transitions (off to on, on to off again, and off again to on again). By contrast, the typical policy analysis observes only one policy transition (commonly from off to on). These multiple rounds of program application enhanced the opportunity to detect whether community prosecution affected public safety.
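For readers unfamiliar with the method, the sketch below illustrates the differences-in-differences logic on simulated data. Everything here, from the number of neighborhoods to the effect size, is invented for illustration; it is not the study's actual model or data:

```python
# Differences-in-differences on simulated neighborhood crime data.
# Illustrative only: all numbers below are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for hood in range(40):
    treated = hood < 15            # some neighborhoods get the program
    start_year = 4                 # program turns on in year 4 if treated
    base = rng.normal(100, 10)     # neighborhood-specific baseline crime
    for year in range(10):
        on = treated and year >= start_year
        # true effect in this simulation: program cuts crime by ~7 units
        crime = base - 2 * year + (-7 if on else 0) + rng.normal(0, 3)
        rows.append(dict(hood=hood, year=year, post=int(on), crime=crime))
df = pd.DataFrame(rows)

# Two-way fixed effects: neighborhood and year dummies absorb level
# differences and common trends; the 'post' coefficient is the estimate.
model = smf.ols("crime ~ post + C(hood) + C(year)", data=df).fit()
print(model.params["post"])        # should recover roughly -7
```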
The estimates from this differences-in-differences approach showed that community prosecution reduced crime in Chicago. The declines in violent crime were large and statistically significant. For example, the estimates imply that aggravated assaults fell by 7% following the activation of community prosecution in a neighborhood. The estimates for property crime also showed declines, but they were too imprecisely estimated to permit firm statistical inferences. These results are the first evidence that community prosecution can produce reductions in crime and that the reductions are sizable.
Moreover, there was no indication that community prosecution simply displaced crime, moving it from one neighborhood to another. Neighborhoods just over the border of each community prosecution target area experienced no change in their average rates of crime. The declines thus appeared to reflect a true reduction rather than a reallocation of crime. In addition, the drops in offending were immediate and sustained. One might expect responses in crime rates to arrive slowly and gain momentum over time as prosecutors’ relationships with the community grew. But the estimates instead suggest that community prosecutors were able to immediately identify and exploit opportunities to improve public safety.
This evaluation of community prosecution in Chicago offers broad lessons about the role of prosecutors. As with any empirical study, some caveats apply. The highly decentralized and flexible nature of community prosecution prevents reducing the program to a fixed set of principles and steps that can be readily implemented elsewhere. To the degree that it depends on bonds of trust between prosecutor and community, its success may hinge on the personality and talents of specific prosecutors. (Indeed, the article’s estimates show variation in the estimated impacts across offices within Chicago.) At minimum, the results demonstrate that, under circumstances that require more study, community prosecution can reduce crime.
More broadly, the estimates suggest that the role of prosecutors is more far-reaching than typically thought. Crime control is conventionally understood to be primarily the responsibility of police. It was for this very reason that in the 1990s so much attention was devoted to the cities’ choice of policing style – community policing or order maintenance. Restructuring the work of police was thought to be a key mechanism through which crime could be reduced. By contrast, a conventional view of prosecutors is that their responsibilities pertain to the selection of cases, adjudication in the courtroom, and striking plea bargains. This article’s estimates show that this view is unduly narrow. Just as altering the structure and tasks of police may affect crime, so too can changing how prosecutors perform their work.
Loyal readers will have noticed a few changes to the OUPblog over the past week. Every few years, we redesign the OUPblog as technology changes and the needs of our editors and readers evolve. We have retired the design we have been using since 2010 and updated the OUPblog to a fresh look and feel.
Our top priority has been making the OUPblog easier to navigate. We have streamlined many of the links and widgets that you see — and the processes that you don’t see — so that it is more straightforward to scroll through and click. We have shifted to a responsive design, so that the OUPblog is effortless to view on desktop, tablet, or mobile phone. Our blog is now more closely aligned with other Oxford University Press websites so you will have a consistent experience moving from one site to the next. We have tested to ensure the website appears properly on different browsers and devices. (If you are having problems, please update to the latest version of your browser.)
We are still working out a few kinks from this initial launch, and we will continue to update the blog, our RSS feeds, and e-newsletters over the coming months and years as technology and readership evolve.
What hasn’t changed? We will continue to publish the same quality scholarship from authors, editors, and academics around the globe.
Thank you to our designers and developers at Electric Studio, and the invaluable input from staff at Oxford University Press. We welcome feedback from our readers on the design and hope to integrate your suggestions in future. Please leave a comment below.
Each summer, Oxford University Press USA and Bryant Park in New York City partner for their summer reading series Word for Word Book Club. The Bryant Park Reading Room offers free copies of book club selections while supply lasts, compliments of Oxford University Press, and guest speakers lead the group in discussion. On Tuesday 19 August 2014, Garnette Cadogan, freelance writer and co-editor of the forthcoming Oxford Handbook of the Harlem Renaissance, leads a discussion on Frederick Douglass’s Narrative of the Life of Frederick Douglass, an American Slave.
What was your inspiration for working on the Oxford Handbook of the Harlem Renaissance?
I kept encountering the influence of the Harlem Renaissance — on art, music, literature, dance, and politics, among other spheres – and longed for a fresh, interesting discussion of the Renaissance in its splendid variety. My close friend and colleague Shirley Thompson, who teaches at UT-Austin, often discussed with me the enormous accomplishments and rich legacies of that movement. So, when she invited me to help her bring together myriad voices to talk about central cultural, intellectual, and political figures and ideas of the Harlem Renaissance, I, of course, gleefully joined her to arrange The Oxford Handbook of the Harlem Renaissance.
Where do you do your best writing?
On the kitchen counter. The comfort of the kitchen is like nowhere else, nothing else. (Look where everyone gathers at your next house party.) To boot, nothing gets my mind revving like cooking. I’ll often run from skillet to keyboard shouting “Yes!”
Did you have an “a-ha!” moment that made you want to be a writer?
No one moment — it was a multitude of taps, then a grab — but having one of my professors in college call me to ask that I read my final paper to him over the phone was a big motivator. I took it as encouragement to be a writer, though, in retrospect, I recognize that it was my strange accent and not my prose style that was the appeal.
Which author do you wish had been your 7th grade English teacher?
Someone who could handle the distractible, chatterbox me, the troublemaker who had absolutely no interest in books or learning. Someone with a love for books who led a fascinating life and could tell a good story. Why, yes, George Orwell — What a remarkable life! What remarkable work! — would hold my attention and interest.
What is your secret talent?
Remarkably creative procrastination, coupled with the ability to trick myself that I’m not procrastinating. (Sadly, no one else but me is fooled.)
What is your favorite book?
Wait, what day is it? It all depends on the day you ask me. Sometimes, even the time of day you ask. Right now, it’s The Poems of Emily Dickinson (the handsome, authoritative edition edited by R.W. Franklin). I stand by this decision for another forty-eight hours.
Who reads your first draft?
Two friends who possess the right balance of grace and brutal honesty, the journalists Eve Fairbanks and Ilan Greenberg. They know just how to knock down and lift up, especially Eve, who has almost supernatural discernment and knows exactly what to say — and, more important in the early stages, what not to say. But who really gets the first draft are my friends John Wilson, the affable sage who edits Books and Culture, and John Freeman, whose eagle eye used to edit Granta; I verbally unload on them my fugitive ideas trying to assemble into a story (poor fellas), and then wait for red, yellow, green, or detour. Without this quartet, everything I write reads like the journal entries of Cookie Monster.
Do you read your books after they’ve been published?
My books haven’t been published yet, but I imagine that I’ll treat them like the rest of my writing: mental detritus I avoid looking at. I’m cursed with a near-pathological ability to only see what’s wrong with my writing.
Do you prefer writing on a computer or longhand?
Painful as it is to transcribe my hieroglyphics from writing pads (or concert programs and restaurant napkins), I prefer writing longhand. My second-guessing, severe, demanding, judgmental inner-editor makes it so. On a laptop, it’s cut this, change that, insert who-knows-what, and at day’s end I’m behind where I began. And yet, I never learn. I still do most of my writing on a computer.
What book are you currently reading? (And is it in print or on an e-Reader?)
I own two e-readers but never use them; I get too much enjoyment from the tactile pleasures of bound paper. I’m now reading a riveting, touching account of the thirty-three miners trapped underground in Chile four years ago, Hector Tobar’s Deep Down Dark, which is much more than the story of their survival. It’s also a story about faith and family and perseverance. Emily St. John Mandel’s novel Station Eleven is another book that intriguingly explores survival and belief and belonging. And art and culture, too. It’s partially set in a post-apocalyptic era, but without the clichés and cloying, overplayed scenarios that come with that setting. And I’ve been regularly dipping into Michael Robbins’ new book of poems, The Second Sex — smart, smart-alecky, “sonicky,” vibrantly awake to sound and meaning — not because he’s a friend, but because he’s oh-so-good. I’ll be pressing all three books on everyone I know who can read.
What word or punctuation mark are you most guilty of overusing?
The em-dash — since it allows my sentences to breathe much easier once it’s around. It’s so forgiving, too — I get to clear my throat and then be garrulous, and readers will put up with me trying to have it both ways. The em-dash is both chaperone and wingman; which other punctuation mark can make that boast? Plus, it’s a looker — bold and purposeful and lean.
If you weren’t a writer, what would you be?
Something that takes me outdoors — and in the streets — as much as possible. Anything that doesn’t require sitting at a desk with my own boring thoughts for hours. And where I get to meet lots of new people. Bike messenger, perhaps.
Image credits: (1) Bryant Park, New York. Photo by cerfon. CC BY-NC-SA 2.0 via cerfon Flickr. (2) Garnette Cadogan. Photo by Bart Babinski. Courtesy of Garnette Cadogan.
Grove Music Online presents this multi-part series by Don Harrán, Artur Rubinstein Professor Emeritus of Musicology at the Hebrew University of Jerusalem, on the life of the Jewish musician Salamone Rossi, marking the anniversary of his birth in 1570. Professor Harrán considers three major questions: Salamone Rossi as a Jew among Jews; Rossi as a Jew among Christians; and the conclusions to be drawn from both.
Salamone Rossi as a Jew among Jews
What do we know of Salamone Rossi’s family? His father was Bonaiuto Azaria de’ Rossi (d. 1578), author of Me’or einayim (Light of the Eyes). Rossi had a brother, Emanuele (Menaḥem), and a sister, Europa, who, like him, was a musician. She is known to have performed as a singer in the play Il ratto di Europa (“The Rape of Europa”) in 1608. The court chronicler Federico Follino raved over her performance, describing it as that of “a woman understanding music to perfection” and “singing, to the listeners’ great delight and their greater wonder, in a most delicate and sweet-sounding voice.”
Salamone Rossi appears to have used his connections at court to improve his family’s situation, as in 1602, when Rossi wrote to Duke Vincenzo on behalf of his brother Emanuele.
The duke granted the request in order “to show Salamone Rossi ebreo some sign of gratitude for services that he, with utmost diligence, rendered and continues to render over many years. We have resolved to confer the duties of collecting the fees on the person of Emanuele, Salamone’s brother, in whose faith and diligence we place our confidence.”
Until now, it has been thought that Rossi earned his livelihood from his salary at the Mantuan court, and since the salary was—by comparison with that of other musicians at the court—very small, Rossi tried to supplement it by earning money on the side through investments. From 1622 on he was earning 1,200 lire, a large sum of money for a musician whose annual wages at the court were only 156 lire. Rossi needed the money to cover the cost of his publications and to support his family.
Rossi’s situation within the community can only be conjectured. By “community,” we are talking about some 2,325 Jews living in the city of Mantua out of a total population of 50,000. True, Rossi was its most distinguished “musician” and his service for the court would have brought honor on the Jewish community. But because of his non-Jewish connections, he enjoyed privileges denied his coreligionists. In 1606, for example, he was exempted from wearing a badge. The badge was shameful to Jews who, in their activities, were in close touch with Christians, as were Rossi and other Jews who performed before them as musicians or actors or who engaged in loan banking.
Like other “privileged” Jews, Rossi was in a difficult position: his Christian employers considered him a Jew, yet the Jews probably considered him an outsider. He could choose between two alternatives: convert to Christianity to improve his situation with the Christians, or solidify his position within the Jewish community, which he probably did whenever he could by representing its interests before the authorities and by providing compositions for Jewish weddings, circumcisions, the inauguration of Torah scrolls, and Purim festivities. All this is speculative, for we know nothing about these activities. We are better informed about Rossi’s role in the Jewish theater, whose actors were required to prepare each year one or two plays with musical intermedi. Since the Jews were expected to act, sing, and play instruments, their leading musician Salamone Rossi probably contributed to the theater by writing vocal and instrumental works, rehearsing them, and, together with others, playing or even singing them.
It was in his Hebrew collection, however, that Rossi demonstrated his connections with his people. His intentions were good: after having published collections of Italian vocal music and instrumental works, Rossi decided, around 1612, to write Hebrew songs. He describes these songs as “new songs [zemirot] that I devised through ‘counterpoint’ [seder].” True, attempts were made to introduce art music into the synagogue in the early seventeenth century. But none of these early works survive. Rossi’s thirty-three “Songs by Solomon” (Ha-shirim asher li-Shelomoh) are the first Hebrew polyphonic “songs” to be printed. Here is an example from the opening of the collection, “Elohim, hashivenu”.
Good intentions are one thing; the status of art music in the synagogue is another. The prayer services made no accommodation for art music. Rossi’s aim, to quote him, was to write works “for thanking God and singing [le-zammer] to His exalted name on all sacred occasions” to be performed in prayer services, particularly on Sabbaths and festivals.
Headline image credit: Opening of Salomone de Rossi’s Madrigaletti, Venice, 1628. Photo of Exhibit at the Diaspora Museum, Tel Aviv. Public domain via Wikimedia Commons.
What range of career options is out there for those attending law school? In this series of podcasts, Martin Partington talks to influential figures in the law about topics ranging from restorative justice to legal journalism.
Restorative Justice: An interview with Lizzie Nelson
The Restorative Justice Council is a small charitable organisation that exists to promote the use of restorative justice, not just in the court (criminal justice) context, but in other situations of conflict as well (e.g. schools). In this podcast Martin talks to Lizzie Nelson, Director of the Restorative Justice Council.
Handling complaints against lawyers: An interview with Adam Sampson
In this podcast, Martin talks to Adam Sampson, Chief Legal Ombudsman. They discuss the work of the Legal Ombudsman, how it operates, the kinds of issues it deals with, and some of the limitations on the office’s ability to deal with matters raised by dissatisfied clients.
Reporting the law: An interview with Joshua Rozenberg
Joshua Rozenberg is one of a very small number of specialist journalists who cover legal issues in a serious and thoughtful way. He has worked in a wide variety of media, including the BBC, The Daily Telegraph, and The Guardian. In this interview, he describes how he decided to become a journalist rather than a practising lawyer and comments on the challenges of devising ways to enable legal issues to be raised in mass media.
On 19 August 1692, George Burroughs stood on the ladder and calmly made a perfect recitation of the Lord’s Prayer. Some in the large crowd of observers were moved to tears, so much so that it seemed the proceedings might come to a halt. But Reverend Burroughs had uttered his last words. He was soon “turned off” the ladder, hanged to death for the high crime of witchcraft. After the execution, Reverend Cotton Mather, who had been watching the proceedings from horseback, acted quickly to calm the restless multitude. He reminded them, among other things, “that the Devil has often been transformed into an Angel of Light” — that despite his pious words and demeanor, Burroughs had been the leader of Satan’s war against New England. Thus assured, the executions would continue. Five people would die that day, one of the most dramatic and important days in the course of the Salem witch trials. For the audience on 19 August realized that if a Puritan minister could hang for witchcraft, then no one was safe. Their tears and protests were the beginning of the public opposition that would eventually bring the trials to an end. Unfortunately, by the time that happened, nineteen people had been executed, one had been pressed to death, and five had perished in the wretched squalor of the Salem and Boston jails.
The fact that a Harvard-educated Puritan minister was considered the ringleader of the largest witch hunt in American history is one of the many striking oddities of the Salem trials. Yet a close look at Burroughs reveals that his character and background personified virtually all the fears and suspicions that ignited witchcraft accusations in 1692. There was no single cause, no simple explanation for why the Salem crisis happened. Massachusetts Bay faced a confluence of events that produced the fears and doubts that led to the crisis. Likewise, a wide range of people faced charges for having supposedly committed diverse acts of witchcraft against a broad swath of the populace. Yet there were many reasons people were suspicious of George Burroughs; indeed, he was the perfect witch.
In 1680, when Burroughs was hired as the minister of Salem Village, he quickly became a central figure in the ongoing controversy over religion, politics, and money that would span more than thirty years and result in the departure of the community’s first four ministers. One of Burroughs’s parishioners wrote to him, complaining that “Brother is against brother and neighbors against neighbors, all quarreling and smiting one another.” After a little over two years in office, the Salem Village Committee stopped paying Burroughs’s salary, so he wisely left town to return to his old job as minister of Falmouth (now Portland, Maine).
George Burroughs spent most of his career in Falmouth, a town on the edge of the frontier. He was fortunate to escape the bloody destruction of the settlement by Native Americans in 1676 (during King Philip’s War) and 1690 (during King William’s War). The latter conflict brought a string of disastrous defeats to Massachusetts, and as many historians have noted, the ensuing war panic helped trigger the witch trials. The war was a spiritual defeat for the Puritan colony, which was losing to French Catholics allied with people the Puritans considered “heathen” Indians. It seemed Satan’s minions would end the Puritans’ New England experiment. Burroughs was one of many refugees from Maine who were either afflicted by or accused of witchcraft. In addition, most of the judges were military officers as well as speculators in Maine lands that the war had made worthless. Some of the afflicted refugees were suffering from what today would be considered post-traumatic stress. Accustomed to the manual labor of the frontier, Burroughs was so incredibly strong that several witnesses would testify in 1692 to his feats of supernatural strength. The minister’s seemingly miraculous escapes from Falmouth in 1676 and 1690 also brought him under suspicion: perhaps he had escaped with the help of the devil, or of the Indians.
His frontier ties had already tainted him; the twice-widowed Burroughs’s personal life and perceived religious views amplified fears of the minister. At his trial, several testified to his secretive ways, his seemingly preternatural knowledge, and his strict rule over his wives. He forbade his wives to speak about him to others, and even censored their letters to family. Meanwhile the afflicted said they saw the specters of Burroughs’s late wives, who claimed he had murdered them. The charges were groundless. However, his controlling ways and the spectacular testimony against him at least raised the question of domestic abuse. Such perceived abuse of authority — at the family, community, or colony-wide level — is a common thread linking many of Salem’s accused.
Some observers believed Burroughs was secretive because they suspected he was a Baptist. This Protestant sect had legal toleration but, like the Quakers, was considered dangerous by most Massachusetts Puritans because of its belief in adult baptism and adult-only membership in the church. Burroughs admitted to the Salem judges that he had not recently received Puritan communion and had not baptized his younger children (both signs that he might be a Baptist). His excuse was that he had never been ordained and hence could neither lead the communion service nor baptize children. However, he admitted that since leaving his post in Maine he had visited Boston and Charlestown and had failed to take advantage of these rites there.
Even if he was not a Baptist, as a Puritan minister he was at risk. Burroughs was just one of five ministers cried out upon in 1692. Fully 30 percent of the people accused were ministers, their immediate family members, or extended kin. In many ways, the witch trials were a critique of the religious and political policies of the colony. But that is another story.
As we enter the potentially crucial phase of the Scottish independence referendum campaign, it is worth remembering more broadly that political campaigns always matter, but they often matter most at referendums.
Referendums are often classified as low-information elections. Research demonstrates that it can be difficult to engage voters with the specific information and arguments involved (Lupia 1994, McDermott 1997), and consequently referendums can be decided on issues other than the matter at hand. Referendums also vary from traditional political contests in that they are usually focused on a single issue; the dynamics of political party interaction can diverge from those of national and local elections; non-political actors may often have a prominent role in the campaign; and voters may or may not have strong, clear views on the issue being decided. Furthermore, there is great variation in the information environment at referendums. As a result, the campaign itself can be vital.
We can understand campaigns through the lens of LeDuc’s framework, which seeks to capture some of the underlying elements that can lead to stability or volatility in voter behaviour at referendums. The essential proposition of this model is that referendums ask different types of questions of voters, and that the type of question posed conditions the behaviour of voters. Referendums that ask questions related to the core fundamental values and attitudes held by voters should be stable. Voters’ opinions that draw on cleavages, ideology, and central beliefs are unlikely to change in the course of a campaign. Consequently, opinion polls should show very little movement over the campaign. At the other end of the spectrum, volatile referendums are those that ask questions on which voters do not have pre-conceived, fixed views or opinions. The referendum may ask questions about new areas of policy, previously undiscussed items, or items of generally low salience such as political architecture or institutions.
Another essential component determining the importance of the campaign is undecided voters. When voters’ political knowledge starts from a low base, the campaign contributes greatly to increasing it. This point is particularly clear from Farrell and Schmitt-Beck (2002), who demonstrated that voter ignorance is widespread and that levels of political knowledge among voters are often overestimated. As Ian McAllister argues, partisan de-alignment has created a more volatile electoral environment, and the number of voters who make their decisions during campaigns has risen. In particular, there has been a sharp rise in the number of voters who decide quite late in a campaign. In this case, campaign learning is vital, and the campaign may change voters’ initial disposition. Opinions may form only during the campaign, when voters acquire information, and these opinions may be changeable, leading to volatility.
The experience of referendums in Ireland is worth examining, as Ireland is one of a small but growing number of countries that make frequent use of referendums. It is also worth noting that Ireland has a highly regulated campaign environment. In the Oireachtas Inquiries referendum of October 2011, Irish voters were asked to decide on a parliamentary reform proposal (Oireachtas Inquiries, or OI). The issue was of limited interest to voters and was co-scheduled with a second referendum, on reducing the pay of members of the judiciary, along with a lively presidential election.
The OI referendum was defeated by a narrow margin, and the campaign period witnessed a sharp fall in support for the proposal. Only a small number of polls were taken, but the decline is clear from the figure below.
Few voters had any existing opinion on the proposal, and the post-referendum research indicated that voters relied significantly on heuristics or shortcuts emanating from the campaign, and to a lesser extent on media campaigns or rational knowledge. The evidence showed that just a few weeks after the referendum, many voters were unable to recall the reasons for their voting decision. An interesting result was that although 74% of all voters supported Oireachtas Inquiries in principle, the proposal failed to pass. There was a very high level of ignorance of the issues: some 44% of voters could not give cogent reasons for why they voted ‘no’, underlining the common practice of ‘if you don’t know, vote no’.
So are there any lessons we can draw for the Scottish independence campaign? Scottish independence would likely be placed at the stable end of the LeDuc spectrum, in that some voters could be expected to have an ideological predisposition on this question. Campaigns matter less at these types of referendums. However, the outcome is by no means a foregone conclusion. We would expect the number of undecided voters to be key, and these voters may use shortcuts to make their decision. In other words, the positions of the parties, of celebrities, of unions and businesses, and of others will likely matter. In addition, the extent to which voters feel fully informed on the issues may also be a determining factor. It may be instructive to look at another Irish referendum, on the introduction of divorce in the 1980s, during which voters’ opinions moved sharply during the campaign, even though the referendum question drew largely on the deep-rooted conservative-liberal cleavage in Irish politics (Darcy and Laver 1990). The Scottish campaign might thus still conceivably see some shifts in opinion.
Headline image: Scottish Parliament Building via iStockphoto.