I recall a dinner conversation at a symposium in Paris that I organized in 2010, where a number of eminent evolutionary biologists, economists and philosophers were present. One of the economists asked the biologists why it was that whenever the topic of “group selection” was brought up, a ferocious argument always seemed to ensue. The biologists pondered the question. Three hours later the conversation was still stuck on group selection, and a ferocious argument was underway.
Group selection refers to the idea that natural selection sometimes acts on whole groups of organisms, favoring some groups over others, leading to the evolution of traits that are group-advantageous. This contrasts with the traditional ‘individualist’ view which holds that Darwinian selection usually occurs at the individual level, favoring some individual organisms over others, and leading to the evolution of traits that benefit individuals themselves. Thus, for example, the polar bear’s white coat is an adaptation that evolved to benefit individual polar bears, not the groups to which they belong.
The debate over group selection has raged for a long time in biology. Darwin himself primarily invoked selection at the individual level, for he was convinced that most features of the plants and animals he studied had evolved to benefit the individual plant or animal. But he did briefly toy with group selection in his discussion of social insect colonies, which often function as highly cohesive units, and also in his discussion of how self-sacrificial (‘altruistic’) behaviours might have evolved in early hominids.
In the 1960s and 1970s, the group selection hypothesis was heavily critiqued by authors such as G.C. Williams, John Maynard Smith, and Richard Dawkins. They argued that group selection was an inherently weak evolutionary mechanism, and not needed to explain the data anyway. Examples of altruism, in which an individual performs an action that is costly to itself but benefits others (e.g. fighting an intruder), are better explained by kin selection, they argued. Kin selection arises because relatives share genes. A gene which causes an individual to behave altruistically towards its relatives will often be favoured by natural selection—since these relatives have a better than random chance of also carrying the gene. This simple piece of logic tallies with the fact that empirically, altruistic behaviours in nature tend to be kin-directed.
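The logic of kin selection sketched above is usually formalized as Hamilton's rule (not named in the article): a gene for altruism is favoured when rb > c, where r is the genetic relatedness between actor and recipient, b the fitness benefit to the recipient, and c the fitness cost to the actor. A minimal illustration in Python, with numbers of my own choosing:

```python
def hamilton_favoured(r, b, c):
    """Hamilton's rule: altruism is favoured by selection when r*b > c,
    where r is genetic relatedness, b the fitness benefit to the
    recipient, and c the fitness cost to the actor."""
    return r * b > c

# Full siblings share half their genes on average (r = 0.5), so a
# costly act (c = 1) is favoured if it gives the sibling more than
# twice that in benefit. For a cousin (r = 1/8) the same act fails.
print(hamilton_favoured(0.5, 3, 1))    # sibling: 0.5 * 3 > 1 -> True
print(hamilton_favoured(0.125, 3, 1))  # cousin: 0.375 > 1 -> False
```

This also tallies with the empirical observation in the paragraph above: the rule predicts that altruism should be preferentially directed at close kin.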
Strangely, the group selection controversy seems to re-emerge every generation. Most recently, Harvard’s E.O. Wilson, the “father of sociobiology” and a world expert on ant colonies, has argued that “multi-level selection”—essentially a modern version of group selection—is the best way to understand social evolution. In his earlier work, Wilson was a staunch defender of kin selection, but no longer; he has recently penned sharp critiques of the reigning kin selection orthodoxy, both alone and in a 2010 Nature article co-authored with Martin Nowak and Corina Tarnita. Wilson’s volte-face has led him to cross swords with Richard Dawkins, who says that Wilson is “just wrong” about kin selection and that his most recent book contains “pervasive theoretical errors.” Both parties point to eminent scientists who support their view.
What explains the persistence of the controversy over group and kin selection? Usually in science, one expects to see controversies resolved by the accumulation of empirical data. That is how the “scientific method” is meant to work, and often does. But the group selection controversy does not seem amenable to a straightforward empirical resolution; indeed, it is unclear whether there are any empirical disagreements at all between the opposing parties. Partly for this reason, the controversy has sometimes been dismissed as “semantic,” but this is too quick. There have been semantic disagreements, in particular over what constitutes a “group,” but this is not the whole story. For underlying the debate are deep issues to do with causality, a notoriously problematic concept, and one which quickly lands one in philosophical hot water.
All parties agree that differential group success is common in nature. Dawkins uses the example of red squirrels being outcompeted by grey squirrels. However, as he correctly notes, this is not a case of genuine group selection, since the success of one group and the decline of the other is a side-effect of individual-level selection. More generally, there may be a correlation between some group feature and the group’s biological success (or “fitness”); but like any correlation, this need not mean that the former has a direct causal impact on the latter. But how are we to distinguish, even in theory, between cases where the group feature does causally influence the group’s success, so that “real” group selection occurs, and cases where the correlation between group feature and group success is “caused from below”? This distinction is crucial; however, it cannot even be expressed in terms of the standard formalisms that biologists use to describe the evolutionary process, as these are statistical, not causal. The distinction is related to the more general question, which has long troubled philosophers of science, of how to understand causality in hierarchical systems.
Recently, a number of authors have argued that the opposition between kin and multi-level (or group) selection is misconceived, on the grounds that the two are actually equivalent—a suggestion first broached by W.D. Hamilton as early as 1975. Proponents of this view argue that kin and multi-level selection are simply alternative mathematical frameworks for describing a single evolutionary process, so the choice between them is one of convention not empirical fact. This view has much to recommend it, and offers a potential way out of the Wilson/Dawkins impasse (for it implies that they are both wrong). However, the equivalence in question is a formal equivalence only. A correct expression for evolutionary change can usually be derived using either the kin or multi-level selection frameworks, but it does not follow that they constitute equally good causal descriptions of the evolutionary process.
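The formal equivalence at issue is usually demonstrated with the Price equation, which partitions the total change in a trait into a between-group component (the covariance of group fitness with the group's trait value) and a within-group component. The sketch below, with made-up numbers that are mine rather than the article's, checks the identity numerically: the same total change can be computed directly or recovered as the sum of the two components, which is why the choice of framework is bookkeeping rather than extra biology.

```python
from statistics import mean

def price_partition(groups):
    """Partition one generation of selection on a trait into
    between-group and within-group components (the Price equation).
    `groups` is a list of groups, each a list of (trait, fitness)
    pairs. Returns (between, within, total); between + within == total."""
    sizes = [len(g) for g in groups]
    n = sum(sizes)
    zg = [mean(z for z, w in g) for g in groups]   # group mean trait
    wg = [mean(w for z, w in g) for g in groups]   # group mean fitness
    zbar = sum(s * z for s, z in zip(sizes, zg)) / n
    wbar = sum(s * w for s, w in zip(sizes, wg)) / n

    # Between-group term: size-weighted covariance of group fitness
    # with group trait, scaled by the population mean fitness.
    between = sum(s * (w - wbar) * (z - zbar)
                  for s, w, z in zip(sizes, wg, zg)) / (n * wbar)

    # Within-group term: fitness-weighted trait change inside each group.
    within = sum(
        s * wg[i] * (sum(w * z for z, w in g) / sum(w for z, w in g) - zg[i])
        for i, (s, g) in enumerate(zip(sizes, groups))
    ) / (n * wbar)

    # Direct computation of the total change, for comparison.
    total = (sum(w * z for g in groups for z, w in g)
             / sum(w for g in groups for z, w in g)) - zbar
    return between, within, total

# Altruists (trait 1) pay an individual cost, but groups containing
# them do better: within-group selection works against altruism,
# between-group selection for it, and here the latter wins overall.
mixed   = [(1, 3), (1, 3), (0, 4)]   # two altruists, one free-rider
selfish = [(0, 2), (0, 2), (0, 2)]   # no altruists, low group fitness
b, w, t = price_partition([mixed, selfish])
print(round(b, 4), round(w, 4), round(t, 4))  # 0.0833 -0.0417 0.0417
```

The arithmetic confirms the equivalence claim: the between-group and within-group terms sum exactly to the total change, yet the partition itself is a description imposed on the numbers, not something the numbers force on us, which is precisely why the formal equivalence leaves the causal question open.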
This suggests that the persistence of the group selection controversy can in part be attributed to the mismatch between the scientific explanations that evolutionary biologists want to give, which are causal, and the formalisms they use to describe evolution, which are usually statistical. To make progress, it is essential to attend carefully to the subtleties of the relation between statistics and causality.
Throughout the month, we’ve been examining the myriad aspects of the human voice. But who better to discuss it than a singer herself? We asked Jenny Forsyth, member of the Sospiri choir in Oxford, what it takes to be part of a successful choir.
Which vocal part do you sing in the choir?
I sing soprano – usually first soprano if the parts split, but I’ll sing second if I need to.
For how long have you been singing?
I started singing in the training choir of the Farnham Youth Choir, in Surrey, when I was seven. Then I moved up through the junior choir when I was about 10 years old and then auditioned and moved up to the main performance choir at the age of 12 and stayed with them until I was 18. After this I studied for a Bachelor's in Music, then did a Master's degree in Choral Studies (Conducting).
What first made you want to join a choir?
I had recently started having piano lessons and my dad, a musician himself, thought it would be good for my musical education to join a choir. We went to a concert given by the Farnham Youth Choir and after that I was hooked!
What is your favourite piece or song to perform?
That’s a really difficult question – there is so much great music around! I enjoy singing Renaissance music so I might choose Taverner’s Dum Transisset. I also love Byrd’s Ne Irascaris Domine and Bogoroditse Devo from Rachmaninoff’s Vespers.
I also sing with an ensemble called the Lacock Scholars, and we sing a lot of plainsong chant, a lot of which is just so beautiful. Reading from historical notation – neumes – can give you so much musical information through such simple notation; it’s really exciting!
I’ve recently recorded an album of new commissions for the centenary of World War I with a choir from Oxford called Sospiri, directed by Chris Watson. The disc is called A Multitude of Voices and all the commissions are settings of war poems and texts. The composers were asked to look outside the poetical canon and consider texts by women, neglected poets and writers in languages other than English. I love all the music on the disc and it’s a thrilling feeling to be the first choir ever to sing a work. I really love Standing as I do before God by Cecilia McDowall and Three Songs of Remembrance by David Bednall. Two completely different works but both incredibly moving to perform.
However I think my all-time favourite has to be Las Amarillas by Stephen Hatfield – an arrangement of Mexican playground songs. It’s in Spanish and has some complicated cross rhythms, clapping, and other body percussion. It’s a hard piece to learn but when it comes together it just clicks into place and is one of the most rewarding pieces of music!
How do you keep your voice in peak condition?
These are the five things I find really help me. (Though a busy schedule means the early nights are often a little elusive!)
Keeping hydrated. It is vital to drink enough water to keep your whole system hydrated (i.e., the internal hydration of the entire body that keeps the skin, eyes, and all other mucosal tissue healthy), and to make sure the vocal cords themselves are hydrated. When you drink, the water doesn’t actually touch the vocal cords, so I find the best way to keep them hydrated is to steam, either over a bowl of hot water or with a purpose-built steam inhaler. Topical, or surface, hydration is the moisture level that keeps the epithelial surface of the vocal folds slippery enough to vibrate. Steaming is incredibly good for a tired voice!
Eating an apple. I’m not sure what the science behind this is, but I find eating an apple just before I sing makes my voice feel more flexible and resonant.
Hot drinks. A warm tea or coffee helps to relax my voice when it’s feeling a bit tired.
Regular singing lessons. Having regular singing lessons with a teacher who is up to date on research into singing techniques is crucial to keeping your voice in peak condition. Often you won’t notice the development of bad habits, which could potentially be damaging to your voice, but your singing teacher will be able to correct you and keep you in check.
Keeping physically fit and getting early nights. Singing is a really physical activity. When you’ve been working hard in a rehearsal or lesson you can end up feeling physically exhausted. Even though singers usually make singing look easy, there is a lot of work going on behind the scenes with lots of different sets of muscles working incredibly hard to support their sound. It’s essential to keep your body fit and well-rested to allow you to create the music you want to without damaging your voice.
Do you play any other musical instruments?
When I was younger I played the piano, flute and violin but I had to give up piano and flute as I didn’t have enough time to do enough practice to make my lessons worthwhile. I continued playing violin and took up viola in my gap year and then at university studied violin as my first study instrument for my first two years before swapping to voice in my final year.
Do you have a favourite place to perform?
I’ve been fortunate enough to travel all around the world with the Farnham Youth Choir, with tours around Europe and trips to both China and Australia. So, even before I decided to take my singing more seriously, I had had the chance to sing in some of the best venues in the world. It’s hard to choose a favourite as some venues lend themselves better to certain types of repertoire. Anywhere with a nice acoustic where you can hear both what you are singing and what others around you are singing is lovely. It can be very disconcerting to feel as though you’re singing completely by yourself when you know you’re in a choir of 20! I’m currently doing a lot of singing with the Lacock Scholars at Saint Cuthbert’s Church, Earl’s Court, so I think that’s my favourite at the moment. Having said that, I would absolutely love to sing at the building where I work as a music administrator – Westminster Cathedral! It’s got the most glorious acoustics and is absolutely stunning.
What is the most rewarding thing about being in a choir?
There are so many great things about singing in a choir. You get a sense of working as part of a team, which you rarely get to the same extent outside of choral singing. I think this is because your voice is so personal to you that you can find yourself feeling quite vulnerable. I sometimes think that to sing well you have to take that vulnerability and use it; to really put yourself ‘out there’ to give the music a sense of vitality. You have to really trust your fellow singers. You have to know that when you come in on a loud entry (or a quiet one, for that matter!) you won’t be left high and dry, singing on your own.
What’s the most challenging thing about singing in a choir?
I think this is similar to the things that are rewarding about being part of a choir. That sense of vulnerability can be unnerving and can sow seeds of doubt in your mind. “Do I sound ok? Is the audience enjoying the performance? Was that what the conductor wanted?” But you have to put some of these thoughts out of your mind and focus on the job in hand. If you’ve been rehearsing the repertoire for a long time you can sometimes find your mind wandering, and then you’re singing on autopilot. So it can be a challenge to keep trying to find new and interesting things in the music itself.
Also, personality differences between members of the choir, or between singers and conductors, can cause friction. It’s important to strike the right balance so that everyone’s time is used effectively. The dynamic between a conductor and their choir is important in creating a finely tuned machine, and it is different with each conductor and each choir. Sometimes in a small ensemble a “choirocracy” can work, with the singers able to give opinions, but it can make rehearsals tedious, and in a choral society of over a hundred singers it would be a nightmare.
Do you have any advice for someone thinking about joining a choir?
Do it! I think singing in a choir as I grew up really helped my confidence; I used to be very shy but the responsibility my youth choir gave me really brought me out of myself. You get a great feeling of achievement when singing in a choir. I don’t think that changes whether you’re an amateur singing for fun or in a church choir once a week or whether you’re a professional doing it to make a living. I’ve recently spent time working with an “Office Choir”. All of the members work in the same building for a large banking corporation, and they meet up once a week for a rehearsal and perform a couple of concerts a year. It’s great because people who wouldn’t usually talk to each other are engaging over a common interest. So it doesn’t matter whether you’re a CEO, secretary, manager, or an intern; you’re all in the same boat when learning a new piece of music! They all say the same thing: they look forward to Wednesdays now because of their lunchtime rehearsals, and they find themselves feeling a lot more invigorated when they return to their desks afterwards.
Lastly, singing in a choir is a great way to make new friends. Some of my closest friends are people I met at choir aged 7!
Header image credit: St John’s College Chapel by Ed Webster, CC BY 2.0 via Flickr
Not long after the beginning, Genesis tells us that there were two brothers. One killed the other. “And the Lord said, ‘What have you done? Listen; your brother’s blood is crying out to me from the ground’” (Gen. 4:10). This is the Lord’s response when the murderer denies knowing where his brother is and asks, “Am I my brother’s keeper?” We humans are our brothers’ and sisters’ keepers; and yet we have been disowning and killing each other since the beginning.
On this day seventy years ago, the last prisoners were liberated from Auschwitz. On this day today, we commit to remembering the more than six million Jews, Gypsies, homosexuals, and others who were rejected and murdered by their fellow humans. Their blood still cries out to God from the ground.
When we remember the Shoah, always but especially on this day, we must focus our energies on remembering those whom we have lost. Those who murdered them dehumanized them. Let us defy that curse and celebrate their thoughts, beliefs, feelings, and experiences – the fullness of their humanity.
Let us sit in the memory of those whom we have lost. Those of us without memories of our own must humbly seek welcome in the memories of others, to the extent that they are knowable. When we cannot know what the people who perished were like, let us grow in our awareness of our ignorance. We can never fully understand what we have lost. Let us mourn both what we know and what we know we cannot know.
The words of the victims themselves offer the clearest hope of seeing and remembering through their eyes, of knowing what the Shoah was and what it destroyed, even as it defies human comprehension. In 1947, writing from Sweden, poet, refugee, and future Nobel laureate Nelly Sachs offered words of caution to her fellow humans: “We the rescued beg you: Show us your sun slowly. Lead us step by step from star to star. Let us quietly learn to live again. Otherwise the song of a bird or the filling of a bucket at a well could unleash our ill-sealed ache and wash us away.”
There is beauty in the world. There is hope. There is joy to be found in the midst of the ordinary. But there is also great darkness within those around us and even, at times, within ourselves. We have killed our brothers and sisters since the beginning of humanity, it seems. We are each other’s keepers. We learn to keep each other in the present, in part, by keeping the memories alive of those who have gone before, especially of those who were the victims of immeasurable evil.
This remembering can be no mere intellectual exercise of memorizing facts and figures. As heirs of the memories of the victims, we must take their legacy personally. Some felt anger and indignation, a few even hate, but for many the overwhelming response was one of sorrow. Entire communities, villages, towns, families, clans, cultures, and sub-cultures that once thrived are now gone.
The Talmud teaches that “anyone who destroys a life is considered by Scripture to have destroyed an entire world” (Mishnah Sanhedrin 4:9). In the Shoah, more than six million worlds were wiped out. Let us mourn their loss. Let us seek to remember. Their blood still cries out to God. Let us listen.
Image Credit: Holocaust Remembrance Day. Photo by Brittney Bush Bollay. CC by NC-ND 2.0 via Flickr.
There’s a puzzle around economics. On the one hand, economists have the most policy influence of any group of social scientists. In the United States, for example, economics is the only social science that controls a major branch of government policy (through the Federal Reserve), or has an office in the White House (the Council of Economic Advisers). And though they don’t rank up there with lawyers, economists make a fairly strong showing among prime ministers and presidents, as well.
But as any economist will tell you, that doesn’t mean that policymakers commonly take their advice. There are lots of areas where economists broadly agree, but policymakers don’t seem to care. Economists have wide consensus on the need for carbon taxes, but that doesn’t make them an easier political sell. And on topics where there’s a wider range of economic opinions, like over minimum wages, it seems that every politician can find an economist to tell her exactly what she wants to hear.
So if policymakers don’t take economists’ advice, do they actually matter in public policy? Here, it’s useful to distinguish between two different types of influence: direct and indirect.
Direct influence is what we usually think of when we consider how experts might affect policy. A political leader turns to a prominent academic to help him craft new legislation. A president asks economic advisers which of two policy options is preferable. Or, in the case where the expert is herself the decisionmaker, she draws on her own deep knowledge to inform political choices.
This happens, but to a limited extent. Though politicians may listen to economists’ recommendations, their decisions are dominated by political concerns. They pay particular attention to advice that agrees with what they already want to do, and the rise of think tanks has made it even easier to find experts who support a preexisting position.
Research on experts suggests that direct advisory effects are more likely to occur under two conditions. The first is when a policy decision has already been defined as more technical than political—that is, when experts are seen as the appropriate group to decide. So we leave it to specialists to determine what inventions can be patented, or which drugs are safe for consumers, or (with occasional exceptions) how best to count the population. In countries with independent central banks, economists often control monetary policy in this way.
Experts can also have direct effects when possible solutions to a problem have not yet been defined. This can happen in crisis situations: think of policymakers desperately casting about for answers during the peak of the financial crisis. Or it can take place early in the policy process: consider economists being brought in at the beginning of an administration to inject new ideas into health care reform.
But though economists have some direct influence, their greatest policy effects may take place through less direct routes—by helping policymakers to think about the world in new ways.
For example, economists help create new forms of measurement and decision-making tools that change public debate. GDP is perhaps the most obvious of these. A hundred years ago, while politicians talked about economic issues, they did not talk about “the economy.” “The economy,” that focal point of so much of today’s chatter, only emerged when national income and product accounts were created in the mid-20th century. GDP changes have political, as well as economic, effects. There were military implications when China’s GDP overtook Japan’s; no doubt the political environment will change more when it surpasses that of the United States.
Less visible economic tools also shape political debate. When policymakers require cost-benefit analysis of new regulation, conversations change because the costs of regulation become much more visible, while unquantifiable effects may get lost in the debate. Indicators like GDP and methods like cost-benefit analysis are not solely the product of economists, but economists have been central in developing them and encouraging their use.
The spread of technical devices, though, is not the only way economics changes how we think about policy. The spread of an economic style of reasoning has been equally important.
Philosopher Ian Hacking has argued that the emergence of a statistical style of reasoning first made it possible to say that the population of New York on 1 January 1820 was 100,000. Similarly, an economic style of reasoning—a sort of Econ 101-thinking organized around basic concepts like incentives, efficiency, and opportunity costs—has changed the way policymakers think.
While economists might wish economic reasoning were more visible in government, over the past fifty years it has in fact become much more widespread. Organizations like the US Congressional Budget Office (and its equivalents elsewhere) are now formally responsible for quantifying policy tradeoffs. Less formally, other disciplines that train policymakers now include some element of economics. This includes master’s programs in public policy, organized loosely around microeconomics, and law, in which law and economics is an important subfield. These curricular developments have exposed more policymakers to basic economic reasoning.
The policy effects of an economic style of reasoning are harder to pinpoint than, for example, whether policymakers adopted an economist’s tax policy recommendation. But in the last few decades, new policy areas have been reconceptualized in economic terms. As a result, we now see education as an investment in human capital, science as a source of productivity-increasing technological innovations, and the environment as a collection of ecosystem services. This subtle shift in orientation has implications for what policies we consider, as well as our perception of their ultimate goals.
In the end, then, there is no puzzle. Economists do matter in public policy, even though policymakers, in fact, often ignore their advice. If we are interested in understanding how, though, we should pay attention to more than whether politicians take economists’ recommendations—we must also consider how their intellectual tools shape the very ways that policymakers, and all of us, think.
I am pleased to report that A Happy New Year is moving along its warlike path at the predicted speed of one day in twenty-four hours and that it is already the end of January. Spring will come before you can say Jack Robinson, as Kipling’s bicolored python would put it, and soon there will be snowdrops to glean. Etymology and spelling are the topics today. Some other questions will be answered in February.
Sod, seethe, suds
Our correspondent Paul Nance is not satisfied with the idea that sod is related to seethe because the senses don’t match; he also wonders where suds in the triad seethe-sod-suds comes in. As concerns his doubts about sod and seethe, he is in good company. Yet Skeat was probably right and the two words seem to be related. We should first note that sodden, the petrified past participle of seethe, contains the syllable sod. The form of some importance is Dutch zode “sod,” “boiling,” and “heap, a lot,” the latter usually occurring in the forms zooi or zo. It is not immediately clear whether all of them are related and with how many words we are dealing (one, two, or three).
I think the best clue to the sod – seethe question is provided by Engl. suds (the singular sud also exists, but its meaning can be left out of the present discussion). English has a regional verb suddle “to sully,” a congener of German sudeln “to daub; sully; do dirty work,” often translated rather misleadingly as “to botch.” Sudeln is believed to have arisen as the result of the confusion of two different roots: one meant “cook” (compare “boil,” above); the other, which meant “sap, moisture,” referred to small bodies of water (pools, puddles, wells, and so forth) and is present in many words of the Indo-European languages, Old English among them. But it is not the ancient history of sudeln that matters. Engl. suddle looks like a borrowing from Dutch or Low German. The same is true of Standard German sudeln, which does not antedate the 15th century, and of Engl. suds, which goes back to the fifteen-hundreds. They emerged too late to be classified with native words. Finally, the same holds for sod, another fifteenth-century intruder, and here comes the main point: sod is almost certainly allied to suds and suds is almost certainly allied to seethe. By the law of transitivity, sod is also allied to this verb. Mr. Nance writes: “In Upstate New York, sod is only occasionally sodden.” But the semantic history of the entire group (sod, suds, sudeln, and suddle) should be looked for in the Low Countries.
House and hood
Even though house might refer to “covering,” while hood, a cognate of hat, certainly does so, they are not related. The ancient vowel of hood was long o (as in Engl. or, without the r glide after o), while house, from hus, had long u (as in Engl. too), and no bridge connects them.
Engl. house and German Haus
Why do the cognates Engl. brother and German Bruder (to cite one typical example) have only br- in common, while house and Haus sound alike? House and Haus owe their similarity to good luck. It was the so-called German Consonant Shift that drove a wedge between German and the other Germanic languages. Engl. tide and German Zeit “time” are cognates, but the new consonants in Zeit destroyed the similarity. The consonants s and h stayed intact in German, and the vowel (long u) changed the same way in both German and English; hence house and Haus. However, the vowel shift, great or not so great, had partly unpredictable results; compare Dutch huis. The vowel in bread has undergone many changes since the Old English period, and it is hard to believe that both o in German Brot and ea, pronounced as short e, in Engl. bread go back to the same diphthong au. I once knew a student who tried to translate an English text into Russian with the help of a German dictionary and, miraculously, had some success. Foreign languages are tough. One’s mother tongue may also look foreign. Thus, ea in bread, as opposed to e in bred, does not increase the amount of happiness of English spellers, and the horror of lead/led is known to many of us.
Thomas Lambdin, Professor in the Harvard Department of Near Eastern Studies, once suggested that the Latin adjective antiquus “old, ancient” was a borrowing of Aramaic attiq “old.” One of his former students asked me what I could say about this conjecture. I have known for a long time that scholars’ etymologies of English words depend very strongly on their professional orientation. Those linguists who specialize in Old Norse point to possible Scandinavian etymons of English words, while Romance scholars find equally plausible Old French roots. (I am not speaking of the monomaniacs who trace all words of English, and not only of English, to Hebrew, Irish, Slavic, and so forth: those are simply crazy.) Similar things happen in some other areas. Modern linguistics is strongly influenced by the concepts of English phonetics and syntax, because the Chomskyan revolution, before spreading to the rest of the world, took place in the United States, and its creator was a native speaker of English. Someone noted that, if N. S. Trubetzkoy had not been a native speaker of Russian, some of the central ideas developed in his epoch-making book The Bases of Phonology (Grundzüge der Phonologie) might not have occurred to him.
Professor Lambdin is an expert in Semitic linguistics and, naturally, receives impulses from the material he knows best. I happen to be well acquainted with his books and even reviewed the etymologies offered in his untraditional manual of Gothic. It is true that the etymology of antiquus entails several difficulties, but, in my opinion, the suggestion that the adjective came from Aramaic is hard to justify. As usual, the closeness of forms is not a sufficient argument. We would like to know why such a basic concept had to be taken over from a foreign language, under what circumstances the borrowing took place, and whether it filled a lacuna in Latin or superseded a native synonym. In the absence of additional arguments I would stay away from such a bold hypothesis.
Dwell and its Latvian parallels
I read the comment on the subject indicated in the title of this section with great interest. Such parallels are of the utmost importance. They prove nothing but add credence to some of our conjectures. If a certain semantic shift happened in one language, it may, theoretically speaking, have happened in another. In etymology, high probability and verisimilitude are often the only criteria of truth. That is why Carl Darling Buck’s dictionary of synonyms in the Indo-European languages is so useful.
Spelling and spelling reform
Spelling: whose cup of tea?
One of our correspondents wonders why Modern English spelling is so irrational. It would take a book to answer this question in detail, but the main reasons are two.
After the Norman Conquest of 1066 French and French-educated scribes imposed their habits on English spelling, and the medieval norm has more or less stayed intact to this day.
The second reason is the loyalty of English to foreign spelling. The Spanish don’t mind writing fútbol, while English speakers live with monsters like committee, though one m and one t would have been quite enough. Nor do we need sugar, chagrin, and shrine, to say nothing of fuchsia, despite its origin in a proper name.
Thus, the chaos most of us bemoan stems from reverence for tradition. Shureli, a tru skolar wud be imensli shagrind if he were made to put a spoon of shugar in his cup of tee. The tee would taste bitter and the world wud kolaps, wudnt it?
News about spelling reform
I am afraid of sounding too optimistic, but it may be that the Spelling Society is making progress; that is, it seems to have feasible plans for effecting the reform, and not only ideas about how to spell the words of Modern English. English children take up to two years longer to master basic words than children in other countries (the torture imposed on dyslexics and foreigners should not be forgotten either, for aren’t we all against torture?). The sound system of English is such that we’ll never reach the elegance of Finnish spelling, but something can and should be done. For that purpose, the institution of an INTERNATIONAL ENGLISH SPELLING CONGRESS has been proposed. Everyone is welcome to join it. The Expert Committee will be appointed by the delegates, who will make the final decision on the alternative scheme. The main virtue of the proposal is that it seeks to engage as many people in the movement as possible. Some publishers of widely read journals are already showing an interest in the cause. The public should be informed that the preservation of the status quo has serious negative economic consequences. It is no longer a virtue to smoke. Perhaps the Spelling Congress will be able to explain to the world that retaining a medieval norm in spelling (arguably the most complicated in the world) is not a virtue either. Mr. Stephen Linstead, the Chairman of the Society, spoke on the BBC and was mocked by many for offering to tamper with a thing of beauty. This is a good sign: there is no success without public outrage before a novelty is accepted. A report of these events has also been published by the Chicago Tribune.
Galileo and some of his contemporaries left careful records of their telescopic observations of sunspots – dark patches on the surface of the sun, the largest of which can be larger than the whole earth. Then in 1844 a German apothecary reported the unexpected discovery that the number of sunspots seen on the sun waxes and wanes with a period of about 11 years.
Initially nobody considered sunspots as anything more than an odd curiosity. However, by the end of the nineteenth century, scientists started gathering more and more evidence that sunspots affect us in strange ways that seem to defy all known laws of physics. In 1859 Richard Carrington, while watching a sunspot, accidentally saw a powerful explosion above it, which was followed a few hours later by a geomagnetic storm – a sudden change in the earth’s magnetic field. Such explosions – known as solar flares – occur more often around the peak of the sunspot cycle, when there are many sunspots. One of the benign effects of a large flare is the beautiful aurora seen around the earth’s poles. However, flares can have other, disastrous consequences. A large flare in 1989 caused a major electrical blackout in Quebec, affecting six million people.
Interestingly, Carrington’s flare of 1859, the first flare observed by any human being, has remained the most powerful flare observed to date. It is estimated that this flare was three times as powerful as the 1989 flare that caused the Quebec blackout. The world was technologically a much less developed place in 1859. If a flare of the same strength as Carrington’s were to unleash its full fury on the earth today, it would cause havoc – disrupting electrical networks, radio transmission, high-altitude flights, satellites, and various communication channels – with damages running into many billions of dollars.
There are two natural cycles – the day-night cycle and the cycle of seasons – around which many human activities are organized. As our society becomes technologically more advanced, the 11-year cycle of sunspots is emerging as the third most important cycle affecting our lives, although we have been aware of its existence for less than two centuries. We have more solar disturbances when this cycle is at its peak. For about a century after its discovery, the 11-year sunspot cycle was a complete mystery to scientists. Nobody had any clue as to why the sun has spots and why spots have this cycle of 11 years.
A first breakthrough came in 1908 when Hale found that sunspots are regions of strong magnetic field – about 5000 times stronger than the magnetic field around the earth’s magnetic poles. Incidentally, this was the first discovery of a magnetic field in an astronomical object and was eventually to revolutionize astronomy, with subsequent discoveries that nearly all astronomical objects have magnetic fields. Hale’s discovery also made it clear that the 11-year sunspot cycle is the sun’s magnetic cycle.
Matter inside the sun exists in the plasma state – often called the fourth state of matter – in which electrons break out of atoms. Major developments in plasma physics within the last few decades at last enabled us to systematically address the questions of why sunspots exist and what causes their 11-year cycle. In 1955 Eugene Parker theoretically proposed a plasma process known as the dynamo process capable of generating magnetic fields within astronomical objects. Parker also came up with the first theoretical model of the 11-year cycle. It is only within the last 10 years or so that it has been possible to build sufficiently realistic and detailed theoretical dynamo models of the 11-year sunspot cycle.
Until about half a century ago, scientists believed that our solar system basically consisted of empty space around the sun through which planets were moving. The sun is surrounded by a million-degree hot corona – much hotter than the sun’s surface with a temperature of ‘only’ about 6000 K. Eugene Parker, in another of his seminal papers in 1958, showed that this corona will drive a wind of hot plasma from the sun – the solar wind – to blow through the entire solar system. Since the earth is immersed in this solar wind – and not surrounded by empty space as suspected earlier – the sun can affect the earth in complicated ways. Magnetic fields created by the dynamo process inside the sun can float up above the sun’s surface, producing beautiful magnetic arcades. By applying the basic principles of plasma physics, scientists have figured out that violent explosions can occur within these arcades, hurling huge chunks of plasma from the sun that can be carried to the earth by the solar wind.
The 11-year sunspot cycle is only approximately cyclic. Some cycles are stronger and some are weaker. Some are slightly longer than 11 years and some are shorter. During the seventeenth century, several sunspot cycles went missing and sunspots were not seen for about 70 years. There is evidence that Europe went through an unusually cold spell during this epoch. Was this a coincidence or did the missing sunspots have something to do with the cold climate? There is increasing evidence that sunspots affect the earth’s climate, though we do not yet understand how this happens.
Can we predict the strength of a sunspot cycle before its onset? The sunspot minimum around 2006–2009 was the first during which sufficiently sophisticated theoretical dynamo models of the sunspot cycle existed, and whether these young models could correctly predict the upcoming cycle became a challenge for them. We are now at the peak of the present sunspot cycle, and its strength agrees remarkably well with what my students and I predicted in 2007 from our dynamo model. This is the first such successful prediction from a theoretical model in the history of our subject. But is it merely a lucky accident that our prediction has been successful this time? If our methodology is used to predict more sunspot cycles in the future, will this success be repeated?
Headline image credit: A spectacular coronal mass ejection, by Steve Jurvetson. CC-BY-2.0 via Flickr.
Is it better to be positive or negative? Many of the most vivid public health appeals have been negative – “Smoking Kills” or “Drink, Drive, and Die” – but do these negative messages work when it comes to changing eating behavior?
Past literature reviews of positive- or gain-framed versus negative- or loss-framed health messages have been inconsistent. In our content analysis of 63 nutrition education studies, we identified four key questions that can resolve these inconsistencies and help predict which type of health message will work best for a particular target audience. The more of these questions are answered with a “Yes,” the more effective a negative- or loss-framed health message will be.
Is the target audience highly involved in this issue?
The more knowledgeable or involved a target audience, the more strongly they’ll be motivated by a negative- or loss-based message. In contrast, those who are less involved may not believe the message or may simply wish to avoid bad news. Less involved consumers generally respond better to positive messages that provide a clear, actionable step that leaves them feeling positive and motivated. For instance, telling them to “eat more sweet potatoes to help your skin look younger” is more effective than telling them “your skin will age faster if you don’t eat sweet potatoes.” The former doesn’t require them to know why or to link sweet potatoes to Vitamin A.
Is the target audience detail-oriented?
People who like details – such as most of the people designing public health messages – prefer negative- or loss-framed messages. They have a deeper understanding and knowledge base on which to elaborate on the message. In her coverage of the article for the Food Navigator, Elizabeth Crawford noted that most of the general public is not interested in the details and is more influenced by the more superficial features of the message, including whether it is more positive or attractive relative to the other things vying for their attention at that moment.
Is the target audience risk averse?
When a positive outcome is certain, gain-framed messages work best (“you’ll live 7 years longer if you are a healthy weight”). When a negative outcome is certain, loss-framed messages work best (“you’ll die 7 years earlier if you are obese”). For instance, we found that if it is believed that eating more fruits and vegetables leads to lower obesity, a positive message (“eat broccoli and live longer”) is more effective than a negative message.
Is the outcome uncertain?
When claims appear factual and convincing, positive messages tend to work best. If a person believes that eating soy will extend their life by reducing their risk of heart disease, a positive message stating this is best. If they aren’t as convinced, a more effective message could be “people who don’t eat soy have a higher rate of heart disease.”
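The four questions above can be read as a simple scoring heuristic: the more “yes” answers, the stronger the case for a loss-framed message. Here is a toy sketch of that idea; the function name and the majority threshold are illustrative assumptions, not part of the study itself.

```python
def recommend_framing(highly_involved: bool,
                      detail_oriented: bool,
                      negative_outcome_certain: bool,
                      outcome_uncertain: bool) -> str:
    """Count 'yes' answers to the four audience questions.

    More 'yes' answers favor a negative/loss-framed message.
    The majority threshold (3 of 4) is an illustrative assumption.
    """
    yes_count = sum([highly_involved, detail_oriented,
                     negative_outcome_certain, outcome_uncertain])
    return "loss-framed" if yes_count >= 3 else "gain-framed"

# A less-involved, detail-averse general audience facing an uncertain
# but believable claim gets a positive, actionable message.
print(recommend_framing(False, False, False, False))  # gain-framed
```

In practice no single cutoff applies to every audience; the point is only that the answers accumulate, tilting the choice of framing one way or the other.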
These findings show that those who design health messages, such as health care professionals, respond to them differently than the general public does. When writing a health message, rather than appealing to the sensibilities of the experts, it is more effective to frame the message positively. The general public is more likely to adopt the behavior being promoted if they see that there is a potential positive outcome. Evoking fear may seem like a good way to get your message across, but this study shows that, in fact, the opposite is true: telling the public that a behavior will help them be healthier and happier is more effective.
One of Glasgow’s best-known tourist highlights is its Victorian Necropolis, a dramatic complex of Victorian funerary sculpture in all its grandeur and variety. Christian and pagan symbols, obelisks, urns, broken columns and overgrown mortuary chapels in classical, Gothic, and Byzantine styles convey the hope that those who are buried there—the great and the good of 19th century Glasgow—will not be forgotten.
But, of course, they are mostly forgotten and even the conspicuous consumption expressed in this extraordinary array of great and costly monuments has not been enough to keep their names alive. And, of course, we, the living, will soon enough go the same way: ‘As you are now, so once was I’, to recall a once-popular gravestone inscription.
Is this the last word on human life? Religion often claims to offer a different perspective on death since (it is said) the business of religion is not with time, but with eternity. But what, if anything, does this mean?
‘Eternal love’ and ‘eternal memory’ are phrases that spring to the lips of lovers and mourners. Even in secular France, some friends of the recently murdered journalists talked about the ‘immortality’ of their work. But surely that is just a way of talking, a way of expressing our especially high esteem for those described in these terms? And even when talk of eternity and immortality is meant seriously, what would a human life that had ‘put on immortality’ be like? Would it be recognizably human at all? As to God, can we really conceive of what it would be for God (or any other being) to somehow be above or outside of time? Isn’t time the condition for anything at all to be?
If we really take seriously the way in which time pervades all our experiences, all our thinking, and (for that matter) the basic structures of the physical universe, won’t it follow that the religious appeal to eternity is really just a primitive attempt to ward off the spectre of transience, whilst declarations of eternal love and eternal memory are little more than gestures of feeble defiance and that if, in the end, there is anything truly ‘eternal’ it is eternal oblivion—annihilation?
Human beings have a strong track record when it comes to denying reality.
One fashionable book of the post-war period was dramatically entitled The Denial of Death and it argued that our entire civilization was built on the inevitably futile attempt to deny the ineluctable reality of death. But if there is nothing we can do about death, must we always think of time in negative terms—the old man with the hour-glass and scythe, so like the figure of the grim reaper?
And instead of thinking of eternity as somehow beyond or above time, might not time itself offer clues as to the presence of eternity, as in what mystics and meditators report as momentary experiences of eternity in, with, and under the conditions of time? But such experiences, valuable as they are to those who have them, remain marginal unless they can be brought into fruitful connection with the weave of past and future.
From the beginnings of philosophy, recollection has been valued as an important clue to finding the tracks of eternity in time, as in Augustine’s search for God in the treasure-house of memory. But the past can only ever give us so much (or so little) eternity.
A recent French philosopher has proposed that time cannot undo our having-been and that the fact that the unknown slave of ancient times or the forgotten victim of the Nazi death-camps really existed means that the tyrants have failed in their attempt to make them non-human. But this is a meagre consolation if we have no hope for the future and for the flourishing of all that is good and true in time to come. Really affirming the enduring value of human lives and loves therefore presupposes the possibility of hope.
One Jewish sage taught that ‘In remembering lies redemption; in forgetfulness lies exile’, but perhaps what it is most important to remember is the possibility of hope itself: of going on saying ‘Yes’ to the common, shared reality of human life and of reconciling the multiple broken relationships that mortality leaves unresolved.
Pindar, an ancient poet of hope, wrote that ‘modesty befits mortals’ and if we cannot escape time (which we probably cannot), it is maybe time we have to thank for the possibility of hope and for visions of a better and more blessed life. And perhaps this is also the message that a contemporary graffiti-artist has added to one of the Necropolis’s more ruined monuments. ‘Life goes on’, either extreme cynicism or, perhaps, real hope.
Featured image credit: ‘Life goes on.’ Photo by George Pattison. Used with permission.
Last year was an important year in the field of public health. In 2014, West Africa, particularly Sierra Leone, Liberia, and Guinea, experienced the worst outbreak of the Ebola virus in history, and with devastating effects. Debates around e-cigarettes and vaping became central, as more research was published about their health implications. Conversations surrounding nutrition and the spread of disease through travel and migration continued in the media and among experts.
We’ve chosen a selection of articles that discuss public health issues that arose in 2014, their effects on the present and implications for the future.
Header image: US specialist helping Afghan nomads by Sfc. Larry Johns (US Army). Public domain via Wikimedia Commons.
2015 may be a watershed year for one part of the UK economy—the market for legal services.
Much is made of London’s status as the world’s legal capital. This has nothing to do with the legal issues that most people encounter, involving crime, wills, houses, or divorce. It concerns London’s pre-eminence in the resolution of international commercial disputes: those substantial business disputes, often involving foreign parties or contracts performed abroad, which might in principle be heard anywhere. That an English court is everyone’s court of choice in such cases has long been an article of faith, at least for English lawyers.
English law is often chosen as the law governing commercial contracts, even those having little or no connection with England, because it is valued for its certainty and commercial approach. So whether, say, a German company is liable for failing to perform a contract in Kazakhstan may depend on English rules. If English law is to be applied, however, it is perhaps obvious that this will be done best in the English courts. Those courts are also widely respected for their impartiality, for the quality of the judges, and for their experience in commercial matters. The quality and expertise of English lawyers, confirmed in a recent survey, and the availability of remedies unknown elsewhere, notably injunctions to prevent foreign proceedings and to freeze a defendant’s foreign assets, are also powerful attractions.
The assumption that London is the market leader in commercial disputes is also reflected in the numbers. Since 2008, when cases arising from the economic downturn began to emerge, more than 1,000 claims have been made each year in the London Commercial Court (housed in the state-of-the-art Rolls Building). But it is the nature of these cases, not the quantity, which is striking. 81% of those started in 2013 involved at least one foreign party, and 48% involved no party from the UK at all. The message is that the Court is not just a national court, but an international court favoured by litigants from around the world who could no doubt have taken their dispute elsewhere.
The effects of this dominance are significant. English law is recognized as setting the standard in resolving business disputes, and English judgments (and the work of English writers) are widely read around the world. The economic value of such disputes is also considerable, and the resolution of such cases is a major invisible export.
But London’s profile in resolving transnational disputes cannot be taken for granted. Even if the parties’ obligations are subject to English law, how their dispute is handled may not be. Whether a court can hear a case at all (the issue of jurisdiction) is in large part governed by EU law. Cases may ultimately be resolved not in London, but by the European Court in Luxembourg, under rules less flexible, and less commercially attuned, than the English courts have traditionally used. This matters because jurisdiction is at the heart of most commercial cases.
The threats to London’s prominence are also home-grown. The much-prized certainty of English contract law has become less secure as the courts have toyed with requiring parties to comply not just with a contract’s terms but with an ill-defined duty of good faith. The courts have also become increasingly intolerant of failure to meet procedural deadlines, a hard thing to achieve in complex cases, which undermines (or may be seen to undermine) their traditionally flexible, common-sense approach to litigation. They are also less willing to allow lengthy arguments about which country’s courts should hear a case, a particular issue when so many cases have little connection with England, even though for the parties, at least, that question is usually at the heart of their dispute. Most striking of all, the government has proposed charging premium fees for using the Commercial Court, significantly increasing the cost of litigation, partly to reflect concerns that the taxpayer should not be subsidising a court largely used by foreign litigants.
Some courts have sought to limit the fallout from this new approach, at least as regards contractual certainty and inflexible case management. There are also signs that the government has back-tracked on the controversial proposal to penalize commercial litigants with higher fees, given concerns that London’s dominance in the legal marketplace would suffer. But the genie is out of the bottle, and lawyers have become uneasy about official commitment to London’s role as a legal hub.
Such doubts, justified or not, are a dangerous thing in a competitive market, and London certainly faces increased competition from overseas courts. New commercial courts established in Dubai, Qatar and Singapore, generously funded by the state, may threaten London’s traditional dominance.
Neither the English courts nor Parliament can resolve the uncertainties of EU procedural law—short of leaving the EU altogether. But any damage done by making English law less certain, or by over-regulating civil disputes, or by exposing litigants to additional costs, is avoidable. Whether such self-inflicted injuries can be avoided and whether the English courts remain competitive depends, however, on making a choice—a choice for the courts and for politicians. Whether or not London is the world’s legal capital, do we want it to be?
The international standing of the English courts is unlikely to be featured in most people’s New Year’s resolutions. But for the courts and government perhaps it should.
Image Credit: Courts Closed. Photo by Chris Kealy. CC BY-NC-SA 2.0 via Flickr.
The timeline below shows the development of data privacy laws across numerous Asian territories over the past 35 years. In each case it maps the year a data privacy law or equivalent was created, as well as providing some further information about each. It also maps the major guidelines and pieces of legislation from various global bodies, including those mentioned above.
Featured image credit: Data (scrabble), by justgrimes. CC-BY-SA 2.0 via Flickr.
Even in this place one can survive, and therefore one must want to survive, to tell the story, to bear witness; and that to survive we must force ourselves to save at least the skeleton, the scaffolding, the form of civilization. We are slaves, deprived of every right, exposed to every insult, condemned to certain death, but we still possess one power, and we must defend it with all our strength for it is the last — the power to refuse our consent.
― Primo Levi, Survival in Auschwitz
On the 70th anniversary of the liberation of the German Nazi concentration and death camp at Auschwitz, I hope we can keep telling the stories of survival and miracles that the victims experienced. But never shall we forget the six million Jews that were murdered. There are many stories of the Shoah (Holocaust) that are told over and over again by survivors, witnesses, and children of survivors. Today, the tenuous relationship between Jews and Muslims around the world echoes negative sentiments and feelings about these two rich traditions. Anti-Semitism has been on the rise in Europe and unfortunately some of the weight of this tide rests on the shoulders of Muslim immigrants in Europe.
As an Islamic and Holocaust scholar, I was always saddened to witness such animosity and tension between the two traditions and decided to take another turn in the field of the Holocaust: Muslims and the Holocaust. I am a Muslim woman who teaches the Holocaust, Genocide, World Religions, and Islam; many questions are raised about my work and identity. Some scholars and community members view the two areas of study, the Holocaust and Islam, as contradictory; they seem puzzled and, at times, accuse me of being “divided.” They ask me: “How can you teach two unrelated fields? How can a Muslim teach the Holocaust? What kind of a scholar are you?” I am amused by these questions as I think of how much esoteric knowledge rests on dusty shelves, for I believe there is an important connection between my two areas of research.
My work has steered me to confront my own Muslim community on the suffering of “others,” which I argue can become a bridge of mutual understanding and interreligious dialogue. How can we create interreligious dialogue and confront the suffering of one another at different historical moments? How can we discuss and sustain dialogue, which by its very nature also risks dehumanizing the “other”? What aspects about Islam and about the Holocaust might connect both Muslims and Jews? And in a greater sense, what does my work offer students, communities, and academia? These and other questions haunt me every day, knocking on my faith, my study of Holocaust memoirs, my study of new research on Muslims and Jews during the Holocaust and colonialism.
The stories of Muslim rescuers and of the relationship between Jews and Muslims in Arab countries have been lost under the noise of media portrayals of these faiths as being at war throughout time. The Israeli-Palestinian conflict seems to define the relationship for the rest of us, and I feel that we must change that for the future of Judaism and Islam. Telling the stories of positive cooperation between Jews and Muslims is crucial in my work. Reflecting on the deep-rooted anti-Semitism and Islamophobia within each community is equally important.
Teaching the Holocaust to young students with very little knowledge of the Holocaust or Islam has been challenging. I invite Holocaust survivors to visit our classes, and the students are stunned and shocked by the stories of survival and loss. The personal connection creates an intimate reaction within the classroom, and that is why I embarked on the idea of interviewing survivors. Interviewing survivors as a Muslim was an uncomfortable experience because I did not know what to expect, and neither did they. There is one man I will never forget for the rest of my life:
On February 27th, 2010, I looked into the sky-blue eyes of Albert Rosa, an 85-year-old Shoah survivor, for three hours as he spoke about his experience at Auschwitz-Birkenau. As I left him, he told me with tears in his eyes that he wanted someone to write his life story, since he had very little formal education and would not be able to express in writing his feelings on the Shoah. He asked me, “How can I express in words how I felt when my sister was bludgeoned to death in front of me by a Nazi woman, or when I saw my elder brother hanging from a rope when I had tried to defend him?” I looked into his eyes, which had pierced me all day, and wondered how I could tell his story in words without losing the sense of the emotional and physical strength it had taken him to survive the horror of his life in the camps. He spoke of maggots crawling on his body as he was ordered to move the dead Jewish bodies, the gold he stole from the teeth of the dead, the urine he saved to nurse the wounds inflicted by a German Shepherd, the plant roots that he dug out with his fingers for nourishment, the ashes he swallowed from the crematorium as he helped build Birkenau. How was I to give these events any life with mere words? These feelings of paralysis emerge as I write this testimony; how can I give the Shoah a life of its own without trespassing on politics, ethics, and the millions of victims? In some ways, I felt like abandoning this project because I feared that I could not do it justice. (Shoah through Muslim Eyes (Academic Studies Press, 2015))
Finally, I hope to take the testimonies of survivors, the lost stories of Muslims during the Holocaust, and the memory of two traditions to a new level, where each community can speak up for the other.
This Christmas, London’s Royal Opera House played host to Christopher Wheeldon’s critically acclaimed Alice’s Adventures in Wonderland, performed by the Royal Ballet and with a score by Joby Talbot. Indeed, Lewis Carroll’s seminal work Alice’s Adventures in Wonderland (1865) has long inspired classical compositions, in forms as diverse as ballet, opera, chamber music, song, as well as, of course, film scores. Examples include English composer Liza Lehmann’s Nonsense songs (1908); American composer Irving Fine’s two sets of Choruses from Alice in Wonderland (1949 and 1953); and contemporary composer Wendy Hiscock’s ‘Jill in the box’, commissioned by the BFI to accompany the first footage of Alice in Wonderland – a 1903 silent film directed by Percy Stow and Cecil Hepworth.
In the Oxford catalogue, the influence of Alice’s Adventures in Wonderland can be seen in choral pieces by Maurice Bailey, Bob Chilcott, and Sarah Quartel, and it is interesting to observe the similarities in their treatment of this famous text. Maurice Bailey selects seven poems from the book to produce a set of seven songs for upper voices and piano or instrumental ensemble. The set begins with a short narration—a direct quotation of the book’s first four paragraphs—and the first song takes up the image of Alice sitting by the riverbank, setting the scene with the performance direction ‘like a warm and lazy summer afternoon’. Each song has a distinct character:
‘Twinkle, twinkle, little bat!’ is jovial, with a gentle swing feel;
‘You are old, Father William’ is solemn and dramatic;
‘How doth the little crocodile’ is a peaceful, chorale-like setting;
‘Will you walk a little faster?’ has a deliberate feel, featuring call-and-response imitation;
‘Beautiful Soup’ is in the manner of a leisurely waltz; and
‘They told me you had been to her’ is mysterious and energetic, with evocative musical language.
In all the songs, the piano or instrumental ensemble is a key component in the drama, rather than being simply a supportive accompanying force. There is also some scat singing, recitation, and spoken text. ‘You are old, Father William’ in particular exploits recitation to great dramatic effect, requiring a member of the choir to take on the part of Father William, which is entirely spoken, while the rest of the choir adopt the role of narrator, with sung interjections that complete the story.
Chilcott’s Mouse Tales, for SA and piano, is in two movements: the second setting the familiar poem ‘The Mouse’s Tale’ from the published version of Alice’s Adventures in Wonderland; and the first setting the poem that Carroll included in its place in his original manuscript. Both movements have an abundance of character, and Chilcott marks the first movement ‘sassy’, a term that perfectly describes the musical style and that encourages the singers to give a characterful performance. The first movement has a jazz flavour, while the energetic second movement features driving ostinatos in the piano and accents in the vocal lines that place emphasis on unexpected beats of the bar, keeping the singers on their toes. Like Bailey, Chilcott employs scat singing and spoken interjections such as ‘you did?’ and ‘nice!’ for dramatic effect, as well as a catchy refrain to present the well-known proverb ‘when the cat’s away, then the mice will play’.
Unlike the other two composers, Sarah Quartel uses Carroll’s story as the basis for her own text, in which we encounter characters such as the White Rabbit, the Cheshire Cat, and the Hatter. The piece, for SSA and piano, has great potential for dramatic performance, with sections of a cappella scat singing and spoken text and a catchy refrain that centres around the Cheshire Cat’s declaration that ‘we’re all mad here’, where the part-writing encourages playful interaction between the different sections of the choir. The choir adopts the role of Alice, and Quartel helps the singers to convey Alice’s responses to the narrative through performance directions such as ‘with distinct character, telling a story’, ‘playful, like a caucus-race’, ‘indignant!’, and ‘with awe!’. Naturally, the music itself contributes to the characterization. For example, a march-like figure is employed to represent the Queen, while the music for the flustered White Rabbit features rapidly ascending and descending scales in the piano. Indeed, once again, the piano is a key component in the portrayal of the drama, and the rapid movement through different keys also helps to convey Alice’s mixture of confusion and wonder at the strange world she inhabits.
As we have seen, there are certain similarities in the three composers’ responses to this influential work of children’s literature. Perhaps unsurprisingly, each of the composers elected to write for upper voices, so that their settings might be performed by children’s choir. Imaginative and descriptive performance directions play an important part, assisting the singers in their characterization of the unusual protagonists in the story that they are telling. Again, unsurprisingly, the book appears to inspire a certain theatricality in the writing and music; it requires the performers to give a dramatic performance that has a strong sense of fun. Spoken text and scat singing are also prevalent in all three works, and the piano makes an integral contribution to the musical characterization. With its adventurous heroine, extraordinary characters, and unapologetic celebration of the quirky and the ‘mad’, it is little wonder that the text has proven a source of inspiration for composers since its inception and will undoubtedly continue to do so.
Headline image credit: Illustration for the 'Caucus-Race' chapter of Alice's Adventures in Wonderland. Image by Gertrude Kay. Public domain via Wikimedia Commons.
While nascent talk of the Holocaust was in the air when I was growing up in New York City, we did not learn about it in school, even in lessons about World War II or the waves of immigration to America’s shores. There were no public memorials or museums to the murdered millions, and the genocide of European Jewry was subsumed under talk of “the war.”
My father was a somber man who arrived here from Poland after the war and, like many survivors, kept to himself, trying his best to block out the past. Growing up, my connection to my father’s lost world consisted of names mentioned in hushed tones and photographs retrieved from hidden boxes.
But as I grew older, I watched with great interest, more than a little curiosity, and a good deal of relief as it became more acceptable to talk about "our" tragedy. By the 1980s, lessons about the genocide of European Jewry became de rigueur in high schools throughout the nation. In the following decade, people could flock to a hulking museum in our nation's capital that told the story for all who cared to listen.
The Holocaust became a universal moral touchstone that called upon us to defend our common humanity against the capacity for evil. But today, on the eve of International Holocaust Remembrance Day on 27 January, the lesson we Jews seem to draw from our history is that those outside the tribe cannot be trusted.
In the wake of the recent terrorist attack on a kosher food store in Paris, and as anti-Semitism rises in France and elsewhere, these fears seem understandable. I know these kinds of fears well. Even in the relative comfort of his postwar existence, my father had a recurring nightmare that he was being chased by German shepherds.
But when such fears lead to catastrophic thinking, they harden our hearts to the suffering of others and contribute, paradoxically, to a sense of Holocaust fatigue among many Jewish Americans — particularly younger ones.
“I’m sick of the Holocaust as a shorthand for ‘we suffered more than you, so we should get the piece of cake with the rosette on it,’” a 20-something columnist wrote in the Forward. Peter Beinart in The Crisis of Zionism argues that the growing emphasis on the Holocaust in American life beginning in the 1960s and 1970s marked the end of Jewish universalism.
“Liberalism was out,” Beinart wrote. “Tribalism was in.”
Beinart and others are partly right: Holocaust trauma is too readily exploited. But historically, Holocaust commemoration efforts have been more than simply exercises in tribalism. They often emerged from an urge to acknowledge and alleviate human suffering writ large.
Raphael Lemkin, the Polish-Jewish legal scholar and Holocaust survivor who coined the term “genocide” and fought to have the concept recognized by the United Nations, exemplified this impulse. So did the mobilization of the Holocaust second generation. Descendants of survivors, empowered by the progressive movements of the 1960s and 1970s, coaxed our parents to share their stories. The Holocaust consciousness we helped build was part of a larger search for self-expression and human rights.
Today, many Holocaust commemoration activities reflect this universal spirit as well, including the U.S. Holocaust Memorial Museum’s efforts to promote awareness of genocide in Sudan and elsewhere. Jewish-American donors provided the bulk of the funds for a memorial to the more than two million Cambodians murdered during the brutal reign of the Khmer Rouge, an acknowledgement of a shared tragic history.
These and other efforts to remember the suffering of others should be applauded, but they must be more than window dressing. They should also spur our own collective soul-searching. Committing funds for projects in places where Jews have few political or emotional investments, such as Cambodia or Sudan, is relatively easy. Subjecting our own deeply felt loyalties to Israel to scrutiny is a much more difficult, but no less important, task.
The truth is that at times our privileges may in fact be implicated in the suffering of others in the Palestinian territories, where life is brutal and frequently too short. A sense of hopelessness prevails among both Israelis and Palestinians, fueling acts of desperation and violence in the Middle East and beyond.
A chorus of leaders on both sides is promoting a politics of fear, declaring I cannot be my brother’s keeper when my brother is out to murder me. But on this Holocaust Remembrance Day, let us honor the memory of the parents and grandparents, uncles and aunts, and all of the unknown others we have lost by resisting such talk and redoubling our efforts to seek peace.
Meet Utricularia. It’s a bladderwort, an aquatic carnivorous plant, and one of the fastest things on the planet. It can catch its prey in a millisecond, accelerating it up to 600g.
Once caught inside, the prey suffocates and digestive enzymes break down the unfortunate creature for its nutrients. Anything small enough to be pulled in won't know its mistake until it's too late. But as lethal as the trap is, it did seem to have some flaws. The traps don't just catch animals; they catch anything that gets sucked in, so often that's algae and pollen too.
A team at the University of Vienna led by Marianne Koller-Peroutka and Wolfram Adlassnig closely examined Utricularia and found the plants were not very efficient killers. Studying over 2000 traps showed that only about 10% of the objects sucked in were animals. Animals are great if you want nutrients like nitrogen and phosphorus, but half of the catch was algae and a third pollen.
What was more puzzling was that not all the algae entered with an animal. If a bladder is left for a long while, it will trigger anyway. No animal is needed; algae, pollen, and fungi will enter. Is this a sign that the plant is desperate for a meal, and hoping an animal is passing? Koller-Peroutka and Adlassnig found that the traps catching algae and pollen grew larger and had more biomass. Examining the bladders under a microscope showed that algae caught in the traps died and decayed. This was more evidence that it’s happy to eat other plants too. It seems that it’s not just animals that Utricularia is hunting.
Koller-Peroutka and Adlassnig say this is why Utricularia is able to live in places with comparatively few animals. Nitrogen from animals and other elements from plants mean it is happy with a balanced diet. It can grow more and bigger traps, and use these for catching animals or plants or both.
Fortunately even the big traps only catch tiny animals, so if someone has bought you one for Christmas you can leave it on the dinner table without losing your turkey and trimmings in a millisecond.
The field of anaesthesia is a subtle discipline: when properly applied, the patient falls gently asleep, miraculously waking up with one less kidney or even a whole new nose. Today, anaesthesiologists have perfected measuring the depth and risk of anaesthesia, but these breakthroughs were hard-won. The history of anaesthesia is resplendent with pus and cadavers; each new development moved one step closer to the art of the modern anaesthesiologist, who can send you to oblivion and float you safely back. This timeline marks some of the most macabre and downright bizarre events in its long history.
Heading image: Junker-type inhaler for anaesthesia, London, England, 1867-1 Wellcome L0058160. Wellcome Library, London. CC BY 4.0 via Wikimedia Commons.
For our second blog post of 2015, we’re looking back at a great article from Katie Kuszmar in The Oral History Review (OHR), “From Boat to Throat: How Oral Histories Immerse Students in Ecoliteracy and Community Building” (OHR, 41.2.) In the article, Katie discussed a research trip she and her students used to record the oral histories of local fishing practices and to learn about sustainable fishing and consumption. We followed up with her over email to see what we could learn from high school oral historians, and what she has been up to since the article came out. Enjoy the article, and check out her current work at Narrability.com.
In the article, you mentioned that your students’ youthful curiosity, or lack of inhibition, helped them get answers to tough questions. Can you think of particular moments where this made a difference? Were there any difficulties you didn’t expect, working with high school oral historians?
One particular moment was at the end of the trip. Our final interview was with the Monterey Bay Aquarium’s (MBA) Seafood Watch public relations coordinator, who was kind enough to arrange the fisheries historian interviews and offered to be one of the interviewees as well. When we finally interviewed the coordinator, the most burning question the students had was whether or not Seafood Watch worked directly with fishermen. The students didn’t like her answer. She let us know that fishermen are welcome to approach Seafood Watch and that Seafood Watch is interested in fishermen, but they didn’t work directly with fishermen in setting the standards for their sustainable seafood guidelines. The students seemed to think that taking sides with fishermen was the way to react. When we left the interview they were conflicted. The Monterey Bay Aquarium is a well-respected organization for young people in the area. The aquarium itself is full of nostalgic memories for most students in the region who visit the aquarium frequently on field trips or on vacation. How could such a beloved establishment not consider fishermen voices, for whom the students had just built a newfound respect? It was a big learning moment about bureaucracy, research, empathetic listening, and the usefulness of oral history.
After the interview, when the students cooled off, we discussed how the dynamics in an interview can change when personal conflicts arise. The narrator may even change her story and tone because of the interviewer’s biases. We explored several essential questions that I would now use for discussion before interviews were to occur, for I was learning too. Some questions that we considered were: When you don’t agree with your narrator, how do you ask questions that will keep the communication safe and open?
How do you set aside your own beliefs from the narrator, and why is this important when collecting oral history? In other words, how do you take the ego out of it?
The students were given a learning opportunity from which I hoped we all could gain insight. We discussed how if you can capture in your interview the narrator’s perspective (even if different than your own or other narrators for that matter), then the audience will be able to see discrepancies in the narratives and gather the evidence they need to engage with the issues. Hearing that Seafood Watch doesn’t work with fishermen might potentially help an audience to ask questions on a larger public scale. Considering oral history’s usefulness in engaging the public, inspiring advocacy, and questioning bureaucracy might be a powerful way for students to engage in the process without worrying about trying to prove their narrators wrong or telling the audience what to think. Oral history has power in this way: voices can illuminate the issues without the need for strong editorializing. This narrative power can be studied beforehand with samples of oral history, as it can also be a great way for students to reflect metacognitively on what they have participated in and how they might want to extend their learning experiences into the real world. Voice of Witness (VOW) contends that students who engage in oral history are “history makers.” What a powerful way to learn!
How did this project start? Did you start with wanting to do oral history with your students, or were you more interested in exploring sustainability and fall into oral history as a method?
Being a fisherwoman myself, and having just started commercial fishing with my husband, who is a fishmonger, I found my two worlds of fishing and teaching oral history colliding. Having taught English for ten years out of a love of storytelling, I have long been interested in creating experiential learning opportunities for students concerning where food comes from and sustainable food hubs.
Through a series of uncanny events connecting fishing and oral history, the project seemed to fall into place. I first attended an oral history for educators training through a collaborative pilot program created by VOW and Facing History and Ourselves (FHAO). After the training, I mentored ten seniors at my school to produce oral history Senior Service Learning Projects that ended in a public performance at a local art museum’s performance space. VOW was integral in my first year’s experience with oral history education. I still work with VOW and sit on their Education Advisory Board, which helps me to continue my engagement in oral history education.
In the same year as the pilot program with VOW, I attended the annual California Association of Teachers of English conference, at which the National Oceanic and Atmospheric Administration's (NOAA) Voices of the Bay (VOB) program coordinator offered a training. The training offered curriculum strategies in marine ecology, fishing, economics, and basic oral history skill-building. NOAA would help arrange interviews with local fishermen, to be recorded in classrooms or at nearby harbors. The interviews would eventually go into a national archive called Voices from the Fisheries.
The trainer for VOB and I knew many of the same fishermen and mongers up and down the central and north (Pacific) coast. I arranged a meeting between the two educational directors of VOW and VOB, who were both eager to meet each other, as they were both just firing up their educational programs in oral history education. The meeting was very fruitful for all of us, as we brainstormed new ways to approach interdisciplinary oral history opportunities. As such, I was able to synthesize curriculum from both programs in preparing my students for the immersion trip, considering sustainability as an interdependent learning opportunity in environmental, social, and economic content. When I created the trip I didn't have a term for what the outcome would be, except that I had hoped the students would become more aware of sustainable seafood and how to promote its values. Ecoliteracy was a term that came to fruition after the projects were completed, but I think it can be extremely valuable as a goal in interdisciplinary oral history education.
What pointers can you give to other educators interested in using oral history to engage their students?
With all the material out there, I feel that educators have ample access to help prepare for projects. In the scheme of these projects, I would advise scheduling time for thoughtful processing or metacognitive reflection. All too often, it is easy to focus on the preparation, the conducting and capturing of the interviews, and then getting something tangible done with them. Perhaps it is embedded in the education world of outcome-based assessment: getting results and evidence that learning is happening. With high school students, the experience of interviewing is an extremely valuable learning tool that could easily get overlooked when we are focusing on a project.
For example, on an immersion trip to El Salvador with my high school students, we were given an opportunity to interview the daughter of the sole survivor of El Mozote, an infamous massacre that happened at the climax of the civil war. The narrator insisted on telling us her and her mother's story, despite the fact that she had received chemotherapy just the day before. She said that her storytelling was therapeutic for her and helped her feel that her mother, who had passed away, and all those victims of the massacre would not die in vain. This was such heavy content for her and for us as her audience. We all needed to talk, be quiet about it, cry about it, and reflect on the value of the witnessing. In the end, it wasn't the deliverable that would be the focus of the learning; it was the actual experience. From it, compassion was built in the students, not just for Salvadoran victims and survivors, but on a broader scale for all people who face civil strife and persecution. After such an experience, statistics were not just numbers anymore; they had a human face. This, to date, has been for me the most valuable part of oral history education: the transformation that can occur during the experience of an interview, as opposed to the product produced from it. For educators, it is vital to facilitate a pointed and thoughtful discussion with the interviewer to home in on the learning and realize the transformation, if there is one. The discussion about the experience is essential in understanding the value of oral history interviewing.
Do you have plans to do similar projects in the future?
After such positive experiences with oral history education, I wanted a chance to actively be an oral historian who captures narratives in issues of sustainable food sources. I have transitioned from teaching to running my own business, called Narrability, with the mission to build sustainability through community narratives. I just completed a small project called "Long Live the King: Storytelling the Value of Salmon Fishing in the Monterey Bay," in which I collected oral histories of local fishermen. Housed on the Monterey Bay Salmon and Trout Project (MBSTP) website, the project highlights some of the realities connected to the MBSTP local hatchery net pen program, which augments the natural Chinook salmon runs from rivers in the Sacramento area to be released into the Monterey Bay. Because of drought, dams, overfishing, and urbanization, the Chinook fishery in the central coast area has been deeply affected, and the need for a net pen program seems strong. In the Monterey Bay, there have been many challenges in implementing the Chinook net pen program due to the unfortunate bureaucracy of a discouraging port commission out of the Santa Cruz harbor. Because of these challenges, the oral histories that I collected help to illustrate that regional Chinook salmon fishing builds environmental stewardship, family bonding, and community building, and provides a healthy protein source.
Through Narrability, I have also been working on developing a large oral history program with a group of organic farming, wholesale, and certification pioneers. As many organic pioneers face retirement, the need for their history to be recorded is growing. Irene Reti sparked this realization in her project through University of California, Santa Cruz: Cultivating a Movement: An Oral History Series on Organic Farming & Sustainable Agriculture on California’s Central Coast. Through collaboration with some of the major players in organics, we aim to build a comprehensive national collection of the history of organics for the public domain.
Is there anything you couldn’t address in the article that you’d like to share here?
I know that teachers are crunched for time, and that once interviews are recorded, students and teachers want to do something tactile with them (podcasts/narratives/documentaries). I encourage educators to set aside time to reflect on the process. I wish I had done more reflective processing in this manner: to interview as a class; to discuss the experience of interviewing and the feelings elicited before, during, and after an interview; to authentically analyze how the interviews went, including considering narrator dynamics. In many cases, the skills learned and the personal growth are not the most tangible outcomes. Despite this, I believe oral history education can help to shape our students into compassionate critical thinkers, and may even inspire them to continue to interview and listen empathetically to solve problems in their personal, educational, and professional futures. This might not be something we can grade or present as a deliverable; it might be a long-term effect that grows with a student's lifelong learning.
Image Credit: Front entrance of the Aquarium. Photo by Amadscientist. CC by SA 3.0 via Wikimedia Commons.
Though he’s largely forgotten today, Walter Savage Landor was one of the major authors of his time—of both his times, in fact, for he was long-lived enough to produce major writing during both the Romantic and the Victorian eras. He kept writing and publishing promiscuously through his long life (he died in his ninetieth year), which puts him in a unique category. Maybe the problem is that he outlived his own reputation. Byron, Shelley and Keats all died in their twenties, and this fact somehow seals in their importance as poets. Landor’s close friend Southey died at the beginning of the 1840s. Landor lived on, writing and publishing poetry, prose, and drama, in English and Latin. He forged friendships now with men like Robert Browning—who was deeply influenced by Landor’s writing—John Forster and Charles Dickens (Dickens named his second son Walter Savage Landor Dickens in his friend’s honour). His Victorian reputation was higher than his sales; and if we’re puzzled by how completely his literary reputation was eclipsed during the 20th century, in part that may simply be a function of his prolixity. Landor’s Collected Works was published between 1927 and 1936 in sixteen fat volumes; and even that capacious edition doesn’t by any means contain everything Landor published. It omits, for instance, his voluminous Latin writing—for Landor was the last English writer to produce a substantial body of work in that dead language. In late life he once said ‘I am sometimes at a loss for an English word; for a Latin—never!’
His most substantial prose writings were the Imaginary Conversations: dozens and dozens of prose dialogues between famous historical figures, and occasionally between fictionalised versions of living individuals, varying in length from a few pages each to seventy or eighty. The prose is exquisite, balanced, beautifully mannered and expressed and full of potent epigrams and apothegms on art, society, history, morals and religion. Nobody reads the Imaginary Conversations any more. Then there are the epics—his masterpiece, Gebir (1798), an heroic poem of immense ambition, was greeted by bafflement and ridicule on its initial publication. Landor’s experimental epic idiom was simply too obscure for his readers even to understand—though Lamb claimed the poem has ‘lucid interludes’, and Shelley loved it. Critic William Gifford was less kind: he called the poem ‘a jumble of incomprehensible trash; the effusion of a mad and muddy brain.’ Landor decided to address the question of the poem’s obscurity the best way he knew: by translating the entire epic into Latin (Gebirus, 1803). Ah, those were the days!
He wrote shoals of beautiful lyrics and elegies. He wrote volumes-full of plays, all cod-Shakespearian blank-verse dramas. He wrote historical novels, one of which (Pericles and Aspasia, 1836) is very good. He wrote classical idylls, pastoral poetry—he was a passionate gardener—epigrams and epitaphs in English and Latin. The sheer amount of work he produced may explain the decline in his reputation; for new readers surveying the cliff-face of text to climb may find it off-putting.
It’s worth the ascent, though. Landor was a choleric individual, given to sudden rages, whilst also magnanimous, kind-hearted and loyal to his friends. Dickens wrote him into Bleak House as the character Boythorn; and a Boythorn-ish energy and vitality very often breaks through the classical refinement of the verse. Unhappily married (he and his wife separated in 1835), he lived through a series of towering, unrequited passions for other, married women. This hopelessness, paradoxically, gives force to some of the best poetry Landor ever wrote: love poems in which the impossibility of love only magnifies the intensity of affection. It’s an idea Landor understands better almost than any other writer: that the strongest feelings are predicated upon absence rather than presence. Here’s his short lyric ‘Dirce’ (1831):
Stand close around, ye Stygian set,
With Dirce in one boat convey’d,
Or Charon, seeing, may forget
That he is old, and she a shade.
This says that Dirce is so beautiful that, were he to see her, Charon might ‘forget himself’, and presumably ignore the obstacles of his own dotage and the fact that she is ‘a shade’ to make erotic advances. But in fact the ‘forgetting’ in this lyric involves a much more complex mode of amnesia. It’s tempting to read the poem as being about a particular affect: the melancholy, hopeless desire of an old man for the ideal of youthful female beauty. Desire haunted by the sense that, really, it would be better not to feel desire at all—that to desire is in some sense to ‘forget yourself.’ That idiom is an interesting one, actually; as if an old man feeling sexual desire is in some sense ‘forgetting’ not just that he is old, and that young girls aren’t interested in clapped-out old codgers, but more crucially forgetting that he isn’t the sort of person who feels in that way at all. Perhaps we tend to think of desire not as something to be remembered or forgotten, but as something experienced directly. In its compact way this poem suggests otherwise.
Renunciation is another of Landor’s perennial themes. One of his most famous quatrains runs:
I strove with none, for none was worth my strife;
Nature I loved; and next to Nature, Art.
I warmed both hands before the fire of life;
It sinks, and I am ready to depart.
Written in 1849, on the occasion of Landor’s 74th birthday, this has a certain clean dignity, both stylistically and in terms of what it is saying; although it takes part of its force from the knowledge that (as I mention above) Landor actually strove with people all the time, all through his life: personally, cholerically, in law courts, in print and face-to-face. The second line of the poem, by (it seems to me) rather pointedly omitting ‘people’ from the things that Landor has spent his life loving, rather reinforces this notion. One consequence of a man, particularly a large man like Landor, standing in front of the fire to warm his hands is to block off the heat from everybody else in the room. And that seems appropriate too, somehow.
Featured image credit: ‘Inscription from Walter Savage Landor (1775-1864) to Robert Browning (1812-1889)’ by Provenance Online Project. CC-BY-2.0 via Flickr
When patients are discharged from the intensive care unit it’s great news for everyone. However, it doesn’t necessarily mean the road to recovery is straight. As breakthroughs and new technology increase the survival rate for highly critical patients, the number of possible further complications rises, meaning life after the ICU can be complex. Joe Hitchcock from Oxford University Press’s medical publishing team spoke to Dr. Robert D. Stevens, Associate Professor at Johns Hopkins University School of Medicine, to find out more.
Can you tell us a little about your career?
As a junior doctor in the intensive care unit, I observed that prowess in resuscitation is a double-edged sword. We were getting better and better at promoting survival, but at what cost in the long term? I decided I would dedicate my career to the recovery process that follows severe illnesses and injuries. Currently, my team has several cohort studies under way in human subjects with head injury, stroke, and sepsis. We’re looking at their long-term outcomes and also imaging their brains. I have a laboratory in which we are studying a range of neurologic readouts in mice following brain injury. We’re looking at the biology of neuronal plasticity and studying stem cells as a treatment to promote recovery of function.
What is Post-ICU medicine and what does it aim to achieve?
Medicine is increasingly a victim of its own successes. People are surviving complex and terrifying illnesses, which only years ago would almost certainly have been fatal. This means there is an ever-growing population of “survivors”. Like survivors of cancer, survivors of intensive care bring with them an entirely new set of clinical problems, demanding new approaches. We propose Post-ICU Medicine as an umbrella term for this new domain of medical practice and research, which is specifically concerned with the biology, diagnosis and treatment of illnesses and disabilities resulting from critical illness.
What do you mean by the “legacy” of critical illnesses?
The “legacy” of critical illness refers to what people “carry with them” after living through a life-threatening illness in the intensive care unit (ICU). It is the sum of consequences, both physical and mental, some temporary and others permanent, which unfold in the weeks, months, and years after someone is discharged from the ICU.
In what ways might a patient’s post-ICU experience differ from public/idealized expectations?
There is a widely held perception, or perhaps an anticipation, that acute and severe illnesses, such as sepsis or respiratory failure, are a zero-sum game: You may die from this illness, but if you survive you have a good chance of recovering completely and of going on with your life as if nothing had happened. This notion has been turned on its head. We know now that the post-ICU experience presents physical and psychological challenges for a high proportion of patients. Even the most fortunate, those we might regard as having recovered successfully, often acknowledge problems months after they have left the hospital. They report that they feel weak, have difficulties concentrating, are impulsive, anxious or depressed. When tested formally, they often score below population means on tests of memory, attention, and functional status.
Have you observed patterns in the way patients recover?
I do not know that there are any easily classifiable patterns. There are countless possible trajectories of recovery which we are only beginning to characterize with some degree of scientific rigor. In reality, just as each patient is biologically unique, so too is his or her recovery. One of the main tasks of Post-ICU Medicine is to identify and validate markers (e.g. genetic variants, protein expression) that allow us to predict and track recovery patterns with a much higher level of confidence and reliability.
How do you assess and treat patients who have a multitude of Post-ICU conditions, psychological and physical?
Ideally, a single provider would be able to follow and treat patients in the post-ICU period. However, the range of different problems — neurologic, cognitive, psychological, cardiac, pulmonary, renal, musculoskeletal, digestive, nutritional, endocrine, social, economic — with which these patients present is beyond the scope of even a very knowledgeable practitioner. Some groups that specialize in post-ICU follow-up care have adopted a different approach, in which patients are evaluated by a multi-disciplinary “Recovery Team” with a wide array of minimally-overlapping knowledge and skills. The latter may include internists, specialists in rehabilitation, psychiatrists, neuropsychologists, neurologists, physical therapists, occupational therapists, orthopaedic surgeons, rheumatologists, and social workers. Patients recovering from critical illness are evaluated periodically and referred to the different members of the Recovery Team depending on clinical symptoms and signs. While evidence is mounting regarding the benefits of an integrated post-ICU Recovery Team approach, such interventions are resource-intensive and costly, and are not currently available to the vast majority of recovering post-ICU patients.
Is it possible to accurately predict patient rehabilitation and recovery trajectories?
This is the “holy grail” of post-ICU medicine, and even of critical care medicine more generally. We desperately need discriminative methods to predict recovery trajectories. Current predictive approaches rely on multiple logistic regression models often using a mix of demographic and clinical severity variables. These models are terribly inaccurate, to the point of being quite useless in the clinical setting. New approaches are needed which analyse large biological datasets – patterns of gene and protein expression, changes in the microbiome, changes in carbohydrate and lipid metabolism, alterations in brain functional and metabolic activity. The great hope is that models emerging from these more sophisticated data sets will allow individualized or personalized approaches to outcome prediction and treatment.
If recovery is considered a gradated process, when is a patient “cured”?
The World Health Organization states that physical and mental well-being are a right of all human beings. It is likely that the insults and injuries suffered in the ICU can never be completely healed or cured. However, the good news is that some ICU survivors achieve astonishing levels of recovery. We need to study these individuals – the ones who do very well and surpass all expectations for recovery – as they seem to have biological or psychological characteristics (e.g. resilience factors, motivation) which set them apart. Knowing more about these characteristics may help us treat those with less favorable recovery profiles.
What might the post-ICU medicine look like in the distant future?
I believe that mortality will continue to decline for a range of illnesses and injuries encountered in the ICU. The key task will be to maximize health status in those who survive. I expect that major discoveries will be made regarding organ-specific patterns of gene and protein expression and molecular signalling which drive post-injury recovery versus failure — and that this knowledge will enable novel treatment strategies. I anticipate that important advances will be made in the regeneration of tissues and organs using stem cell and tissue engineering approaches.
As anyone knows who has looked at the newspapers over the festive season, 2015 is a bumper year for anniversaries: among them Magna Carta (800 years), Agincourt (600 years), and Waterloo (200 years). But it is January which sees the first of 2015’s major commemorations, for it is fifty years since Sir Winston Churchill died (on the 24th) and received a magnificent state funeral (on the 30th). As Churchill himself had earlier predicted, he died on just the same day as his father, Lord Randolph Churchill, had done, in 1895, exactly seventy years before.
The arrangements for Churchill’s funeral, codenamed ‘Operation Hope Not’, had long been in the planning, which meant that Churchill would receive the grandest obsequies afforded to any commoner since the funerals of Nelson and Wellington. And unlike Magna Carta or Agincourt or Waterloo, there are many of us still alive who can vividly remember those sad yet stirring events of half a century ago. My generation (I was born in 1950) grew up in what were, among other things, the sunset years of Churchillian apotheosis. They may, as Lord Moran’s diary makes searingly plain, have been sad and enfeebled years for Churchill himself, but they were also years of unprecedented acclaim and veneration. During the last decade of his life, he was the most famous man alive. On his ninetieth birthday, thousands of greeting cards were sent, addressed to ‘The Greatest Man in the World, London’, and they were all delivered to Churchill’s home. During his last days, when he lay dying, there were many who found it impossible to contemplate the world without him, just as Queen Victoria had earlier wondered, at the time of the Duke of Wellington’s death in 1852, how Britain would manage without him.
Like all such great ceremonial occasions, the funeral itself had many meanings, and for those of us who watched it on television, by turns enthralled and tearful, it has also left many memories. In one guise, it was the final act of homage to the man who had been described as ‘the saviour of his country’, and who had lived a life so full of years and achievement and honour and controversy that it was impossible to believe anyone in Britain would see his like again. But it was also, and in a rather different emotional and historical register, not only the last rites of the great man himself, but also a requiem for Britain as a great power. While Churchill might have saved his country during the Second World War, he could not preserve its global greatness thereafter. It was this sorrowful realization that had darkened his final years, just as his funeral, attended by so many world leaders and heads of state, was the last time that a British figure could command such global attention and recognition. (The turnout for Margaret Thatcher’s funeral, in 2013, was nothing like as illustrious.) These multiple meanings made the ceremonial the more moving, just as there were many episodes which made it unforgettable: the bearer party struggling and straining to carry the huge, lead-lined coffin up the steps of St Paul’s; Clement Attlee—Churchill’s former political adversary—old and frail, but determined to be there as one of the pallbearers, sitting on a chair outside the west door brought especially for him; the cranes of the London docks dipping in salute, as Churchill’s coffin was borne up the Thames from Tower Pier to Waterloo Station; and the funeral train, hauled by a steam engine of the Battle of Britain class, named Winston Churchill, steaming out of the station.
For many of us, the funeral was made the more memorable by Richard Dimbleby’s commentary. Already stricken with cancer, he must have known that this would be the last he would deliver for a great state occasion (he would, indeed, be dead before the year was out), and this awareness of his own impending mortality gave to his commentary a tone of tender resignation that he had never quite achieved before. As his son, Jonathan, would later observe in his biography of his father, ‘Richard Dimbleby’s public was Churchill’s public, and he had spoken their emotions.’
Fifty years on, the intensity of those emotions cannot be recovered, but many events have been planned to commemorate Churchill’s passing, and to ponder the nature of his legacy. Two years ago, a committee was put together, consisting of representatives of the many institutions and individuals that constitute the greater Churchill world, both in Britain and around the world, which it has been my privilege to chair. Significant events are planned for 30 January: in Parliament, where a wreath will be laid; on the River Thames, where Havengore, the ship that bore Churchill’s coffin, will retrace its journey; and at Westminster Abbey, where there will be a special evensong. It will be a moving and resonant day, and the prelude to many other events around the country and around the world. Will any other British prime minister be so vividly and gratefully remembered fifty years after his—or her—death?
Headline image credit: Franklin D. Roosevelt and Winston Churchill, New Bond Street, London. Sculpted by Lawrence Holofcener. Public domain via Wikimedia Commons.
The recent release of The Imitation Game has revealed the important role crosswords played in the recruitment of code-breakers at Bletchley Park. In response to complaints that its crosswords were too easy, The Daily Telegraph organised a contest in which entrants attempted to solve a puzzle in less than 12 minutes. Successful competitors subsequently found themselves being approached by the War Office, and later working as cryptographers at Bletchley Park.
The birth of the crossword
The crossword was the invention of Liverpool émigré Arthur Wynne, whose first puzzle appeared in the New York World in 1913. This initial foray was christened a Word-Cross; the instruction in subsequent issues to ‘Find the missing cross words’ led to the birth of the cross-word. Although Wynne’s invention was initially greeted with scepticism, by the 1920s it had established itself as a popular pastime, entertaining and frustrating generations of solvers, solutionists, puzzle-heads, and cruciverbalists (Latin for ‘crossworders’).
Crosswords consist of a grid made up of black and white boxes, in which the answers, also known as lights, are to be written. The term light derives from the word’s wider use to refer to facts or suggestions which help to explain, or ‘cast light upon’, a problem. The puzzle consists of a series of clues, a word that derives from Old English cleowen ‘ball of thread’. Since a ball of thread could be used to help guide someone out of a maze – just as Ariadne’s thread came to Theseus’s aid in the Minotaur’s labyrinth – it developed the figurative sense of a piece of evidence leading to a solution, especially in the investigation of a crime. The spelling changed from clew to clue under the influence of French in the seventeenth century; the same shift affected words like blew, glew, rew, and trew.
Anagrams, homophones, and Spoonerisms: clues in crosswords
In the earliest crosswords the clue consisted of a straightforward synonym (Greek ‘with name’) – this type is still popular in concise or so-called quick crosswords. A later development saw the emergence of the cryptic clue (from a Greek word meaning ‘hidden’), where, in addition to a definition, another route to the answer is concealed within a form of wordplay. Wordplay devices include the anagram, from a Greek word meaning ‘transposition of letters’, and the charade, from a French word referring to a type of riddle in which each syllable of a word, or a complete word, is described, or acted out – as in the game charades. A well-known example, by prolific Guardian setter Rufus, is ‘Two girls, one on each knee’ (7). Combining two girls’ names, Pat and Ella, gives you a word for the kneecap: PATELLA.
Punning on similar-sounding words, or homophones (Greek ‘same sound’), is a common trick. A reference to Spooner requires a solver to transpose the initial sounds of two or more words; this derives from a supposed predisposition to such slips of the tongue in the speech of Reverend William Archibald Spooner (1844–1930), Warden of New College Oxford, whose alleged Spoonerisms include a toast to ‘our queer dean’ and upbraiding a student who ‘hissed all his mystery lectures’. Other devious devices of misdirection include reversals, double definitions, containers (where all or part of a word must be placed within another), and words hidden inside others, or between two or more words. In the type known as &lit. (short for ‘& literally so’), the whole clue serves as both definition and wordplay, as in this clue by Rufus: ‘I’m a leader of Muslims’. Here the wordplay gives IMA+M (the leader, i.e. first letter, of Muslims), while the whole clue stands as the definition.
Crossword compilers and setters
Crossword compilers, or setters, traditionally remain anonymous (Greek ‘without name’), or assume pseudonyms (Greek ‘false name’). Famous exponents of the art include Torquemada and Ximenes, who assumed the names of Spanish inquisitors; Afrit, the name of a mythological Arabic demon hidden in that of the setter A. F. Ritchie; and Araucaria, the Latin name for the monkey puzzle tree. Some crosswords conceal a name or message within the grid, perhaps along the diagonal, or using the unchecked letters (or unches), which do not cross with other words in the grid. This is known as a nina, a term deriving from the practice of the American cartoonist Al Hirschfeld of hiding the name of his daughter Nina in his illustrations.
If you’re a budding code-cracker and fancy pitting your wits against the cryptographers of Bletchley Park, you can find the original Telegraph puzzle here.
But remember, you only have 12 minutes to solve it.
‘Oh, that this too, too solid flesh would melt,’ so wrote the other bard, Shakespeare.
Scotland’s bard, Robert Burns, has had a surfeit of biographical attention: upwards of three hundred biographical treatments and, as if many of these were not fanciful enough, hundreds of novels, short stories, theatrical, television, and film treatments that often strain well beyond credulity.
Burns has been pursued beyond (or properly in) the grave in even more extreme ways. His remains have been disinterred twice, the second time so that his skull might be examined for the purposes of phrenology. In death he has been bothered again very recently in the run-up to Scotland’s referendum in September 2014. Would Burns have been a ‘Yes’ or ‘No’ voter, a Nationalist or a Unionist, was often posed and answered across media outlets.
This de-historicised Burns, someone who never actually had any kind of political vote in life, and who had no access to nationalist or, indeed, unionist ideology in the modern senses, is nothing new. During World War I, the minute book of the Dumfries Volunteer Militia, in which Burns had enlisted in 1795 in the face of threatened French invasion, was rediscovered. It was published in 1919 by William Will of the London Burns Club with a rather emotional introduction claiming that the minute-book’s records, showing Burns’s impeccable conduct as a militiaman, were proof of the poet’s sound British patriotism, and that he might be compared to the many brave British soldiers who had just taken on the Kaiser. In response, those who had been recently constructing a pacifist Burns spluttered with indignation. Wasn’t the Scottish Bard the man who had written ‘Why Shouldna Poor Folk Mowe [make love]’ during the 1790s:
When Princes and Prelates and het-headed zealots
All Europe hae set in a lowe [noisy turmoil]
The poor man lies down, nor envies a crown,
And comforts himself with a mowe.
This is an increasingly obscene song, an anti-war text saying, ‘a plague on all your houses’ (to paraphrase the other bard again): the poor should choose love, and not war – the latter being the result of much more shameful shenanigans by their supposed lords and masters.
O wad some Pow’r the giftie gie us
To see oursels as others see us!
It wad frae monie a blunder free us
An’ foolish notion
The problem is that Burns would be dizzy with the multifarious contradictoriness of it all if he could truly emerge from the grave and attempt to see himself as others have seen him. Ultimately, what we have with Burns is the man who may or may not have been Scotland’s greatest poet, but who is certainly Scotland’s greatest song-writer (with the production of twice as many songs as poems) — the nearest Scotland has, a bit cheesy though the comparison is, to Lennon and McCartney. These songs and poems express indeed many different ideas, moods, emotions, and characters. They sympathise with radically different viewpoints (for instance, Burns can write empathetically on occasion about both Mary Queen of Scots (Catholic Stuart tyrant) and the Covenanters (Calvinist fanatics, according to their respective detractors)). Burns’s work is both his living achievement and the real remains over which we ought to pore. In the end there is no real Burns, but instead a fictional one and the important fictions are of his making.
Image Credit: Scottish Highlands by Gustave Doré (1875). Public domain via Wikimedia Commons.
I call myself a moral philosopher. However, I sometimes worry that I might actually be an immoral philosopher. I worry that there might be something morally wrong with making the arguments I make. Let me explain.
When it comes to preventing poverty related deaths, it is almost universally agreed that Peter Singer is one of the good guys. His landmark 1971 article, “Famine, Affluence and Morality” (FAM), not only launched a rich new area of philosophical discussion, but also led to millions in donations to famine relief. In the month after Singer restated the argument from FAM in a piece in the New York Times, UNICEF and OXFAM claimed to have received about $660,000 more than they usually took in from the phone numbers given in the piece. His organisation, “The Life You Can Save”, used to keep a running estimate of total donations generated. When I last checked the website on 13th February 2012, this figure stood at $62,741,848.
Singer argues that the typical person living in an affluent country is morally required to give most of his or her money away to prevent poverty related deaths. To fail to give as much as you can to charities that save children dying of poverty is every bit as bad as walking past a child drowning in a pond because you don’t want to ruin your new shoes. Singer argues that any difference between the child in the pond and the child dying of poverty is morally irrelevant, so failure to help must be morally equivalent. For an approachable version of his argument see Peter Unger, who developed and refined Singer’s arguments in his 1996 book, Living High and Letting Die.
I’ve argued that Singer and Unger are wrong: failing to donate to charity is not equivalent to walking past a drowning child. Morality does – and must – pay attention to features such as distance, personal connection and how many other people are in a position to help. I defend what seems to me to be the commonsense position that while most people are required to give much more than they currently do to charities such as Oxfam, they are not required to give the extreme proportions suggested by Singer and Unger.
So, Singer and Unger are the good guys when it comes to debates on poverty-related death. I’m arguing that Singer and Unger are wrong. I’m arguing against the good guys. Does that make me one of the bad guys? It is true that my own position is that most people are required to give more than they do. But isn’t there still something morally dubious about arguing for weaker moral requirements to save lives? Singer and Unger’s position is clear and easy to understand. It offers a strong call to action that seems to actually work – to make people put their hands in their pockets. Isn’t it wrong to risk jeopardising that given the possibility that people will focus only on the arguments I give against extreme requirements to aid?
On reflection, I don’t think what I do is immoral philosophy. The job of moral philosophers is to help people to decide what to believe about moral issues on the basis of reasoned reflection. Moral philosophers provide arguments and critique the arguments of others. We won’t be able to do this properly if we shy away from attacking some arguments because it is good for people to believe them.
In addition, the Singer/Unger position doesn’t really offer a clear, simple conclusion about what to do. For Singer and Unger, there is a nice simple answer about what morality requires us to do: keep giving until giving more would cost us something more morally significant than the harm we could prevent; in other words, keep giving till you have given most of your money away. However, this doesn’t translate into a simple answer about what we should do, overall. For, on Singer’s view, we might not be rationally required or overall required to do what we are morally required to.
This need to separate moral requirements from overall requirements is a result of the extreme, impersonal view of morality espoused by Singer. The demands of Singer’s morality are so extreme it must sometimes be reasonable to ignore them. A more modest understanding of morality, which takes into account the agent’s special concern with what is near and dear to her, avoids this problem. Its demands are reasonable so cannot be reasonably ignored. Looked at in this way, my position gives a clearer and simpler answer to the question of what we should do in response to global poverty. It tells us both what is morally and rationally required. Providing such an answer surely can’t be immoral philosophy.
Headline image credit: Devil gate, Paris, by PHGCOM (Own work). CC-BY-SA 3.0 via Wikimedia Commons.
The world has watched as ISIS (ISIL, the “Islamic State”) has moved from being a small but extreme section of the Syrian opposition to a powerful organization in control of a large swath of Iraq and Syria. Even President Obama recently admitted that the US was surprised by the success of ISIS in that region. Why have they been so successful, and why now?
Political Scientist Robert A. Pape and undergraduate research associate Sarah Morell, both from the University of Chicago, share their thoughts.
ISIS has been successful for four primary reasons. First, the group has tapped into the marginalization of the Sunni population in Iraq to gain territory and local support. Second, ISIS fighters are battle-hardened strategists fighting against an unmotivated Iraqi army. Third, the group exploits natural resources to fund their operations. And fourth, ISIS has utilized a brilliant social media strategy to recruit fighters and increase their international recognition. One of the important aspects cutting across these four elements is the unification of anti-American populations across Iraq and Syria — remnants of the Saddam regime, Iraqi civilians driven to militant behavior during the US occupation, transnational jihadists, and the tribes who were hung out to dry following the withdrawal of US forces in 2011.
The Sunni population’s hatred of the Shia-dominated government in Baghdad has allowed ISIS to quickly overtake huge swaths of Iraqi Sunni territory. The Iraq parliamentary elections in 2010 were a critical moment in this story. The Iraqiyya coalition, led by Ayad Allawi, won support of the Sunni population to win the plurality of seats in Iraq’s parliament. Maliki’s party came second by a slim two-seat margin. Despite Allawi’s electoral victory, Maliki and his Shia coalition — backed by the United States — succeeded in forming a government with Maliki as Prime Minister.
In the months following the election, Maliki targeted Sunni leaders in an effort to consolidate Shia domination of Baghdad. Many of these were the same Sunni leaders successfully mobilized by US forces during the occupation — in an operation that became known as the Anbar Awakening — to cripple al-Qa’ida in Iraq strongholds within the Sunni population. When the US withdrew, they directed the aid to the Maliki government with the expectation that Maliki would distribute it fairly. Instead, the day after the US forces withdrew in December 2011, Iraq’s Judicial Council issued an arrest warrant for Iraqi Vice President Hashimi, a key Sunni leader. Arrests of Sunni leaders and their staffs continued, sparking widespread Sunni protests in Anbar province. When ISIS — a Sunni extremist group — rolled into Iraq, many in the Sunni population cooperated, viewing the group as the lesser of two evils.
The second element in the ISIS success story is their military strategy. Their leader, Abu Bakr al-Baghdadi, spent four years as a prisoner in the Bucca Camp before assuming control of AQI (ISIS’s predecessor) in 2010. He seized upon the opportunity of the Syrian civil war to fuel a resurgence of the group. As a result, today’s ISIS militants are battle-hardened through their Syrian experience fighting moderate rebels. The Washington Post has described Baghdadi as “a shrewd strategist, a prolific fundraiser, and a ruthless killer.”
In Iraq, ISIS has adopted “an operational form that allows decentralized commanders to use their experienced fighters against the weakest points of its foes,” writes Robert Farley in The National Interest. “At the same time, the center retains enough operational control to conduct medium-to-long term planning on how to allocate forces, logistics, and reinforcements.” Their strategy — hitting their adversaries at their weakest points while avoiding fights they cannot win — has created a narrative of momentum that increases the group’s morale and prestige.
ISIS has also carved out a territory in Iraq that Shia and Kurdish forces will not fight and die to retake, an argument articulated by Kenneth Pollack at Brookings. ISIS has not tried to take Baghdad because they know they would lose; Shia forces would be motivated to expend blood and treasure to defeat ISIS on their home turf. Some experts believe the Kurds, likewise, are unlikely to commit forces to retake Sunni territory. This mentality also plays into the catastrophic performance of the Iraqi Security Forces at Mosul, forces composed disproportionately of Kurds and Sunni Arabs; when confronted with Sunni militants, these soldiers “were never going to fight to the death for Maliki and against Sunni militants looking to stop him,” writes Pollack.
Third, ISIS has also been able to seize key natural resources in Syria to fund their operations, probably making them one of the wealthiest terror groups in history. ISIS is in control of 60% of Syria’s oil assets, including the Al Omar, Tanak, and Shadadi oil fields. According to the US Treasury, the group’s oil sales are pulling in about $1 million a day. This enables ISIS to increasingly become “a hybrid organization, on the model of Hezbollah,” writes Steve Coll in The New Yorker — “part terrorist network, part guerrilla army, part proto-state.”
Finally, ISIS has developed a sophisticated social media campaign to “recruit, radicalize, and raise funds,” according to J. M. Berger in The Atlantic. The piece details ISIS’s Arabic-language Twitter app called The Dawn of Glad Tidings, advertised as a way to keep up on the latest news about the group. On the day ISIS marched into Mosul, the app sent almost 40,000 tweets. The group has displayed a lighter side to the militants, such as videos showing young children breaking their Ramadan fast with ISIS fighters. These strategies “project strength and promote engagement online” while also romanticizing their fight, attracting new recruits from around the world and inspiring lone wolf attacks.
Since June 2014, the United States has pursued a policy of offshore balancing — over-the-horizon air and naval power, Special Forces, and empowerment of local allies — to contain and undermine ISIS. The crucial local groups are the Sunni tribes. These leaders were responsible for the near-collapse of AQI during the Anbar Awakening, and could well be able to defeat ISIS in the future.
This is part of a series of articles discussing ISIS. Other contributions include pieces by Hanin Ghaddar, Lebanese journalist and editor; Shadi Hamid, fellow at the Brookings Institution; and Charles Kurzman, Professor of Sociology at the University of North Carolina at Chapel Hill.
Headline image credit: Coalition airstrike on ISIL position in Kobane on 22 October 2014. Public Domain via Wikimedia Commons.
In 1971, William Irvin Thompson, a professor at York University in Toronto, wrote an op-ed in the New York Times entitled, “We Become What We Hate,” describing the way in which “thoughts can become inverted when they are reflected in actions.”
He cited several scientific, sociocultural, economic, and political situations where the maxim appeared to be true. The physician who believed he was inventing a pill to help women become pregnant had actually invented the oral contraceptive. Germany and Japan, having lost World War II, had become peaceful consumer societies. The People’s Republic of China had become, at least back in 1971, a puritanical nation.
Today, many of the values that we, as a nation, profess — protection of civil rights and human rights, assistance for the needy, support for international cooperation, and promotion of peace — have become inverted in our actions. As a nation, we say one thing, but often do the opposite.
As a nation, we profess protection of civil rights. But our criminal justice system and our systems for federal, state, and local elections discriminate against people of color and other minorities.
As a nation, we profess protection of human rights. But we have imprisoned “enemy combatants” without charges, stripped them of their rights as prisoners of war, and tortured many of them in violation of the Geneva Conventions.
As a nation, we profess adherence to the late Senator Hubert H. Humphrey’s dictum that the true measure of a government is how it cares for the young, the old, the sick, and the needy. But we set the minimum wage at a level at which working people cannot survive. We inadequately fund human services for those who need them most. And, even after implementation of the Patient Protection and Affordable Care Act, we continue to be the only industrialized country that does not ensure health care for all its citizens.
As a nation, we profess support for international cooperation. But we fail to sign treaties to ban antipersonnel landmines and prevent the proliferation of nuclear weapons. And we, as a nation, contribute much less than our fair share of foreign assistance to low-income countries.
As a nation, we profess commitment to world peace. But we lead all other countries, by far, in both arms sales and military expenditures.
In many ways, we, as a nation, have become what we hate.
Image Credit: Dispersed, Occupy Oakland Move In Day. Photo by Glenn Halog. CC BY-NC 2.0 via Flickr.