Tragedies certainly aren’t the most popular types of performances these days. When you hear a film is a tragedy, you might think “outdated Ancient Greek genre, no thanks!” Back in those times, Athenians thought it their civic duty to attend tragic performances of dramas like Antigone or Agamemnon. Were they on to something that we have lost in contemporary Western society? Is there something specifically valuable in a tragic performance that a spectator doesn’t get from other types of performances, such as those of our modern genres of comedy, farce, and melodrama?
Since films reach a greater audience in our culture than plays, after updating Aristotle’s Poetics for the twenty-first century, we analyzed what we call “cinematic tragedies”: films that demonstrate the key components of Aristotelian tragedy. We conclude that a tragedy must consist in the representation of an action that: (1) is complete; (2) is serious; (3) is probable; (4) has universal significance; (5) involves a reversal of fortune (from good to bad); (6) includes recognition (a change in epistemic state from ignorance to knowledge); (7) includes a specific kind of irrevocable suffering (in the form of death, agony, or a terrible wound); (8) has a protagonist who is capable of arousing compassion; and (9) is performed by actors. The effects of the tragedy must include: (10) the arousal in the spectator of pity and fear; and (11) a resolution of pity and fear that is internal to the experience of the drama.
Unlike melodrama (which we hold is the most common film genre), tragedy calls on spectators to ponder thorny moral issues and to navigate them with their own moral compass. One such cinematic tragedy — Into The Wild, 2007, directed by Sean Penn — thematizes the preciousness and precariousness of human life alongside environmental problems, raising questions about human beings’ apparent inability to live on earth without despoiling the beauty and integrity of the biosphere. Other cinematic tragedies deal with a variety of problems with which our modern societies must grapple.
One such topic is illegal immigration, a highly politicized issue that is far more complex than national governments seem equipped to handle, especially beyond the powers of the two parties in the American system. Cinematic tragedies that deal with this issue have been produced over several decades involving immigration into various Western countries, especially the United States; these include Black Girl (France, 1966), El norte (US/UK, 1983), and Sin nombre (Mexico, 2009), the last of which we will expand on here.
In US director Cary Fukunaga’s Sin nombre (which means “Nameless” but which was released in the United States under the Spanish title), Hondurans escaping from their harsh political and economic realities risk their lives in order to make it to the United States, through Mexico, on the tops of rail cars. They travel in this manner since, as we all know, there would be no other legal way for most of these foreign citizens to come to the United States. Over the course of the journey, the immigrants endure terrible suffering or die at the hands of gang members who rob, rape, and even kill some of them.
The film focuses on just a few of the multitudes atop the trains: on a teenage Honduran girl, Sayra, migrating with her father and uncle; and on a few of the gang members. One of them, Casper, has had a change of heart and is no longer loyal to the gang after its leader killed Casper’s girlfriend while trying to rape her. Casper and other gang members are atop the train robbing the migrants, but when the leader tries to rape Sayra, Casper defends her by killing him. Ultimately, Sayra arrives in the United States. However, she realizes that the cost has been too great: her father has died falling off of the train, and she has lost Casper, who is, ironically, shot to death by the pre-pubescent boy whom he himself had trained in the ways of the gang in the opening scenes of the film.
The tremendous losses, and the scenes of suffering, rape, and murder, make unlikely the possibility that the spectator will feel that Sayra’s arrival constitutes a happy ending. In some other aesthetic treatment, Casper’s ultimate death might have been melodramatized as redemptive selflessness for the sake of his new girlfriend. But in Fukunaga’s film, the juxtaposed images imply a continuing cycle of despair and death: Casper’s young killer in Mexico is promoted up the ranks of the gang with a new tattoo, while Sayra’s uncle, back in Honduras after being deported from Mexico, starts the voyage to the United States all over again. Sayra too may face deportation in the future. Following the scene of the reinvigoration of the criminal gang system, as its new young leader gets his first tattoo, the viewer sees Sayra outside a shopping mall in the American southwest. The teenage girl has arrived in the United States and may aspire to participate in advanced consumer capitalism, yet she has lost so much and suffered so undeservingly.
This aesthetic juxtaposition prompts the spectator to attend to the failure of Western political leaders to create a humane system of immigration for the twenty-first century, one which cannot be reached with the entrenched politicized views of the “two sides of the aisle,” which miss the human story of immigrants’ plight. This film—like all tragedies—promotes the spectator’s active pondering; that is, it challenges the viewer to respond in some way.
In the tradition of philosophers as various as Aristotle, Seneca, Schopenhauer, Nietzsche, Martha Nussbaum, and Bernard Williams, we find that tragedies bring to conscious awareness the most significant moral, social, political, and existential problems of the human condition. A film such as Sin nombre, through its tragic performance, points to one of these terrible necessities with which our contemporary Western culture must grapple. While it doesn’t offer an answer, this cinematic tragedy prompts us to recognize and deal with a seemingly intractable problem that needs to move beyond the current impasse of political debate, as we in the industrialized nations continue to shop for and watch movies in the comfort of our malls.
When the first production of On the Town in 1944 featured the Japanese American ballerina Sono Osato as its star, as part of a cast that also included whites and blacks, it aimed for a realistic depiction of the diversity among US citizens during World War II. It did so at a time when African Americans were expressing affinity with Nisei – that is, with second-generation children of Japanese nationals who had immigrated to other countries. The two communities shared the struggle of discrimination by the majority culture.
In 1942, the Office of War Information conducted a survey in Harlem, trying to gain an African-American perspective on the war, and opinions about the Japanese emerged in the process. Many Harlemites communicated a feeling that “these Japanese are colored people.” That quotation comes from a letter written by William Pickens, an African-American journalist who worked for the US Department of Treasury during World War II. When asked “Would you be better off if America or the Axis won the war?” most blacks in the survey stated they “would be treated either the same or better under Japanese rule, although a large majority responded that conditions would be worse under the Germans.”
Yet relationships between these two marginalized communities were not always easy, and On the Town became a flash point for racial distress. A striking case appeared in the memoir Long Old Road (Trident Press, 1965), written by Horace R. Cayton, Jr. An African American sociologist from Chicago, Cayton attended On the Town soon after he heard about the bombing of Hiroshima, which occurred on 6 August 1945. He articulated a shared mission between Nisei and African Americans, yet he did so with considerable agitation. “Our seats were good, and the theater was cool after the heat of New York,” wrote Cayton. He responded positively to the opening number, “New York, New York,” then launched into an assessment of the racial and political complexities posed by Osato’s appearance on stage at that particular moment in time. He perceived her as racially accommodating.
“It was a catchy tune with cute lyrics, but when the beautiful Sono Osato, who is of Japanese descent, appeared and frolicked with the American sailors, I was filled with anger and disgust,” wrote Cayton. “I care more about your people than you do, I thought, as I sat through the rest of the first act looking at the floor and wondering how soon I could escape to the bar next door.”
Cayton’s “anger and disgust” came from watching Osato engage directly and uncritically with white actors playing the role of sailors. At intermission, Cayton’s wife June, who was white, said to him: “This is the first good musical I’ve seen in years. Isn’t Sono Osato wonderful?” Cayton then recounted a tense conversation between the two of them:
“If I were half-Japanese I wouldn’t be dancing with three American sailors at a time like this,” I [Cayton] commented sourly.
“Why shouldn’t she? She’s as American as you or I.” June began to warm to her subject. “She was born in this country. She’s one hundred per cent American, doesn’t even understand Japanese.”
[Cayton replied:] “She’s a Jap, I’m a nigger, and you’re a white girl. Let none of us forget what we are.”
Cayton’s outburst comes across as a racial polemic. But there was deep complexity to his reaction, as he expressed solidarity with other non-white races as they confronted the hegemonic power of Caucasians. Even though his language is disturbing, it is extraordinarily frank, acknowledging the era’s venomous racism against the Japanese and the degree to which African Americans felt themselves to be backed against a wall during World War II. Cayton continued:
“I’m torn a dozen ways. I didn’t want the Japanese to win; after all, I am an American. But the mighty white man was being humiliated, and by the little yellow bastards he had nothing but contempt for. It gave me a sense of satisfaction, a feeling that white wasn’t always right, not always able to enforce its will on everyone who was colored. All those fine white liberals rejoicing because we dropped a bomb killing or maiming seventy-eight thousand helpless civilians. Why couldn’t we have dropped it on the Germans—because they were white? No, save it for the yellow bastards.”
Those multi-layered thoughts were unleashed by watching Sono Osato on stage, dancing an identity that was intended to portray her as “All-American” yet could not avoid the realities of her mixed-race heritage at a harrowing historical moment.
Headline Image: Sono Osato modeling a dress by Pattullo Modes, early 1940s. Dance Clipping Files, New York Public Library at Lincoln Center, Astor, Lenox, and Tilden Foundations.
This month marks the 50th anniversary of Disney’s beloved film Mary Poppins, starring the legendary Julie Andrews. Although Andrews was only twenty-nine at the time of the film’s release, she had already established herself as a formidable star with numerous credits to her name and performances opposite Richard Burton, Rex Harrison, and other leading actors of Hollywood’s Golden Age. Mary Poppins would earn Andrews an Academy Award for Best Actress and serve as a milestone in a career that continues today. Herewith are some of our favorite songs from Andrews’ illustrious career.
“I Could Have Danced All Night”
Andrews belted out this song in the 1956 Broadway performance of My Fair Lady. Andrews proved her singing capabilities playing Eliza Doolittle opposite Rex Harrison as Professor Higgins, although she was replaced in the film version (with Audrey Hepburn acting and Marni Nixon dubbing).
“Camelot”
Andrews performed the show’s title song during its 1960 Broadway run. The actress played Queen Guenevere – a title she was apparently comfortable with, later playing Queen Renaldi in Disney’s Princess Diaries – opposite Richard Burton as King Arthur.
“Impossible/It’s Possible”
Starring in another royal role, Andrews played the title character in CBS’ 1957 production of Cinderella, written by Richard Rodgers and Oscar Hammerstein.
“Supercalifragilisticexpialidocious”
People are still reciting this tongue twister performed by Andrews in Disney’s 1964 hit film Mary Poppins. In addition to earning her an Oscar, Andrews’ role as the angelic English nanny cemented her name in silver screen history.
“My Favorite Things”
Hot on the heels of her success from Mary Poppins, Andrews starred as Maria von Trapp in The Sound of Music, expanding her international fame and branding herself as a singer to be reckoned with in Hollywood and on Broadway.
Much of the comment on the official photographic portrait of the Queen released in April this year to celebrate her 88th birthday focussed on her celebrity photographer, David Bailey, who seemed to have ‘infiltrated’ (his word) the bosom of the establishment. Less remarked on, but equally of note, is that the very informal pose that the queen adopted showed her smiling, and not only smiling but also showing her teeth.
It is only very recently that monarchs have cracked a smile for a portrait, let alone a smile that revealed teeth. Before the modern age, monarchs embodied power – and power rarely smiles. Indeed it has often been thought to be worrying when it does. Prime Minister Tony Blair’s endlessly flashing teeth caused this powerful statesman to trigger as much suspicion as approval. The negative reaction was testimony to an unwritten law of portraiture, present until very recently in western art. According to this, an open mouth signifies plebeian status, extreme emotion, or else folly and licence, bordering on insanity. As late as the eighteenth century, an individual who liked to be depicted smiling as manifestly as Tony Blair would have risked being locked up as a lunatic.
The individual who broke this unwritten law of western portraiture was Louise Élisabeth Vigée Le Brun, whose charming smile – at once twinklingly seductive and reassuringly maternal – was displayed at the Paris Salon in 1787. It appears on the front cover of my book, The Smile Revolution in Eighteenth-Century Paris. The French capital had witnessed the emergence of modern dentistry over the course of the century – a subject that has been largely neglected. In addition, the city’s elites adopted the polite smile of sensibility that they had learned from the novels of Samuel Richardson and Jean-Jacques Rousseau. Madame Vigée Le Brun’s smile shocked the artistic establishment and the stuffy court elite out at Versailles, who still observed tradition, but it marked the advent of white teeth as a positive attribute in western art.
Yet if Vigée Le Brun’s example was followed by many of the most eminent artists of her day (David, Ingres, Gérard, etc), the white tooth smile took much longer to establish itself as a canonical and approved portrait gesture. The eighteenth century’s ‘Smile Revolution’ aborted after 1789. Politics under the French Revolution and the Terror were far too serious to accommodate smiles. The increasingly gendered world of separate spheres consigned the smile to the domestic environment. And for most of the nineteenth century, monarchs and men of power in the public sphere, following traditional modes of the expression of gravitas, invariably presented a smile-less face to the world.
Probably the first reigning monarch to have a portrait painted that revealed white teeth was Queen Victoria. This may seem surprising given her famous penchant for staying resolutely ‘unamused’. Yet in 1843, she commissioned the German portrait-painter Franz Xaver Winterhalter to paint a delightfully informal study that showed the twenty-four-year-old monarch reclining on a sofa, revealing her teeth in a dreamy and indeed mildly aroused smile. Yet the conditions of the portrait’s commission showed that the seemly old rules were still in place. For Victoria had commissioned the portrait as a precious personal gift for her ‘angelic’ husband, Prince Albert. What she called her ‘secret picture’ was hung in the queen’s bedroom and was not seen in public throughout her reign. Indeed, its display in an exhibition in 2009, over a century after her death, marked only its second public showing since its creation. This was three years after Rolf Harris’s 2006 portrayal of the queen with a white-tooth smile, a significant precursor to David Bailey’s photograph.
If English monarchs have thus been late-comers to the twentieth-century smile-fest, their subjects have been baring their teeth in a smile for many decades. As early as the 1930s and 1940s, the practice of saying ‘cheese’ when confronted with a camera became the norm. Hollywood-style studio photography, advertising models, and more relaxed forms of sociability and subjectivity have combined to produce the twentieth century’s very own Smile Revolution. So it is worth reflecting whether the reigning monarch’s early twenty-first century acceptance of the smile’s progress will mark a complete and durable revolution in royal portraiture. Seemingly only time – and the Prince of Wales – will tell.
As we celebrate the golden anniversary of the Civil Rights Act of 1964, a significant aspect of the struggle for racial equality often gets ignored: racial activism in performance. Actors, singers, and dancers mobilized over the decades, pushing back against racial restrictions that shifted over time, and On the Town of 1944 marked an auspicious but little-recognized moment in that history.
On the Town opened on Broadway in December of 1944 towards the end of World War II, and marked the debut of a dazzling group of creative artists: the composer Leonard Bernstein, the lyricists Betty Comden and Adolph Green, and the choreographer Jerome Robbins. All were the children of Jewish immigrants. Balancing left-leaning personal politics with the pressure of launching their first show, this team of twenty-somethings made a number of hiring decisions that boldly challenged racial performance practices of the day. Exploring those progressive choices opens a perspective on the racial climate for performers of the day.
One daring step was to feature the Japanese-American dancer Sono Osato in the starring role of Ivy Smith, a character shaped as an “All-American Girl,” while the United States was at war with Japan, internment camps were operating on the West Coast and in the Southwest, and government propaganda was aggressively targeting the Japanese. Like thousands of Japanese nationals, Osato’s father, Shoji, was detained by the US government immediately after Pearl Harbor, and he remained on parole in Chicago for most of the war. As a result, he could not attend his daughter’s opening night on Broadway. Declassified FBI files tell the story of Shoji’s imprisonment and persecution, revealing no justification for the treatment he received.
As a result, On the Town—a show about three American sailors on a one-day leave in New York City—flirted with what was then called miscegenation. The pursuit of Ivy by one of those sailors — Gabey (played by Cris Alexander, an actor of Caucasian heritage) — was the central premise of the show. A promotional photo, now housed in clipping files at the New York Public Library at Lincoln Center, shows Osato standing seductively over Alexander, giving a sense of how brazenly their relationship was portrayed.
Equally audacious were staging decisions related to African Americans in the cast. On opening night, there were 6 blacks out of a cast of 56. By today’s standards, that number appears as tokenism. Yet these black performers directly challenged racial stereotypes of the day. On the Town eschewed blackface, steering clear of bandanas, maids, and butlers. It did not segregate the black performers on stage, as was often the case, but rather it modeled an integrated citizenry. Black dancers in sailor costumes stood comfortably alongside their white comrades, and there was mixed-race dancing, some of which required training in ballet. These staging decisions modeled a vision of urban interracial fellowship. They imagined an alternative to the segregated US military of World War II, and they offered an early case of what has become known as color-blind casting. The Times Square Ballet, which closed Act I (pictured here), was one of the principal showcases for these progressive racial statements.
In yet another gesture towards civil rights, Everett Lee took over the podium of On the Town, becoming one of the first African Americans to conduct an all-white orchestra in a mainstream Broadway production. Lee had been concertmaster of the show since opening night, and he became conductor nine months into the run.
The racial desegregation of performance on New York’s stages gained traction as the Civil Rights Movement grew more effective in the 1950s and 1960s. Yet the advances were never completely game-changing, just as was true in the culture at large. To its credit, however, the first production of On the Town yielded a site of opportunity, and many of its performers of color went on to distinguished careers in the theater and concert hall.
Imagine a possible world where you are having coffee with … Aristotle! You begin exchanging views on how you like the coffee; you examine its qualities – it is bitter, hot, aromatic, etc. It tastes to you this way or this other way. But how do you make these perceptual judgments? It might seem obvious to say that it is via the senses we are endowed with. Which senses though? How many senses are involved in coffee tasting? And how many senses do we have in all?
The question of how many senses we have is far from being of interest to philosophers only; perhaps surprisingly, it appears to be at the forefront of our thinking – so much so that it was even made the topic of an episode of the BBC comedy program QI. Yet, it is a question that is very difficult to answer. Neurologists, computer scientists and philosophers alike are divided on what the right answer might be. 5? 7? 22? Uncertainty prevails.
Even if the number of the senses is a question for future research to settle, it is in fact as old as rational thought. Aristotle raised it, argued about it, and even illuminated the problem, setting the stage for future generations to investigate it. Aristotle’s views are almost invariably the point of departure of current discussions, and get mentioned in what one might think unlikely places, such as the Harvard Medical School blog, the Johns Hopkins University Press blog, and QI. “Why did they teach me they are five?” says Alan Davies on the QI panel. “Because Aristotle said it,” replies Stephen Fry in the blink of an eye. (Probably) the senses are in fact more than the five Aristotle identified, but his views remain very much a point of departure in our thinking about this topic.
Aristotle thought the senses are five because there are five types of perceptible properties in the world to be experienced. This criterion for individuating the senses has had a very longstanding influence, in many domains including for example the visual arts.
Yet, something as ‘mundane’ as coffee tasting generates one of the most challenging philosophical questions, and not only for Aristotle. As you are enjoying your cup of coffee, you appreciate its flavor with your senses of taste and smell: this is one experience and not two, even if two senses are involved. So how do the senses do this? For Aristotle, no sense can by itself enable the perceiver to receive input of more than one modality, precisely because uni-modal sensitivity is what, according to Aristotle, uniquely identifies each sense. On the other hand, it would be of no use to the perceiving subject to have two different types of perceptual input delivered by two different senses simultaneously, but as two distinct perceptual contents. If this were the case, the difficulty would remain unsolved. How would the subject make a perceptual judgment (e.g. about the flavor of the coffee), given that not one of the senses could operate outside its own special perceptual domain, but perceptual judgment presupposes discriminating, comparing, binding, etc. different types of perceptual input? One might think that perceptual judgments are made at the conceptual rather than perceptual level. Aristotle (and Plato) however would reject this explanation because they seek an account of animal perception that generalizes to all species and is not only applicable to human beings. In sum, for Aristotle to deliver a unified multimodal perceptual content the senses need to somehow cooperate and gain access in some way to each other’s special domain. But how do they do this?
A sixth sense? Is that the solution? Is this what Aristotle means when talking about the ‘common’ sense? There cannot be room for a sixth sense in Aristotle’s theory of perception, for as we have seen each sense is individuated by the special type of perceptible quality it is sensitive to, and of these types there are only five in the world. There is no sixth type of perceptible quality that the common sense would be sensitive to. (And even if there were a sixth sense so individuated, this would not solve the problem of delivering multimodal content to the perceiver, because the sixth sense would be sensitive only to its own special type of perceptibles). The way forward is then to investigate how modally different perceptual contents, each delivered by one sense, can be somehow unified, in such a way that my perceptual experience of coffee may be bitter and hot at once. But how can bitter and hot be unified?
Modeling (metaphysically) how the senses cooperate to deliver to the perceiving subject unified but complex perceptual content is another breakthrough Aristotle made in his theory of perception. But it is much less known than his criterion for the senses’ individuation. In fact, Aristotle is often thought to have given an ad hoc and unsatisfactory solution to the problem of multimodal binding (of which tasting the coffee’s flavor is an instance), by postulating that there is a ‘common’ sense that somehow enables the subject to perform all the perceptual functions that the five senses singly cannot do. It is timely to depart from this received view, which does not do justice to Aristotle’s insights. Investigating Aristotle’s thoughts on complex perceptual content (often scattered among his various works, which adds to the interpretative challenge) reveals a much richer theory of perception than he is by and large thought to have.
If the number of the senses is a difficult question to address, how the senses combine their contents is an even harder one. Aristotle’s answer to it deserves at least as much attention as his views on the number of the senses currently receive in scholarly as well as ‘popular’ culture.
Headline image credit: Coffee. CC0 Public Domain via Pixabay
Michael Kennedy has described Job as one of Vaughan Williams’s mightiest achievements. It is a work which, in a full production, combines painting (the inspiration for the work came from a scenario drawn up by Geoffrey Keynes based on William Blake’s Illustrations of the Book of Job), literature (the King James Bible), music, and dance. The idea of a ballet on the Blake Job illustrations was conceived by Geoffrey Keynes, whose mother was a Darwin and a cousin of Vaughan Williams, assisted by another Darwin cousin, Gwen Raverat whom Keynes asked to design the scenery and costumes. They decided to keep it in the family and approached Vaughan Williams about writing the music. The idea took such a hold on the composer that he found himself writing to Mrs Raverat in August 1927 ‘I am anxiously awaiting your scenario – otherwise the music will push on by itself which may cause trouble later on’.
Out of all this emerged a musical work that exhibits the composer at the height of his powers. Often ballet music can seem only half the story when it is played apart from the dancing it was written for, but in this case the composer fully realised that an actual danced production was by no means assured (Diaghilev had firmly turned down Keynes’s offer of the ballet for Ballets Russes) and wrote a powerful piece for full orchestra, including organ, which could stand independently in a concert. That was indeed how Job received its first and second performances, the first in Norwich in October 1930 and the second in London in February 1931, both under the composer’s baton. It is dedicated to Adrian Boult. The first danced production was given by the recently formed Camargo Society at the Cambridge Theatre on 5 July 1931. It was choreographed by Ninette de Valois and conducted by Constant Lambert, who (much to the composer’s admiration) adeptly reduced the orchestration because the pit at the Cambridge Theatre could not accommodate the full orchestra specified by the composer. The part of Satan was danced by Anton Dolin.
Opinion was divided at the time as to how well the work stood up to performance independently of the dance dimension, but now, with the wisdom of hindsight, we can see it as having the stature of a symphony in terms of its overall shape and length. The careful placing of different elements in the score – the heavenly, the earthly and the infernal, all characterised by a different style of music – emphasises the sense of symphonic unity. In the music for Satan we hear a foretaste of the savagery which was to cause so much astonishment in the Fourth Symphony, on which he started work almost at once after completing Job. In the music for Job and his family we find elements of the calm we have come to associate with the Fifth Symphony, while the music for God and the ‘sons of the morning’ (Saraband, Pavane, and Galliard) presents a broad diatonic sweep at the beginning and then towards the end of the work. This will become apparent to listeners of Job performed at the Promenade Concert on 13 August 2014. They will also be able to draw comparisons between the ethereal violin solo in The Lark Ascending and the violin solo in ‘Elihu’s dance of youth and beauty’ in Scene VII.
It is no accident that two of the pieces, the Pavane and Galliard, together with the calm Epilogue, were played at Vaughan Williams’s funeral at Westminster Abbey on 19 September 1958.
Headline image credit: symphony orchestra concert philharmonic hall music. Public domain via Pixabay.
Sidebar image credit: Ralph Vaughan Williams. Lebrecht Archive.
In August 2014 the world marks the 100th anniversary of the outbreak of the First World War.
A time of great upheaval for society in social, economic, and sexual terms, among others, the onset of war punctured the sartorial mould of the early 20th century and resulted in perhaps one of the biggest strides in clothing reform that women had ever seen.
The turn of the century began with a feeling of unease and fevered anticipation regarding the changing political climate; the ‘new woman’ of the fin-de-siècle and the clothes associated with her threatened to disrupt conservative gender values of the middle and upper classes. But the position of women was about to take an even sharper turn. As it soon became necessary to recruit women into the war effort, hemlines got shorter, cuts became looser, and the two-piece suit took centre stage for the first time, making way for more practical attire. Women experienced a relative degree of liberation, entering professions and industries previously dominated by men, which created the need for an entirely new ‘working wardrobe’.
Fashion was about to move in a new, androgynous direction, permeating mainstream and avant-garde dress alike and fuelling the expansion of women’s role in the public sphere. Practical clothing influenced by men’s tailoring led the way, and the suit, newly composed of jackets and skirts, developed its own identity as a women’s garment with soft, loose lines. In the world of high fashion, Paul Poiret and his taste for the ‘exotic’ firmly established the innovative trend for the tube-like silhouette, which reverberated throughout the fashion sphere more broadly. The kimono similarly burst onto the scene, reflecting the sentiment for looser and freer garments. Less well remarked, perhaps, is the rapid development of the department store in Europe, which acknowledged the increasingly varied roles of women and made ready-made garments more available than ever before.
The changes were not only evident in Britain. Relationships between Germany and the French houses that dominated the fashion scene became increasingly fraught at the outbreak of war. As Irene Guenther remarks in Nazi Chic?, “the war was viewed as providing the perfect opportunity to unseat France, militarily and sartorially, from its throne. Because the conflict had slowed down the French fashion machine, a space had developed that the German nation was eager and ready to fill.” Luxury items imported from France, including silk, lace, and leather gloves were forbidden and a culture of “make do and mend” was established, which was set to echo throughout the Second World War that was to follow.
The Great War, with its disruptions, dislocations, and recastings, is rarely remembered for its creative output, but the war made way for innovative fashions and manufacturing techniques to suit a rapidly changing society and the new roles for the women and men who inhabited it. The sartorial changes witnessed in this turbulent decade became visual signifiers of the larger upheavals facing British and European society more generally, and we only have to look to the sartorial history of this period to glimpse the way in which societal roles were uprooted and the face of women’s fashion markedly changed.
What is a classic album? Not a classical album – a classic album. One definition would be a recording that is both of superb quality and of enduring significance. I would suggest that Miles Davis’s 1959 recording Kind of Blue is indubitably a classic. It presents music making of the highest order, and it has influenced — and continues to influence — jazz to this day.
There were several important records released in 1959, but no event or recording matches the importance of the release of the new Miles Davis album Kind of Blue on 17 August 1959. There were people waiting in line at record stores to buy it on the day it appeared. It sold very well from its first day, and it has sold increasingly well ever since. It is the best-selling jazz album in the Columbia Records catalogue, and at the end of the twentieth century it was voted one of the ten best albums ever produced.
But popularity and commercial success do not correlate with musical worth, and it is in the music on the recording that we find both quality and significance. From the very first notes we know we are hearing something new. Piano and bass draw the listener into a new world of sound: contemplative, dreamy, and yet intense.
The pianist here is Bill Evans, who was new to Davis’s band and a vital contributor to the whole project. Evans played spaciously and had an advanced harmonic sense. His sound was floating and open. The lighter sound and less crowded manner were more akin to the understated way in which Davis himself played. “He plays the piano the way it should be played,” said Davis about Bill Evans. And although Davis’s speech was often sprinkled with blunt Anglo-Saxon expressions, he waxed poetic about Evans’s playing: “Bill had this quiet fire. . . . [T]he sound he got was like crystal notes or sparkling water cascading down from some clear waterfall.” The admiration was mutual. Evans thought of Davis and the other musicians in his band as “superhumans.”
Evans makes his mark throughout the album, though Wynton Kelly substitutes for him on the bluesier and somewhat more traditional second track “Freddie Freeloader.”
Musicians refer to the special sound on Kind of Blue as “modal.” And the term “modal jazz” is often found in writings about jazz styles and jazz history. What exactly is modal jazz? There are two characteristic features that set this style apart. The first is the use of scales that are different from the standard major and minor ones. So the first secret of the special sound on this album is the use of unusual scales. But the second characteristic is even more noticeable, and that is the way the music is grounded on long passages of unchanging harmony. “So What” is an AABA form in which all the A sections are based on a single harmony and the B sections on a different harmony a half step higher.
A [D harmony]
A [D harmony]
B [Eb harmony]
A [D harmony]
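As a purely illustrative aside (not part of the original article), the 32-bar chorus layout listed above can be sketched in a few lines of Python. The section and harmony labels follow the article’s description; “D harmony” and “Eb harmony” are the article’s own shorthand, not formal chord symbols, and the 8-bar section length is the standard AABA convention assumed here.

```python
# A minimal sketch of the "So What" chorus form described above:
# four 8-bar sections (AABA), each resting on a single unchanging
# harmony, with the bridge (B) a half step higher.

SO_WHAT_FORM = [
    ("A", "D harmony"),   # bars 1-8
    ("A", "D harmony"),   # bars 9-16
    ("B", "Eb harmony"),  # bars 17-24, a half step up
    ("A", "D harmony"),   # bars 25-32
]

def chorus_harmonies(form=SO_WHAT_FORM, bars_per_section=8):
    """Expand the form into a bar-by-bar list of harmonies."""
    return [harmony for _, harmony in form for _ in range(bars_per_section)]

bars = chorus_harmonies()
print(len(bars))   # 32 bars per chorus
print(bars[16])    # Eb harmony (the bridge)
```

The long stretches of a single repeated harmony, rather than a chord change every bar or two, are exactly what gives the modal sound described above.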
Unusual scales are most clearly heard on “All Blues.”
And for hypnotic and meditative, you can’t do better than “Flamenco Sketches,” the last track, which brings the modal conception to its most developed point. It is based upon five scales or modes, and each musician improvises in turn upon all five in order. (A clear analysis of this track is given in Mark Gridley’s excellent jazz textbook Jazz Styles.)
An aside here:
It is possible — even likely — that the titles of these two tracks are reversed. In my Musical Quarterly article (link below), I suggest that “Flamenco Sketches” is the correct title for the strumming medium-tempo music on the track that is now known as “All Blues” and that “All Blues” is the correct title for the last, very slow, track on the album. I also show how the mixup occurred in 1959, just as the album was released.
Perhaps the most beautiful piece on the album is the Evans composition “Blue in Green,” for which Coltrane fashions his greatest and most moving solo. Of the five tracks on the album, four are quite long, ranging from nine to eleven and a half minutes, and they are placed two before and two after “Blue in Green.” Regarding the program as a whole, therefore, one sees “Blue in Green” as the small capstone of a musical arch. But “Blue in Green” itself is in arch form, with a palindromic arrangement of the solos. The capstone of this arch upon an arch is the thirty seconds or so of Coltrane’s solo.
                    “Blue in Green”
      “Freddie Freeloader”        “All Blues”
“So What”                              “Flamenco Sketches”
                     Kind of Blue
The great strength of Kind of Blue lies in the consistency of its inspiration and the palpable excitement of its musicians. “See,” wrote Davis in his autobiography, “If you put a musician in a place where he has to do something different from what he does all the time . . . that’s where great art and music happens.”
Grove Music Online presents this multi-part series by Don Harrán, Artur Rubinstein Professor Emeritus of Musicology at the Hebrew University of Jerusalem, on the life of Jewish musician Salamone Rossi on the anniversary of his birth in 1570. Professor Harrán considers three major questions: Salamone Rossi as a Jew among Jews; Rossi as a Jew among Christians; and the conclusions to be drawn from both.
Salamone Rossi as a Jew among Jews
What do we know of Salamone Rossi’s family? His father was Bonaiuto Azaria de’ Rossi (d. 1578), the author of Me’or einayim (Light of the Eyes). Rossi had a brother, Emanuele (Menaḥem), and a sister, Europa, who, like him, was a musician. She is known to have performed as a singer in the play Il ratto di Europa (“The Rape of Europa”) in 1608. The court chronicler Federico Follino raved over her performance, describing it as that of “a woman understanding music to perfection” and “singing, to the listeners’ great delight and their greater wonder, in a most delicate and sweet-sounding voice.”
Salamone Rossi appears to have used his connections at court to improve his family’s situation, as in 1602 when Rossi wrote to Duke Vincenzo on behalf of his brother Emanuele:
The duke granted the request in order “to show Salamone Rossi ebreo some sign of gratitude for services that he, with utmost diligence, rendered and continues to render over many years. We have resolved to confer the duties of collecting the fees on the person of Emanuele, Salamone’s brother, in whose faith and diligence we place our confidence.”
Until now, it has been thought that Rossi earned his livelihood from his salary at the Mantuan court; since that salary was, by comparison with those of other musicians at the court, very small, Rossi supplemented it by earning money on the side through investments. From 1622 on he was earning 1,200 lire, a large sum for a musician whose annual wages at the court were only 156 lire. Rossi needed the money to cover the cost of his publications and to support his family.
Rossi’s situation within the community can only be conjectured. By “community,” we are talking about some 2,325 Jews living in the city of Mantua out of a total population of 50,000. True, Rossi was its most distinguished “musician” and his service for the court would have brought honor on the Jewish community. But because of his non-Jewish connections, he enjoyed privileges denied his coreligionists. In 1606, for example, he was exempted from wearing a badge. The badge was shameful to Jews who, in their activities, were in close touch with Christians, as were Rossi and other Jews who performed before them as musicians or actors or who engaged in loan banking.
Like other “privileged” Jews, Rossi was in a difficult position: his Christian employers considered him a Jew, yet the Jews probably considered him an outsider. He could choose between two alternatives: convert to Christianity to improve his standing with the Christians, or solidify his position within the Jewish community, which he probably did whenever he could by representing its interests before the authorities and by providing compositions for Jewish weddings, circumcisions, the inauguration of Torah scrolls, and Purim festivities. All this is speculative, for we know nothing about these activities. We are better informed about Rossi’s role in the Jewish theater, whose actors were required to prepare one or two plays with musical intermedi each year. Since the Jews were expected to act, sing, and play instruments, their leading musician Salamone Rossi probably contributed to the theater by writing vocal and instrumental works, rehearsing them and, together with others, playing or even singing them.
It was in his Hebrew collection, however, that Rossi demonstrated his connections with his people. His intentions were good: after having published collections of Italian vocal music and instrumental works, Rossi decided, around 1612, to write Hebrew songs. He describes these songs as “new songs [zemirot] that I devised through ‘counterpoint’ [seder].” True, attempts were made to introduce art music into the synagogue in the early seventeenth century. But none of these early works survive. Rossi’s thirty-three “Songs by Solomon” (Ha-shirim asher li-Shelomoh) are the first Hebrew polyphonic “songs” to be printed. Here is an example from the opening of the collection, “Elohim, hashivenu”.
Good intentions are one thing; the status of art music in the synagogue is another. The prayer services made no accommodation for art music. Rossi’s aim, to quote him, was to write works “for thanking God and singing [le-zammer] to His exalted name on all sacred occasions” to be performed in prayer services, particularly on Sabbaths and festivals.
Headline image credit: Opening of Salomone de Rossi’s Madrigaletti, Venice, 1628. Photo of Exhibit at the Diaspora Museum, Tel Aviv. Public domain via Wikimedia Commons.
The anniversaries of conflicts seem to be more likely to capture the public’s attention than any other significant commemorations. When I first began researching the nurses of the First World War in 2004, I was vaguely aware of an increase in media attention: now, ten years on, as my third book leaves the press, I find myself astonished by the level of interest in the subject. The Centenary of the First World War is becoming a significant cultural event. This time, though, much of the attention is focussed on the role of women, and, in particular, of nurses. The recent publication of several nurses’ diaries has increased the public’s fascination for the subject. A number of television programmes have already been aired. Most of these trace journeys of discovery by celebrity presenters, and are, therefore, somewhat quirky – if not rather random – in their content. The BBC’s project, World War One at Home, has aired numerous stories. I have been involved in some of these – as I have, also, in local projects, such as the impressive recreation of the ‘Stamford Military Hospital’ at Dunham Massey Hall, Cheshire. Many local radio stories have brought to light the work of individuals whose extraordinary experiences and contributions would otherwise have remained hidden – women such as Kate Luard, sister-in-charge of a casualty clearing station during the Battle of Passchendaele; Margaret Maule, who nursed German prisoners-of-war in Dartford; and Elsie Knocker, a fully-trained nurse who established an aid post on the Belgian front lines. One radio story is particularly poignant: that of Clementina Addison, a British nurse, who served with the French Flag Nursing Corps – a unit of fully trained professionals working in French military field hospitals. Clementina cared for hundreds of wounded French ‘poilus’, and died of an unnamed infectious disease as a direct result of her work.
The BBC drama, The Crimson Field was just one of a number of television programmes designed to capture the interest of viewers. I was one of the historical advisers to the series. I came ‘on board’ quite late in the process, and discovered just how difficult it is to transform real, historical events into engaging drama. Most of my work took place in the safety of my own office, where I commented on scripts. But I did spend one highly memorable – and pretty terrifying – week in a field in Wiltshire working with the team producing the first two episodes. Providing ‘authentic background detail’, while, at the same time, creating atmosphere and constructing characters who are both credible and interesting is fraught with difficulty for producers and directors. Since its release this spring, The Crimson Field has become quite controversial, because whilst many people appear to have loved it, others complained vociferously about its lack of authentic detail. Of course, it is hard to reconcile the realities of history with the demands of popular drama.
I give talks about the nurses of the First World War, and often people come up to me afterwards to ask about The Crimson Field. Surprisingly often, their one objection is that the hospital and the nurses were ‘just too clean’. This makes me smile. In these days of contract-cleaners and hospital-acquired infection, we have forgotten the meticulous attention to detail the nurses of the past gave to the cleanliness of their wards. The depiction of cleanliness in the drama was, in fact, one of its authentic details.
One of the events I remember most clearly about my work on set with The Crimson Field is the remarkable commitment of director, David Evans, and leading actor, Hermione Norris, in recreating a scene in which Matron Grace Carter enters a ward which is in chaos because a patient has become psychotic and is attacking a padre. The matron takes a sedative injection from a nurse, checks the medication and administers the drug with impeccable professionalism – and this all happens in the space of about three minutes. I remember the intensity of the discussions about how this scene would work, and how many times it was ‘shot’ on the day of filming. But I also remember with some chagrin how, the night after filming, I realised that the injection technique had not been performed entirely correctly. I had to tell David Evans that I had watched the whole sequence six times without noticing that a mistake had been made. Some historical adviser! The entire scene had to be re-filmed. The end result, though, is an impressive piece of hospital drama. Norris looks as though she has been giving intramuscular injections all her life. I shall never forget the professionalism of the director and actors on that set – nor their patience with the absent-minded-professor who was their adviser for the week.
In a centenary year, it can be difficult to distinguish between myths and realities. We all want to know the ‘facts’ or the ‘truths’ about the First World War, but we also want to hear good stories – and it is all the better if those elide facts and enhance the drama of events – because, as human beings, we want to be entertained as well. The important thing, for me, is to fully realise what it is we are commemorating: the significance of the contributions and the enormity of the sacrifices made by our ancestors. Being honest to their memories is the only thing that really matters – the thing that makes all centenary commemoration projects worthwhile.
Image credit: Ministry of Information First World War Collection, from Imperial War Museum Archive. IWM Non Commercial Licence via Wikimedia Commons.
Meet the woman behind Grove Music Online, Anna-Lise Santella. We snagged a bit of Anna-Lise’s time to sit down with her and find out more about her own musical passions and research.
Do you play any musical instruments? Which ones?
My main instrument is violin, which I’ve played since I was eight. I play both classical and Irish fiddle and am currently trying to learn bluegrass. In a previous life I played a lot of pit band for musical theater. I’ve also worked as a singer and choral conductor. These days, though, you’re more likely to find a mandolin or guitar in my hands.
Do you specialize in any particular area or genre of music?
My research interests are pretty broad, which is why I enjoy working in reference so much. Currently I’m working on a history of women’s symphony orchestras in the United States between 1871 and 1945. They were a key route for women seeking admission into formerly all-male orchestras like the Chicago Symphony. After that, I’m hoping to work on a history of the Three Arts Clubs, a network of residential clubs that housed women artists in cities in the US and abroad. The clubs allowed female performers to safely tour or study away from their families by giving them secure places to live while on the road, places to rehearse and practice, and a community of like-minded people to support them. In general, I’m interested in the ways public institutions have affected and responded to women as performers.
What artist do you have on repeat at the moment?
I tend to have my listening on shuffle. I like not being sure what’s coming next. That said, I’ve been listening to Tune-Yards’ (a.k.a. Merrill Garbus) latest album an awful lot lately. Neko Case with the New Pornographers and guitarist/songwriter/storyteller extraordinaire Jim White are also in regular rotation.
What was the last concert/gig you went to?
I’m lucky to live not far from the bandshell in Prospect Park and I try to catch as many of the summer concerts there as I can. The last one I attended was Neutral Milk Hotel, although I didn’t stay for the whole thing. I’m looking forward to the upcoming Nickel Creek concert. I love watching Chris Thile play, although he makes me feel totally inadequate as a mandolinist.
How do you listen to most of the music you listen to? On your phone/mp3 player/computer/radio/car radio/CDs?
Mostly on headphones. I’m constantly plugged in, which makes me not a very good citizen, I think. I’m trying to get better about spending some time just listening to the city. But there’s something about the delivery system of headphones to ears that I like – music transmitted straight to your head makes you feel like your life has a soundtrack. I especially like listening on the subway. I’ll often be playing pieces I’m trying to learn on violin or guitar and trying to work out fingerings, which I’m pretty sure makes me look like an insane person. Fortunately insane people are a dime a dozen on the subway.
Do you find that listening to music helps you concentrate while you work, or do you prefer silence?
I like listening while I work, but it has to be music I find fairly innocuous, or I’ll start thinking about it and analyzing it and get distracted from what I’m trying to do. Something beat driven with no vocals is best. My usual office soundtrack is a Pandora station of EDM.
Has there been any recent music research or scholarship on a topic that has caught your eye or that you’ve found particularly innovative?
In general I’m attracted to interdisciplinary work, as I like what happens when ideologies from one field get applied to subject matter of another – it tends to make you reevaluate your methods, to shake you out of the routine of your thinking. Right now I’ve become really interested in the way in which we categorize music vs. noise and am reading everything I can on the subject from all kinds of perspectives – music cognition, acoustics, cultural theory. It’s where neuroscience, anthropology, philosophy and musicology all come together, which, come to think of it, sounds like a pretty dangerous intersection. Currently I’m in the middle of The Oxford Handbook of Sound Studies (2012) edited by Trevor Pinch and Karin Bijsterveld. At the same time, I’m rereading Jacques Attali’s landmark work Noise: The Political Economy of Music (1977). We have a small music/neuroscience book group made up of several editors who work in music and psychology who have an interest in this area. We’ll be discussing the Attali next month.
Who are a few of your favorite music critics/writers?
There are so many – I’m a bit of a criticism junkie. I work a lot with period music journalism in my own research and I love reading music criticism from the early 20th century. It’s so beautifully candid — at times sexy, cruel, completely inappropriate — in a way that’s rare in contemporary criticism. A lot of the reviews were unsigned or pseudonymous, so I’m not sure I have a favorite I can name. There’s a great book by Mark N. Grant on the history of American music criticism called Maestros of the Pen that I highly recommend as an introduction. For rock criticism, Ellen Willis’s columns from the Village Voice are still the benchmark for me, I think. Of people writing currently, I like Mark Gresham (classical) and Sasha Frere-Jones (pop). And I like to argue with Alex Ross and John von Rhein.
I also like reading more literary approaches to musical writing. Geoff Dyer’s But Beautiful is a poetic, semi-fictional look at jazz, with a mix of stories about legendary musicians like Duke Ellington and Lester Young interspersed with an analytical look at the music. And some of my favorite writing about music is found in fiction. Three of my favorite novels use music to tell the story. Richard Powers’ The Time of Our Singing uses Marian Anderson’s 1939 concert at the Lincoln Memorial as the focal point of a story that alternates between a musical mixed-race family and the story of the Civil Rights movement itself. In The Fortress of Solitude, Jonathan Lethem writes beautifully about music of the 1970s in a narrative that mediates between nearly journalistic detail of Brooklyn in that decade and magical realism. And Kathryn Davis’s The Girl Who Trod on a Loaf contains some of the best description of compositional process that I’ve come across in fiction. It’s a challenge to evoke sound in prose – it’s an act of translation – and I admire those who can do it well.
Egyptian mummies continue to fascinate us due to the remarkable insights they provide into ancient civilizations. Flinders Petrie, holder of the first UK chair in Egyptology, did not have the luxury of X-ray techniques in his era of archaeological analysis in the late nineteenth century. However, twentieth-century Egyptologists have benefited from Roentgen’s legacy. Grafton Elliot Smith, along with Howard Carter, did early work on plain X-ray analysis of mummies when they X-rayed the mummy of Tuthmosis IV in 1904. Numerous X-ray analyses were subsequently performed using portable X-ray equipment on mummies in the Cairo Museum.
Since then, many studies have been done worldwide, especially with the development of more sophisticated imaging techniques such as CT scanning, invented by Hounsfield in the UK in the 1970s. With this, it became easier to visualize the interiors of mummies, revealing the mysteries hidden under their linen-wrapped bodies and the elaborate face masks which had perplexed researchers for centuries. Harwood-Nash performed one of the earliest head scans of a mummy in Canada in 1977, and Isherwood’s team, along with Professor David, performed some of the earliest scans of mummies in Manchester.
A fascinating new summer exhibition at the British Museum has recently opened, and consists of eight mummies, all from different periods and Egyptian dynasties, that have been studied with the latest dual energy CT scanners. These scanners have 3D volumetric image acquisitions that reveal the internal secrets of these mummies. Mummies of babies and young children are included, as well as adults. There have been some interesting discoveries already, for example, that dental abscesses were prevalent as well as calcified plaques in peripheral arteries, suggesting vascular disease was present in the population who lived over 3,000 years ago. More detailed analysis of bones, including the pelvis, has been made possible by the scanned images, enabling more accurate estimation of the age of death.
Although embalmers took their craft seriously, mistakes did occur, as evidenced by one of the mummy exhibits: Padiamenet’s head became detached from the body during the process and was subsequently stabilized with metal rods. Padiamenet was a temple doorkeeper who died around 700 BC. Mummies had their brains removed but the heart preserved, as this was considered the seat of the soul. Internal organs such as the stomach and liver were often removed; bodies were also buried with a range of amulets.
The exhibit provides a fascinating introduction to mummies and early Egyptian life more than 3,000 years ago and includes new insights gleaned from cutting-edge twenty-first-century imaging technology.
In the first autumn of World War I, a German infantryman from the 25th Reserve Division sent this pithy greeting to his children in Schwarzenberg, Saxony.
11 November 1914
My dear little children!
How are you doing? Listen to your mother and grandmother and mind your manners.
Heartfelt greetings to all of you!
Your loving Papa
He scrawled the message in looping script on the back of a Feldpostkarte, or field postcard, one that had been designed for the Bahlsen cookie company by the German artist and illustrator Änne Koken. On the front side of the postcard, four smiling German soldiers share a box of Leibniz butter cookies as they stand on a grassy, sun-stippled outpost. The warm yellow pigment of the rectangular sweets seems to emanate from the opened care package, flushing the cheeks of the assembled soldiers with a rosy tint.
German citizens posted an average of nearly 10 million pieces of mail to the front during each day of World War I, and German service members sent over 6 million pieces in return; postcards comprised well over half of these items of correspondence. For active duty soldiers, postage was free of charge. Postcards thus formed a central and a portable component of wartime visual culture, a network of images in which patriotic, sentimental, and nationalistic postcards formed the dominant narrative — with key moments of resistance dispatched from artists and amateurs serving at the front.
The first postcards were permitted by the Austrian postal service in 1869 and in Germany one year later. (The Post Office Act of 1870 allowed for the first postcards to be sold in Great Britain; the United States followed suit in 1873.) Over the next four decades, Germany emerged as a leader in the design and printing of colorful picture postcards, which ranged from picturesque landscapes to tinted photographs of famous monuments and landmarks. Many of the earliest propaganda postcards, at the turn of the twentieth century, reproduced cartoons and caricatures from popular German humor magazines such as Simplicissimus, a politically progressive journal that moved toward an increasingly reactionary position during and after World War I. Indeed, the majority of postcards produced and exchanged between 1914 and 1918 adopted a sentimental style that matched the so-called “hurrah kitsch” of German official propaganda.
Beginning in 1914, the German artist and Karlsruhe Academy professor Walter Georgi produced 24 patriotic Feldpostkarten for the Bahlsen cookie company in Hannover. In a postcard titled Engineers Building a Bridge (1915), a pair of strong-armed sappers set to work on a wooden trestle while a packet of Leibniz butter cookies dangles conspicuously alongside their work boots.
These engineering troops prepared the German military for the more static form of combat that followed the “Race to the Sea” in the fall of 1914; they dug and fortified trenches and bunkers, built bridges, and developed and tested new weapons — from mines and hand grenades to flamethrowers and, eventually, poison gas.
Georgi’s postcard designs for the Bahlsen company deploy the elegant color lithography he had practiced as a frequent contributor to the Munich Art Nouveau journal Jugend (see Die Scholle). In another Bahlsen postcard titled “Hold Out in the Roaring Storm” (1914), Georgi depicted a group of soldiers wearing the distinctive spiked helmets of the Prussian Army. Their leader calls out to his comrades with an open mouth, a rifle slung over his shoulder, and a square package of Leibniz Keks looped through his pinkie finger. In a curious touch typical of German patriotic postcards of the First World War, both the long-barreled rifles and the soldiers’ helmets are festooned with puffy pink and carmine flowers.
These lavishly illustrated field postcards, designed by artists and produced for private industry, could be purchased throughout Germany and mailed, traded, or collected in albums to express solidarity with loved ones on active duty. The German government also issued non-pictorial Feldpostkarten to its soldiers as an alternate and officially sanctioned means of communication. For artists serving at the front, these 4” x 6” blank cards provided a cheap and ready testing ground at a time when sketchbooks and other materials were in short supply. The German painter Otto Schubert dispatched scores of elegant watercolor sketches from sites along the Western Front; Otto Dix, likewise, sent hundreds of illustrated field postcards to Helene Jakob, the Dresden telephone operator he referred to as his “like-minded companion,” between June 1915 and September 1918. These sketches (see Rüdiger, Ulrike, ed., Grüsse aus dem Krieg: die Feldpostkarten der Otto-Dix-Sammlung in der Kunstgalerie Gera, Kunstgalerie Gera, 1991) convey details both minute and panoramic, from the crowded trenches to the ruined fields and landmarks of France and Belgium. Often, their flip sides contain short greetings or cryptic lines of poetry written in both German and Esperanto.
Dix enlisted for service in 1914 and saw front-line action during the Battle of the Somme, in August 1916; one of the largest and costliest offensives of World War I, it spanned nearly five months and resulted in more than one million casualties. By September of 1918, the artist had been promoted to staff sergeant and was recovering from injuries at a field hospital near the Western Front. He sent one of his final postcard greetings to Helene Jakob on the reverse side of a self-portrait photograph, in which he stands with visibly bandaged legs and one hand resting on his hip. Dix begins the greeting in Esperanto, but quickly shifts to German to report on his condition: “I’ve been released from the hospital but remain here until the 28th on a course of duty. I’m sending you a photograph, though not an especially good one. Heartfelt greetings, your Dix.” Just two months later, the First World War ended in German defeat.
When war was declared in the summer of 1914, Claude Debussy was fifty-one. Widely regarded as the greatest living French composer, he lived in Paris in a fashionable, elegant neighborhood near the Bois de Boulogne. Politics had never held much interest for him, and as the movement toward war increased in both France and Germany, Debussy’s focus was on more personal matters. He worried about his growing debt, a result of consistently living beyond his means. And he was frightened by his lack of productivity: in the past few years he’d produced only a handful of compositions.
When France’s armies were mobilized, Debussy was genuinely astonished by the fervor it aroused. He himself was not a flag-waver, and took some pride in observing that he had never “had occasion to handle a gun.” But he was drawn into a more active role as family and friends became involved, and as the German invasion threatened to overrun Paris.
That September he witnessed the repulse of the German forces from temporary asylum in Angers, and grew increasingly horrified by daily reports in the French press of “Hun atrocities” against civilians in Belgium and France. The violation of Belgian neutrality by the Germans (“the rape of Belgium”) served as the basis for what became a well-organized propaganda campaign, one that soon drew on Debussy’s fame.
One of the first publications intended to broaden support for the Allies appeared in November 1914: King Albert’s Book. A Tribute to the Belgian King and People from Representative Men and Women Throughout the World. The popular English novelist Hall Caine was listed as “general organizer,” and there were more than 200 contributors from all branches of the arts, including Edward Elgar, Jack London, Edith Wharton, Walter Crane, Maurice Maeterlinck, and Anatole France. Debussy was one of the few composers approached to be part of the project, and contributed a short piano piece, Berceuse héroïque. He described it as “melancholy and discreet . . . with no pretensions other than to offer a homage to so much patient suffering.”
The Berceuse was followed by two brief piano pieces similar in intent: Page d’album and Elégie. Page d’album was composed in June 1915 for a concert series created to supply clothing for the wounded. Debussy’s wife, Emma, was involved with the project, and that helps to explain his participation. The Elégie, a simple and solemn piece, was published six months later in Pages inédites sur la femme et la guerre. Profits from sale of the book were intended for war orphans.
That same month Debussy completed his final work directly inspired by the war effort: Noël des enfants qui n’ont plus de maisons (Christmas for Homeless Children). Here Debussy presented children as an illustration of the horror and atrocities of war. He composed both words and music. Its recurrent refrain—“Revenge the children of France!”—gives an indication of its mood. (The following year Debussy started work on a cantata about Joan of Arc, Ode à la France, set in Rheims—whose cathedral, destroyed by German shelling, had become a symbol both of French fortitude and German barbarity—but completed only a few sketches.)
Life in Paris during the war years became more and more of a challenge, with increasing shortages of food and fuel, and a steady escalation in their cost. In time it became difficult for Debussy simply to earn a living. Concert life was reduced, as were commissions for new compositions. Debussy’s last surviving musical autograph—a short, improvisatory piano piece—was presented as a form of payment to his coal dealer, probably in February or March 1917.
It came as a surprise to Debussy that, in the midst of all these hardships, he began to compose more than he had in years, including works more substantial in size and broader in their appeal. Among them were En Blanc et Noir (for two pianos), the Etudes (for solo piano), and a set of sonatas, including ones for violin and cello. These were not propagandistic pieces, but the war affected them nonetheless. They were created, Debussy confided to a friend, “not so much for myself, [but] to offer proof, small as it may be, that 30 million Boches cannot destroy French thought . . . I think of the youth of France, senselessly mowed down by those merchants of ‘Kultur’ . . . What I am writing will be a secret homage to them.” For the sonatas, the last compositions completed before his death, he provided a new signature: “Claude Debussy, musicien français”—an indication not just of Debussy’s nationalism during a time of war, but of the heritage he drew upon in writing them.
Debussy died of cancer on 21 March 1918, at a time when Paris was under attack as part of a mammoth, final German offensive. But by that time his perception of the war had altered. The years of carnage had made a straightforward patriotic stance simplistic. “When will hate be exhausted?” Debussy wrote. “Or is it hate that’s the issue in all this? When will the practice cease of entrusting the destiny of nations to people who see humanity as a way of furthering their careers?”
Eric Frederick Jensen received a doctorate in musicology from the Eastman School of Music. He has written widely in his areas of expertise: German Romanticism, and nineteenth- and early twentieth-century French music. His studies of Debussy and Robert Schumann are in the Master Musicians Series.
Subscribe to the OUPblog via email or RSS.
Subscribe to only music articles on the OUPblog via email or RSS.
When Leonard Bernstein first arrived in New York, he was unknown, much like the artists he worked with at the time, who would also gain international recognition. Bernstein Meets Broadway: Collaborative Art in a Time of War looks at the early days of Bernstein’s career during World War II, and is centered around the debut in 1944 of the Broadway musical On the Town and the ballet Fancy Free. This excerpt from the book describes the opening night of Fancy Free.
When the curtain rose on the first production of Fancy Free, the audience at the old Metropolitan Opera House did not hear a pit orchestra, which would have followed a long-established norm in ballet. Rather, a recorded vocal blues wafted from the stage. Those attending must have been caught by surprise, as they were drawn into a contemporary sound world. The song was “Big Stuff,” with music and lyrics by Bernstein. It had been conceived with the African American jazz singer Billie Holiday in mind, even though it ended up being recorded for the production by Bernstein’s sister, Shirley. At that early point in Bernstein’s career, he lacked the cultural and fiscal capital to hire anyone as famous as Holiday. The melody and piano accompaniment for “Big Stuff” contained bent notes and lilting rhythms basic to urban blues, and the lyrics summoned up the blues as an animate force, following a standard rhetorical mode for the genre:
So you cry, “What’s it about, Baby?”
You ask why the blues had to go and pick you.
Talk of going “down to the shore” vaguely referred to the sailors of Fancy Free, as the lyrics became sexually explicit:
So you go down to the shore, kid stuff.
Don’t you know there’s honey in store for you, Big Stuff?
Let’s take a ride in my gravy train;
The door’s open wide,
Come in from out of the rain.
“Big Stuff” spoke to youth in the audience by alluding to contemporary popular culture. It boldly injected an African American commercial idiom into a predominantly white high-art performance sphere, and its raunchiness enhanced the sexual provocations of Fancy Free. “Big Stuff” also blurred distinctions between acoustic and recorded sound. It marked Bernstein as a crossover composer, with the talent to write a pop song and the temerity to unveil it within a high-art context.
Billie Holiday by William P. Gottlieb, c. February 1947. Public domain via Wikimedia Commons.
Billie Holiday was “one of [Bernstein’s] idols,” according to Humphrey Burton. He admired her brilliance as a performer, and he was also sympathetic to her progressive politics. In 1939, Holiday first recorded “Strange Fruit,” a song about a lynching that became one of her signatures. With biracial and left-leaning roots, “Strange Fruit” was written by the white teacher and social activist Abel Meeropol. Holiday performed “Strange Fruit” nightly at Café Society, a club that enforced a progressive desegregationist agenda both onstage and in the audience. Those performances marked “the beginning of the civil rights movement,” recalled the famed record producer Ahmet Ertegun (founder of Atlantic Records). Barney Josephson, who ran Café Society, famously declared, “I wanted a club where blacks and whites worked together behind the footlights and sat together out front.”
Bernstein had experience on both sides of Café Society’s footlights. In the early 1940s, he performed there occasionally with The Revuers, and he played excerpts from The Cradle Will Rock in at least one evening session with Marc Blitzstein. Bernstein also hung out at the club with friends, including Judy Tuvim, Betty Comden, and Adolph Green, listening to the jazz pianist Teddy Wilson and boogie-woogie pianists Pete Johnson and Albert Ammons. Thus Bernstein had ample opportunities to witness the intentional “blurring of cultural categories, genres, and ethnic groups” that historian David Stowe has called the “dominant theme” of Café Society.
Robbins also had an affinity for the work of Billie Holiday. In the summer of 1940, he choreographed Holiday’s recording of “Strange Fruit” and performed it with the dancer Anita Alvarez at Camp Tamiment. “Strange Fruit was one of the most dramatic and heart-breaking dances I have ever seen—a masterpiece,” remembered Dorothy Bird, a dancer there that summer.
As a result of these experiences, the music of Billie Holiday had crossed the paths of both Robbins and Bernstein before “Big Stuff” opened their first ballet. While Billie Holiday’s voice was not heard the evening of Fancy Free’s premiere, only seven months passed before she recorded “Big Stuff” with the Toots Camarata Orchestra on November 8, 1944. The fact that Holiday made this recording so soon after the premiere of Fancy Free bore witness to the rapid rise of Bernstein’s clout within the music industry. Over the next two years, Holiday made six more recordings of “Big Stuff,” and when Bernstein issued the first recording of Fancy Free with the Ballet Theatre Orchestra in 1946, Holiday’s rendition of “Big Stuff” opened the disc. Both she and Bernstein recorded for the Decca label. Holiday recorded her final three takes of “Big Stuff” for Decca on March 13, 1946, and that label released Fancy Free the same year.
Musically, “Big Stuff” links closely to the worlds of George Gershwin and Harold Arlen, whose songs drew on African American idioms. Like some of the most beloved songs by these composers—whether Arlen’s “Stormy Weather” of 1933 or Gershwin’s “Summertime” of 1935 from Porgy and Bess—“Big Stuff” used a standard thirty-two-bar song form. With a tempo indication of “slow & blue,” “Big Stuff” has a lilting one-bar riff in the bass, a classic formulation for a jazz-based popular song of the day. The riff retains its shape throughout, as is also typical, while its internal pitch structure shifts in relation to the harmonic motion. Both the accompaniment and melody are drenched with signifiers of the blues, especially with chromatically altered third, fourth, sixth, and seventh scale degrees, and the overall downward motion of the melody is also characteristic of the blues, with a weighted sense of being ultimately earthbound.
Carol J. Oja is William Powell Mason Professor of Music and American Studies at Harvard University. She is author of Bernstein Meets Broadway: Collaborative Art in a Time of War and Making Music Modern: New York in the 1920s (2000), winner of the Irving Lowens Book Award from the Society for American Music.
It was such a shameless Bruce Springsteen rip-off that Boss fans considered it as sacrilegious as devout Christians do Jesus Christ Superstar.
It had a whiplash-inducing twist ending that Roger Ebert called “so frustrating, so dumb, so unsatisfactory that it gives a bad reputation to the whole movie.”
It was a box-office flop that thirty years ago this month shocked Hollywood by becoming a surprise HBO hit.
It was a movie you rented repeatedly during the decade’s video boom because it fit VHS’s promise of cheap home entertainment perfectly: undemanding, toe-tapping, and eminently re-watchable, it was an ideal 99-cent diversion that helped you forget VCRs cost $500 and were as boxy as Samsonite suitcases.
What you’re less likely to hear, unfortunately: it was based on one of the best, most criminally underappreciated rock ‘n’ roll novels ever.
In a preface to Overlook Press’s 2008 reissue (the book’s first widely available trade paperback), no less than Sherman Alexie admits he never knew Eddie was originally a novel by P. F. Kluge until deep into his own career, long after “obsessing” over the movie as a high-schooler. It’s indicative of how the film overshadows its source material that Kluge’s Eddie doesn’t even make this supposedly comprehensive list of rock novels published since the 1950s.
The novel’s relative obscurity is a shame, for as Alexie notes, it has literary “ambitions and secrets and qualities” that far surpass the movie’s “mainstream” pleasures. Director Martin Davidson, who co-wrote the script with his wife, Arlene, made several changes to Kluge’s tale of a Jersey rock star who may or may not be haunting former bandmates twenty years after his supposed death. The most significant is seemingly the most cosmetic. Whereas Kluge conceived hero Eddie Wilson as a Dion-esque doo-wop rocker, Davidson turned him into an awkward splice of Springsteen and Jim Morrison. In so doing, the filmmaker altered the literary inspiration that in Kluge gives the musician a model for imagining rock ‘n’ roll as an art form instead of mere entertainment. The change is decisive to how differently each version of Eddie depicts the purpose of popular music.
Une saison en enfer, Arthur Rimbaud, Bruxelles, Alliance typographique, 1873. Public Domain via Wikimedia Commons.
In the movie, college dropout Frank “Wordman” Ridgeway, the story’s Nick Carraway, introduces Eddie to the 19th-century French symboliste Arthur Rimbaud. Literature spurs the hunky frontman to make “serious” music instead of cranking out bar-band favorites for Jersey beachgoers: “I want songs that echo,” Eddie insists. “The [music] we’re doing now is like bed sheets. Spread ’em, soil ’em, ship ’em out to laundry. Our songs — I like to fold ourselves up in them forever.” Soon enough, Eddie pens a concept album called A Season in Hell, after Rimbaud’s most famous work. His slimy record-company owner refuses to release it, however, because the music sounds “like a bunch of jerkoffs making weird sounds.” The rejection sends Eddie squealing away in his ’57 Chevy, which hurtles off the Raritan Bridge, either an accident or a suicide. The Cruisers are forgotten for two decades, until an Entertainment Tonight-type reporter begins hyping Hell as an ominous foreshadowing of the late sixties, “a new age, an age of confusion, an age of passion, of commitment!” Suddenly, someone claiming to be the dead rock star is stalking the surviving Cruisers, intent on finally releasing the missing opus so the public can recognize Eddie’s brilliance.
Serious scholarly papers have drawn parallels between Eddie and Rimbaud, but the script’s invocation of the poet never really rises above literary window dressing. Davidson mainly uses Rimbaud to allude to Morrison, who idolized the literary libertine and who, according to a farcical urban legend, faked his 1971 death to escape the rock biz (much as Rimbaud abandoned literature before he was twenty). The movie asks us to believe that the Beatlemania-era Eddie predicted the Dionysian extremes of the Doors’ “The End” or (God help us) “Horse Latitudes,” but the song that’s supposed to illustrate his visionary genius, “Fire,” hardly qualifies as “weird sounds”. It’s merely an arthritic gloss on Springsteen’s “Adam Raised a Cain” with none of the Boss’s blistering vitality.
Walt Whitman. Photo by George C. Cox, restoration by Adam Cuerden. Public domain via Wikimedia Commons.
For Kluge’s Eddie, by contrast, the spirit father isn’t Rimbaud but Walt Whitman, and Eddie’s magnum opus is Leaves of Grass. Having seen Leaves appropriated to do everything from woo interns to expose unlikely meth kingpins, I’ll be the first to say that the Good Gray Poet’s popularity as the Go-To Lit Reference sometimes leaves me craving a Longfellow revival. Yet his role in Kluge isn’t gratuitous. Whitman inspires Eddie to reimagine rock ‘n’ roll as the vox populi, a medium not for becoming famous but for creating the true song of democracy. To produce his rock version of Leaves, Eddie recruits black and white greats from Elvis to Sam Cooke to Buddy Holly (the novel is set in 1957-58, a half-decade earlier than the film). Their mission is to snip the American barbed wire of segregation through a series of secret jam sessions designed “to bring off the impossible, some fantastic union of black and white music.” What breakthroughs Eddie achieved before his supposed death is as compelling a page-turner as the mystery of who’s harassing the surviving Cruisers. (Spoiler alert: Eddie does not predict “Ebony and Ivory”.)
In ditching Whitman for Rimbaud, Davidson’s film became a story not about the Gordian knot of race in American music but about rock-star greatness and fame. That point is bashed home like a gong by the movie’s trick ending, which reveals Eddie is indeed alive but indifferent to the hullaballoo the media creates when his masterwork is finally released. Despite the adaptation’s defects, Kluge speaks appreciatively of it, and rightly so: as a cult favorite, the movie kept the novel’s name alive during the decades the book was out of print. Besides, when the other movie based on your writing is Dog Day Afternoon, you can afford to be generous.
Nevertheless, the lack of attention Book Eddie receives feels like a missed opportunity for rock novels in general. The genre is a diverse, unruly one. Some of its entries are romans à clef that do little more than pencil fictional names into legends rock fans already know by heart (Paul Quarrington’s Brian Wilson-retelling Whale Music). Many others are coming-of-age novels in which that form’s traditional theme of lost innocence plays out like a Behind the Music episode, all downward-spiral cocaine and coitus. Still others are less about music-making than about the grotesquery of fame and fan worship (Don DeLillo’s Great Jones Street). What rock novels aren’t nearly as often about is race — or, at least, the alchemies of ethnic interchange explored in such great nonfiction music histories as Peter Guralnick’s Sweet Soul Music: Rhythm and Blues and the Southern Dream of Freedom (1986). A handful of exceptions do come to mind, Alexie’s own Reservation Blues (1995) most notably. Yet for the most part storylines about ahead-of-their-time geniuses predominate, and frankly, the plot of making personal art instead of appeasing a hits-happy public is as tired as the playlist at my local oldies station.
The idea of rock ‘n’ roll as both the promise and impasse of a racially egalitarian barbaric yawp, on the other hand… That’s a song in fiction we still don’t hear nearly enough.
Kirk Curnutt is professor and chair of English at Troy University’s Montgomery, Alabama, campus, where Scott Fitzgerald met Zelda Sayre in 1918. His publications include A Historical Guide to F. Scott Fitzgerald (2004), the novels Breathing Out the Ghost (2008) and Dixie Noir (2009), and Brian Wilson (2012). He is currently at work on a reader’s guide to Ernest Hemingway’s To Have and Have Not. Read his previous OUPblog posts.
Twenty-seven years ago, on 31 July 1987, James Bond returned to the screen in The Living Daylights, with Timothy Dalton as the new Bond. The film marked a notable departure in musical style, as composer John Barry decided that the film needed a new sound to match this reinvented Bond and his love interest — a musician with dangerous ties. To celebrate the anniversary, here is a brief extract from The Music of James Bond by Jon Burlingame.
In the script, Bond is caught up in a complex plot involving high-ranking Soviet intelligence officer Koskov (Jeroen Krabbe) who is supposedly defecting to the West. Koskov’s girlfriend, Czech cellist Kara Milovy (Maryam d’Abo), is duped into helping him escape his KGB guards. A Greek terrorist named Necros (Andreas Wisniewski) then supervises his “abduction” from England and transport to the Tangiers estate of an American arms dealer (Joe Don Baker). Eventually Bond and Kara find themselves at a Soviet airbase in Afghanistan, where they meet a Mujahidin leader (Art Malik) who helps 007 thwart the plot.
Because the early portions of the story take place in Czechoslovakia and Austria, The Living Daylights crew shot for two weeks in Vienna, including all of the scenes where Kara is performing on her cello. Director John Glen recalled conferring with Barry about the classical music that would be heard in the film. “We listened to various pieces before we chose what we were going to use,” Glen said. “Obviously we needed something where the cello was featured strongly.” (They ended up with Mozart, Borodin, Strauss, Dvořák and Tchaikovsky.) They recorded the classical selections with Gert Meditz conducting the Austrian Youth Orchestra and then filmed the ensemble, using the prerecorded music as playback on the set.
Maryam d’Abo was filmed “playing” the cello during several of these scenes. “I started taking private lessons a month prior to the film,” she recalled. “I just learned the movements. They basically soaped the bow so there wasn’t any sound [from the instrument]. It was hard work; I could have done with a couple more weeks of lessons. They demanded a lot of strength. No wonder cellists start when they are eight years old.” The solo parts heard in the film were played by Austrian cellist Stefan Kropfitsch.
The Living Daylights Film Poster (c) MGM
The actress, as Kara, “performs” with the orchestra in several scenes, notably at the end of the film when Barry himself is seen conducting Tchaikovsky’s 1877 Variations on a Rococo Theme and Kara is the soloist. It was filmed on October 15, 1986, at Vienna’s Schönbrunn Palace. Recalled Glen: “It was very unusual for John—unlike a lot of other people who liked to appear in movies, John had never asked before—but on that film, he asked if he could appear. At the time, it struck me as a bit strange. It was almost a premonition that this was going to be his last Bond. I was happy to accommodate him, and he was eminently qualified to do it.”
In fact, Barry had done this once before, appearing on-screen as the conductor of a Madrid orchestra in Bryan Forbes’s Deadfall (1968). On that occasion, he was conducting his own music (a single-movement guitar concerto that was ingeniously written to double as dramatic music for a jewel robbery occurring simultaneously with the concert). This time, he was supposed to be conducting the “Lenin’s People’s Conservatoire Orchestra.”
D’Abo socialized with Barry in London, when the unit was shooting at Pinewood. (She later realized that she had already appeared in two Barry films: Until September and Out of Africa.) “John was there, working on the music,” she said. “He was just a joy to be around. I remember seeing him and having dinner with him and [his wife] Laurie, and John being so excited about writing the music. He was so adorable, saying ‘Your love scenes inspire me to write this romantic music.’ John was such a charmer with women.”
Jon Burlingame is the author of The Music of James Bond, now out in paperback with a new chapter on Skyfall. He is one of the nation’s leading writers on the subject of music for film and television. He writes regularly for Daily Variety and teaches film-music history at the University of Southern California. His other work has included three previous books on film and TV music; articles for other publications including The New York Times, Los Angeles Times, The Washington Post, and Premiere and Emmy magazines; and producing radio specials for Los Angeles classical station KUSC.
There are many cases of musicians with homonymic names, including jazz performers Bill Evans (pianist, 1929-1980) and Bill Evans (saxophonist, 1958-), and composers John Adams and John Luther Adams. In the following paragraphs, I discuss musical examples by artists comprising three such pairs.
The arrangement here works for me: no real solos and clearly defined instrumental roles, including the absence of the piano during the bridge (1:56-2:29). Wilson’s performance, particularly the memorable way she sings the cascading titular line at 1:01 and 2:31, is stunning.
Nancy Wilson sings a powerful lead vocal on this track from Heart’s Brigade album (produced by Richie Zito, who also produced Cheap Trick’s “The Flame” and Bad English’s “When I See You Smile”). The chorus features one of the great uses of the I-V-ii-IV pattern, evoking the chorus of Peter Frampton’s “Baby, I Love Your Way” (with which “Stranded” shares the key of G major following the half step “pump-up” modulation at 2:55).
From her first album Horses (produced by John Cale of the original Velvet Underground), this track features Smith’s distinctive mix of song and spoken word. I enjoy Smith’s vocalizations as well as the arrangement, which features a somewhat gradual buildup of instrumental forces. The accompaniment begins with piano; the bass and drums enter at 0:30 and rhythm guitars at 0:48. A double-time feel begins at 1:01, followed by an uneasy, repeating eighth-note gesture in the drums beginning at 1:33. Additional vocal tracks enter at 2:24 and a lead guitar comes in at 3:08.
Featuring lead vocals by Patty Smyth, this song preceded Scandal’s bigger 1984 hit “The Warrior.” (Both became karaoke staples long ago.) The background vocals on this track are nicely placed in 1:18-1:31 and 2:48-2:56. The decision to elide Smyth’s voice with the synth lead beginning at 1:48 provides a smooth transition into the solo section, which ends with what are possibly my favorite two seconds of the song, from 2:19-2:21.
Also featuring Ray Bryant (piano) and Tommy Bryant (bass), this track showcases Jones’ uniquely colorful cymbal playing. I especially enjoy Jones’ contribution during the last chorus, beginning at 2:32.
Sonny Clark Trio – “I Didn’t Know What Time It Was” (1957)
With “Philly” Joe Jones (drums) and Paul Chambers (bass). Jones is in top form here alongside pianist Sonny Clark and frequent rhythm-section mate Chambers. The group’s interplay during Chambers’ solo (2:31-3:21) is particularly engaging, as Jones and Clark weave a subtle accompaniment beneath the bass.
Oxford Music Online is the gateway offering users the ability to access and cross-search multiple music reference resources in one location. With Grove Music Online as its cornerstone, Oxford Music Online also contains The Oxford Companion to Music, The Oxford Dictionary of Music, and The Encyclopedia of Popular Music.
Fifty years ago, a wave of British performers began showing up on The Ed Sullivan Show following the dramatic and game-changing appearances by The Beatles. That spring, a number of “beat” groups made the transatlantic leap and scored hits on American charts, prompting many pop pundits to declare (not for the last time) that the Beatles’ fifteen minutes of fame had elapsed. The first pretenders to the throne were London’s The Dave Clark Five with “Glad All Over” (sung and written by organist Mike Smith with Dave Clark), which anticipated the many other British pop records that would find a place on American charts in the mid sixties. Soon, Liverpudlian performers The Searchers (“Needles and Pins”), Gerry and the Pacemakers (“Don’t Let the Sun Catch You Crying”), and Billy J. Kramer (“Bad to Me”) followed fellow Merseysiders The Beatles and debuted on Sullivan’s Sunday-night show, even as other American networks scrambled to get their piece of the British pop pie.
Publicity photo of The Dave Clark Five from their cameo performing appearance in the US film Get Yourself a College Girl. 27 November 1964. (c) MGM. Public domain via Wikimedia Commons
Over the course of that year, the success of acts like these changed both American impressions of British music and, importantly, British musicians’ attitudes about themselves. After an era of economic hardship and the occasional geopolitical embarrassment (e.g., the Suez Crisis of 1956), Britain came out of its postwar cultural funk to the soundtrack of pop music. At least two interrelated trends in this music emerged. First, British artists followed the long-established practice of white performers covering music created by African Americans and, second, they began to explore their own versions of what those traditions might sound like. Often, their approach was to take material previously performed acoustically and reinterpret it with electric guitars and keyboards accompanied by drums. They also applied production forces that had not been available to the original performers. Ultimately, British producers, songwriters, and musicians began to find the confidence—sometimes tinged with arrogance—that they could compete with Americans.
“House of the Rising Sun.” This traditional ballad (collector Alan Lomax had recorded an Appalachian version in the thirties) about a life gone wrong in New Orleans had been included on Bob Dylan’s eponymous first album. The Animals from Newcastle had already extracted and interpreted a song that appears on that album for British charts (“Baby Let Me Take You Home”), applying blues-rock aesthetics to a folk ballad. In a way, The Animals’ version of “House of the Rising Sun” was an early example of folk rock.
Guitarist Hilton Valentine and keyboardist Alan Price found ways to update and electrify the instrumental accompaniment of “House of the Rising Sun,” and Eric Burdon gave it a convincing and ultimately defining interpretation. The session unfolded at the unglamorous hour of eight a.m., after the band had traveled overnight from a gig in Liverpool, arriving at Kingsway Studios (opposite the Holborn Underground station) tired but excited to be recording again. The band ran through the arrangement they had been playing in clubs and did two takes, but the second proved unnecessary. Mickie Most, the artist-and-repertoire manager on the session, knew he had a hit. Most later told Spencer Leigh, “Everything was in the right place, the planets were in the right place, the stars were in the right place and the wind was blowing in the right direction. It only took 15 minutes to make.”
Most’s role in the success of mid-sixties British rock and pop cannot be overstated. He would produce recordings by Herman’s Hermits, the Nashville Teens, Donovan, the Yardbirds, and many others. In the case of “House of the Rising Sun,” Most made the unconventional call to press all 4 minutes and 28 seconds of the recording. The combination of microgroove technology and vinyl allowed for longer playing times and a cleaner sound from a 45 rpm disc, even if most singles still followed the industry norm of 2:30 established by 78 rpm shellac discs. Most concluded that, if the recording and the performance were good, the length would not matter. He was proved right, even if MGM (the American distributor) would break the recording up into two parts for radio play. Released on June 19th, the record hit number one on the British charts in July 1964 and soon climbed the American charts as well; the Animals would debut on The Ed Sullivan Show in October with a hit.
“Doo Wah Diddy Diddy.” Named after South African keyboardist Manfred Lubowitz’s stage persona, the London band Manfred Mann got their break when asked to write theme music for the popular ITV television show, Ready, Steady, Go! EMI artist-and-repertoire manager John Burgess had signed them and “5, 4, 3, 2, 1” — their second release — had become a hit, albeit one that derived its success through its association with a television show. Their self-penned follow-up—“Hubble Bubble (Toil and Trouble)”—rose to a respectable #11 on UK charts, but they hoped for a piece of the transatlantic prize that The Beatles, The Dave Clark Five, and others were enjoying. As with many British performers at the time, the band and their producer decided to cover a tune that had already been released by an American singing group.
Written by Jeff Barry and Ellie Greenwich in New York, The Exciters’ version of “Do Wah Diddy” featured a very basic instrumental backing and had been a regional success, but it had been unable to capture a national market. Indeed, recordings by African Americans often found release only on small independent labels that lacked national distribution and promotion structures. Burgess would have reasoned that his band could give it a different spin and that, with the current hunger for British acts in the US and parent company EMI’s growing clout, a good promotion and distribution arrangement would ensure success.
Paul Jones gives a constrained performance, his singing style featuring a much more constricted and nasal quality than the original’s open-throated joyfulness. Burgess and Norman Smith (who also served as the balance engineer on Beatles recordings in this era) capture a slightly more elaborate instrumental performance, placing timpani and Mann’s electronic keyboard prominently in the mix. More importantly, Smith’s soundscape for the recording adds a depth that was lacking in the Exciters’ original production. “Doo Wah Diddy Diddy” would not be the band’s last American hit, but it would be their biggest.
“It’s All Over Now.” The Rolling Stones had had hits in the UK with covers of Chuck Berry’s “Come On,” Lennon and McCartney’s “I Wanna Be Your Man,” and Buddy Holly’s “Not Fade Away”; but success had largely evaded them in the US in the first half of 1964. The summer did not bode well for the Rolling Stones, with The Daily Mirror at the end of May describing them as the “ugliest group in Britain.” But manager Andrew Oldham, if nothing else, had ambitious plans for the band.
In anticipation of their short inaugural American tour, he released one of Mick Jagger and Keith Richards’ earliest attempts at songwriting (the poppish “Tell Me,” which the songwriters had intended only as a demo) in the US to very modest regional success, but they had yet to get to the number-one spot in either the UK or the US. Once the two-week American tour began in June 1964, they played to half-empty houses and an indifferent press. If the Stones were on the road to success, it was beginning to look unpleasant.
When the tour stopped in Chicago, Oldham arranged for them to record at the studios for Chess Records. American legends who loomed large in the band’s imagination had recorded here: Chuck Berry, Howlin’ Wolf, Muddy Waters, and others had all spent time at 2120 South Michigan Avenue. With Ron Malo selecting and positioning microphones before setting levels, The Stones felt they were tapping into history, while manager Andrew Oldham understood a good marketing opportunity when he saw one.
One of the tunes they had heard in New York seemed like just the thing to record during this session. Bobby and Shirley Womack had written “It’s All Over Now” for Bobby’s band, the Valentinos, but again the disc had failed to achieve much success. Their version had something of Chuck Berry’s “Memphis, Tennessee” in its groove and feel, and a prominent bass line drove the recording along, providing a sense of humor and irony.
The Rolling Stones sped their version up and added an arpeggiated guitar part, while Mick Jagger delivered the lyrics as an angry victim who gains vindication, a role he would develop extensively in the coming years. When released in Britain on June 26th, it would prove to be the Stones’ first chart-topper, reaching number one in July the week after “House of the Rising Sun” had occupied that spot.
When asked the previous year about why British teens liked The Rolling Stones’ blues and rhythm-and-blues covers, Jagger acknowledged that their audiences liked white faces better. Indeed, British artists (including The Beatles) relied heavily on music originally created in the US by either African Americans or by rural whites.
As 1964 unfolded, songwriters, musicians, music directors, recording engineers, and artist-and-repertoire managers would gain self-confidence and begin producing something more identifiably British.
In the opening months of 1964, The Beatles turned the American popular music world on its head, racking up hits and opening the door for other British musicians. Lennon and McCartney demonstrated that—in the footsteps of Americans like Buddy Holly and Chuck Berry—British performers could be successful songwriters too. In the summer, “A Hard Day’s Night” would prove that their success had not been a winter fluke or a momentary bit of post-assassination frenzy.
It wasn’t that the Brits had been absent from the very profitable American market: Joe Meek had had success with the Tornados’ “Telstar” at the end of 1962. But before The Beatles, few in America cared much at all about what the British recording industry released. Indeed, British irrelevancy lay behind Capitol’s decision not to release recordings by The Beatles until news coverage got ahead of them.
In the wake of the Beatles, some of the evolving diversity of British songwriting emerged and the first stage came from composers associated with the heart of London’s music publishing world: Denmark Street. Publishers and musical instrument stores still call that short stretch of pavement home, but in the early to mid-sixties, everyone from the Beatles to the Kinks had been there. You could record at Regent Sound Studios (as did The Rolling Stones and The Who), you could grab a coffee with session musicians at Julie’s Café, or buy an ad at either Melody Maker or The New Musical Express. Indeed, The Beatles had gotten a huge break through publisher Dick James whose offices were at the corner of Denmark Street and Charing Cross Road.
A promotional photo of British rock group The Kinks, taken in Stockholm, Sweden, ca. 2 September 1965. Public domain via Wikimedia Commons.
In 1963, Lennon and McCartney’s major competitor was Mitch Murray who had had a string of hits with Gerry and the Pacemakers (“How Do You Do It?”) and Freddie and the Dreamers (“I’m Telling You Now”). Murray’s forte was the simple, catchy lyric and tune, purchased and consumed in an instant, paid for by happy teens who eagerly waited for the next release. His songs had proved so successful in 1963 that John Lennon jokingly (perhaps) suggested that another challenge to the Lennon-McCartney catalogue could result in bruises for their competitor. Notably in this period, both the Liverpudlians and this Londoner published through Dick James Music.
However, a particularly interesting composition emerged from the pen of another Denmark Street songwriter, this one associated with Southern Music. The twenty-nine-year-old Geoff Stephens was never much of a musician, but he had an ear for lyrics and tunes, and “The Crying Game” had begun as a title and a premise. The title “seemed the perfect seed from which to grow a very good pop song,” he recalled. “We all know what it’s like to cry and have deep feelings.” The song’s convoluted melody and irregular prosody made it an unlikely hit for 1964, but succeed it did.
A winning interpretation would come through Dave Berry whose breathy and exposed voice served as the perfect instrument for the melody, even if he initially thought the music inappropriate for him. (He saw himself as a rhythm-and-blues artist.) Decca producer Mike Smith (who had auditioned the Beatles back in 1962) brought in Reg Guest to serve as music director who, in turn, hired guitarist Big Jim Sullivan to complement Berry’s emotive interpretation. Employing a foot pedal meant for a steel guitar that controlled both tone and volume, Sullivan put the musical equivalent of crying into the recording. The result deeply impressed Beatle George Harrison who sought to find out how to recreate the sound (something he would accomplish the next year on songs like “I Need You” and “Yes It Is”).
Not all non-performer songwriters in this era had deep ties to Denmark Street. Ken Howard and Alan Blaikley had met at University College School as teens, sported proper academic degrees, had worked at the BBC, and were active participants in the intellectual world of late fifties and early sixties London. In 1964 they collaborated on the song “Have I the Right” and, in a tavern, found the Sheritons, who they believed were perfect to deliver their plea for love. The musical and lyrical materials are simple but catchy, demanding a distinctive sound and interpretation.
No one could better create a distinctive sound in London at the time than the enigmatic Joe Meek in his home studio on Holloway Road in North London. In order to create a sound around the band and the song, Meek turned to the four-on-the-floor musical grooves that had been popular that year (notably heard on recordings by another North London group, the Dave Clark Five). Meek clipped microphones to the stairs outside his studio and had the band stomp in time with the music, perhaps in imitation of the Dave Clark Five’s “Bits and Pieces.” Next, he repeatedly overdubbed a guitar part and played with the tape speed to give it a wavering bell-like quality.
Howard and Blaikley would lease the recording to Louis Benjamin at Pye Records, who thought that the Sheritons needed a new name. Seeing the band’s female hairdresser-drummer Honey Lantree as its visual distinction and marketing hook, he renamed the band The Honeycombs. The song topped British charts late in the summer and successfully climbed American charts that fall. The songwriters would become the band’s managers and continue to write music for them, although they never quite duplicated their success.
But where were British songwriters who also performed their own material? Jagger and Richards of The Rolling Stones had written “As Tears Go By,” but had decided to give it to Marianne Faithfull. (They didn’t think it appropriate for themselves to release, at least as a single.) The band had also recorded a demo of another Jagger-Richards tune, “Tell Me,” at Regent Sound Studios in Denmark Street, only to discover that their manager Andrew Oldham had released it in America. Despite its modest success, Richards has since cited this recording as evidence of how little control they had over their career at this stage. They wouldn’t record their first real self-penned success until early the next year, with “The Last Time.”
More significantly during the summer of ‘64, one of the most important British artists of the era woke up every garage band on both continents, simultaneously frightening parents and the custodians of culture.
In July 1964 at IBC Studios in London, Shel Talmy prepared to give an unlikely group of musicians their last chance to have a hit. Talmy was a Los Angeles transplant, an outsider to the London recording scene who preferred to work as an independent artist-and-repertoire manager. Through hard work, good luck, and a bit of bluff, he had managed almost immediate success, much to the jealousy of the locals.
The group on whom he was gambling was led by the Davies brothers, who had beaten the odds to get a recording contract but had struck out with their first two releases. The Kinks’ version of Little Richard’s “Long Tall Sally” had the misfortune of comparison with The Beatles, who were now using the tune to close their shows and who would soon release their own version of it. Their next attempt was a composition by Ray Davies. “You Still Want Me” carried all the hallmarks of early sixties British pop and, consequently, had very little that would distinguish The Kinks from everyone else.
“You Really Got Me” would be the song that lifted them to success. They arrived to record it at IBC Studios in July 1964 after already taping a slower and more bluesy version. Davies and Talmy (although they might disagree about the process later) agreed that a faster version could be more successful and booked time at the studio late at night. To ensure success, Talmy had engaged session drummer Bobby Graham, who was already known around professional circles as at least one of the drummers on the Dave Clark Five records. He also brought in the veteran bandleader Art Greenslade to play piano.
Graham and Greenslade had been at another recording session earlier that night, where a contractor had asked them to do a second session. After a pint or two and a bite to eat, they showed up at IBC for a date with the Kinks. Their first reaction, according to Greenslade, was one of slightly restrained horror at the sight of the band; but, after a short rehearsal, they settled into a good working relationship. The band’s drummer, Mick Avory, was relegated to playing the tambourine.
Knowing full well that you get six sides to get a hit, Ray Davies remembered the tension of that night years later. “When that record starts it’s like… doing the four-minute mile; there’s a lot of emotion.” He remembers shouting at Dave, “willing him to do it, saying it was the last chance we had.” Brother Dave apparently responded with an expletive and launched into what must be one of the original punk guitar solos, played through a ripped speaker. Talmy, for his part, tried to capture the sound and to shape it in a distinctive way, employing the young Glyn Johns and Bob Auger as his engineers.
The recording of “You Really Got Me” would establish The Kinks as one of Britain’s most important bands and Ray Davies as a songwriter to be watched.
2014 is the year of role-playing: November marks the 10th anniversary of World of Warcraft, the first truly global online game, and in January gamers celebrated the 40th anniversary of Dungeons & Dragons, the fantasy game of elves and dwarves, heroes and villains, that changed the world.
When Dungeons & Dragons (D&D) became popular in the late 1970s and early 1980s, many commentators lambasted the game as a gateway to amorality, witchcraft, Satanism, suicide, and murder. Of course, such accusations were no more substantive than the claim that vicious tricksters put needles in Halloween candy, and eventually everyone saw through them. In fact, the only thing that D&D’s detractors got right is that D&D competed against the conservative religions that attacked them.
Those original D&D books were and remain sacred texts. Finding an out-of-print copy of Deities and Demigods was a religious experience in the 1980s. It was impossibly rare, appearing once a year behind the counter at the comic book shop, a plastic bag protecting it from the mundane dust, dirt, and fingerprints that could sully its sacred value (and its high price). The magic of Unearthed Arcana could inspire the spirit, renewing a love of the game through new rules and new treasures. Like any good sacred text, the handbooks of D&D enthralled the players and gave them dreams worth dreaming. In doing so, they gave them opportunities to be more than anyone else had ever hoped. Dungeons & Dragons made heroes of us all.
As the devoted fans of D&D grew up and, more often than not, gave up the game and its requisite all-night forays against evil, fueled by junk food, soda, or beer, they nevertheless carried it with them in their hearts and their minds. Dungeons & Dragons never changed people into Satanists and murderers, but it did change them. All of those years carrying a Player’s Handbook or a Dungeon Master’s Guide couldn’t help but reshape the bodies that lugged them around or the minds that fixated upon their contents. Those books encouraged adventure, and a desire to go one step further, even in the face of cataclysmic danger. Let the mysterious be understood, for there is always another mystery to uncover.
Dungeons & Dragons was a revelation. It didn’t come—as far as we know—from any gods, but it revealed the future. Today more than 90% of high school students play videogames and the demographics just keep getting better for the manufacturers. Every time a new Marvel comics-themed movie hits the theaters, it goes radioactive, raking in many times its enormous production cost. The religions of Star Trek and Star Wars have played a part in this cultural turn, and they get most of the mainstream credit. But it was the subtler impact of D&D that really re-shaped the world. Dungeons & Dragons provided the intellectual and imaginative space that has produced many of today’s great writers, technology entrepreneurs, and even academics. The game is a game of imagination, and its players—whether they gave up when they graduated high school or college or whether they play now with their friends and their children—never forgot what it means to imagine a world. They’ve been re-imagining this one into their image of it and we should all be thankful for the opportunity to play in their world.
Subscribe to the OUPblog via email or RSS.
Subscribe to only religion articles on the OUPblog via email or RSS.
Image credit: Dungeons and Dragons (meets Warhammer…) by Nomadic Lass. CC BY-SA 2.0 via Flickr.
“[The] desire to discover materials for my work in modern life never leaves me … and, though I have occasionally been betrayed by my love into themes somewhat trifling and commonplace, the conviction that possessed me that I was speaking – or rather painting – the truth, the whole truth, and nothing but the truth, rendered the production of real-life pictures an unmixed delight. In obedience to this impulse I began work on a small work suggested by some lady-archers, whose feats had amused me at the seaside … The subject was trifling, and totally devoid of character interest; but the girls are true to nature, and the dresses will be a record of the female habiliments of the time.”
After Gwendolen Harleth’s encounter with Daniel Deronda in Leubronn in Chapters 1 and 2, there’s a flashback to Gwendolen’s life in the year leading up to that meeting, with Chapters 9 to 11 focusing on the Archery Meeting, where she first meets Henleigh Grandcourt, and its consequences. In the England of the past, archery was the basis of military and political power, most famously enabling the English to defeat the French at Agincourt. By the later nineteenth century it had become a leisure pursuit for upper-class women. This may be seen as symptomatic of the decline or even decadence of the upper class, since it is now associated with an activity which Frith suggests is “trifling and commonplace.” A related symptom of that decline is the devotion of aristocratic and upper-class men, such as Grandcourt and Sir Hugo Mallinger, to a life centred on hunting and shooting.
The Frith painting shows a young female archer wearing a fashionable and no doubt extremely expensive dress and matching hat. This fits well with the novel, for Gwendolen takes great care in her choice of a dress that will enhance her striking figure and make her stand out at the Archery Meeting, since “every one present must gaze at her” (p. 89), especially Grandcourt. The reader may similarly be inclined to gaze at the figure in the painting. One might say that together with her bow and arrow Gwendolen dresses to kill, an appropriate expression, for arrows can kill, though in her case she wishes only to kill Grandcourt metaphorically: “My arrow will pierce him before he has time for thought” (p. 78). Readers of the novel will discover that light-hearted thoughts about killing Grandcourt will take a more serious turn later.
With the coming of Grandcourt into the Wancester neighbourhood through renting Diplow Hall, the minds of young women and especially their mothers turn to thoughts of marriage – there is an obvious literary allusion to the plot of Pride and Prejudice, in which Mr Bingley’s renting of Netherfield Park creates a similar effect. The Archery Meeting is the counterpart to the ball in Pride and Prejudice since it is an opportunity for women to display themselves to the male gaze in order to attract eligible husbands, and no man is more eligible than Grandcourt. Whereas Mr Darcy eventually turns out to be the perfect gentleman, in Eliot’s darker vision Grandcourt has degenerated into a sadist, “a remnant of a human being” (p. 340), as Deronda calls him. Though Gwendolen is contemptuous of the Archery Meeting as marriage-market, she cannot help being drawn into it as she believes at this point that ultimately a woman of her class, background, and upbringing has no viable alternative to marriage.
While Grandcourt’s moving into Diplow Hall together with his likely attendance of the Archery Meeting become the central talking points of the neighbourhood among Gwendolen and her circle, the narrator casually mentions another matter that is being ignored – “the results of the American war” (p. 74). Victory for the North in the Civil War established the United States as a single nation, one which would ultimately become a great power. There is a similar passing reference later to the Prussian victory over the Austrians at “the world-changing battle of Sadowa” (p. 523), a major step towards the emergence of a unified German nation. While the English upper class are living trivial lives the world is changing around them and Britain’s time as the dominant world power may be ending.
Though the eponymous Deronda does not feature in this part of the novel, he is in implicit contrast to Gwendolen and the upper-class characters as he is preoccupied with these larger issues and uninvolved in trivial activities like archery or hunting and finally commits himself to the ideal of creating a political identity for the Jews. When he tells Gwendolen near the end of the novel of his plans, she is at first uncomprehending but is forced to confront the existence and significance of great events that she previously had ignored through being preoccupied with such “trifling” matters as making an impression at the Archery Meeting: “… she felt herself reduced to a mere speck. There comes a terrible moment to many souls when the great movements of the world, the larger destinies of mankind … enter like an earthquake into their own lives — when the slow urgency of growing generations turns into the tread of an invading army or the dire clash of civil war” (p. 677). She will no longer be oblivious of something like “the American war.” By the end of the novel the reader looking at the painting on the front cover may realize that though this woman who resembles Gwendolen remains trapped in triviality and superficiality, the character created in the mind of the reader by the words of the novel has moved on from that image and undergone a fundamental alteration in consciousness.
K. M. Newton is Professor Emeritus at the University of Dundee. He is the editor, with Graham Handley, of the new Oxford World’s Classics edition of Daniel Deronda by George Eliot.
For over 100 years Oxford World’s Classics has made available the broadest spectrum of literature from around the globe. Each affordable volume reflects Oxford’s commitment to scholarship, providing the most accurate text plus a wealth of other valuable features, including expert introductions by leading authorities, voluminous notes to clarify the text, up-to-date bibliographies for further study, and much more. You can follow Oxford World’s Classics on Twitter, Facebook, or here on the OUPblog. Subscribe to only Oxford World’s Classics articles on the OUPblog via email or RSS.
Subscribe to only literature articles on the OUPblog via email or RSS.
Image credit: The Fair Toxophilites by W. P. Frith. Public domain via Wikimedia Commons
Rising to prominence at lightning speed during World War II, Leonard Bernstein quickly became one of the most famous musicians of all time, gaining notice as a conductor and composer of both classical works and musical theater. One day he was a recent Harvard graduate, struggling to earn a living in the music world. The next, he was on the front page of the New York Times for his stunning debut with the New York Philharmonic in November 1943. At twenty-five, Bernstein was the newly appointed assistant conductor of the orchestra, and he stepped in at the last minute to replace the eminent maestro Bruno Walter in a concert that was broadcast over the radio.
At the same time—and with the same blistering pace—Bernstein had two high-profile premieres in the theater: the ballet Fancy Free in April 1944, and the Broadway musical On the Town in December that same year. For both, he collaborated with the young choreographer Jerome Robbins, and the two men later became mega-famous for West Side Story in 1957. Added to that, the writers of the book and lyrics for On the Town were Bernstein’s close friends Betty Comden and Adolph Green, whose major celebrity came with the screenplay for Singin’ in the Rain in 1952.
So 1944 was a key year for Bernstein in the theater. Yet he already had considerable experience with theatrical productions, albeit with neighborhood kids in the Jewish community of Sharon, Massachusetts, south of Boston, where his parents had a summer home, and as a counselor at a Jewish summer camp in the Berkshires.
Some of these productions were charmingly outrageous, including a staging of Carmen in Sharon during the summer of 1934, when Bernstein was fifteen. Together with his male friend Dana Schnittken, Bernstein organized local teens in presenting an adaptation of Carmen in Yiddish, with the performers in drag. “Together we wrote a highly localized joke version of a highly abbreviated Carmen in drag, using just the hit tunes,” Bernstein later recalled in an interview with the BBC. “Dana played Micaela in a wig supplied by my father’s Hair Company—I’ll never forget his blonde tresses—and I sang Carmen in a red wig and a black mantilla and in a series of chiffon dresses borrowed from various neighbors on Lake Avenue, through which my underwear was showing. Don José was played by the love of my life, Beatrice Gordon. The bullfighter was played by a lady called Rose Schwartz.” Bernstein’s father, who was an immigrant to the United States, owned the Samuel J. Bernstein Hair Company in Boston, which not only prospered mightily during the Great Depression but also provided wigs for his son’s theatrical exploits.
The young Leonard’s summer performances also involved rollicking productions of operettas by Gilbert and Sullivan. In the summer of 1935, he directed The Mikado in Sharon. Bernstein sang the role of Nanki-Poo, and his eleven-year-old sister Shirley was Yum-Yum. Decades later, friends of Bernstein who were involved in that production—by then quite elderly—recalled going with the cast to a nearby Howard Johnson’s Restaurant to celebrate. After eating a hearty meal, they stole the silverware! Being upright young citizens, they quickly returned it.
In the summer of 1936, Bernstein and his buddies produced H.M.S. Pinafore. “I think the bane of my family’s existence was Gilbert and Sullivan, whose scores my sister Shirley and I would howl through from cover to cover,” Bernstein later reminisced to The Book of Knowledge.
As a culmination of this youthful activity, Bernstein produced The Pirates of Penzance during the summer of 1937, while he worked as the music counselor at Camp Onota in the Berkshires. His future collaborator Adolph Green was a visitor at the camp, and Green took the role of the Pirate King.
A photograph in the voluminous Bernstein Collection at the Library of Congress vividly evokes Bernstein’s experience at Camp Onota. There, the youthful Lenny stands next to a bandstand, conducting a rhythm band of even younger campers. This is clearly not a stage production. But there he is – an aspiring conductor, honing his craft in the balmy days of summer.
As it turned out, Bernstein’s transition from teenage artistic adventures to mature commercial success—from camp T-shirts to tux and tails—took place in a blink.
Carol J. Oja is William Powell Mason Professor of Music and American Studies at Harvard University. She is author of Bernstein Meets Broadway: Collaborative Art in a Time of War and Making Music Modern: New York in the 1920s (2000), winner of the Irving Lowens Book Award from the Society for American Music.
Subscribe to only music articles on the OUPblog via email or RSS.
The genre of ‘choral jazz’ has become increasingly prevalent among choirs, with the jazz mass the ultimate form. Settings of the Latin mass by Lalo Schifrin and Scott Stroman have enjoyed a popular following, while more recently Bob Chilcott’s A Little Jazz Mass and Nidaros Jazz Mass have established the genre in the wider choral tradition, reaching choirs from across the choral spectrum and audiences young and old.
Composed in 2001, Will Todd’s Mass in Blue is a further example of the genre, presenting an organic fusion of jazz elements within choral writing. The composer describes the piece as ‘a real watershed work’, combining his passion for jazz with his previous experience of church and choral music, including as a boy treble.
2014 sees the publication of a new edition of Todd’s Mass in Blue, in which the composer has sought to enhance the flexibility and accessibility of the work while retaining its essence and drive. For instance, the choral parts and textures have been simplified in places, while the piano part has been revised to give greater support to the choir and to accommodate players of more modest ability. Optional exemplar solos are provided in the instrumental parts (piano, bass, and drum-kit, with optional saxophone) and additional cues have been added to the piano part to aid rehearsal.
Why? Will Todd observes that he has ‘experienced the work in a wide variety of guises and venues’, and the revised edition should allow the piece to travel still further. For a composer who says that his music is ‘about bringing people together’, the jazz mass is the perfect vehicle. The form lends itself to universality, with its synthesis of the sacred and the secular, of a traditional text with contemporary jazz styles, and an ability to unite musicians from diverse musical backgrounds.
Image credit: Choir Sing Cheer Joyfull Voices Vocals A Capella. Public domain via Pixabay