Viewing: Blog Posts Tagged with: TV & Film, Most Recent at Top
Historians should be banned from watching movies or TV set in their area of expertise. We usually bore and irritate friends and family with pedantic interjections about minor factual errors and chronological mix-ups. With Hilary Mantel’s novels Wolf Hall and Bring Up the Bodies, and the sumptuous BBC series based on them, this pleasure is denied us. The series is as ferociously well researched as it is superbly acted and directed. Cranmer probably didn’t have a beard in 1533, but, honestly, that’s about the best I can do.
January saw the critically acclaimed and award-winning Broadchurch return to our TV screens for a second series. There was a publicity blackout in an attempt to prevent spoilers or leaks; TV critics were not sent the usual preview DVDs. The opening episode sees Joe Miller plead not guilty to the murder of Danny Latimer, a shock as the previous season’s finale ended with his admission of guilt. The change of plea means that the programme shifts from police procedural to courtroom drama – both staples of the TV schedules. Witnesses have to give evidence, new information is revealed through cross-examination, and old scores are settled by witnesses and barristers.
Among this year’s Oscar nominees for Best Picture were two films with drum scores: Whiplash, in which a highly regarded but abusive conductor molds an aspiring young jazz musician into the genius he was meant to be, and Birdman, in which an aging film actor who was never a genius at all stars in a play and possibly flies. In spite of their innovative soundtracks, neither film received an Oscar nomination for Best Original Score.
After what feels like a year's worth of buzz, publicity, predictions, and celebrity gossip, the 87th Academy Award ceremony is upon us. I dug into the entries available in the alphabetized categories of The Dictionary of Film Studies -- and added some of my own trivia -- to highlight 26 key concepts in the elements of cinema and the history surrounding the Oscars.
Millions of Americans are eagerly anticipating this year’s Academy Awards ceremony. For over a century, motion pictures have been a dominant cultural and leisure medium. There are, however, two aspects worth highlighting: the sheer novelty of motion pictures and the medium’s initial democratic nature. Twenty-first century Americans have difficulty imagining the wonder and awe motion pictures inspired in the early 1900s.
Why do we flinch when Rocky takes a punch in Sylvester Stallone's movies, duck when the jet careens towards the tower in Airplane, and tap our toes to the dance numbers in Chicago or Moulin Rouge? With this year’s Academy Awards upon us, we want to know what happens between your ears when you sit down in the theatre and the lights go out. Take a look at some of the ways our brains work when watching a movie—you may just find some of them to be all too familiar.
Moses and Pharaoh are returning to the big screen in Ridley Scott’s seasonal blockbuster, Exodus: Gods and Kings. With a $200m budget and Christian Bale in the leading role, the British director will hope to replicate the success of Gladiator (where he resurrected the sword and sandals genre) and surpass the shock and awe of Cecil B. DeMille’s The Ten Commandments. Even before its release, the movie sparked controversy. The casting of white actors as Egyptians provoked charges of racial discrimination; describing Moses as ‘barbaric’ and ‘schizophrenic’ did not endear the leading actor to traditional believers; and casting a truculent young boy as the voice of Yahweh was bound to raise eyebrows. In other respects, the storyline remains traditional. Indeed, the film follows a long tradition of interpretation by presenting the Exodus as a political saga of slavery and liberation. Some 600,000 slaves are delivered as an oppressive empire is overwhelmed by divine power.
This political reading of the biblical epic will be familiar to anyone who has studied its remarkable reception history. In Christian preaching, liturgy and hymnology, Exodus has been read as spiritual typology — Israel points forward to the Church, Pharaoh’s Egypt to enslavement by Satan, Moses to the Messiah, the Red Sea to salvation, the Wilderness Wanderings to earthly pilgrimage, the Promised Land to heavenly rest.
Yet there has been an almost equally potent tradition of reading Exodus politically. It originated with Eusebius of Caesarea in the fourth century, who hailed the Emperor Constantine as a Mosaic deliverer of the persecuted Church. It took on new intensity when the Protestant Reformation was promoted as liberation from ‘popish bondage’. As a vulnerable minority, European Calvinists identified with the oppressed children of Israel in Egypt and then celebrated national reformations in Britain and the Netherlands as a new exodus. The title page of the Geneva Bible (1560) pictured the Israelites pinned against the Red Sea by the chariots and horsemen of Pharaoh, the moment before their deliverance. Deliverance became a keyword in Anglophone political rhetoric, a term that fused Providence and Liberation.
Over the coming centuries, this Protestant reading of Exodus would go through some surprising twists. The Reformers had sought deliverance from the Papacy, but radical Puritans condemned intolerant Protestant clergy as ‘Egyptian taskmasters’. Rhetoric that had once been trained on ecclesiastical oppression was turned against ‘political slavery’, as revolutionaries in 1649, 1688 and 1776 co-opted biblical narrative. For Oliver Cromwell, Israel’s journey from Egypt through the Wilderness towards Canaan was ‘the only parallel’ to the course of the English Revolution. For John Milton, tolerationist and republican, England’s Exodus led to ‘civil and religious liberty’, a phrase coined in Cromwellian England. The most startling development occurred during the American Revolution, when Patriots unleashed the language of slavery and deliverance against ‘the British Pharaoh’, George III. The contradiction between their libertarian rhetoric and American slaveholding galvanized the nascent anti-slavery movement on both sides of the Atlantic. Black Protestants now seized upon Exodus and the language of deliverance. ‘For the first time in history’, writes historian John Saillant, ‘slaves had a book on their side’.
African Americans inhabited the story like no other people before them. When they fled from slavery and segregation and migrated to the North, they consciously re-enacted the Exodus. In slave revolts and in the American Civil War they called on God for deliverance from Egyptian taskmasters. In the spiritual ‘Go Down Moses’, they re-imagined the United States as ‘Egyptland’, throwing into question the biblical construction of the nation as an ‘American Zion’. They sang of a deliverer who would tell old Pharaoh, ‘Let my People go’. They celebrated the abolition of the slave trade, West Indian emancipation, and Lincoln’s Emancipation Proclamation by recalling the song of Moses and Miriam at the Red Sea.
The black use of Exodus was not without its ironies. It owed more than has been recognized to the long tradition of Protestant Exodus politics, albeit reworked and subverted. African Americans took pride in the fact that Moses married an Ethiopian (Numbers 12:1), but they were embarrassed by the sanction given to slavery in the Mosaic Law, and by the Hebrews’ oppression at the hands of African Pharaohs. Yet Exodus spoke to African American experience like no other text. Like the Children of Israel, they found their Red Sea moment followed by a long and bitter Wilderness experience. On the night before his assassination, Martin Luther King Jr assured his black audience that he had ‘seen the Promised Land’. Barack Obama talked of ‘the Joshua Generation’ completing the work of King’s ‘Moses Generation’, but the land of milk and honey can still seem like a distant prospect.
Heading image: Dura Europos Synagogue wall painting showing the Hebrews leaving Egypt. Adaptation by Gill/Gillerman slides collection, Yale. Public domain via Wikimedia Commons.
The Red Tent was perfect for the Lifetime channel. The network’s four-hour miniseries closely followed Anita Diamant’s 1997 novel, which gave voice—and agency—to the biblical character of Dinah. In both the novel and the miniseries, Dinah the daughter of Jacob is characterized not as a victim (as in Genesis 34) but as a strong, assertive woman raised by a band of mothers who draw power from one another and from their worship of the Divine Mother rather than the patriarchal god of Jacob. And yet, as much as she delivers strong speeches against patriarchal ways, Dinah Redux does not stray from the traditional scripts for women. Her life is shaped by romances with muscled men and by motherhood.
Dinah is tenderly loved by two men. Her first husband Shalem, who in Genesis 34 is called Shechem and is described as seizing Dinah by force, becomes in The Red Tent Dinah’s consensual spouse. Refusing to request permission to marry from her father, she claims her union with Shalem as “my life, my future, my choice.” It is the men of her family who construe her choice as defilement, using it as a pretext for slaughtering Shalem and all the men of his village. Her second husband, created for the novel, overcomes her reluctance to marry again and, like her first husband, consummates their union in slow motion on a dimly-lit bed of mutual pleasure and tenderness. While criticizing patriarchal ideas in general and some men in particular (including Laban, who is depicted as a drunk, gambling, abusive tyrant), Dinah clearly loves her husbands as well as her brother Joseph.
From the beginning of her pregnancy with Shalem’s child, Dinah’s identity rests in her role as mother. When her son is claimed by Shalem’s Egyptian mother, Dinah is willing to live in a mice-infested cellar and be treated as a slave in order to remain in her son’s life. Childbearing as the essence of womanhood, indeed, runs throughout The Red Tent. Even as a child, Dinah learns from her mothers in the women-only space of the tent the power of menstrual blood and the ability to give birth; her later role as midwife allows her to continue to participate in this most female of activities.
In placing romance and the mother-child bond at the center of women’s lives, The Red Tent follows a very modern script. Like the heroines of romance novels, Dinah willingly surrenders to the attentions of attractive men and is passionately devoted to her son. Other modern tropes appear as well. She and her mothers attempt to protect Laban’s wife from domestic violence, treat slaves as their equals, and eventually manage their anger. While Dinah resists patriarchy as a system, she ultimately forgives the people (like her father) who embody that system. Dinah is strong and independent but still desirable to men, still a devoted mother, still kind in a self-sacrificing way.
The novel The Red Tent is so beloved by many women because it offers a relatable female biblical character, one whose loves, commitments, and challenges resonate in the modern world. Presented as the recovery of the lost voices of ancient women, it also plays well with a current climate of distrust in religious traditions and institutions. Like The Da Vinci Code, The Red Tent is fiction, but its claim that history has demeaned women’s stories rings true for many who are desperately seeking a usable past.
And yet, by making the past mirror the present, this retelling of the biblical story not only does a disservice to the past but also reinscribes the very gender scripts it claims to resist.
My recent work as the editor in chief of The Oxford Encyclopedia of the Bible and Gender Studies aims to work against such anachronistic assumptions. In the case of ancient Israel, our participating scholars explored topics such as the nature of goddess worship, marriage, gender roles, and the social significance of children. They argue that the worship of female deities was not limited to women and had little bearing on the well-being of human women; that children’s importance was as much economic as affectional; that “biblical marriage” required neither female consent, mutual vow making, nor romance; and that low life expectancies not only promoted the “marriage” of females by the age of 13 but also meant that few people would have ever known their grandparents. Johanna Stiebert, author of “Social Scientific Approaches,” contextualizes The Red Tent as one strategy of feminist appropriation of the ancient world, while Susanne Scholz (“Second Wave Feminism”) and Teresa J. Hornsby (“Heterosexism/Heteronormativity”) explain the perspectives of those who find the valorization of romance and motherhood as reflective of rather than resistant to patriarchy. Deborah W. Rooke (“Patriarchy/Kyriarchy”) traces the history of conversations about goddesses and women in the ancient world.
These and other entries suggest just how speculative, selective, and skewed many of The Red Tent’s portrayals of the ancient world are. In Diamant’s world, four women willingly share Jacob as husband and experience little competition within women’s space. In the red tent, they cooperate with one another, sharing stories and essential oils. Such portrayals downplay not only biblical stories of tensions between women but also the modern systems that pit women against one another.
By paying attention to the ways in which gender is constructed in the diverse texts, cultures, and readers that constitute “the world of the Bible,” gender-sensitive biblical scholarship seeks to move beyond such stereotypes of women. It suggests that women—and men and those whom societies place as “other”—operate within systems and structures that must be named and, when necessary, critiqued. Though giving Dinah agency within a world that limits women’s roles to romance and motherhood might seem liberating to some readers/viewers of The Red Tent, gender studies brings into focus the socially constructed nature of these limits of women’s worth.
On the surface, the Lifetime channel’s special Women of the Bible tells a very different story than The Red Tent. The two-hour program, which aired just prior to the miniseries premiere, claims to read with the Bible rather than against it, suggesting that the text itself depicts strong and faithful women—no retelling necessary. Moreover, while the miniseries adaptation of Anita Diamant’s novel valorizes goddess worship and condemns the patriarchal bias of the Bible, Women of the Bible recounts the story of selected biblical women from a decidedly conservative Christian perspective.
This perspective is clearly evident in the choice of “experts” invited to comment on the biblical narratives. Victoria Osteen, wife of evangelist Joel Osteen, and Joyce Meyer, described on her website as a “charismatic Christian author,” appear alongside a woman designated as “Bible Teacher” and several female leaders of Christian ministries. Those outside this circle include a female rabbi and a female professor at Notre Dame, though their comments are integrated with rather than contrasted with the majority of conservative Christian voices.
Conservative Christian theology is also reflected in the choice of biblical women and the aspects of their stories eliciting commentary.
Eve. The program spends little time on Eve as a character. Instead, commentators use her story to discuss “the Fall,” a distinctively Christian understanding that Genesis 3 depicts a universal human fall from grace to which Jesus later provides a remedy.
Sarah. The two episodes selected from Sarah’s story are (1) her motherhood late in life and (2) her response to Abraham’s near sacrifice of Isaac on Mt. Moriah (Genesis 22). Although the Bible does not include Sarah in this latter story, commentators speculate on how she must have felt, and the visual reenactment depicts her running to find her son. This passage is far less relevant to understanding the Bible’s characterization of Sarah than it is to certain strands of Christian theology. In Christianity, Abraham’s willingness to sacrifice his son has traditionally been invoked as prefiguring God’s willingness to sacrifice his son Jesus on the cross. This linkage is clearly implied in the video footage. Although Genesis 22 indicates that God provided a ram as a substitute sacrifice, the program shows a lamb instead (in the gospels and later Christian tradition, Jesus is called the “lamb of God”).
Rahab. This brothel owner who saved the Israelite spies is praised for her willingness to protect her family. Commentators also expound upon the significance of the red cord she uses to mark her house for deliverance. Following traditional Christian interpretation, they connect Rahab’s red cord with Jesus’ blood shed on the cross to save humanity. They also explicitly trace Rahab’s genealogy to Jesus, following the gospel of Matthew.
Samson’s mother and his mistress Delilah. In the program, these two women are not explicitly linked with the Christian message. The commentators instead use their stories to advance important morals and teachings. Samson’s mother is presented as providing hope to “mothers who try to be good parents but the children stray,” and Delilah becomes a cautionary tale of being “tempted like Eve.”
The Marys. The majority of the program (close to one half) is devoted to Mary the mother of Jesus and Mary Magdalene. Mary Magdalene is depicted as playing an important role in early Christianity, and yet most of the scenes depicting both women recount the life and death of Jesus. Their stories offer windows into his story. In keeping with a particular understanding of the importance of Jesus shedding blood at his crucifixion, scenes graphically depict Jesus’ flogging and crucifixion (“he came to die”). The imagined feelings of the Marys become a means to reflect on the painfulness of Jesus’ sacrifice: “I would imagine they felt this way,” “They must have felt this way.” Although the program insists that the Magdalene was instrumental in the growth of Christianity, it provides no support for this claim.
As a biblical scholar devoted to gender critical work, I was amazed and disturbed that this program demonstrated no awareness of the important discussions conducted by feminist interpreters of the Bible over the past 40 years. Reassessments of Eve, Sarah, Mary Magdalene, and our traditions of reading are now old news, as is the recognition that standard ways of depicting Jesus as female-friendly have anti-Jewish dimensions. At least since the 1990s, Jewish feminists have insisted upon the inaccuracy and the danger of statements like those made in the program: “a Jewish rabbi wouldn’t talk to a woman,” “women were devalued in that culture.” The program leaves these statements unchallenged and actually reinforces them in the costuming of the reenactments of Jesus’ arrest, trial, and crucifixion: Jewish leaders wear the pointed hats used to designate Jews in medieval anti-Jewish iconography.
I was also appalled that, in the apparent attempt to include actors of color, insufficient attention was paid to the ways in which casting might perpetuate racial stereotypes. Samson was depicted as a huge, violent man of African descent who could not control his passions. When his dreadlocks were cut, he was bound in chains to a column. In the US context, this image too closely mirrors that of the slave on the auction block to pass for an attempt at “diversity.”
Neither the commentators nor the marketers of this program named the monolithic perspective that informed the presentation. Although the rhetoric of the program suggests that the commentators are simply reading the Bible, in reality the program recounted a particular Christian narrative about sin and Jesus’s role of overcoming it. Women were lauded as important to the degree that they were instrumental in advancing that narrative.
In turn, biblical texts that stray from this perspective are overlooked, such as:
Abraham’s willingness to give Sarah to another man—twice—to save himself.
The abuse suffered by Hagar.
The likelihood that the Israelite spies were visiting Rahab’s brothel rather than simply hiding.
Jesus’ statements that challenge the priority of family (Mark 10; Luke 14; Matthew 22). In this program, the distance between Jesus and his mother was described as a normal mother-son dynamic rather than part of Jesus’ message (Mark 3). The commentators stressed the ways in which Jesus provided for his mother from the cross, since “a son ought to love his mother and make sure she is looked after.”
Even though this program reflected a far more conservative religiosity than The Red Tent, similar ideologies of gender run through both productions. Women are valued primarily for being mothers, wives, and protectors of their families. Biblical women who do not fill these roles are passed over in silence: Deborah, Huldah, Athaliah, Miriam, and the women involved in ministry with Paul. (See an Index of Women in the Bible with relevant biblical passages.)
Responsible interpretation of the Bible requires a deep understanding of the ancient world reflected in its pages. Engagement with on-going biblical scholarship is crucial, since our knowledge of the past continues to grow through archaeological investigation, the discovery of new texts, and the development of research methodology. Responsible interpretation also requires a self-awareness of the lenses through which we read and the commitments that guide our choice of texts and our determination of their meaning.
Women of the Bible, sadly, reflects neither solid scholarship nor attentiveness to perspective. Based on the speculation of interpreters whose interests remain unnamed rather than on current research on gender in the ancient world, the Lifetime program perpetuates particular tropes for women rather than offering viewers fresh insight.
Prometheus, a Titan god, was exiled from Mount Olympus by Zeus because he stole fire from the gods and gave it to mankind. He was condemned, punished, and chained to a rock while an eagle ate at his liver. His name, in ancient Greek, means “forethinker,” and literary history lauds him as a prophetic hero who rebels against his society to help man progress. The stolen fire is symbolic of creative powers and scientific knowledge. His theft encompasses risk, unintended consequences, and tragedy. Centuries later, modern times have another Promethean hero: Alan Turing. Like the Greek Titan before him, Turing suffers for his foresight and audacity to rebel.
The riveting film The Imitation Game, directed by Morten Tyldum and starring Benedict Cumberbatch, offers us a portrait of Alan Turing that few of us knew before. After this peek into his extraordinary life, we wonder: how is it possible that, within our lifetime, society could condemn such a special person to eternal punishment? Turing accepts his tragic fate and blames himself.
“I am not normal,” he confesses to his ex-fiancée, Joan Clarke.
“Normal?” she responds, angrily. “Could a normal man have shortened World War II by two years and have saved 16 million people?”
The Turing machine, the precursor to the computer, is the result of his “not normal” mind. His obsession was to solve the greatest enigma of his time – to decode Nazi war messages.
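For readers curious what "the precursor to the computer" actually means, the idea behind a Turing machine can be sketched in a few lines of code: a finite table of rules reads and writes symbols on a tape and moves a head left or right. The toy machine below is purely illustrative (it is not Turing's own construction, nor anything from the film): it simply flips every bit of its input.

```python
def run(tape, rules, state="start", head=0, max_steps=1000):
    """Simulate a Turing machine. `rules` maps (state, symbol) to
    (new_symbol, head_move, new_state); head_move is -1, 0, or +1."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rule table for a bit-flipper: walk right, swapping 0 and 1, halt on blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run("1011", flip))  # prints 0100
```

Despite its simplicity, this rule-table-plus-tape scheme is, in principle, enough to compute anything a modern computer can, which is why the abstraction bears Turing's name.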
In the film, as the leader of a team of cryptologists at Bletchley Park in 1940, Turing builds the Bombe, which deciphered coded messages revealing where German U-boats would strike British ships. In 1943, the Colossus machine, built by the engineer Tommy Flowers, was able to decode messages sent directly by Hitler.
The movie, The Imitation Game, while depicting the life of an extraordinary person, also raises philosophical questions, not only about artificial intelligence but about what it is to be human. Cumberbatch’s Turing recognizes the danger of his invention. He fears what would happen if a thinking machine were programmed to replace a man; if a robot were guided by artificial intelligence and not by a human being who has a conscience, a soul, a heart.
Einstein experienced a similar dilemma. His theory of relativity created great advances in physics and scientific achievement, but also had tragic consequences – the development of the atomic bomb.
The Imitation Game will open Pandora’s box. Viewers will ponder what the film passed over quickly. Who was the Russian spy? Why did Churchill not trust Stalin? What was the role of the Americans during this period of decrypting military codes? How did Israel get involved?
And viewers will want to know more about Alan Turing. Did Turing really commit suicide by biting into an apple laced with cyanide? Or does statistical probability tell us that Turing knew too much about too many things and perhaps too many people wanted him silent? This will be an enigma to decode.
The greatest crime, from a sociological perspective, is the one committed by humanity against a unique individual because he is different. The Imitation Game will make us all ashamed of society’s crime of prejudice. Alan Turing stole fire from the gods to give man power and knowledge. While doing so, he showed he was very human. And society condemned him for being so.
One of the best-known musicals of the 20th century is Annie, which tells the story of a plucky orphan girl who warms the hearts of all around her, and eventually finds a loving family of her own. The tale will be carried into the 21st century when the newest film adaptation (produced by Jay-Z and Will Smith; perhaps you’ve heard of them) is released on 19 December of this year. In honor of the long legacy of this famous story, here we take a look at the changing language of Annie.
Little orphant Allie
Speaking of long legacies, the 1977 musical Annie was not the first time the world had been introduced to the inspirational young character. The musical was based on an American comic strip entitled “Little Orphan Annie”. Well-known in its own time and called the most famous comic of 1937 by Fortune magazine, “Little Orphan Annie” ran for a whopping 86 years and even led to an equally famous radio show (religiously followed by Ralph in the 1983 film A Christmas Story). However, the story of Annie can be traced further back to a girl named Mary Alice Smith (nicknamed “Allie”), who inspired Indiana poet James Whitcomb Riley to pen the poem “The Elf Child” in 1885. He would eventually rename it “Little Orphant Allie”.
“Orphant”? Not a typo—just a US regional variant spelling that has since fallen largely out of use, as have other variants orphaunt, orfant, and even orphing (among many others). However, a literal typo or typographical error did come into play with Riley’s poem when the name “Annie” was accidentally typeset instead of “Allie”. When the poem gained popularity, Riley decided to stick with the new name.
The original hard knocks
People looking for the familiar plot or song lyrics in the original poem will be disappointed: there is almost no resemblance between the Annie of the poem and Annie as she is popularly known today. The poem, like several of Riley’s others, is written in Hoosier dialect—the midland dialect of American English, or more specifically that from Indiana. In the poem, “little orphant Annie” tells stories to other orphaned children in which “gobble-uns” (goblins) steal poorly behaved children away (hence the original title “The Elf Child”). At the end of the didactic poem, Annie says
You better mind yer parunts, an’ yer teachurs fond an’ dear, An’ churish them ‘at loves you, an’ dry the orphant’s tear, An’ he’p the pore an’ needy ones ‘at clusters all about, Er the Gobble-uns ‘ll git you Ef you Don’t Watch Out!
However, like the Annie of the later comic strip, musical, and film adaptations, “little orphant Annie” is happy to take the “pore an’ needy” under her wing and to teach them what she knows.
Hoovervilles and Prohibition
Though the musical Annie opened on Broadway in 1977 and its film adaptation was released in 1982, the plot takes place in the 1930s. Apart from the clothing styles and the Hoovervilles, the song lyrics themselves—with many words unfamiliar to the modern English-speaker—are intended to transport audiences to the early 20th century.
Yank the whiskers from her chin! Jab her with a safety-pin! Make her drink a Mickey Finn!
A Mickey Finn is a drink that has been surreptitiously drugged or doped.
Dilly, an alteration of the first syllable of delightful or delicious, is a North American word for an excellent example of something.
You spend your evenings in the shanties, Imbibing quarts of bathtub gin. And here you’re dancing in your scanties.
To a modern-day reader, it may not be clear how much Daddy Warbucks is insulting Miss Hannigan in the song “Sign” from the 1982 film. When he accuses her of spending time in the shanties, he is probably referring to shantytowns: run-down areas consisting of large numbers of shanties, or small, crudely built shacks. These shantytowns (or Hoovervilles, as they were sometimes called, after the US President Herbert Hoover) were an all-too-familiar sight during the Great Depression, when as many as 25% of Americans were unemployed.
As for bathtub gin, readers familiar with the Prohibition era in the United States may know what it is—a concoction of spirits intended to simulate the taste of gin, representative of a time in which alcoholic drinks (rendered illegal by the 18th Amendment to the US Constitution in 1920) were often surreptitiously made in homes (and sometimes, presumably, in bathtubs). It goes without saying that, generally, the quality of “bathtub gin” was probably not very high.
Daddy Warbucks gets in one final jab by accusing Miss Hannigan of dancing around in her scanties, or brief underwear. (The word comes from scant + -y; scant is from the Old Norse word for “short”.) Interestingly, a modern word for a similar type of women’s underwear—panties—could be substituted here without sacrificing rhyme.
On the topic of modernizing lyrics, the upcoming movie Annie will debut such changes of its own; in the song “Hard-Knock Life”, what originally was
No one cares for you a smidge When you’re in an orphanage
has been updated to
No one cares for you a bit When you’re a foster kid
Here, bit may have replaced smidge as a better near rhyme, or it may have been considered a safer bet in terms of plausible vocabulary for a 10-year-old in 2014 (it doesn’t seem a stretch to say that smidge is probably not in the parlance of today’s youth). As for the replacement of orphanage with “foster kid”, given that the new movie doesn’t involve an orphanage—instead, Annie is in a foster home—this change is practical.
It is also worth noting that fostering has gradually taken the place of institutional care, and that sociocultural developments have shaped the concept of child welfare as we understand it today. Partly for these reasons, it may not be surprising that use of the phrase “foster child” has increased fairly steadily over the last two centuries, while use of the word orphan (though still more common overall) has dwindled over the same period.
Though Annie has been around long enough for “orphant” to eventually turn into “foster kid”, the fact remains that American audiences are perennial lovers of the rags-to-riches theme. For this reason, it should come as no surprise that the story of Annie is just as well-known today as when Ralph was racing to the radio—or that virtually everyone you know can sing at least a few bars of “Tomorrow”. It probably goes without saying that we’ll see many more iterations of Annie in the century to come.
Seth Rogen isn’t the only actor to have a film about North Korea nixed: a script pitched by Bob Hope met a similar fate in 1954.
If US government sources are correct, North Korea cowed Sony Pictures into withholding a bawdy comedy about assassinating supreme leader Kim Jong-un. Sony’s corporate computers were hacked and many bytes of tawdry Hollywood secrets were disgorged. The technical achievement lent credibility to the hackers’ threats of mass murder in theaters if Rogen’s The Interview was released. (Editors’ note: The Interview is currently in limited release and no attacks have been reported.) Governments can be expected to decry movies about murdering sitting presidents, but the bombast of Pyongyang’s apparent reaction lacks proportionality and any appreciation of the blowback from global audiences, which are sure to make Kim Jong-un a universal punch line. This cluelessness no doubt derives from the cultish isolation of Pyongyang, but this is not the first comedy set in North Korea to discomfit officials.
In 1954, the military-friendly jokester Bob Hope dropped plans for a screwball comedy set on the Korean peninsula after the US Army refused to support it. The similarities to and differences from the current episode tell us something about government influence over cinema, a vital conduit to the mass mind.
Only months after the end of the Korean War (1950-1953), Hope pitched a film to the Army’s Motion Picture office for approval. The military routinely lent expensive war equipment and technical advice to movie studios in return for a veto over scripts. Hope’s timing was awful. The “sour little war” was so unpopular it ended the political career of President Harry Truman and prompted years of soul-searching into the American character and its failure to vanquish the enemy. The Army was touchy about cinematic portrayals of anything to do with Korea, so much so that it reversed itself on a Ronald Reagan movie it had previously supported.
In March 1954, the same month Hope’s proposal was under consideration, the Army yanked approval of MGM’s P.O.W. Military bands had to cancel plans to play at premieres, and all Army commands were ordered to cease publicizing the film. This was curious, since the Army Motion Picture office had assisted P.O.W. throughout production, providing a former prisoner as consultant and requesting and receiving four pages of script revisions. The problem? Image management. The hastily made movie was coming out at the same time the Army was beginning prosecutions of former prisoners accused of collaborating with their captors. The Chinese ran the prison camps in North Korea and persuaded some inmates to assist them on shortwave radio and other propaganda tasks. Collaboration caused a big stir in the United States, especially after 21 American POWs defected to China after the war. Courts-martial of repatriated prisoners were part of a Cold War panic that the nation’s youth had gone soft, unable to resist Chinese indoctrination.
The difficulty with the Reagan film P.O.W. was that it was relentlessly brutal, even by today’s standards. Prisoners were subjected to awful tortures that were sure to arouse audience sympathy just when courts-martial were underway. Movies too heavy on torture or brainwashing would seem to excuse the behavior of soldiers who were now facing years at hard labor. Hence the Army bands repacking their instruments.
The delicacy of national morale helps explain the Army’s discomfort with the Bob Hope proposal. Donald E. Baruch, head of the Motion Pictures office, wrote Hope’s agent that the Army valued its previous work with the comedian:
However, in this instance, we believe no military purpose would be served in the production of this story. When Mr. Hope called while recently here, I did not react negatively because all he mentioned was that the story was about a U.S.O. tour to Korea and the repatriation of a prisoner. The subject is considered of too great importance and seriousness especially at this time to be treated in the farcical manner indicated by the outline. Other basic story objections are ‘stealing’ of the helicopter, Jane, Jimmy and Bob in North Korea, and the rescuing of Lloyd.
A serious prisoner of war movie that did get Army approval was MGM’s The Rack (1956) with Paul Newman. This courtroom-bound film was a psychological exploration of an officer’s conscience and why he failed to resist collaboration. However, The Rack was broody and talky and made no impression at the box office. The same occurred with Time Limit (United Artists, 1957), another courtroom film approved by the Army that failed to move audiences. To get a Pentagon subsidy and imprimatur, POW films set in Korea could not follow the tried-and-true formula of action and escape; collaboration was too imposing an issue. The small sub-genre of Korea POW films was steered into amnesia.
US Army influence on Korea POW films was gentle. Studios wanted subsidies and association with the military brand, so they were usually cooperative. In itself, Rogen’s The Interview has little in common with the patriotic cinema of the 1950s, but the apparent reaction of North Korea provides an interesting contrast. Some pundits have been quick to accuse Sony of letting Pyongyang become a censor by holding the film industry hostage. With this one film, they might have a point. But Pyongyang’s method of influencing movie content is really one of weakness. The Pentagon, neither today nor in the 1950s, has had to threaten Hollywood; it simply waits for producers to come to it for set pieces and the shroud of official martial aura. In contrast, Kim Jong-un’s royal court is so isolated and unable to shape the narrative that it resorted to the threats of a desperate loner. If North Korea’s apparent intervention in Hollywood still has an effect two years from now, it will only serve to focus more attention on the regime worldwide. Look for more hidden camera documentaries. Any other lasting influence is unlikely, since Kim Jong-un can’t open a Hollywood office or even do lunch.
Featured image: Bob Hope (center) and other guests salute while “The Star Spangled Banner” is played during a ceremony to award Hope the Distinguished Public Service Award. Jan. 31, 1971. Public domain via Wikimedia Commons.
This time the fuss is about the already critically acclaimed biopic Selma (The New York Times critic AO Scott called it “a triumph of efficient, emphatic cinematic storytelling”), starring David Oyelowo as the Rev Dr Martin Luther King, Jr.
The film starts with King’s acceptance of the Nobel Peace Prize in December 1964 and focuses on the three 1965 marches in Alabama that eventually led to the adoption of the Voting Rights Act later that year.
The King estate has not expressly objected to the making of this film. However, back in 2009 the same estate granted DreamWorks and Warner Bros a licence to reproduce King’s speeches in a film that Steven Spielberg is set to produce but that has yet to see the light of day. Apparently the Selma producers attempted in vain to get permission to reproduce King’s speeches in their film. In the end, the authors of the script had to convey the meaning of King’s speeches without using the actual words he had employed.
Put otherwise: Selma is a film about Martin Luther King that does not feature any actual extracts from his historic speeches.
In the same NYT review, AO Scott wrote that “Dr. King’s heirs did not grant permission for his speeches to be quoted in ‘Selma,’ and while this may be a blow to the film’s authenticity, [the film director] turns it into an advantage, a chance to see and hear him afresh.”
Indeed, the problem of authenticity has been raised by some commentators who have argued that, because of copyright constraints, historical accuracy has been negatively affected.
But is this all copyright’s fault? Is it really true that if you are not granted permission to reproduce a copyright-protected work, you cannot quote from it?
“The social benefit in having a truthful depiction of King’s actual words would be much greater than the copyright owners’ loss.”
Well, probably not. Copyright may have many faults and flaws, but it certainly does not prevent one from quoting from a work, provided that the use of the quotation can be considered a fair use (to borrow from US copyright language) of, or fair dealing (to borrow from other jurisdictions, e.g. the UK) with, such work. Let’s consider the approach to quotation in the country of origin, i.e. the United States.
§107 of the US Copyright Act states that the fair use of a work is not an infringement of copyright. As the US Supreme Court stated in the landmark Campbell decision, the fair use doctrine “permits and requires courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity that the law is designed to foster.”
Factors to consider to determine whether a certain use of a work is fair include:
the purpose and character of the use, including whether the use is commercial or for nonprofit educational purposes (the fact that a use is commercial is not per se a bar from a finding of fair use though);
the nature of the copyright-protected work, e.g. if it is published or unpublished;
amount and substantiality of the taking; and
the effect upon the potential market for or value of the copyright-protected work.
There is fairly abundant case law on fair use as applied to biographies. With particular regard to the re-creation of copyright-protected works (as would have been the case with Selma, had Oyelowo’s King reproduced actual extracts from King’s speeches), it is worth recalling the recent (2014) decision of the US District Court for the Southern District of New York in Arrow Productions v The Weinstein Company.
This case concerned Lovelace, the biopic of Deep Throat star Linda Lovelace, played by Amanda Seyfried. The holders of the rights to the “famous pornographic film replete with explicit sexual scenes and sophomoric humor” claimed that the 2013 film infringed – among other things – their copyright because three scenes from Deep Throat had been recreated without permission. In particular, the claimants argued that the defendants had reproduced dialogue from these scenes word for word, positioned the actors identically or nearly identically, recreated camera angles and lighting, and reproduced costumes and settings.
The court found in favour of the defendants, holding that unauthorised reproduction of Deep Throat scenes was fair use of this work, also stressing that critical biographical works (as are both Lovelace and Selma) are “entitled to a presumption of fair use”.
In my opinion, reproduction of extracts from Martin Luther King’s speeches would not necessarily need a licence. It is true that the fourth fair use factor might weigh against a finding of fair use (this is because the Martin Luther King estate has actually engaged in the practice of licensing use of his speeches). However, the social benefit in having a truthful depiction of King’s actual words would be much greater than the copyright owners’ loss. Also, it is not required that all four fair use factors weigh in favour of a finding of fair use, as recent judgments, e.g. Cariou v Prince or Seltzer v Green Day, demonstrate. Additionally, in the context of a film like Selma in which Martin Luther King is played by an actor (rather than incorporating the filmed speeches actually delivered by King), it is arguable that the use of extracts would be considered highly transformative.
In conclusion, it would seem that, in principle, US law would not be against the reproduction of actual extracts from copyright-protected works (speeches) for the sake of creating a new work (a biographical film).
This article originally appeared on The IPKat in a slightly different format on Monday 12 January 2015.
Featured image credit: Dr. Martin Luther King speaking against war in Vietnam, St. Paul Campus, University of Minnesota, by St. Paul Pioneer Press. Minnesota Historical Society. CC-BY-2.0 via Flickr.
Picture this. A legendary hotel concierge and serial womaniser seduces a rich, elderly widow who regularly stays in the hotel where he works. Just before her death, she has a new will prepared and leaves her vast fortune to him rather than her family.
For a regular member of the public, these events could set alarm bells ringing. “She can’t have known what she was doing!” or “What a low life for preying on the old and vulnerable!” These are some of the more printable common reactions. However, cinema audiences watching last year’s box office smash The Grand Budapest Hotel, directed by Wes Anderson, may have laughed, even cheered, when it was Tilda Swinton (as Madame Céline Villeneuve Desgoffe und Taxis) leaving her estate to Ralph Fiennes (as Monsieur Gustave H) rather than her miffed relatives. Thus the rich old lady disinherits her bizarre clan in what recently became 2015’s most BAFTA-awarded film, one still up for nine Academy Awards in next week’s Oscars ceremony.
Wills have always provided the public with endless fascination, and are often the subject of great books and dramas. From Bleak House and The Quincunx to Melvin and Howard and The Grand Budapest Hotel, wills are often seen as fantastic plot devices that create difficulties for the protagonists. For a large part of the twentieth century, wills and the lives of dissolute heirs have been regular topics for Sunday journalism. The controversy around the estate of American actress and model, Anna Nicole Smith, is one such case that has since been turned into an opera, and there is little sign that interest in wills and testaments will diminish in the entertainment world in the coming years.
“[The Vegetarian Society v Scott] is probably the only case around testamentary capacity where the testator’s liking for a cooked breakfast has been offered as evidence against the validity of his will.”
Aside from the drama depicted around wills in films, books, and stage shows, there is also the drama of wills in real life. There are two sides to every story with disputed wills and the bitter, protracted, and expensive arguments that are generated often tear families apart. While in The Grand Budapest Hotel the family attempted to solve the battle by setting out to kill Gustave H, this is not an option families usually turn to (although undoubtedly many families have thought about it!).
Usually, the disappointed family members will claim that either the ‘seducer’ forced the relative into making the will, or the elderly relative lacked the mental capacity to make a will; this is known as ‘testamentary capacity’. Both these issues are highly technical legal areas, which are resolved dispassionately by judges trying to escape the vehemence and passion of the protagonists. Regrettably, these arguments are becoming far more common as the population ages and the incidence of dementia increases.
The diagnosis of mental illness is now far more advanced and nuanced than it was when courts were grappling with such issues in the nineteenth century. While the leading authority on testamentary capacity remains the three-part test laid out in the 1870 Banks v Goodfellow case, it is a common law decision, and modern judges can (and do) adapt it to meet advancing medical views.
This can be seen in one particular case, The Vegetarian Society v Scott, in which modern diagnosis provided assistance when a question arose in relation to a chronic schizophrenic with logical thought disorder. He left his estate to The Vegetarian Society as opposed to his sister or nephews, for whom he had a known dislike. There was evidence provided by the solicitor who wrote the will that the deceased was capable of logical thought for some goal-directed activities, since the latter was able to instruct the former on his wishes. It was curious however that the individual should have left his estate to The Vegetarian Society, as he was in fact a meat eater. However unusual his choice of heir, the deceased’s carnivorous tendencies were not viewed as relevant to the issues raised in the court case.
As the judge put it, “The sanity or otherwise of the bequest turns not on [the testator’s liking] for food such as sausages, a full English breakfast or a traditional roast turkey at Christmas; nor does it turn on the fact that he was schizophrenic with severe thought disorder. It really turns on the rationality or otherwise of his instructions for his wills set in the context of his family relations and other relations at various times.”
This is probably the only case around testamentary capacity where the testator’s liking for a cooked breakfast has been offered as evidence against the validity of his will.
For lawyers, The Grand Budapest Hotel’s Madame Céline Villeneuve Desgoffe und Taxis is potentially a great client. Wealth, prestige, and large fees for the will are then followed by even bigger fees in the litigation. If we are to follow the advice of the judge overseeing The Vegetarian Society v Scott, Gustave H would have inherited all of Madame Céline’s money if she was seen to be wholly rational when making her will.
Will disputes will always remain unappealing and traumatic to the family members involved. However, as The Grand Budapest Hotel has shown us, they still hold a strong appeal for cinema audiences and will continue to do so for the foreseeable future.
Featured image: Reflexiones by Serge Saint. CC-BY-2.0 via Flickr.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection and of categorical uncertainty is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love?” By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
What is jihad? What do fundamentalists want? How will moderate Islamists react? These are questions that should be discussed. We may not have easy answers, but if we don’t start a dialogue, we may miss an opportunity to curtail horror.
The film Timbuktu, from African director Abderrahmane Sissako and about his native country, serves as a needed point of departure for discussion — in government, in schools, in boardrooms, and in families.
Jihadism and terrorism are the 21st century’s “-isms,” following the horrors of fascism and communism. In hindsight, we wonder if we could have prevented the horrors of the 20th century. The devastating results have taught us that people do not want war; they want to live and work in peace. Should we not learn from history’s mistakes and prevent future genocides?
In the name of jihad, innocent victims are beheaded, kidnapped, raped, tortured, terrorized, left without families, and without homes. Extremist Muslims wage war against Christians and Jews, and against other Muslims (Sunnis vs. Shiites). Havoc is occurring in Syria, Iraq, Lebanon, Gaza, West Bank, Mali, Sudan, etc. It may soon take hold of our cities where jihadists threaten to set up terrorist cells.
Powerful and courageous, Timbuktu mesmerizes us with its blend of colors and music amidst a gentle background of sand dunes. Yet, juxtaposed to the serene beauty of Mali’s nature is the ferocious narrative of men turned into animals, forcing their machine guns on the quiet people of Timbuktu. We bear witness to the atrocious acts of barbarism.
Based on the true story of the jihadist takeover of northern Mali in 2012, Sissako gives us a mosaic of characters who represent multicultural Africa. The camera takes us directly into their tragedies using a cause-and-effect structure:
We see a fisherwoman who refuses to wear a veil and gloves, for how would she be able to see or pick up the fish she must sell? Her rebellion, despite her mother’s pleas and the jihadist threats, is frightening.
Several friends play the guitar and sing together in the quiet of their home. The result? They are arrested and stoned to death.
A boy has a soccer ball, and accidentally the ball rolls down steps and through sand dunes to fall in front of several jihadists. The punishment? 40 lashes.
A caring man defends his young shepherd when their cow is killed. The outcome? A fight and the destruction of a family.
The leader of the community, the imam, tells several jihadists to leave the mosque with their guns and boots. People are praying. He warns them that Allah does not want destruction or terror. We fear the imam’s end.
These characters are not abstract; they are real victims. We follow their story, care for them, empathize with their pride, and suffer with their courage.
The contrasts between good and evil, beauty and terror, are presented in alternating scenes and play havoc with our emotions. Sometimes we want to close our eyes as the evil becomes unbearable; we fear what horror will follow.
Sissako is a master storyteller and painter of landscape. His color palette holds our eyes as our hearts cringe at the story. Beautiful moments linger amidst savage reality. We see ballet in the scene in which a dozen young men play soccer without a soccer ball. How graceful are their athletic movements, and how deep their pleasure. We are mesmerized, and at the same time we are panicked to think what the next scene will bring. The film’s power comes from its majestic beauty – a beauty that we fear cannot coexist with the evil we are watching.
Sissako parallels the opening scene with the final scene. The film begins by showing an elegant deer running through the soft dunes. It ends with the same scene, but the animal is replaced by the twelve-year-old heroine, who runs desperately through the same dunes as she tries to escape her tragic reality. Sissako’s circle is a vicious cycle with no end to crimes against humanity.
Timbuktu is a difficult film to watch because it depicts a possible future that no one wants to see: genocide. All the more reason to see this film now.
From eighteenth century Gothic novels to contemporary popular culture, the tropes and sacred culture of Catholicism endure as themes in entertainment. OUP author Diana Walsh Pasulka sat down with The Conjuring (2013) screenwriters Chad Hayes and Carey Hayes to discuss their cinematic focus on “the Catholic Supernatural” and the enduring appeal of Catholic culture to moviegoers.
Diana Walsh Pasulka: Your recent movie The Conjuring was financially very successful and is the third highest grossing horror film about the supernatural, behind only The Exorcist (1973) and The Sixth Sense (1999). Each of these films engages Catholic themes, and more specifically, the supernatural. The Conjuring, of course, is based on the lives of Catholics Ed and Lorraine Warren. What is it about Catholic culture that you think resonates with audiences?
Carey Hayes: Catholic culture is global. It also has a long history that almost everyone in the West identifies with on some level. Medieval cathedrals, priests in black robes and white collars, nuns in habits: in many ways these visuals are like shorthand or code, and audiences understand them. For example, take the movie The Exorcist. When it is apparent in the movie that the little girl is possessed by evil, they call in the priest. The priest, with his identifiable clothing, his crucifix, and holy water, is the visual representation of the antidote to evil. Of course it doesn’t hurt that authors and filmmakers have used these themes over and over again, and this adds to the recognizable effects. The more we see elements of Catholic culture used in visual culture this way, the more we understand what they mean.
Diana Walsh Pasulka: That’s interesting. The meaning of these tropes, then, can take on a second life, of sorts, in popular culture. Non-Catholic audiences might equate what they see about Catholicism in the movies with Catholic lived practice.
Chad Hayes: That could be the case, of course, but in our experience we’ve had only positive reinforcement from Catholics. When we promoted The Conjuring in San Francisco a Catholic priest approached me and said “Thank you for getting it right.” That one comment was one of the best compliments I’ve received about the movie. We were also interviewed for U.S. Catholic, and they were very positive.
Diana Walsh Pasulka: A few years ago, Carey, you coined the term “The Religious Supernatural” to differentiate what you were doing from other screenwriters who wrote movies about the supernatural. Why designate it “religious?”
Carey Hayes: I coined the term to identify a certain framework, and, I suppose, to suggest a history. Today there is a lot of focus in popular culture on the supernatural or the paranormal. It is almost all secular. In the past, the supernatural and paranormal occurred within a worldview that allowed for the supernatural but within a religious framework. People had tools like prayers to deal with the supernatural, which, you have to admit, is scary. We wanted, in our movies, to return to that. We thought that, in many ways, religion deals with the big questions, and the supernatural is usually a scary thing that interrupts daily life and causes people to think about the big questions. So, we wanted to pair the two, religion and the supernatural, and remind audiences that this is, ultimately, what scary movies are about: ultimate questions about life.
Diana Walsh Pasulka: Are you ever frightened by what you write about?
Chad Hayes: We’re not afraid when we write and produce movies about the supernatural. But our research frightens us!
Carey Hayes: Right! It is frightening because some of this is supposed to be true, or based on events that are true.
Diana Walsh Pasulka: I wondered about that. Part of the appeal of your movies, and other movies like it such as The Exorcist, is that they play on the ambiguity of fiction and non-fiction, or the realism of your subject. The Blair Witch Project (1999) is a great example of the play on realism. The movie was presented as recovered footage of an actual university student project. I was in Berkeley, California for the pre-release of that movie, and I couldn’t get tickets for three days because the lines outside of the theaters were so long. When I finally got to see the movie members of the audience were wondering, is this real? Of course, we knew that it wasn’t, but we were also intrigued that it was presented as real. That definitely contributed to its popularity. The marketing campaign for that movie was unique at the time, too, in that they emphasized the question of the potential realism of the movie.
Chad Hayes: We purposely look for stories that are based on true events. We do that for this very reason: because people can relate. They can Google the story and see that maybe it’s folklore, or maybe it’s real, but it is out there and is an experience other people have had. So that contributes, no doubt, to the scare factor.
Diana Walsh Pasulka: Do you think this also has something to do with the appeal of the Catholic aesthetic, like the use of real Catholic sacred objects — the sacramentals, the crucifix, and the robes of the priests?
Chad Hayes: Absolutely. Ed and Lorraine Warren are practicing Catholics. Ed has passed away, but Lorraine still attends a Catholic Mass almost every day. That part of The Conjuring is based on her real Catholic practice. We were in contact with Lorraine throughout the writing of the movie and we included the objects that she and Ed actually used, like the sacramentals, the blessed objects, and holy water. My Catholic friends tell me that most Catholics don’t use these objects in their daily lives, but then they aren’t exorcizing demons, are they?
In the Catholic tradition, purgatory is an afterlife destination reserved for souls who are ultimately bound for heaven. It is still a doctrine of the Catholic Church, despite confusion about its status. In 2007, Pope Benedict XVI asked Church theologians to reconsider another Catholic afterlife destination: limbo. Limbo was traditionally thought to be on the “lip of hell” or the edge of heaven (hence the name limbo, which derives from the Latin limbus, for edge). Limbo was believed to be the final destination for the souls of unbaptized babies. The unsettling implications of belief in limbo were, in part, what motivated Pope Benedict and contemporary theologians to conclude that Catholics should hope for God’s mercy for deceased unbaptized babies—that no, they probably didn’t end up in limbo. The popular press interpreted this move as the abolition of limbo, which, ironically, never was a Catholic doctrine, although certainly many influential Catholics, like Augustine and Thomas Aquinas, believed in it and wrote about it. With limbo off the table, public discussion focused on the status of purgatory.
Popular headlines reflected confusion: would purgatory be next? Unlike limbo, purgatory is a doctrine of the Church, yet its representations have undergone significant modifications. Historically, the diversity of conceptions of purgatory boggles the mind. An entrance to purgatory was once thought to reside in Ireland on a rocky island; it was also considered to be a punitive “neighborhood” to hell; in the 1860s a cleric in France wrote that purgatory was in the middle of the earth; and more commonly after the nineteenth century, it is conceived of as a purifying “state” or condition of a soul, and not as a place at all. The common thread running through each of these descriptions is that they all derive from Catholic culture, although each was advocated in different eras and within unique contexts.
Today, one is more likely to find representations of purgatory and limbo in virtual reality and popular culture than in the local Catholic Church. The creators of video games and online role-playing environments, in particular, incorporate stereotypical images that reinforce the punitive versions of these post-death destinations usually associated with the late medieval era. The somber, award-winning video game LIMBO features a storyline closer to the “edge of hell” version of limbo than to its representation as the edge of heaven. Released in July 2010 by the Danish game developer Playdead, the game follows a young boy in search of his sister. LIMBO’s environments are entirely black, white, and shades of gray, featuring fear factors like giant shadowy spiders, eerie, lonesome forests, and cold industrial landscapes. The game’s creators state that they intentionally kept the storyline minimal, with no inherent meaning, so that players can speculate for themselves about its ultimate meaning.
Purgatory is the main theme of an anticipated 3D role-playing game called Graywalkers: Purgatory. The game environment is a post-apocalyptic world where the afterlife merges with human lives. Demons and angels war with each other over the fate of humanity. Thirty-six heroes called Graywalkers emerge to assist the angels. Creator Russell Tomas of Dreamlords Digital stated that Purgatory is a game of action and consequence, where players’ actions directly impact the outcome of the game. Characters like Father Rueben wear traditional Catholic vestments with the additional innovation of weapons and religiously themed tattoos.
Purgatory also figures in the popular television show Sleepy Hollow, which premiered in 2013 on the Fox network. Protagonist Katrina Crane is relegated to purgatory, which is imagined as an eerie waiting area for souls who are destined for either heaven or hell. This is obviously an alteration of the doctrinal version of purgatory—imagined as a place for souls destined for heaven—and it has spawned online conversations focused on whether or not the version of purgatory represented in the show is actually correct. It is not, of course, but in this respect it conforms to other, much older versions of purgatory that were ultimately considered to be erroneous, such as those that placed it in the middle of the earth, or on a rocky island in Ireland.
One of the more interesting recent developments in film studies is the recognition that what has seemed to be separate histories — documentary filmmaking and avant-garde filmmaking — are, once again, converging. I say “once again” because the interplay between documentary and avant-garde film has long been more significant than seems generally understood.
An intersection of an avant-garde artistic practice and a documentary impulse helped to instigate the dawn of cinema itself. When Eadweard Muybridge and Etienne-Jules Marey were discovering and exploring the possibilities of photographic motion study, they were the photographic avant-garde of that moment. And their subject was the documentation of the motion of animals, birds, and human beings, presumably so that we could know, more fully, the truth about this motion. And at the moment when W. K. L. Dickson perfected the Kinetograph and Kinetoscope and the Lumière Brothers perfected the Cinématographe and the projected motion picture, they in turn became the photographic avant-garde; and their primary fascination, too, was the documentation of motion, specifically human activity, first, in the world around them and soon, in the case of the Lumières, across the globe.
Flaherty’s Nanook (1922) was both a breakthrough documentary and an avant-garde experiment in collaborative filmmaking; and the City Symphonies that emerged in the 1920s (e.g., Berlin: Symphony of a Big City, 1927, and The Man with a Movie Camera, 1929) were at once documentary interpretations of reality and avant-garde experiments.
During the 1940s, the most important development for independent cinema in the United States was the emergence of a full-fledged film society movement. The leading contributor was Cinema 16, founded by Amos and Marcia Vogel in New York City in 1947. At its height, Cinema 16 had 7,000 members, and filled a 1,500-seat auditorium twice a night for monthly screenings. Cinema 16’s programming was an inventive mixture of documentary and avant-garde film.
The development of lightweight cameras and tape recorders, more flexible microphones, and faster film stocks during the late 1950s created additional options that, in one sense, drove documentary and avant-garde filmmaking apart but, in another, created a different kind of intersection between them. Sync-sound shooting expanded the options available to filmmakers committed to documentary, instigating forms of cinematic entertainment that functioned as critiques of Hollywood filmmaking and early television. Drew Associates, D. A. Pennebaker, Frederick Wiseman, and the Maysles Brothers fashioned engaging melodrama out of real life in Crisis: Behind a Presidential Commitment (1963), Don’t Look Back (1967), Hospital (1970), and Salesman (1969).
During the same decade, avant-garde filmmakers were producing very different forms of documentary, often by abjuring sound altogether. Stan Brakhage was committed to the idea of cinema as a visual art, and created remarkable—silent—confrontations of visual taboo such as Window Water Baby Moving (1959) and The Act of Seeing with One’s Own Eyes (1972)—now recognized as canonical documentaries. These films could hardly have been more different from the cinema verite films, but we can now see that Brakhage shared the mission of the cinema verite documentarians: the cinematic confrontation of convention-bound commercial media.
In 1955, Frances Flaherty, Robert Flaherty’s widow, established a symposium to honor her husband’s filmmaking oeuvre and to promote his commitment to filmmaking “without preconceptions.” In recent decades “the Flaherty,” as the symposium has come to be called, has attracted dozens of filmmakers, programmers, teachers, students, and other cine-aficionados for week-long immersions in programs of screenings and discussions. Recent Flaherty seminars have often been driven by an implicit debate about the proper balance between documentary and avant-garde film at the seminar.
Since the 1940s, avant-garde filmmakers have found ways of exploring the personal, first by psycho-dramatizing their inner disturbances (Maya Deren’s Meshes of the Afternoon and Kenneth Anger’s Fireworks are landmark instances), and later by filming the particulars of their personal lives. Brakhage documented dimensions of his personal life in many films, as did Carolee Schneemann, in Fuses (1967), and Jonas Mekas, in Walden (1969) and Lost Lost Lost (1976). And during the 1980s, avant-garde filmmakers Su Friedrich (in The Ties that Bind, 1984; and Sink or Swim, 1990) and Alan Berliner (in Intimate Stranger, 1991; and Nobody’s Business, 1996), used experimental techniques learned from other avant-garde filmmakers to directly engage their family histories.
What has come to be called “personal documentary” (basically, the use of sync-sound to explore personal issues) was instigated in the early 1970s by Ed Pincus’s Diaries (filmed from 1971-1976; completed in 1981), Miriam Weinstein’s Living with Peter (1973), Amalie Rothschild’s Nana, Mom and Me (1974), and Alfred Guzzetti’s Family Portrait Sittings (1975). By the 1980s, several of Pincus’s students at MIT were contributing to this approach, among them Ross McElwee, whose films, including Sherman’s March (1986), Time Indefinite (1993), and Photographic Memory (2011), form an on-going personal saga.
Globalization and the standardization of so many dimensions of modern life, along with threats to the environment, have created a desire on the part of many filmmakers to pay a deeper attention to the particulars of Place. Since the early 1970s, contemplations of Place have been produced by avant-garde filmmakers Larry Gottheim (Fog Line, 1970; Horizons, 1973), Nathaniel Dorsky (Hours for Jerome, 1982), James Benning (13 Lakes, 2004), Peter Hutton (Landscape (for Manon), 1987; At Sea, 2007), Sharon Lockhart (Double Tide, 2009) and many others. A fascination with Place, or more precisely, people-in-place, also characterizes the documentaries coming out of Harvard’s Sensory Ethnography Lab (SEL), including Ilisa Barbash and Lucien Castaing-Taylor’s Sweetgrass (2009), Castaing-Taylor and Véréna Paravel’s Leviathan (2013), and Stephanie Spray and Pacho Velez’s Manakamana (2014). Indeed, the films of Hutton, Benning, and Lockhart, in particular, have been shown regularly at the SEL.
The interviewees in Avant-Doc reveal a wide range of ways in which their own work and the work of colleagues function creatively within the liminal zone between documentary and avant-garde and the ways in which the intersections between these histories have played into their work.
Headline image credit: Camera. Public domain via Pixabay.
The riveting film The Artist and the Model (L’Artiste et son Modèle), from Spain’s leading director Fernando Trueba, focuses on a series of “one seconds” in the life of French sculptor Marc Cross.
The director transfers himself into his protagonist, played brilliantly by Jean Rochefort, to explore what serves as inspiration for an artist. “An idea,” says the sculptor as he shares with his young model a sketch by Rembrandt of a child’s first walking steps. It is “the tenderness of the sketch,” the “one second of an idea,” that Marc Cross searches for to break through the creative block of his old age.
And it is the sculptor’s wife, played by the beautiful Claudia Cardinale, who will find this “idea” for him. She will save him, help him create.
In one second, the “good wife” sees a drifting girl in their town, sleeping on the ground at a doorstep. She knows nothing about this vagabond who has found her way to their small French village in the Pyrenees, on the border with Spain. The only thing the wife knows is that this homeless, hungry girl, wrapped in a bulky woolen coat, has a face and body that her husband would love to sculpt. This street urchin could become his inspiration. Claudia Cardinale brings the girl home, shelters and feeds her, and teaches Mercè (Aïda Folch) how to pose.
After weeks of sketches and small sculptures, in one second, by chance, the sculptor sees his model in a new position, resting. It is the angle of her arm, the tilt of her head, her leaning down to reflect, that gives him “his idea.” In one second he sees before him a girl who has become a beautiful woman. Marc Cross realizes his model is thinking of the War, worrying about the people she has been transporting secretly during the night to both sides of the Pyrenees. They are “Jews, Resistance, anyone” who want to escape the German-occupied France of 1943-44, as well as Franco’s military dictatorship in Spain.
In that one second, the sculptor feels her sensitivity, her attempts to do what is right. He sees her in a different light and feels her soul. She has become more than a body or model. He feels in one second that she is Beauty, Art. It is what the artist has been searching for. With tenderness and love, he sculpts his final masterpiece.
As his work comes to an end, so does the War. The girl leaves to model for another, perhaps Matisse in Nice, as she bikes to the Riviera with a letter of introduction. At the same time, the sculptor’s wife leaves him for a few days to care for her sick sister. It is not a coincidence that this is his moment, his one second, to perform the most courageous act of all. And he does, with the beautiful finished sculpture of the woman in his garden, surrounded by perfect light and chirping birds, giving him peace.
The Artist and the Model speaks to an age when all men and women search for one second of Hope.
Seinfeld famously added a ton of terms to English, such as low talker, high talker, spongeworthy, and unshushables. It also made obscure terms into household words. Shrinkage and yada yada existed before Seinfeld, but it’s doubtful you learned them anywhere else.
Another successful Seinfeld term has gone under the radar: Jerk Store. The term was coined in “The Comeback,” when George is unselfconsciously stuffing his face with shrimp during a meeting. A co-worker sees George’s gluttony and says, “Hey, George, the ocean called. They’re running out of shrimp.” George is speechless, but later he crafts a comeback: “Oh yeah? Well, the Jerk Store called, and they’re running out of you.” The episode shows George going to absurd lengths to find a way to use his comeback, as well as his friends’ unwanted workshopping of the joke.
In a way, that workshopping has never ended—at least on Twitter, which is likely the largest collection of jokes, good and bad, by professionals and amateurs, ever created. Many of those jokes involve formulas, and the Jerk Store has become a popular one. On Twitter, every day is the Summer of George.
Most variations start with “The Jerk Store called,” which is as trusty a joke starter as “Relationship status:” and “When life hands you lemons.” From there, the joke can go just about anywhere. Comic Warren Holstein makes a food joke out of the formula: “The Jerk Store called but I couldn’t understand their thick Jamaican accents.” Matt Koff reveals what would likely happen to a real-life Jerk Store: “The Jerk Store called. It’s closing because it couldn’t compete with Amazon. :(” Some use the formula to comment on politics: “The Jerk Store called; they’re no longer hiring because of fear of Obamacare mandates.” I particularly like this joke, which finds the funny in sadness: “The jerk store called. We didn’t chat for long but it was good to hear their voice. It was good to hear anyone’s voice. I’m so alone.”
Other tweeters abandon the formula when making Jerk Store jokes, like Laura Palmer: “I’m applying at the Jerk Store and I need references.” This holiday tweet sounds like a perfect storm of jerkdom: “Looking forward to the Black Friday deals at the Jerk Store.” Food trends also get spoofed: “when will the jerk store start getting organic jerks. tired of getting these jerks full of gmos.” Here’s a particularly clever joke, playing on an annoying Frankenstein-related correction: “Actually, the jerk store’s monster called.”
This term/joke formula isn’t going anywhere for at least a few reasons. Seinfeld is still omnipresent in reruns, and I reckon the entire series is imprinted on the collective unconscious. Plus, the world is full of jerks. The following are some recent epistles from the Jerk Store to help you get through the polar jerk-tex. Jerk Store might never make the OED, but it’s one of the most successful joke franchises in the world.
The jerk store called, you left your credit card at the register. They are open until 8 if you want to pick it up today.
Well known is music’s power to stir emotions; less well known is that the stirring of specific emotions can result from the use of very simple yet still characteristic music. Consider the music that accompanies this sweet, sorrowful conclusion of pop culture’s latest cinematic saga.
When the on-set footage begins, so does some soft music that is rather uncomplicated because, in part, it simply alternates between two chords which last about four seconds each. These two chords are shown on the keyboard below. In classical as well as pop music, these two chords typically do not alternate with one another like this. Although the music for this featurette eventually makes room for other chords, the musical message of the more distinctive opening has clearly been sent, and it apparently worked on this blogger, who admits to shedding a few tears and recommends the viewer have a tissue nearby.
This simple progression has been used to accompany loss-induced sadness in numerous mainstream (mostly Hollywood) cinematic scenes for nearly 30 years. The association is not confined to movies but inhabits a larger media universe. For example, while the pop song “Comeback Story” by Kings of Leon, which opens this movie’s trailer, helps to convey the genre of the advertised product, the same two-chord progression—let’s call it the “loss gesture”—highlights the establishing narrative: the death of the family patriarch has brought a mourning family together (to comedic and sentimental effect).
Loss gestures can play upon one’s heartstrings less discriminately; they can elicit both tears of joy as well as tears of sadness. Climaxes in Dreamer and Invincible, both underdog-comes-from-behind movies, are punctuated with loss gestures. As demonstrated at 2:06 in the following video, someone employed by the Republican Party appears to be keenly aware of this simple progression’s powerful capacity for moving a viewer (and potential voter).
Within the universe of contemporary media, the loss gesture has been used in radio as well. The interlude music that plays before or after a story on National Public Radio often has some relation to the content of the story. A week after the Sandy Hook school shootings, NPR aired a story by Kirk Siegler entitled “Newtown Copes With Grief, Searches For Answers.” Immediately after the story’s poignant but hopeful ending, the opening of Dustin O’Halloran’s “Opus 14” faded in, musically encapsulating the emotions of the moment.
How the loss gesture works its magic on listeners is a Gordian knot. However, it is undeniable that producers from several different corners of the media world know that the loss gesture works.
In order to spread some festive cheer, Blackstone’s Policing has compiled a watchlist of some of the best criminal Christmas films. From a child inadvertently left home alone to a cop with a vested interest, and from a vigilante superhero to a degenerate pair of blaggers, it seems that (in Hollywood at least) there’s something about this time of year that calls for a special kind of policing. So let’s take a look at some of Tinseltown’s most arresting Christmas films:
1. Die Hard, directed by John McTiernan, 1988
Considered by many to be one of the greatest action/Christmas films of all time, Die Hard remains the definitive cinematic alternative to the usual saccharine cookie-cutter Christmas film offering. This is the infinitely watchable story of officer John McClane’s Christmas from hell. When a trip to win back his estranged wife goes awry and he unwittingly finds himself amidst an international terrorist plot, he must find a way to save the day armed only with a few guns, a walkie talkie, and a bloodied vest. With firefights and exploding fairy lights abundant, this Bruce Willis tour de force is the undisputed paragon of policing in Christmas films.
2. Home Alone, directed by Chris Columbus, 1990
In a parental blunder tantamount to criminal neglect, the McCallister family accidentally leave their youngest member, Kevin (played by precocious child star Macaulay Culkin), ‘home alone’ to fend for himself over Christmas as two omnishambolic burglars target the McCallister household. As the Chicago Police Department work through the confusion of the situation, Kevin navigates his way through a far from silent night. Cue copious booby traps and slapstick as the imagination of an eight-year-old boy ingeniously holds the line in this family-fun classic.
3. Batman Returns, directed by Tim Burton, 1992
Gotham is a city perennially infested with arch-criminals whose seemingly endless financial resources demand that they be tackled head-on by a force who can match them pound-for-pound (or dollar-for-dollar, if you prefer). Enter Gotham’s very own Christmas miracle: billionaire Bruce Wayne and his vigilante alter ego Batman (Michael Keaton), a singular justice-hungry scourge of the criminal underworld. As the Penguin (Danny DeVito) hatches a nefarious plot which threatens the city, Batman’s wholly good will must prove resilient. Though director Tim Burton went on to make The Nightmare Before Christmas the following year, Batman Returns is itself every bit a Christmas film.
4. Lethal Weapon, directed by Richard Donner, 1987
With a blizzard of bullets and completely bereft of snow, LA-based Lethal Weapon lacks nearly all the usual trimmings of a Christmas film. Seasoned detective Roger Murtaugh (Danny Glover) is close to retirement when he’s paired with the young (and morose) Martin Riggs (Mel Gibson) to tackle a drug smuggling gang. As their stormy investigation progresses, Murtaugh and Riggs’ unlikely union flourishes into a double-act worthy of Donner and Blitzen (and, judging by the pair’s return in three subsequent installments of the series, their entertaining policing partnership always leaves audiences wanting myrrh…).
5. National Lampoon’s Christmas Vacation, directed by Jeremiah Chechik, 1989
In this third installment of the Griswold family’s catastrophic holidays, Clark (Chevy Chase) navigates his way through the perils of yet another calamity, but at least this time he has his Christmas bonus to look forward to. Things take a bizarre turn for the criminal when the bonus isn’t forthcoming, resulting in a myriad of mishaps involving Christmas paraphernalia and SWAT teams. As the tagline for the film attests, ‘Yule crack up!’
6. Kiss Kiss Bang Bang, directed by Shane Black, 2005
Petty thief Harry Lockhart (Robert Downey Jr.) finds himself embroiled in a series of increasingly byzantine cases of mistaken identity as both a method actor and criminal investigator. Reality cuts through when Harry is shepherded into a murder investigation involving the sister of his childhood crush, Harmony Lane (Michelle Monaghan). Perhaps one of the less Christmassy films on this list, it still has a few seasonal signs parcelled into its murder-mystery plot.
7. Miracle on 34th Street, directed by George Seaton, 1947
Arguably the ultimate Christmas film, Miracle on 34th Street is the classic tale of the legal battle around the sanity and freedom of a man who claims to be the real Santa Claus. This original film won three Academy Awards including Best Actor in a Supporting Role for Edmund Gwenn’s portrayal of Kris Kringle (‘the real Santa Claus’). Despite being remade in 1994 and adapted into various other forms, the 1947 version remains the quintessential Christmas film which no comprehensive watchlist could be without.
8. Bad Santa, directed by Terry Zwigoff, 2003
Dastardly duo Willie (Billy Bob Thornton) and Marcus (Tony Cox) make their criminal living by posing as Santa and his Little Helper for department stores, and then opportunistically stealing as much as they can. As the security team for their latest blag hunts them down, Willie meets a boy determined that he is the real Santa and the race is on for the degenerate pair to reform their lifestyles before they are stuffed.
What would you add to this list? Tell us your favourite policing Christmas film in the comments section below or let us know directly on Twitter. Merry Christmas everyone!
Headline image credit: [365 Toy Project: 019/365] Batman: Scarlet Part 1. CC-BY-NC-SA-2.0 via Flickr.
There are plenty of operas about teenage girls—love-sick, obsessed, hysterical teenage girls who dance, scheme, and murder in a frenzy of musical passion. Disney Princess films are also about teenage girls—lonely, skinny, logical teenage girls who follow their hearts because the plot gives them no other option. The music Disney Princesses sing can be divided into three periods that correspond to distinct animation styles.
Onto these three periods we can map the themes of the princess anthems, the single song for which each princess is remembered.
The relative lack of variance in these songs tells us something important—while animation styles have changed, the aspirations of girlhood have not been radically altered.
But then there’s Frozen.
Elsa’s anthem, “Let It Go,” combines aspects from all three periods: Frozen is a computer animated film, Idina Menzel is a Tony Award-winning singer, and, most importantly, the song and the Snow Queen who sings it have an operatic legacy rooted in representations of madness and infirmity. “Let It Go” is a tribute to passion, spontaneity, and instinct—elements celebrated by both the opera (which nevertheless punishes the bearer severely) and the Disney film (which channels them into heterosexual romance). Frozen does neither.
Unlike the songs of longing for belonging that came before it, “Let It Go” insists that being like everyone else is bound to fail. It’s a coming out song often read as a queer anthem and easily interpreted to account for a number of stigmatized identities. As such, Elsa is a screen onto which may be projected our fantasies and fears. While her transformation into a shapely princess swaying in a sparkly gown with wispy blond hair may be familiar, the scene where this takes place, the way she looks back at the viewer, and the music she sings define Elsa as more ambiguous than she appears. Is Elsa sick, is she mentally ill, is she asexual, is she gay? What is Elsa and why does she resonate so strongly with young girls?
Elsa is like the women of 19th-century opera in her exclusion from the world the other characters comfortably occupy. Marred by magical ability, Elsa must isolate herself if she does not want to scar those she loves—or so the dialogue tells us. The imagery suggests an illness; Elsa behaves as if she were contagious. Indeed, she is consumptive like Mimi, but she is also betrayed like Tosca and scandalous like The Queen of the Night. As Catherine Clément says of women in the opera: “they suffer, they cry, they die…Glowing with tears, their decolletés cut to the heart, they expose themselves to the gaze of those who come to take pleasure in their pretend agonies.” Operatic women express their hysteria skillfully. At the pinnacle of her agony, Elsa builds a magnificent castle while singing her most beautiful song, a song that has itself become infectious. In its final moments, she exposes herself, only to slam the door on viewers who would like nothing more than to gawk at the excess.
Most princess anthems end satisfactorily on the tonic chord, their musical conclusions coinciding with lyrical expectations that assure the story will fulfill the princesses’ desires. For example, when Ariel wishes she could be “part of that world”, she sings a high F, which a trombone echoes an octave lower, reinforcing the song’s key and suggesting the narrative’s interest in giving Ariel what she wants. In “Someday My Prince Will Come,” Snow White’s final line repeats the home pitch no fewer than six times as if to insist the screenwriters pay attention. “Let It Go,” on the other hand, ends unresolved. The score establishes a sharp distinction between the assertive melodic phrase sung by Elsa, “The cold never bothered me anyway,” and the harmonic manifestation of the accompaniment. Elsa turns her back to the camera after singing the downward moving line, which ends rather abruptly on the tonic, while the chord that ought to have shifted with Elsa’s exit lingers in the icy upper register of the strings, as if refusing to acknowledge the message. Is the music condemning the singer’s difference by suggesting that her immunity to the elements is indicative of a physical or psychic malady?
Unlike Donizetti’s operatic heroine, Lucia, whose infamous “mad scene” prompts the chorus to weep for her, Elsa stares into the camera, eyebrow raised, as if daring the spectators to pity her. This is the look of a woman who refuses to capitulate to patriarchy. And with our endless covers and video parodies of “Let It Go” we have rallied to her defense. Rather than constrain her by Frozen’s story, “Let It Go” lets Elsa escape again into possibility. The new princess message, “Leave Me Alone,” is echoed by little girls everywhere.
Peter Conrad says of opera, “It is the song of our irrationality, of the instinctual savagery which our jobs and routines and our nonsinging voices belie, or the music our bodies make. It is an art devoted to love and death (and especially to the cryptic alliance between them); to the definition and the interchangeability of the sexes; to madness and devilment…” Such is also a fair description of Frozen, for what are its final moments than an act of love to stave off death, what is Elsa but a mad and devilish woman who revels in the impermanence of sexuality, what is a fairytale but a story full of savage beasts that prey on our emotions. “Let It Go” releases an archetype from the hollows of diva history into the digital world of children’s animation.
Headline Image: Disney’s Frozen. DVD screenshot via Jennifer Fleeger.
Director Robert Altman made more than thirty feature films and dozens of television episodes over the course of his career. The Altman retrospective currently showing at MoMA is a treasure trove for rediscovering Altman’s best known films (M*A*S*H, Nashville, Gosford Park) as well as introducing unreleased shorts and his little-known early work as a writer.
Every Altman fan has her or his own list of favorite films. For me, Altman’s use of music is so innovative and original that a few key films stand out from the crowd on the strength of their soundtracks. Here are my top five:
1. Gosford Park (2001): The English heritage film meets an Agatha Christie murder mystery, combining an all-star ensemble cast and gorgeous location shooting with a tribute to Jean Renoir’s La Règle du Jeu (1939). Jeremy Northam plays the real-life British film star and composer Ivor Novello. Watch for the integration of Northam/Novello’s live performances of period songs with the central murder scene, in which the songs’ lyrics explain (in hindsight) who really committed the murder, and why.
2. Nashville (1975): Altman’s brilliant critique of American society in the aftermath of Vietnam and Watergate. Nashville stands as an excellent example of “Altmanesque” filmmaking, in which several separate story strands merge in the climactic final scene. Many, although not all, of the songs were provided by the cast, which includes Henry Gibson as pompous country music star Haven Hamilton, and the Oscar-nominated Lily Tomlin as the mother of two deaf children drawn into a relationship with sleazy rock star Tom Frank (Keith Carradine, whose song “I’m Easy” won the film’s sole Academy Award).
3. M*A*S*H (1970): OK, I will admit it. It took me a long, long time to appreciate M*A*S*H. Growing up in 1970s Toronto, I couldn’t accept Donald Sutherland and Elliott Gould as Hawkeye Pierce and Trapper John — familiar characters from the weekly CBS TV series (but played by different actors). Looking back, I realize that M*A*S*H really did break all the rules of filmmaking in 1970, not least because it appealed to the anti-Vietnam generation. Like so many later Altman films, what appears to be a sloppy, improvised, slap-dash film is in fact sutured together through the brilliant, carefully edited use of Japanese-language jazz standards blared through the disembodied voice of the base’s loudspeaker.
4. McCabe & Mrs. Miller (1971): Filmed outside of Vancouver, Altman’s reinvention of the Western genre stars Warren Beatty and Julie Christie. The film uses several of Leonard Cohen’s songs from his 1967 album Songs of Leonard Cohen, allowing the songs to speak for often inarticulate characters. Watch for how the opening sequence, showing Beatty/McCabe riding into town, is closely choreographed to “The Stranger Song,” as is Christie/Miller’s wordless reverie to “Winter Lady” later in the film — all captured in the breathtaking cinematography of Vilmos Zsigmond, who also worked with Altman on Images (1972) and The Long Goodbye (1973).
5. Aria (segment: “Les Boréades”) (1987): Made during Altman’s “exile” from Hollywood in the 1980s, this film combines short vignettes set to opera excerpts by veteran directors including Derek Jarman, Jean-Luc Godard, and Julien Temple. Altman’s contribution employs the music of 18th-century French composer Jean-Philippe Rameau. The sequence was a revelation to me personally, since it contains the only feature film documentation of Altman’s significant contributions to the world of opera. One of the first film directors to work on the opera stage, Altman directed a revolutionary production of Stravinsky’s The Rake’s Progress at the University of Michigan in the early 1980s; the work was restaged in France and served as the basis for his Aria segment. Later, Altman collaborated with Pulitzer Prize-winning composer William Bolcom and librettist Arnold Weinstein to create new operas (McTeague, A Wedding) for the Lyric Opera of Chicago.
Rounding out the top ten would be Short Cuts (1993), Kansas City (1996), The Long Goodbye (1973), California Split (1974), and Popeye (1980) — Robin Williams’s first film, and definitely an off-beat but entertaining musical.