Seth Rogen isn’t the only actor to have a film about North Korea nixed: a comedy pitched by Bob Hope met a similar fate in 1954.
If US government sources are correct, North Korea cowed Sony Pictures into withholding a bawdy comedy about assassinating supreme leader Kim Jong-un. Sony’s corporate computers were hacked and a trove of tawdry Hollywood secrets was disgorged. The technical achievement lent credibility to the hackers’ threats of mass murder in theaters if Rogen’s The Interview was released. (Editors’ note: The Interview is currently in limited release and no attacks have been reported.) Governments can be expected to decry movies about murdering sitting heads of state, but the bombast of Pyongyang’s apparent reaction lacks proportionality and any appreciation of blowback from global audiences, which are sure to make Kim Jong-un a universal punch line. This cluelessness no doubt derives from the cultish isolation of Pyongyang, but The Interview is not the first comedy about Korea to discomfit officials.
In 1954, the military-friendly jokester Bob Hope dropped plans for a screwball comedy set on the Korean peninsula after the US Army refused to support it. The similarities to and differences from the current episode tell us something about government influence over cinema, a vital conduit to the mass mind.
Only months after the end of the Korean War (1950-1953), Hope pitched a film to the Army’s Motion Picture office for approval. The military routinely lent expensive war equipment and technical advice to movie studios in return for a veto over scripts. Hope’s timing was awful. The “sour little war” was so unpopular it ended the political career of President Harry Truman and prompted years of soul searching into the American character and its failure to vanquish the enemy. The Army was touchy about cinematic portrayals of anything Korean, so much so that it reversed itself on a Ronald Reagan movie it had previously supported.
In March 1954, the same month Hope’s proposal was under consideration, the Army yanked approval of MGM’s P.O.W. Military bands had to cancel plans to play at premieres and all Army commands were ordered to cease publicizing the film. This was curious since the Army Motion Picture office had assisted P.O.W. throughout production, providing a former prisoner as consultant and requesting and receiving four pages of script revisions. The problem? Image management. The hastily made movie was coming out at the same time the Army was beginning prosecutions of former prisoners accused of collaborating with their captors. The Chinese ran the prison camps in North Korea and persuaded some inmates to assist them with shortwave radio broadcasts and other propaganda tasks. Collaboration caused a big stir in the United States, especially after 21 American POWs defected to China after the war. Courts-martial of repatriated prisoners were part of a Cold War panic that the nation’s youth had gone soft, unable to resist Chinese indoctrination.
The difficulty with the Reagan film P.O.W. was that it was relentlessly brutal, even by today’s standards. Prisoners were subjected to awful tortures that were sure to arouse audience sympathy just when courts-martial were underway. Movies too heavy on torture or brainwashing would seem to excuse the behavior of soldiers who were now facing years at hard labor. Hence the Army bands repacking their instruments.
The delicacy of national morale helps explain the Army’s discomfort with the Bob Hope proposal. Donald E. Baruch, head of the Motion Pictures office, wrote Hope’s agent that the Army valued its previous work with the comedian:
However, in this instance, we believe no military purpose would be served in the production of this story. When Mr. Hope called while recently here, I did not react negatively because all he mentioned was that the story was about a U.S.O. tour to Korea and the repatriation of a prisoner. The subject is considered of too great importance and seriousness especially at this time to be treated in the farcical manner indicated by the outline. Other basic story objections are ‘stealing’ of the helicopter, Jane, Jimmy and Bob in North Korea, and the rescuing of Lloyd.
A serious prisoner of war movie that did get Army approval was MGM’s The Rack (1956) with Paul Newman. This courtroom-bound film was a psychological exploration of an officer’s conscience and why he failed to resist collaboration. However, The Rack was broody and talky and made no impression at the box office. The same occurred with Time Limit (United Artists, 1957), another courtroom film approved by the Army that failed to move audiences. To get a Pentagon subsidy and imprimatur, POW films set in Korea could not follow the tried and true formula of action and escape; collaboration was too imposing an issue. The small sub-genre of Korea POW films was steered into amnesia.
US Army influence on Korea POW films was gentle. Studios wanted subsidies and association with the military brand, so they were usually cooperative. In itself, Rogen’s The Interview has little in common with the patriotic cinema of the 1950s, but the apparent reaction of North Korea provides an interesting contrast. Some pundits have been quick to accuse Sony of letting Pyongyang become a censor by holding the film industry hostage. With this one film, they might have a point. But Pyongyang’s method of influencing movie content is really one of weakness. The Pentagon, today as in the 1950s, does not have to threaten Hollywood; it simply waits for producers to come to it for set pieces and shrouds of official martial aura. In contrast, Kim Jong-un’s royal court is so isolated and unable to shape the narrative that it resorted to the threats of a desperate loner. If North Korea’s apparent intervention in Hollywood still has an effect two years from now, it will only serve to focus more attention on the regime worldwide. Look for more hidden-camera documentaries. Any other lasting influence is unlikely, since Kim Jong-un can’t open a Hollywood office or even do lunch.
Featured image: Bob Hope (center) and other guests salute while “The Star Spangled Banner” is played during a ceremony to award Hope the Distinguished Public Service Award. Jan. 31, 1971. Public domain via Wikimedia Commons.
One of the best-known musicals of the 20th century is Annie, which tells the story of a plucky orphan girl who warms the hearts of all around her, and eventually finds a loving family of her own. The tale will be carried into the 21st century when the newest film adaptation (produced by Jay-Z and Will Smith; perhaps you’ve heard of them) is released on 19 December of this year. In honor of the long legacy of this famous story, here we take a look at the changing language of Annie.
Little orphant Allie
Speaking of long legacies, the 1977 musical Annie was not the first time the world had been introduced to the inspirational young character. The musical was based on an American comic strip entitled “Little Orphan Annie”. Well-known in its own time and called the most famous comic of 1937 by Fortune magazine, “Little Orphan Annie” ran for a whopping 86 years and even led to an equally famous radio show (religiously followed by Ralphie in the 1983 film A Christmas Story). However, the story of Annie can be traced further back to a girl named Mary Alice Smith (nicknamed “Allie”), who inspired Indiana poet James Whitcomb Riley to pen the poem “The Elf Child” in 1885. He would eventually rename it “Little Orphant Allie”.
“Orphant”? Not a typo—just a US regional variant spelling that has since fallen largely out of use, as have other variants orphaunt, orfant, and even orphing (among many others). However, a literal typo or typographical error did come into play with Riley’s poem when the name “Annie” was accidentally typeset instead of “Allie”. When the poem gained popularity, Riley decided to stick with the new name.
The original hard knocks
People looking for the familiar plot or song lyrics in the original poem will be disappointed: there is almost no resemblance between the Annie of the poem and Annie as she is popularly known today. The poem, like several of Riley’s others, is written in Hoosier dialect—the midland dialect of American English, or more specifically that of Indiana. In the poem, “little orphant Annie” tells stories to other orphaned children in which “gobble-uns” (goblins) steal poorly behaved children away (hence the original title “The Elf Child”). At the end of the didactic poem, Annie says:
You better mind yer parunts, an’ yer teachurs fond an’ dear,
An’ churish them ‘at loves you, an’ dry the orphant’s tear,
An’ he’p the pore an’ needy ones ‘at clusters all about,
Er the Gobble-uns ‘ll git you
Ef you
Don’t
Watch
Out!
However, like the Annie of the later comic strip, musical, and film adaptations, “little orphant Annie” is happy to take the “pore an’ needy” under her wing and to teach them what she knows.
Hoovervilles and Prohibition
Though the musical Annie opened on Broadway in 1977 and its film adaptation was released in 1982, the plot takes place in the 1930s. Apart from the clothing styles and the Hoovervilles, the song lyrics themselves—with many words unfamiliar to the modern English speaker—are intended to transport audiences to the early 20th century.
Yank the whiskers from her chin!
Jab her with a safety-pin!
Make her drink a Mickey Finn!
Dilly, an alteration of the first syllable of delightful or delicious, is a North American word for an excellent example of something.
You spend your evenings in the shanties,
Imbibing quarts of bathtub gin.
And here you’re dancing in your scanties.
To a modern-day reader, it may not be clear how much Daddy Warbucks is insulting Miss Hannigan in the song “Sign” from the 1982 film. When he accuses her of spending time in the shanties, he is probably referring to shantytowns: run-down areas consisting of large numbers of shanties, or small, crudely built shacks. These shantytowns (or Hoovervilles, as they were sometimes called, after the US President Herbert Hoover) were an all-too-familiar sight during the Great Depression, when as much as 25% of the American workforce was unemployed.
As for bathtub gin, readers familiar with the Prohibition era in the United States may know what it is—a concoction of spirits intended to simulate the taste of gin, representative of a time in which alcoholic drinks (rendered illegal by the 18th Amendment to the US Constitution in 1920) were often surreptitiously made in homes (and sometimes, presumably, in bathtubs). It goes without saying that, generally, the quality of “bathtub gin” was probably not very high.
Daddy Warbucks gets in one final jab by accusing Miss Hannigan of dancing around in her scanties, or brief underwear. (The word comes from scant + -y; scant is from the Old Norse word for “short”.) Interestingly, a modern word for a similar type of women’s underwear—panties—could be substituted here without sacrificing rhyme.
On the topic of modernizing lyrics, the upcoming movie Annie will introduce changes of its own; in the song “Hard-Knock Life”, what originally was
No one cares for you a smidge
When you’re in an orphanage
has been updated to
No one cares for you a bit
When you’re a foster kid
Here, bit may have replaced smidge as a better near rhyme, or it may have been considered a safer bet in terms of plausible vocabulary for a 10-year-old in 2014 (it doesn’t seem a stretch to say that smidge is probably not in the parlance of today’s youth). As for the replacement of orphanage with “foster kid”, given that the new movie doesn’t involve an orphanage—instead, Annie is in a foster home—this change is practical.
However, it can also be noted that fostering has gradually taken the place of institutional care and that sociocultural developments have shaped the concept of child welfare as we understand it today. Partly for these reasons, it may not be surprising that use of the term “foster child” has been increasing somewhat steadily over the last two centuries, while use of the word orphan (though still more common overall) has dwindled over the same period.
Though Annie has been around long enough for “orphant” to eventually turn into “foster kid”, the fact remains that American audiences are perennial lovers of the rags-to-riches theme. For this reason, it should come as no surprise that the story of Annie is just as well-known today as when Ralphie was racing to the radio—or that virtually everyone you know can sing at least a few bars of “Tomorrow”. It probably goes without saying that we’ll see many more iterations of Annie in the century to come.
Moses and Pharaoh are returning to the big screen in Ridley Scott’s seasonal blockbuster, Exodus: Gods and Kings. With a $200m budget and Christian Bale in the leading role, the British director will hope to replicate the success of Gladiator (where he resurrected the sword and sandals genre) and surpass the shock and awe of Cecil B. DeMille’s The Ten Commandments. Even before its release, the movie sparked controversy. The casting of white actors as Egyptians provoked charges of racial discrimination; describing Moses as ‘barbaric’ and ‘schizophrenic’ did not endear the leading actor to traditional believers; and casting a truculent young boy as the voice of Yahweh was bound to raise eyebrows. In other respects, the storyline remains traditional. Indeed, the film follows a long tradition of interpretation by presenting the Exodus as a political saga of slavery and liberation: 600,000 slaves are delivered as an oppressive empire is overwhelmed by divine power.
This political reading of the biblical epic will be familiar to anyone who has studied its remarkable reception history. In Christian preaching, liturgy and hymnology, Exodus has been read as spiritual typology — Israel points forward to the Church, Pharaoh’s Egypt to enslavement by Satan, Moses to the Messiah, the Red Sea to salvation, the Wilderness Wanderings to earthly pilgrimage, the Promised Land to heavenly rest.
Yet there has been an almost equally potent tradition of reading Exodus politically. It originated with Eusebius of Caesarea in the fourth century, who hailed the Emperor Constantine as a Mosaic deliverer of the persecuted Church. It took on new intensity when the Protestant Reformation was promoted as liberation from ‘popish bondage’. As a vulnerable minority, European Calvinists identified with the oppressed children of Israel in Egypt and then celebrated national reformations in Britain and the Netherlands as a new exodus. The title page of the Geneva Bible (1560) pictured the Israelites pinned against the Red Sea by the chariots and horsemen of Pharaoh, the moment before their deliverance. Deliverance became a keyword in Anglophone political rhetoric, a term that fused Providence and Liberation.
Over the coming centuries, this Protestant reading of Exodus would go through some surprising twists. The Reformers had sought deliverance from the Papacy, but radical Puritans condemned intolerant Protestant clergy as ‘Egyptian taskmasters’. Rhetoric that had once been trained on ecclesiastical oppression was turned against ‘political slavery’, as revolutionaries in 1649, 1688 and 1776 co-opted biblical narrative. For Oliver Cromwell, Israel’s journey from Egypt through the Wilderness towards Canaan was ‘the only parallel’ to the course of the English Revolution. For John Milton, tolerationist and republican, England’s Exodus led to ‘civil and religious liberty’, a phrase coined in Cromwellian England. The most startling development occurred during the American Revolution, when Patriots unleashed the language of slavery and deliverance against ‘the British Pharaoh’, George III. The contradiction between their libertarian rhetoric and American slaveholding galvanized the nascent anti-slavery movement on both sides of the Atlantic. Black Protestants now seized upon Exodus and the language of deliverance. ‘For the first time in history’, writes historian John Saillant, ‘slaves had a book on their side’.
African Americans inhabited the story like no other people before them. When they fled from slavery and segregation and migrated to the North, they consciously re-enacted the Exodus. In slave revolts and in the American Civil War they called on God for deliverance from Egyptian taskmasters. In the spiritual ‘Go Down Moses’, they re-imagined the United States as ‘Egyptland’, throwing into question the biblical construction of the nation as an ‘American Zion’. They sang of a deliverer who would tell old Pharaoh, ‘Let my People go’. They celebrated the abolition of the slave trade, West Indian emancipation, and Lincoln’s Emancipation Proclamation by recalling the song of Moses and Miriam at the Red Sea.
The black use of Exodus was not without its ironies. It owed more than has been recognized to the long tradition of Protestant Exodus politics, albeit reworked and subverted. African Americans took pride in the fact that Moses married an Ethiopian (Numbers 12:1), but they were embarrassed by the sanction given to slavery in the Mosaic Law, and by the Hebrews’ oppression at the hands of African Pharaohs. Yet Exodus spoke to African American experience like no other text. Like the Children of Israel, their Red Sea moment was followed by a long and bitter Wilderness experience. On the night before his assassination, Martin Luther King Jr assured his black audience that he had ‘seen the Promised Land’. Barack Obama talked of ‘the Joshua Generation’ completing the work of King’s ‘Moses Generation’, but the land of milk and honey can still seem like a distant prospect.
Heading image: Dura Europos Synagogue wall painting showing the Hebrews leaving Egypt. Adaptation by Gill/Gillerman slides collection, Yale. Public domain via Wikimedia Commons.
The Red Tent was perfect for the Lifetime channel. The network’s four-hour miniseries closely followed Anita Diamant’s 1997 novel, which gave voice—and agency—to the biblical character of Dinah. In both the novel and the miniseries, Dinah the daughter of Jacob is characterized not as a victim (as in Genesis 34) but as a strong, assertive woman raised by a band of mothers who draw power from one another and from their worship of the Divine Mother rather than the patriarchal god of Jacob. And yet, as much as she delivers strong speeches against patriarchal ways, Dinah Redux does not stray from the traditional scripts for women. Her life is shaped by romances with muscled men and by motherhood.
Dinah is tenderly loved by two men. Her first husband Shalem, who in Genesis 34 is called Shechem and is described as seizing Dinah by force, becomes in The Red Tent Dinah’s consensual spouse. Refusing to request permission to marry from her father, she claims her union with Shalem as “my life, my future, my choice.” It is the men of her family who construe her choice as defilement, using it as a pretext for slaughtering Shalem and all the men of his village. Her second husband, created for the novel, overcomes her reluctance to marry again and, like her first husband, consummates their union in slow motion on a dimly lit bed of mutual pleasure and tenderness. While criticizing patriarchal ideas in general and some men in particular (including Laban, who is depicted as a drunk, gambling, abusive tyrant), Dinah clearly loves her husbands as well as her brother Joseph.
From the beginning of her pregnancy with Shalem’s child, Dinah’s identity rests in her role as mother. When her son is claimed by Shalem’s Egyptian mother, Dinah is willing to live in a mice-infested cellar and be treated as a slave in order to remain in her son’s life. Childbearing as the essence of womanhood, indeed, runs throughout The Red Tent. Even as a child, Dinah learns from her mothers in the women’s-only space of the tent the power of menstrual blood and the ability to give birth; her later role as midwife allows her to continue to participate in this most female of activities.
In placing romance and the mother-child bond at the center of women’s lives, The Red Tent follows a very modern script. Like the heroines of romance novels, Dinah willingly surrenders to the attentions of attractive men and is passionately devoted to her son. Other modern tropes appear as well. She and her mothers attempt to protect Laban’s wife from domestic violence, treat slaves as their equals, and eventually manage their anger. While Dinah resists patriarchy as a system, she ultimately forgives the people (like her father) who embody that system. Dinah is strong and independent but still desirable to men, still a devoted mother, still kind in a self-sacrificing way.
The novel The Red Tent is so beloved by many women because it offers a relatable female biblical character, one whose loves, commitments, and challenges resonate in the modern world. Presented as the recovery of the lost voices of ancient women, it also plays well with a current climate of distrust in religious traditions and institutions. Like The Da Vinci Code, The Red Tent is fiction, but its claim that history has demeaned women’s stories rings true for many who are desperately seeking a usable past.
And yet, by making the past mirror the present, this retelling of the biblical story not only does disservice to the past but also reinscribes the very gender scripts it claims to resist.
My recent work as the editor in chief of The Oxford Encyclopedia of the Bible and Gender Studies aims to work against such anachronistic assumptions. In the case of ancient Israel, our participating scholars explored topics such as the nature of goddess worship, marriage, gender roles, and the social significance of children. They argue that the worship of female deities was not limited to women and had little bearing on the well-being of human women; that children’s importance was as much economic as affectional; that “biblical marriage” required neither female consent, mutual vow making, nor romance; and that low life expectancies not only promoted the “marriage” of females by the age of 13 but also meant that few people would have ever known their grandparents. Johanna Stiebert, author of “Social Scientific Approaches,” contextualizes The Red Tent as one strategy of feminist appropriation of the ancient world, while Susanne Scholz (“Second Wave Feminism”) and Teresa J. Hornsby (“Heterosexism/Heteronormativity”) explain the perspectives of those who see the valorization of romance and motherhood as reflective of, rather than resistant to, patriarchy. Deborah W. Rooke (“Patriarchy/Kyriarchy”) traces the history of conversations about goddesses and women in the ancient world.
These and other entries suggest just how speculative, selective, and skewed many of The Red Tent’s portrayals of the ancient world are. In Diamant’s world, four women willingly share Jacob as husband and experience little competition within women’s space. In the red tent, they cooperate with one another, sharing stories and essential oils. Such portrayals downplay not only biblical stories of tensions between women but also the modern systems that pit women against one another.
By paying attention to the ways in which gender is constructed in the diverse texts, cultures, and readers that constitute “the world of the Bible,” gender-sensitive biblical scholarship seeks to move beyond such stereotypes of women. It suggests that women—and men and those whom societies place as “other”—operate within systems and structures that must be named and, when necessary, critiqued. Though giving Dinah agency within a world that limits women’s roles to romance and motherhood might seem liberating to some readers/viewers of The Red Tent, gender studies brings into focus the socially constructed nature of these limits of women’s worth.
On the surface, the Lifetime channel’s special Women of the Bible tells a very different story from The Red Tent. The two-hour program which aired just prior to the miniseries premiere claims to read with the Bible rather than against it, suggesting that the text itself depicts strong and faithful women—no retelling necessary. Moreover, while the miniseries adaptation of Anita Diamant’s novel valorizes goddess worship and condemns the patriarchal bias of the Bible, Women of the Bible recounts the story of selected biblical women from a decidedly conservative Christian perspective.
This perspective is clearly evident in the roster of “experts” chosen to comment on the biblical narratives. Victoria Osteen, wife of evangelist Joel Osteen, and Joyce Meyer, described on her website as a “charismatic Christian author,” appear alongside a woman designated as “Bible Teacher” and several female leaders of Christian ministries. Those outside this circle include a female rabbi and a female professor at Notre Dame, though their comments are integrated with rather than contrasted with the majority of conservative Christian voices.
Conservative Christian theology is also reflected in the choice of biblical women and the aspects of their stories eliciting commentary.
Eve. The program spends little time on Eve as a character. Instead, commentators use her story to discuss “the Fall,” a distinctively Christian understanding that Genesis 3 depicts a universal human fall from grace to which Jesus later provides a remedy.
Sarah. The two episodes selected from Sarah’s story are (1) her motherhood late in life and (2) her response to Abraham’s near sacrifice of Isaac on Mt. Moriah (Genesis 22). Although the Bible does not include Sarah in this latter story, commentators speculate on how she must have felt, and the visual reenactment depicts her running to find her son. This passage is far less relevant to understanding the Bible’s characterization of Sarah than it is to certain strands of Christian theology. In Christianity, Abraham’s willingness to sacrifice his son has traditionally been invoked as prefiguring God’s willingness to sacrifice his son Jesus on the cross. This linkage is clearly implied in the video footage. Although Genesis 22 indicates that God provided a ram as a substitute sacrifice, the program shows a lamb instead (in the gospels and later Christian tradition, Jesus is called the “lamb of God”).
Rahab. This brothel owner who saved the Israelite spies is praised for her willingness to protect her family. Commentators also expound upon the significance of the red cord she uses to mark her house for deliverance. Following traditional Christian interpretation, they connect Rahab’s red cord with Jesus’ blood shed on the cross to save humanity. They also explicitly trace Rahab’s genealogy to Jesus, following the gospel of Matthew.
Samson’s mother and his mistress Delilah. In the program, these two women are not explicitly linked with the Christian message. The commentators instead use their stories to advance important morals and teachings. Samson’s mother is presented as providing hope to “mothers who try to be good parents but the children stray,” and Delilah becomes a cautionary tale of being “tempted like Eve.”
The Marys. The majority of the program (close to one half) is devoted to Mary the mother of Jesus and Mary Magdalene. Mary Magdalene is depicted as playing an important role in early Christianity, and yet most of the scenes depicting both women recount the life and death of Jesus. Their stories offer windows into his story. In keeping with a particular understanding of the importance of Jesus shedding blood at his crucifixion, scenes graphically depict Jesus’ flogging and crucifixion (“he came to die”). The imagined feelings of the Marys become a means to reflect on the painfulness of Jesus’ sacrifice: “I would imagine they felt this way,” “They must have felt this way.” Although the program insists that the Magdalene was instrumental in the growth of Christianity, it provides no support for this claim.
As a biblical scholar devoted to gender critical work, I was amazed and disturbed that this program demonstrated no awareness of the important discussions conducted by feminist interpreters of the Bible over the past 40 years. Reassessments of Eve, Sarah, Mary Magdalene, and our traditions of reading are now old news, as is the recognition that standard ways of depicting Jesus as female-friendly have anti-Jewish dimensions. At least since the 1990s, Jewish feminists have insisted upon the inaccuracy and the danger of statements like those made in the program: “a Jewish rabbi wouldn’t talk to a woman,” “women were devalued in that culture.” The program leaves these statements unchallenged and actually reinforces them in the costuming of the reenactments of Jesus’ arrest, trial, and crucifixion: Jewish leaders wear the pointed hats used to designate Jews in medieval anti-Jewish iconography.
I also was appalled that, in the apparent attempt to include actors of color, insufficient attention was paid to the ways in which casting might perpetuate racial stereotypes. Samson was depicted as a huge, violent man of African descent who could not control his passions. When his dreadlocks were cut, he was bound in chains to a column. In the US context, this image too closely mirrors that of the slave on the auction block to pass for an attempt at “diversity.”
Neither the commentators nor the marketers of this program named the monolithic perspective that informed the presentation. Although the rhetoric of the program suggests that the commentators are simply reading the Bible, in reality the program recounts a particular Christian narrative about sin and Jesus’ role in overcoming it. Women are lauded as important to the degree that they are instrumental in advancing that narrative.
In turn, biblical texts that stray from this perspective are overlooked, such as:
Abraham’s willingness to give Sarah to another man—twice—to save himself.
The abuse suffered by Hagar.
The likelihood that the Israelite spies were visiting Rahab’s brothel rather than simply hiding.
Jesus’ statements that challenge the priority of family (Mark 10; Luke 14; Matthew 22). In this program, the distance between Jesus and his mother was described as a normal mother-son dynamic rather than part of Jesus’ message (Mark 3). The commentators stressed the ways in which Jesus provided for his mother from the cross, since “a son ought to love his mother and make sure she is looked after.”
Even though this program reflected a far more conservative religiosity than The Red Tent, similar ideologies of gender run through both productions. Women are valued primarily for being mothers, wives, and protectors of their families. Biblical women who do not fill these roles are passed over in silence: Deborah, Huldah, Athaliah, Miriam, and the women involved in ministry with Paul. (See an Index of Women in the Bible with relevant biblical passages.)
Responsible interpretation of the Bible requires a deep understanding of the ancient world reflected in its pages. Engagement with on-going biblical scholarship is crucial, since our knowledge of the past continues to grow through archaeological investigation, the discovery of new texts, and the development of research methodology. Responsible interpretation also requires a self-awareness of the lenses through which we read and the commitments that guide our choice of texts and our determination of their meaning.
Women of the Bible, sadly, reflects neither solid scholarship nor attentiveness to perspective. Based on the speculation of interpreters whose interests remain unnamed rather than on current research on gender in the ancient world, the Lifetime program perpetuates particular tropes for women rather than offering viewers fresh insight.
Prometheus, a Titan god, was exiled from Mount Olympus by Zeus because he stole fire from the gods and gave it to mankind. He was condemned, punished, and chained to a rock while an eagle ate at his liver. His name, in ancient Greek, means “forethinker,” and literary history lauds him as a prophetic hero who rebels against his society to help man progress. The stolen fire is symbolic of creative powers and scientific knowledge. His theft encompasses risk, unintended consequences, and tragedy. Centuries later, modern times have produced another Promethean hero: Alan Turing. Like the Greek Titan before him, Turing suffers for his foresight and his audacity to rebel.
The riveting film The Imitation Game, directed by Morten Tyldum and starring Benedict Cumberbatch, offers us a portrait of Alan Turing that few of us knew before. After this peek into his extraordinary life, we wonder: how is it possible that, within our lifetime, society could condemn such a special person to eternal punishment? Turing accepts his tragic fate and blames himself.
“I am not normal,” he confesses to his ex-fiancée, Joan Clarke.
“Normal?” she responds, angrily. “Could a normal man have shortened World War II by two years and saved 16 million people?”
The Turing machine, the precursor to the computer, is the result of his “not normal” mind. His obsession was to solve the greatest enigma of his time – to decode Nazi war messages.
In the film, as the leader of a team of cryptologists at Bletchley Park in 1940, Turing designs the Bombe, which deciphered coded messages revealing where German U-boats would attack British ships. In 1943, the Colossus machine, built by the team’s engineer Tommy Flowers, was able to decode messages sent directly from Hitler.
The movie, The Imitation Game, while depicting the life of an extraordinary person, also raises philosophical questions, not only about artificial intelligence but also about what it is to be human. Cumberbatch’s Turing recognizes the danger of his invention. He feared what would happen if a thinking machine were programmed to replace a man; if decisions were made by artificial intelligence rather than by a human being who has a conscience, a soul, a heart.
Einstein experienced a similar dilemma. His theory of relativity brought great advances in physics and scientific achievement, but it also had tragic consequences – the development of the atomic bomb.
The Imitation Game will open Pandora’s box. Viewers will ponder what the film passed over quickly. Who was the Russian spy? Why did Churchill not trust Stalin? What was the role of the Americans during this period of decrypting military codes? How did Israel get involved?
And viewers will want to know more about Alan Turing. Did Turing really commit suicide by biting into an apple laced with cyanide? Or does statistical probability tell us that Turing knew too much about too many things, and that perhaps too many people wanted him silenced? This will be an enigma to decode.
The greatest crime, from a sociological perspective, is the one committed by humanity against a unique individual because he is different. The Imitation Game will make us all ashamed of society’s crime of prejudice. Alan Turing stole fire from the gods to give man power and knowledge. While doing so, he showed he was very human. And society condemned him for being so.
In the Catholic tradition, purgatory is an afterlife destination reserved for souls who are ultimately bound for heaven. It is still a doctrine of the Catholic Church, despite confusion about its status. In 2007, the reigning Pope Benedict XVI asked Church theologians to reconsider another Catholic afterlife destination: limbo. Limbo was traditionally thought to be on the “lip of hell” or the edge of heaven (hence the name limbo, which derives from the Latin limbus, for edge). Limbo was believed to be the final destination for the souls of unbaptized babies. The unsettling implications of belief in limbo were, in part, what motivated Pope Benedict and contemporary theologians to conclude that Catholics should hope for God’s mercy for deceased unbaptized babies—that no, they probably didn’t end up in limbo. The popular press interpreted this move as the abolition of limbo, which, ironically, never was a Catholic doctrine, although certainly lots of influential Catholics believed in it and wrote about it, like Augustine and Thomas Aquinas. With limbo off the table, public discussion focused on the status of purgatory.
Popular headlines reflected confusion: would purgatory be next? Unlike limbo, purgatory is a doctrine of the Church, yet its representations have undergone significant modifications. Historically, the diversity of conceptions of purgatory boggles the mind. An entrance to purgatory was once thought to reside in Ireland on a rocky island; it was also considered to be a punitive “neighborhood” of hell; in the 1860s a cleric in France wrote that purgatory was in the middle of the earth; and more commonly after the nineteenth century, it is conceived of as a purifying “state” or condition of a soul, and not as a place at all. The common thread running through each of these descriptions is that they all derive from Catholic culture, although each was advocated in a different era and within a unique context.
Today, one is more likely to find representations of purgatory and limbo in virtual reality and popular culture than in the local Catholic Church. In particular, the creators of video games and online role-playing environments incorporate stereotypical images that reinforce particularly punitive versions of these post-death destinations, versions usually associated with the late medieval era. The somber, award-winning video game LIMBO features a narrative storyline similar to the “edge of hell” version of limbo rather than its representation as the edge of heaven. Released in July 2010 by the Danish game developer Playdead, the game follows a young boy in search of his sister. LIMBO’s environments are entirely black, white, and shades of gray, featuring fear factors like giant shadowy spiders, eerie, lonesome forests, and cold industrial landscapes. The game’s creators state that they intentionally kept the storyline minimal, with no inherent meaning, so that players can speculate on their own as to its ultimate meaning.
Purgatory is the main theme of an anticipated 3D role-playing game called Graywalkers: Purgatory. The game environment is a post-apocalyptic world where the afterlife merges with human lives. Demons and angels war with each other over the fate of humanity. Thirty-six heroes called Graywalkers emerge to assist the angels. Creator Russell Tomas of Dreamlords Digital stated that Purgatory is a game of action and consequence, where players’ actions will directly impact the outcome of the game. Characters like Father Rueben wear traditional Catholic vestments with the additional innovation of weapons and religiously themed tattoos.
Purgatory also figures in the popular television show Sleepy Hollow, which premiered in 2013 on the Fox network. Protagonist Katrina Crane is relegated to purgatory, which is imagined as an eerie waiting area for souls who are destined for either heaven or hell. This is obviously an alteration of the doctrinal version of purgatory—imagined as a place for souls destined for heaven—and it has spawned online conversations focused on whether or not the version of purgatory represented in the show is actually correct. It is not, of course, but in this respect it conforms to other, much older versions of purgatory that were ultimately considered to be erroneous, such as those that placed it in the middle of the earth, or on a rocky island in Ireland.
One of the more interesting recent developments in film studies is the recognition that what has seemed to be separate histories — documentary filmmaking and avant-garde filmmaking — are, once again, converging. I say “once again” because the interplay between documentary and avant-garde film has long been more significant than seems generally understood.
An intersection of an avant-garde artistic practice and a documentary impulse helped to instigate the dawn of cinema itself. When Eadweard Muybridge and Etienne-Jules Marey were discovering and exploring the possibilities of photographic motion study, they were the photographic avant-garde of that moment. And their subject was the documentation of the motion of animals, birds, and human beings, presumably so that we could know, more fully, the truth about this motion. And at the moment when W. K. L. Dickson perfected the Kinetograph and Kinetoscope and the Lumière Brothers perfected the Cinématographe and the projected motion picture, they in turn became the photographic avant-garde; and their primary fascination, too, was the documentation of motion, specifically human activity, first, in the world around them and soon, in the case of the Lumières, across the globe.
Flaherty’s Nanook (1922) was both a breakthrough documentary and an avant-garde experiment in collaborative filmmaking; and the City Symphonies that emerged in the 1920s (e.g., Berlin: Symphony of a Big City, 1927, and The Man with a Movie Camera, 1929) were at once documentary interpretations of reality and avant-garde experiments.
During the 1940s, the most important development for independent cinema in the United States was the emergence of a full-fledged film society movement. The leading contributor was Cinema 16, founded by Amos and Marcia Vogel in New York City in 1947. At its height, Cinema 16 had 7,000 members, and filled a 1,500-seat auditorium twice a night for monthly screenings. Cinema 16’s programming was an inventive mixture of documentary and avant-garde film.
The development of lightweight cameras and tape recorders, more flexible microphones, and faster film stocks during the late 1950s created additional options that, in one sense, drove documentary filmmaking and avant-garde filmmaking apart, but in another sense created a different kind of intersection between them. Sync-sound shooting expanded the options available to filmmakers committed to documentary, instigating forms of cinematic entertainment that functioned as critiques of Hollywood filmmaking and early television. Drew Associates, D. A. Pennebaker, Frederick Wiseman, and the Maysles Brothers fashioned engaging melodrama out of real life in Crisis: Behind a Presidential Commitment (1963), Don’t Look Back (1967), Hospital (1970), and Salesman (1968).
During the same decade, avant-garde filmmakers were producing very different forms of documentary, often by abjuring sound altogether. Stan Brakhage was committed to the idea of cinema as a visual art, and created remarkable—silent—confrontations of visual taboo such as Window Water Baby Moving (1959) and The Act of Seeing with One’s Own Eyes (1971)—now recognized as canonical documentaries. These films could hardly have been more different from the cinéma vérité films, but we can now see that Brakhage shared the mission of the cinéma vérité documentarians: the cinematic confrontation of convention-bound commercial media.
In 1955, Frances Flaherty, Robert Flaherty’s widow, established a symposium to honor her husband’s filmmaking oeuvre and to promote his commitment to filmmaking “without preconceptions.” In recent decades “the Flaherty,” as the symposium has come to be called, has attracted dozens of filmmakers, programmers, teachers, students, and other cine-aficionados for week-long immersions in programs of screenings and discussions. Modern Flaherty seminars have often been driven by an implicit debate about the correct balance between documentary and avant-garde film.
Since the 1940s, avant-garde filmmakers have found ways of exploring the personal, first by psycho-dramatizing their inner disturbances (Maya Deren’s Meshes of the Afternoon and Kenneth Anger’s Fireworks are landmark instances), and later by filming the particulars of their personal lives. Brakhage documented dimensions of his personal life in many films, as did Carolee Schneemann, in Fuses (1967), and Jonas Mekas, in Walden (1969) and Lost Lost Lost (1976). And during the 1980s, avant-garde filmmakers Su Friedrich (in The Ties that Bind, 1984; and Sink or Swim, 1990) and Alan Berliner (in Intimate Stranger, 1991; and Nobody’s Business, 1996), used experimental techniques learned from other avant-garde filmmakers to directly engage their family histories.
What has come to be called “personal documentary” (basically, the use of sync-sound to explore personal issues) was instigated in the early 1970s by Ed Pincus’s Diaries (filmed from 1971 to 1976; completed in 1981), Miriam Weinstein’s Living with Peter (1973), Amalie Rothschild’s Nana, Mom and Me (1974), and Alfred Guzzetti’s Family Portrait Sittings (1975). By the 1980s, several of Pincus’s students at MIT were contributing to this approach, among them Ross McElwee, whose films, including Sherman’s March (1986), Time Indefinite (1993), and Photographic Memory (2011), form an on-going personal saga.
Globalization and the standardization of so many dimensions of modern life, along with threats to the environment, have created a desire on the part of many filmmakers to pay deeper attention to the particulars of Place. Since the early 1970s, contemplations of Place have been produced by avant-garde filmmakers Larry Gottheim (Fog Line, 1970; Horizons, 1973), Nathaniel Dorsky (Hours for Jerome, 1982), James Benning (13 Lakes, 2004), Peter Hutton (Landscape (for Manon), 1987; At Sea, 2007), Sharon Lockhart (Double Tide, 2009), and many others. A fascination with Place, or more precisely, people-in-place, also characterizes the documentaries coming out of Harvard’s Sensory Ethnography Lab (SEL), including Ilisa Barbash and Lucien Castaing-Taylor’s Sweetgrass (2009), Castaing-Taylor and Véréna Paravel’s Leviathan (2013), and Stephanie Spray and Pacho Velez’s Manakamana (2014). Indeed, the films of Hutton, Benning, and Lockhart, in particular, have been shown regularly at the SEL.
The interviewees in Avant-Doc reveal a wide range of ways in which their own work and the work of colleagues function creatively within the liminal zone between documentary and avant-garde and the ways in which the intersections between these histories have played into their work.
Headline image credit: Camera. Public domain via Pixabay.
The riveting film The Artist and the Model (L’Artiste et son Modèle), from Spain’s leading director Fernando Trueba, focuses on a series of “one seconds” in the life of French sculptor Marc Cross.
The director projects himself onto his protagonist, played brilliantly by Jean Rochefort, to explore what serves as inspiration for an artist. “An idea,” says the sculptor as he shares with his young model a sketch by Rembrandt of a child’s first walking steps. It is “the tenderness of the sketch,” the “one second of an idea,” that Marc Cross searches for to break through the creative block of his old age.
And it is the sculptor’s wife, played by beautiful Claudia Cardinale, who will find this “idea” for him. She will save him, help him create.
In one second, the “good wife” spots a drifting girl in their town, sleeping on the ground at a doorstep. She knows nothing about this vagabond who has found her way to their small French village on the Pyrenees’ border with Spain. The only thing the wife knows is that this homeless, hungry girl, wrapped in a bulky woolen coat, has a face and body that her husband would love to sculpt. This street urchin could become his inspiration. The wife brings the girl, Mercè (Aïda Folch), home, shelters and feeds her, and teaches her how to pose.
After weeks of sketches and small sculptures, in one second, by chance, the sculptor sees his model in a new position, resting. It is the angle of her arm, the tilt of her head, her leaning down in reflection that gives him “his idea.” In one second he sees before him a girl who has become a beautiful woman. Marc Cross realizes his model is thinking of the War, worrying about the people she has been secretly guiding at night across the Pyrenees. They are “Jews, Resistance, anyone” who want to escape the German-occupied France of 1943–44, as well as Franco’s military dictatorship in Spain.
In that one second, the sculptor feels her sensitivity, her attempts to do what is right. He sees her in a different light and feels her soul. She has become more than a body or model. He feels in one second that she is Beauty, Art. It is what the artist has been searching for. With tenderness and love, he sculpts his final masterpiece.
When his work comes to an end, so does the War. The girl leaves to model for another artist, perhaps Matisse in Nice, as she bikes to the Riviera with a letter of introduction. At the same time, the sculptor’s wife leaves him for a few days to care for her sick sister. It is no coincidence that this is his moment, his one second, to commit the most courageous act of all. And he does, with the beautiful finished sculpture of the woman in his garden, surrounded by perfect light and chirping birds, giving him peace.
The Artist and the Model speaks to an age when all men and women search for one second of Hope.
Seinfeld famously added a ton of terms to English, such as low talker, high talker, spongeworthy, and unshushables. It also made obscure terms into household words. Shrinkage and yada yada existed before Seinfeld, but it’s doubtful you learned them anywhere else.
Another successful Seinfeld term has gone under the radar: Jerk Store. The term was coined in “The Comeback,” when George is unselfconsciously stuffing his face with shrimp during a meeting. A co-worker sees George’s gluttony and says, “Hey, George, the ocean called. They’re running out of shrimp.” George is speechless, but later he crafts a comeback: “Oh yeah? Well, the Jerk Store called, and they’re running out of you.” The episode shows George going to absurd lengths to find a way to use his comeback, as well as his friends’ unwanted workshopping of the joke.
In a way, that workshopping has never ended—at least on Twitter, which is likely the largest collection of jokes, good and bad, by professionals and amateurs, ever created. Many of those jokes involve formulas, and the Jerk Store has become a popular one. On Twitter, every day is the Summer of George.
Most variations start with “The Jerk Store called,” which is as trusty a joke starter as “Relationship status:” and “When life hands you lemons.” From there, the joke can go just about anywhere. Comic Warren Holstein makes a food joke out of the formula: “The Jerk Store called but I couldn’t understand their thick Jamaican accents.” Matt Koff reveals what would likely happen to a real-life Jerk Store: “The Jerk Store called. It’s closing because it couldn’t compete with Amazon. :(“ Some use the formula to comment on politics: “The Jerk Store called; they’re no longer hiring because of fear of Obamacare mandates.” I particularly like this joke, which finds the funny in sadness: “The jerk store called. We didn’t chat for long but it was good to hear their voice. It was good to hear anyone’s voice. I’m so alone.”
Other tweeters abandon the formula when making Jerk Store jokes, like Laura Palmer: “I’m applying at the Jerk Store and I need references.” This holiday tweet sounds like a perfect storm of jerkdom: “Looking forward to the Black Friday deals at the Jerk Store.” Food trends also get spoofed: “when will the jerk store start getting organic jerks. tired of getting these jerks full of gmos.” Here’s a particularly clever joke, playing on an annoying Frankenstein-related correction: “Actually, the jerk store’s monster called.”
This term/joke formula isn’t going anywhere for at least a few reasons. Seinfeld is still omnipresent in reruns, and I reckon the entire series is imprinted on the collective unconscious. Plus, the world is full of jerks. The following are some recent epistles from the Jerk Store to help you get through the polar jerk-tex. Jerk Store might never make the OED, but it’s one of the most successful joke franchises in the world.
The jerk store called, you left your credit card at the register. They are open until 8 if you want to pick it up today.
Well known is music’s power to stir emotions; less well known is that the stirring of specific emotions can result from the use of very simple yet still characteristic music. Consider the music that accompanies this sweet, sorrowful conclusion of pop culture’s latest cinematic saga.
When the on-set footage begins, so does some soft music that is rather uncomplicated because, in part, it simply alternates between two chords which last about four seconds each. These two chords are shown on the keyboard below. In classical as well as pop music, these two chords typically do not alternate with one another like this. Although the music for this featurette eventually makes room for other chords, the musical message of the more distinctive opening has clearly been sent, and it apparently worked on this blogger, who admits to shedding a few tears and recommends the viewer have a tissue nearby.
This simple progression has been used to accompany loss-induced sadness in numerous mainstream (mostly Hollywood) cinematic scenes for nearly 30 years. Nor is this association confined to movies; it inhabits a larger media universe. For example, while the pop song “Comeback Story” by Kings of Leon, which opens this movie’s trailer, helps to convey the genre of the advertised product, the same two-chord progression—let’s call it the “loss gesture”—highlights the establishing narrative: a patriarchal death has brought a mourning family together (for comedic and sentimental results).
Loss gestures can play upon one’s heartstrings less discriminately; they can elicit tears of joy as well as tears of sadness. Climaxes in Dreamer and Invincible, both underdog-comes-from-behind movies, are punctuated with loss gestures. As demonstrated at 2:06 in the following video, someone employed by the Republican Party appears to be keenly aware of this simple progression’s powerful capacity for moving a viewer (and potential voter).
Within the universe of contemporary media, the loss gesture has been used in radio as well. The interlude music that plays before or after a story on National Public Radio often has some relation to the content of the story. A week after the Sandy Hook school shootings, NPR aired a story by Kirk Siegler entitled “Newtown Copes With Grief, Searches For Answers.” Immediately after the story’s poignant but hopeful ending, the opening of Dustin O’Halloran’s “Opus 14” faded in, musically encapsulating the emotions of the moment.
How the loss gesture works its magic on listeners remains something of a mystery. What is undeniable is that producers in several different corners of the media world know that it works.
In order to spread some festive cheer, Blackstone’s Policing has compiled a watchlist of some of the best criminal Christmas films. From a child inadvertently left home alone to a cop with a vested interest, and from a vigilante superhero to a degenerate pair of blaggers, it seems that (in Hollywood at least) there’s something about this time of year that calls for a special kind of policing. So let’s take a look at some of Tinseltown’s most arresting Christmas films:
1. Die Hard, directed by John McTiernan, 1988
Considered by many to be one of the greatest action/Christmas films of all time, Die Hard remains the definitive cinematic alternative to the usual saccharine cookie-cutter Christmas film offering. This is the infinitely watchable story of officer John McClane’s Christmas from hell. When a trip to win back his estranged wife goes awry and he unwittingly finds himself amidst an international terrorist plot, he must find a way to save the day armed only with a few guns, a walkie talkie, and a bloodied vest. With firefights and exploding fairy lights in abundance, this Bruce Willis tour de force is the undisputed paragon of policing in Christmas films.
2. Home Alone, directed by Chris Columbus, 1990
In a parental blunder tantamount to criminal neglect, the McCallister family accidentally leave their youngest member, Kevin (played by precocious child star Macaulay Culkin), ‘home alone’ to fend for himself over Christmas as two omnishambolic burglars target the McCallister household. As the Chicago Police Department works through the confusion of the situation, Kevin navigates a far from silent night. Cue copious booby traps and slapstick as the imagination of an eight-year-old boy ingeniously holds the line in this family-fun classic.
3. Batman Returns, directed by Tim Burton, 1992
Gotham is a city perennially infested with arch-criminals whose seemingly endless financial resources demand that they be tackled head-on by a force who can match them pound-for-pound (or dollar-for-dollar, if you prefer). Enter Gotham’s very own Christmas miracle: billionaire Bruce Wayne and his vigilante alter ego Batman (Michael Keaton), who provides a singular justice-hungry scourge against the criminal underworld. As the Penguin (Danny DeVito) hatches a nefarious plot which threatens the city, Batman’s goodwill must prove resilient. Though director Tim Burton went on to make The Nightmare Before Christmas the following year, Batman Returns is itself something of a Christmas classic.
4. Lethal Weapon, directed by Richard Donner, 1987
With a blizzard of bullets and completely bereft of snow, LA-based Lethal Weapon lacks nearly all the usual trimmings of a Christmas film. Seasoned detective Roger Murtaugh (Danny Glover) is close to retirement when he’s paired with the young (and morose) Martin Riggs (Mel Gibson) to tackle a drug smuggling gang. As their stormy investigation progresses, Murtaugh and Riggs’ unlikely union flourishes into a double-act worthy of Donner and Blitzen (and, judging by the pair’s return in three subsequent installments of the series, their entertaining policing partnership always leaves audiences wanting myrrh…).
5. National Lampoon’s Christmas Vacation, directed by Jeremiah Chechik, 1989
In this third installment of the Griswold family’s catastrophic holidays, Clark (Chevy Chase) navigates his way through the perils of yet another disastrous vacation, but at least this time he has his Christmas bonus to look forward to. Things take a bizarre turn for the criminal when the bonus isn’t forthcoming, resulting in a myriad of mishaps involving Christmas paraphernalia and SWAT teams. As the tagline for the film attests, ‘Yule crack up!’
6. Kiss Kiss Bang Bang, directed by Shane Black, 2005
Petty thief Harry Lockhart (Robert Downey Jr.) finds himself embroiled in a series of increasingly byzantine cases of mistaken identity as both a method actor and criminal investigator. Reality cuts through when Harry is shepherded into a murder investigation involving the sister of his childhood crush, Harmony Lane (Michelle Monaghan). Though perhaps one of the less Christmassy films on this list, it still has a few seasonal signs parcelled into its murder-mystery plot.
“There’s something about this time of year that calls for a special kind of policing”
7. Miracle on 34th Street, directed by George Seaton, 1947
Arguably the ultimate Christmas film, Miracle on 34th Street is the classic tale of the legal battle around the sanity and freedom of a man who claims to be the real Santa Claus. This original film won three Academy Awards including Best Actor in a Supporting Role for Edmund Gwenn’s portrayal of Kris Kringle (‘the real Santa Claus’). Despite being remade in 1994 and adapted into various other forms, the 1947 version remains the quintessential Christmas film which no comprehensive watchlist could be without.
8. Bad Santa, directed by Terry Zwigoff, 2003
Dastardly duo Willie (Billy Bob Thornton) and Marcus (Tony Cox) make their criminal living by posing as Santa and his Little Helper for department stores, and then opportunistically stealing as much as they can. As the security team for their latest blag hunts them down, Willie meets a boy determined that he is the real Santa and the race is on for the degenerate pair to reform their lifestyles before they are stuffed.
What would you add to this list? Tell us your favourite policing Christmas film in the comments section below or let us know directly on Twitter. Merry Christmas everyone!
Headline image credit: [365 Toy Project: 019/365] Batman: Scarlet Part 1. CC-BY-NC-SA-2.0 via Flickr.
There are plenty of operas about teenage girls—love-sick, obsessed, hysterical teenage girls who dance, scheme, and murder in a frenzy of musical passion. Disney Princess films are also about teenage girls—lonely, skinny, logical teenage girls who follow their hearts because the plot gives them no other option. The music Disney Princesses sing can be divided into three periods that correspond to distinct animation styles:
Onto these three periods we can map the themes of the princess anthems, the single song for which each princess is remembered:
The relative lack of variance in these songs tells us something important—while animation styles have changed, the aspirations of girlhood have not been radically altered.
But then there’s Frozen.
Elsa’s anthem, “Let It Go,” combines aspects from all three periods: Frozen is a computer animated film, Idina Menzel is a Tony Award-winning singer, and, most importantly, the song and the Snow Queen who sings it have an operatic legacy rooted in representations of madness and infirmity. “Let It Go” is a tribute to passion, spontaneity, and instinct—elements celebrated by both the opera (which nevertheless punishes the bearer severely) and the Disney film (which channels them into heterosexual romance). Frozen does neither.
Unlike the songs of longing for belonging that came before it, “Let It Go” insists that being like everyone else is bound to fail. It’s a coming out song often read as a queer anthem and easily interpreted to account for a number of stigmatized identities. As such, Elsa is a screen onto which may be projected our fantasies and fears. While her transformation into a shapely princess swaying in a sparkly gown with wispy blond hair may be familiar, the scene where this takes place, the way she looks back at the viewer, and the music she sings define Elsa as more ambiguous than she appears. Is Elsa sick, is she mentally ill, is she asexual, is she gay? What is Elsa and why does she resonate so strongly with young girls?
Elsa is like the women of 19th-century opera in her exclusion from the world the other characters comfortably occupy. Marred by magical ability, Elsa must isolate herself if she does not want to scar those she loves—or so the dialogue tells us. The imagery suggests an illness; Elsa behaves as if she were contagious. Indeed, she is consumptive like Mimi, but she is also betrayed like Tosca and scandalous like The Queen of the Night. As Catherine Clément says of women in the opera: “they suffer, they cry, they die…Glowing with tears, their decolletés cut to the heart, they expose themselves to the gaze of those who come to take pleasure in their pretend agonies.” Operatic women express their hysteria skillfully. At the pinnacle of her agony, Elsa builds a magnificent castle while singing her most beautiful song, a song that has itself become infectious. In its final moments, she exposes herself, only to slam the door on viewers who would like nothing more than to gawk at the excess.
Most princess anthems end satisfactorily on the tonic chord, their musical conclusions coinciding with lyrical expectations that assure the story will fulfill the princesses’ desires. For example, when Ariel wishes she could be “part of that world,” she sings a high F, which a trombone echoes an octave lower, reinforcing the song’s key and suggesting the narrative’s interest in giving Ariel what she wants. In “Someday My Prince Will Come,” Snow White’s final line repeats the home pitch no fewer than six times, as if to insist the screenwriters pay attention. “Let It Go,” on the other hand, ends unresolved. The score establishes a sharp distinction between the assertive melodic phrase sung by Elsa, “The cold never bothered me anyway,” and the harmony of the accompaniment. Elsa turns her back to the camera after singing the downward-moving line, which ends rather abruptly on the tonic, while the chord that ought to have shifted with Elsa’s exit lingers in the icy upper register of the strings, as if refusing to acknowledge the message. Is the music condemning the singer’s difference by suggesting that her immunity to the elements is indicative of a physical or psychic malady?
Unlike Donizetti’s operatic heroine, Lucia, whose infamous “mad scene” prompts the chorus to weep for her, Elsa stares into the camera, eyebrow raised, as if daring the spectators to pity her. This is the look of a woman who refuses to capitulate to patriarchy. And with our endless covers and video parodies of “Let It Go” we have rallied to her defense. Rather than constrain her by Frozen’s story, “Let It Go” lets Elsa escape again into possibility. The new princess message, “Leave Me Alone,” is echoed by little girls everywhere.
Peter Conrad says of opera, “It is the song of our irrationality, of the instinctual savagery which our jobs and routines and our nonsinging voices belie, or the music our bodies make. It is an art devoted to love and death (and especially to the cryptic alliance between them); to the definition and the interchangeability of the sexes; to madness and devilment…” Such is also a fair description of Frozen, for what are its final moments but an act of love to stave off death, what is Elsa but a mad and devilish woman who revels in the impermanence of sexuality, what is a fairytale but a story full of savage beasts that prey on our emotions? “Let It Go” releases an archetype from the hollows of diva history into the digital world of children’s animation.
Headline Image: Disney’s Frozen. DVD screenshot via Jennifer Fleeger.
Director Robert Altman made more than thirty feature films and dozens of television episodes over the course of his career. The Altman retrospective currently showing at MoMA is a treasure trove for rediscovering Altman’s best known films (M*A*S*H, Nashville, Gosford Park) as well as introducing unreleased shorts and his little-known early work as a writer.
Every Altman fan has her or his own list of favorite films. For me, Altman’s use of music is so consistently innovative and original that a few key films stand out from the crowd on the strength of their soundtracks alone. Here are my top five:
1. Gosford Park (2001): The English heritage film meets an Agatha Christie murder mystery, combining an all-star ensemble cast and gorgeous location shooting with a tribute to Jean Renoir’s La Règle du Jeu (1939). Jeremy Northam plays the real-life British film star and composer Ivor Novello. Watch for the integration of Northam/Novello’s live performances of period songs with the central murder scene, in which the songs’ lyrics explain (in hindsight) who really committed the murder, and why.
2. Nashville (1975): Altman’s brilliant critique of American society in the aftermath of Vietnam and Watergate. Nashville stands as an excellent example of “Altmanesque” filmmaking, in which several separate story strands merge in the climactic final scene. Many, although not all, of the songs were provided by the cast, which includes Henry Gibson as pompous country music star Haven Hamilton, and the Oscar-nominated Lily Tomlin as the mother of two deaf children drawn into a relationship with sleazy rock star Tom Frank (Keith Carradine, whose song “I’m Easy” won the film’s sole Academy Award).
3. M*A*S*H (1970): Ok, I will admit it. It took me a long, long time to appreciate M*A*S*H. Growing up in 1970s Toronto, I couldn’t accept Donald Sutherland and Elliott Gould as Hawkeye Pierce and Trapper John — familiar characters from the weekly CBS TV series (but played by different actors). Looking back, I realize that M*A*S*H really did break all the rules of filmmaking in 1970, not least because it appealed to the anti-Vietnam generation. Like so many later Altman films, what appears to be a sloppy, improvised, slap-dash film is in fact sutured together through the brilliant, carefully edited use of Japanese-language jazz standards blared over the disembodied voice of the base’s loudspeaker.
4. McCabe & Mrs. Miller (1971): Filmed outside of Vancouver, Altman’s reinvention of the Western genre stars Warren Beatty and Julie Christie. The film uses several of Leonard Cohen’s songs from his 1967 album Songs of Leonard Cohen, allowing the songs to speak for often inarticulate characters. Watch for how the opening sequence, showing Beatty/McCabe riding into town, is closely choreographed to “The Stranger Song,” as is Christie/Miller’s wordless monologue to “Winter Lady” later in the film — all to the breathtaking cinematography of Vilmos Zsigmond, who worked with Altman on Images (1972) and The Long Goodbye (1973) as well.
5. Aria (segment: “Les Boréades”) (1987): Made during Altman’s “exile” from Hollywood in the 1980s, this film combines short vignettes set to opera excerpts by veteran directors including Derek Jarman, Jean-Luc Godard, and Julien Temple. Altman’s contribution employs the music of 18th-century French composer Jean-Philippe Rameau. The sequence was a revelation to me personally, since it contains the only feature film documentation of Altman’s significant contributions to the world of opera. One of the first film directors to work on the opera stage, Altman directed a revolutionary production of Stravinsky’s The Rake’s Progress at the University of Michigan in the early 1980s; the work was restaged in France and used for the Aria segment. Later, Altman collaborated with Pulitzer-Prize winning composer William Bolcom and librettist Arnold Weinstein to create new operas (McTeague, A Wedding) for the Lyric Opera of Chicago.
Rounding out the top ten would be Short Cuts (1993), Kansas City (1996), The Long Goodbye (1973), California Split (1974), and Popeye (1980) — Robin Williams’ first film, and definitely an off-beat but entertaining musical.
Films trick our senses in many ways. Most fundamentally, there’s the illusion of motion as “moving pictures” don’t really move at all. Static images shown at a rate of 24 frames per second can give the semblance of motion. Slower frame rates tend to make movements appear choppy or jittery. But film advancing at about 24 frames per second gives us a sufficient impression of fluid motion.
However, birds–such as pigeons–have a much higher threshold for detecting movement. A bird’s visual system is keenly sensitive to moving stimuli, as this is essential to survival. Whether swooping down to snatch live prey, fleeing from a predator, or zeroing in on a nest for a precise landing, birds must rely on their fine-tuned ability to home in on moving targets. So the frame rate at which most of our films are shown is far too slow for birds to perceive continuous motion. Their threshold of visual processing exceeds the standard frame rate, allowing them to see the component frames … and the illusion of motion pictures would be broken.
If a pigeon had been roosting in the theater where 19th-century crowds first gaped at the Lumière Brothers’ steam train looming towards them, it might have been less than impressed — especially as early silent films were often played at only 16 frames per second.
Even a film shown at today’s industry standard of 24 frames per second would most likely look like a series of flashing slides to a pigeon. We’re mesmerized by Marilyn Monroe’s white skirts billowing over the subway grate in The Seven Year Itch, but a pigeon may see something more like a slide show of the skirt in frozen increments.
Further, most humans cannot distinguish individual lights flashed at 60 cycles per second, perceiving instead a single continuous beam of light. This gives an impression of constant light while watching a film (despite the shutter actually shutting out light several times per frame). But birds have a much higher critical flicker-fusion frequency, around 90–100 cycles per second or higher (e.g., Lisney et al., 2011). So while humans do not perceive the flicker in a movie, a pigeon may see flashes like strobe lights along with the jumpy frames of Marilyn’s airborne skirt.
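The comparison at the heart of this passage can be sketched as a toy calculation. This is an illustration, not vision science: the threshold figures are the rough values cited in the post (human flicker fusion around 60 Hz, pigeon around 100 Hz), and the 72 flashes per second assumes a projector with a three-blade shutter flashing each of the 24 frames three times, which is one common historical design.

```python
# Toy sketch of the flicker-fusion comparison described above.
# Threshold values are rough figures from the post; real values vary
# with species, luminance, and the stimulus being viewed.

def appears_continuous(presentation_hz: float, fusion_threshold_hz: float) -> bool:
    """A flashing light looks steady only if it flashes at or above
    the viewer's critical flicker-fusion frequency."""
    return presentation_hz >= fusion_threshold_hz

FILM_FPS = 24            # standard cinema frame rate
PROJECTOR_FLASH_HZ = 72  # 24 fps shown through a 3-blade shutter: 72 flashes/s

HUMAN_CFF = 60    # approximate human critical flicker-fusion frequency
PIGEON_CFF = 100  # approximate upper estimate for pigeons

# Humans: 72 Hz exceeds a ~60 Hz threshold, so the light looks steady.
print(appears_continuous(PROJECTOR_FLASH_HZ, HUMAN_CFF))   # True
# Pigeons: 72 Hz falls below a ~100 Hz threshold, so they may see flashes.
print(appears_continuous(PROJECTOR_FLASH_HZ, PIGEON_CFF))  # False
```

Note that smooth *motion* and steady *light* are separate illusions: the shutter flashes each frame multiple times precisely so that the flash rate clears the human fusion threshold even though only 24 distinct images pass per second.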
One of the creepiest scenes in Hitchcock’s The Birds shows Melanie (Tippi Hedren) smoking on a bench in a school playground while birds are flocking on a jungle gym behind her. She finally spots a lone bird flying overhead and turns around to discover every rung of the jungle gym crowded with large black birds. Actually, Hitchcock used cardboard cut-outs for most of the “birds” on the jungle gym, figuring that most people would not notice these stationary objects if interspersed with live birds.
Birds in a school playground in Hitchcock’s The Birds (1963)
Indeed, the illusion works on most of us. We are also often tricked by illusory “crowds” in films–made of real people and dummies, or multiple images of the same people patched together to make a “crowd”. However, birds are especially observant of the movement of other birds–and, combined with the much faster ‘refresh rate’ of the avian visual system (their visual information is “updated” more frequently than ours)–the jungle gym scene would be unlikely to fool any birds.
Studies suggest that birds do perceive some information via video images (using video at 30 frames per second). For instance, a video of wild chickens feeding elicits feeding in birds of the same species (McQuoid & Galef, 1993); videos showing a hawk or raccoon elicit aerial and ground alarm calls respectively in roosters (Evans, Evans, and Marler, 1993); and video images of female pigeons elicit courtship displays in male pigeons (Shimizu, 1998).
So birds seem to pick up some information from video images, at a somewhat higher frame rate and screen-refresh rate than film–though color may be distorted (Wright & Cumming, 1971), and gaps in movement and flicker are likely perceived (Lea & Dittrich, 1999). These discrepancies would be much more pronounced for moving images on cinematic film.
A fine-tuned visual system gives birds of prey an advantage when pursuing a fast-moving target. And it allows pigeons those few extra seconds to peck at grubs and seeds–and flap away at the last moment possible when your car approaches.
The anniversaries of conflicts seem to be more likely to capture the public’s attention than any other significant commemorations. When I first began researching the nurses of the First World War in 2004, I was vaguely aware of an increase in media attention: now, ten years on, as my third book leaves the press, I find myself astonished by the level of interest in the subject. The Centenary of the First World War is becoming a significant cultural event. This time, though, much of the attention is focussed on the role of women, and, in particular, of nurses. The recent publication of several nurses’ diaries has increased the public’s fascination for the subject. A number of television programmes have already been aired. Most of these trace journeys of discovery by celebrity presenters, and are, therefore, somewhat quirky – if not rather random – in their content. The BBC’s project, World War One at Home, has aired numerous stories. I have been involved in some of these – as I have, also, in local projects, such as the impressive recreation of the ‘Stamford Military Hospital’ at Dunham Massey Hall, Cheshire.

Many local radio stories have brought to light the work of individuals whose extraordinary experiences and contributions would otherwise have remained hidden – women such as Kate Luard, sister-in-charge of a casualty clearing station during the Battle of Passchendaele; Margaret Maule, who nursed German prisoners-of-war in Dartford; and Elsie Knocker, a fully-trained nurse who established an aid post on the Belgian front lines. One radio story is particularly poignant: that of Clementina Addison, a British nurse, who served with the French Flag Nursing Corps – a unit of fully trained professionals working in French military field hospitals. Clementina cared for hundreds of wounded French ‘poilus’, and died of an unnamed infectious disease as a direct result of her work.
The BBC drama The Crimson Field was just one of a number of television programmes designed to capture the interest of viewers. I was one of the historical advisers to the series. I came ‘on board’ quite late in the process, and discovered just how difficult it is to transform real, historical events into engaging drama. Most of my work took place in the safety of my own office, where I commented on scripts. But I did spend one highly memorable – and pretty terrifying – week in a field in Wiltshire working with the team producing the first two episodes. Providing ‘authentic background detail’ while, at the same time, creating atmosphere and constructing characters who are both credible and interesting is fraught with difficulty for producers and directors. Since its release this spring, The Crimson Field has become quite controversial: whilst many people appear to have loved it, others complained vociferously about its lack of authentic detail. Of course, it is hard to reconcile the realities of history with the demands of popular drama.
I give talks about the nurses of the First World War, and often people come up to me to ask about The Crimson Field. Surprisingly often, their one objection is to the fact that the hospital and the nurses were ‘just too clean’. This makes me smile. In these days of contract-cleaners and hospital-acquired infection, we have forgotten the meticulous attention to detail the nurses of the past gave to the cleanliness of their wards. The depiction of cleanliness in the drama was, in fact one of its authentic details.
One of the events I remember most clearly from my work on set with The Crimson Field is the remarkable commitment of director David Evans and leading actor Hermione Norris in recreating a scene in which Matron Grace Carter enters a ward which is in chaos because a patient has become psychotic and is attacking a padre. The matron takes a sedative injection from a nurse, checks the medication and administers the drug with impeccable professionalism – and this all happens in the space of about three minutes. I remember the intensity of the discussions about how this scene would work, and how many times it was ‘shot’ on the day of filming. But I also remember with some chagrin how, the night after filming, I realised that the injection technique had not been performed entirely correctly. I had to tell David Evans that I had watched the whole sequence six times without noticing that a mistake had been made. Some historical adviser! The entire scene had to be re-filmed. The end result, though, is an impressive piece of hospital drama. Norris looks as though she has been giving intramuscular injections all her life. I shall never forget the professionalism of the director and actors on that set – nor their patience with the absent-minded professor who was their adviser for the week.
In a centenary year, it can be difficult to distinguish between myths and realities. We all want to know the ‘facts’ or the ‘truths’ about the First World War, but we also want to hear good stories – and it is all the better if those elide facts and enhance the drama of events – because, as human beings, we want to be entertained as well. The important thing, for me, is to fully realise what it is we are commemorating: the significance of the contributions and the enormity of the sacrifices made by our ancestors. Being honest to their memories is the only thing that really matters –the thing that makes all centenary commemoration projects worthwhile.
Image credit: Ministry of Information First World War Collection, from Imperial War Museum Archive. IWM Non Commercial Licence via Wikimedia Commons.
If you share my jealousy of Peter Capaldi and his new guise as the Doctor, then read on to discover how you could become the next Time Lord with a fondness for Earth. However, be warned: you can’t just pick up Matt Smith’s bow-tie from the floor, don Tom Baker’s scarf, and expect to save planet Earth every Saturday at peak viewing time. You’re going to need training. This is where Oxford’s online products can help you. Think of us as your very own Companion guiding you through the dimensions of time, only with a bit more sass. So jump aboard (yes it’s bigger on the inside), press that button over there, pull that lever thingy, and let’s journey through the five things you need to know to become the Doctor.
(1) Regeneration
Being called two-faced may not initially appeal to you. How about twelve-faced? No wait, don’t leave, come back! Part of the appeal of the Doctor is his ability to regenerate and assume many faces. Perhaps the most striking example of regeneration we have on our planet is the freshwater Hydra, a tiny polyp able to completely re-grow a severed head. Even more striking is its ability to grow more than one head if a small incision is made on its body. I don’t think it’s likely the BBC will commission a Doctor with two heads though, so best not to go down that route. Another example of animals capable of regeneration is Porifera, the sponges commonly seen on rocks under water. These creatures are able to regenerate whole body parts from small fragments, which is certainly impressive, but they are not quite as attractive as the David Tennants or Matt Smiths of this world.
(2) Fighting aliens
Although alien invasion narratives only crossed over to mainstream fiction after World War II, the Doctor has been fighting off alien invasions since the Dalek War and the subsequent destruction of Gallifrey. Alien invasion narratives are tied together by one salient issue: conquer or be conquered. Whether you are battling Weeping Angels or Cybermen, you must first make sure what you are battling is indeed an alien. Yes, that lady you meet every day at the bus-stop with the strange smell may appear to be from another dimension but it’s always better to be sure before you whip out your sonic screwdriver.
(3) Visiting unknown galaxies
The Hubble Ultra Deep Field, an image captured by the Hubble Space Telescope, covers a patch of sky that represents one thirteen-millionth of the area of the whole sky we see from Earth, and this tiny patch of the Universe contains over 10,000 galaxies. One thirteen-millionth of the sky is equivalent to the area covered by a grain of sand held at arm’s length against the sky. When we look at a galaxy ten billion light years away, we are actually seeing it by the light that left it ten billion years ago. Therefore, telescopes are akin to time machines.
The sheer vastness and mystery of the universe has baffled us for centuries. Doctor Who acts as a gatekeeper to the unknown, helping us imagine fantastical creatures such as the Daleks, all from the comfort of our living rooms.
(4) Operating the T.A.R.D.I.S.
The majority of time-travel narratives avoid the use of a physical time-machine. However, the Tardis, a blue police telephone box, journeys through time dimensions and is as important to the plot of Doctor Who as upgrades are to Cybermen. Although it looks like a plain old police telephone box, it has been known to withstand meteorite bombardment, shield itself from laser gun fire and traverse the time vortex all in one episode. The Tardis’s most striking characteristic, that it is “much bigger on the inside”, is explained by the Fourth Doctor, Tom Baker, by using the analogy of the tesseract.
(5) Looking good
It’s all very well saving the Universe every week but what use is that without a signature look? Tom Baker had the scarf, Peter Davison had the pin-stripes, John Hurt even had the brooding frown, so what will your dress-sense say about you? Perhaps you could be the Doctor with a cravat or the time-traveller with a toupee? Whatever your choice, I’m sure you’ll pull it off, you handsome devil you.
Don’t forget a good sense of humour to complement your dashing visage. When Doctor Who was created by Donald Wilson and C.E. Webber in November 1963, the target audience of the show was eight-to-thirteen-year-olds watching as part of a family group on Saturday afternoons. In 2014, it has a worldwide general audience of all ages, claiming over 77 million viewers in the UK, Australia, and the United States. This is largely due to the Doctor’s quick quips and mix of adult and childish humour.
You’ve done it! You’ve conquered the Cybermen, exterminated the Daleks, and saved Earth (we’re eternally grateful, of course). Why not take the Tardis for another spin and adventure through more of Oxford’s online products?
Image credit: Doctor Who poster, by Doctor Who Spoilers. CC-BY-SA-2.0 via Flickr.
This month marks the 50th anniversary of Disney’s beloved film Mary Poppins, starring the legendary Julie Andrews. Although Andrews was only twenty-nine at the time of the film’s release, she had already established herself as a formidable star with numerous credits to her name and performances opposite Richard Burton, Rex Harrison, and other leading actors of the era. Mary Poppins would earn Andrews an Academy Award for Best Actress and serve as a milestone in a career that continues today. Here are some of our favorite songs from Andrews’s illustrious career.
“I Could Have Danced All Night”
Andrews belted out this song in the 1956 Broadway performance of My Fair Lady. Andrews proved her singing capabilities playing Eliza Doolittle opposite Rex Harrison as Professor Higgins, although she was replaced in the film version (with Audrey Hepburn acting and Marni Nixon dubbing).
“Camelot”
Andrews performed the play’s title track during its 1960 performance on Broadway. The actress played Queen Guenevere – a title she was apparently comfortable with, later playing Queen Renaldi in Disney’s Princess Diaries – opposite Richard Burton as King Arthur.
“Impossible; It’s Possible”
Starring in another royal role, Andrews played the title character in CBS’ 1957 production of Cinderella, written by Richard Rodgers and Oscar Hammerstein.
“Supercalifragilisticexpialidocious”
People are still reciting this tongue twister performed by Andrews in Disney’s 1964 hit film Mary Poppins. In addition to earning her an Oscar, Andrews’ role as the angelic English Nanny cemented her name in silver screen history.
“My Favorite Things”
Hot on the heels of her success from Mary Poppins, Andrews starred as Maria von Trapp in The Sound of Music, expanding her international fame and branding herself as a singer to be reckoned with in Hollywood and on Broadway.
Tragedies certainly aren’t the most popular types of performances these days. When you hear a film is a tragedy, you might think “outdated Ancient Greek genre, no thanks!” Back in those times, Athenians thought it their civic duty to attend tragic performances of dramas like Antigone or Agamemnon. Were they on to something that we have lost in contemporary Western society? Is there something specifically valuable in a tragic performance that a spectator doesn’t get from other types of performances, such as those of our modern genres of comedy, farce, and melodrama?
Since films reach a greater audience in our culture than plays, after updating Aristotle’s Poetics for the twenty-first century, we analyzed what we call “cinematic tragedies”: films that demonstrate the key components of Aristotelian tragedy. We conclude that a tragedy must consist in the representation of an action that is: (1) complete; (2) serious; (3) probable; (4) has universal significance; (5) involves a reversal of fortune (from good to bad); (6) includes recognition (a change in epistemic state from ignorance to knowledge); (7) includes a specific kind of irrevocable suffering (in the form of death, agony or a terrible wound); (8) has a protagonist who is capable of arousing compassion; and (9) is performed by actors. The effects of the tragedy must include: (10) the arousal in the spectator of pity and fear; and (11) a resolution of pity and fear that is internal to the experience of the drama.
Unlike melodrama (which we hold is the most common film genre), tragedy calls on spectators to ponder thorny moral issues and to navigate them with their own moral compass. One such cinematic tragedy — Into The Wild, 2007, directed by Sean Penn — thematizes the preciousness and precariousness of human life alongside environmental problems, raising questions about human beings’ apparent inability to live on earth without despoiling the beauty and integrity of the biosphere. Other cinematic tragedies deal with a variety of problems with which our modern societies must grapple.
One such topic is illegal immigration, a highly politicized issue that is far more complex than national governments seem equipped to handle, especially beyond the powers of the two parties in the American system. Cinematic tragedies that deal with this issue have been produced over several decades involving immigration into various Western countries, especially the United States; these include Black Girl (France, 1966), El norte (US/UK, 1983), and Sin nombre (Mexico, 2009), the last of which we will expand on here.
In US director Cary Fukunaga’s Sin nombre (which means “Nameless” but which was released in the United States under the Spanish title), Hondurans escaping from their harsh political and economic realities risk their lives in order to make it to the United States, through Mexico, on the tops of rail cars. They travel in this manner because there is no legal route by which most of these foreign citizens could come to the United States. Over the course of the journey, the immigrants endure terrible suffering or die at the hands of gang members who rob, rape, and even kill some of them.
The film focuses on just a few of the multitudes atop the trains: on a teenage Honduran girl, Sayra, migrating with her father and uncle; and on a few of the gang members. One of them, Casper, has had a change of heart and is no longer loyal to the gang after its leader tried to rape, and then killed, Casper’s girlfriend. Casper and other gang members are atop the train robbing the migrants, but he defends Sayra by killing the leader when he tries to rape her. Ultimately, Sayra will arrive in the United States. However, she realizes that the cost has been too great—her father has died falling off of the train, and she has lost Casper, who is, ironically, shot to death by the pre-pubescent boy whom he himself had trained in the ways of the gang in the opening scenes of the film.
The tremendous losses, and the scenes of suffering, rape, and murder, make unlikely the possibility that the spectator will feel that Sayra’s arrival constitutes a happy ending. In some other aesthetic treatment, Casper’s ultimate death might have been melodramatized as redemptive selflessness for the sake of his new girlfriend. But in Fukunaga’s film, the juxtaposed images imply a continuing cycle of despair and death: Casper’s young killer in Mexico is promoted up the ranks of the gang with a new tattoo, while Sayra’s uncle, back in Honduras after being deported from Mexico, starts the voyage to the United States all over again. Sayra too may face deportation in the future. Following the scene of the reinvigoration of the criminal gang system, as its new young leader gets his first tattoo, the viewer sees Sayra outside a shopping mall in the American southwest. The teenage girl has arrived in the United States and may aspire to participate in advanced consumer capitalism, yet she has lost so much and suffered so undeservingly.
This aesthetic juxtaposition prompts the spectator to attend to the failure of Western political leaders to create a humane system of immigration for the twenty-first century, one which cannot be reached with the entrenched politicized views of the “two sides of the aisle” who miss the human story of immigrants’ plight. This film—like all tragedies—promotes the spectator’s active pondering, that is, it challenges them to respond in some way.
In the tradition of philosophers as various as Aristotle, Seneca, Schopenhauer, Nietzsche, Martha Nussbaum, and Bernard Williams, we find that tragedies bring to conscious awareness the most significant moral, social, political, and existential problems of the human condition. A film such as Sin nombre, through its tragic performance, points to one of these terrible necessities with which our contemporary Western culture must grapple. While it doesn’t offer an answer, this cinematic tragedy prompts us to recognize and deal with a seemingly intractable problem that needs to move beyond the current impasse of political debate, as we in the industrialized nations continue to shop for and watch movies in the comfort of our malls.
Today, 5 October, we celebrate James Bond Day, and this year has been a great one for 007. In January, both song and score for Skyfall won Grammys, and 18 September marked the 50th anniversary of the general release of the film Goldfinger in UK cinemas. Shirley Bassey’s extraordinary rendition of the title song played a key role in its success. In these extracts from The Music of James Bond, Jon Burlingame recounts the stories behind some of the great title songs.
More significantly, the public seemed to be paying equal attention to Goldfinger’s bold, brassy Barry score. “The musical soundtrack is slickly furnished by John Barry, who also composed the title song,” noted Variety’s film critic; its music critic later praised the album as “the strongest Bond film score to date.” In the United Kingdom, the soundtrack album made the charts on October 31 and reached number 14. But in America, it appeared on December 12 and rocketed up the charts, reaching number 1 on March 20, 1965. It edged out the Mary Poppins soundtrack (which in turn had displaced Beatles ’65 at the top) and remained the most popular album in America for three weeks.
Goldfinger would be the only Bond soundtrack album to reach the top of the charts. Barry was nominated for a Grammy Award, and although there was no Oscar attention—for Barry, that would come later, and not for James Bond—there was the satisfaction of worldwide commercial success. United Artists Records released Barry’s driving rock instrumental of Goldfinger (with Flick on guitar) and, a few months later, an LP titled John Barry Plays Goldfinger (a compilation of his arrangements from the first three Bond films plus a handful of easy-listening tunes).
The whole song was written over a mid-September weekend. And Welsh-born singer Tom Jones, an old friend of Black’s who had already had two top-10 hits earlier that year (“It’s Not Unusual” and “What’s New Pussycat?”), quickly agreed to sing it. Black liked his “steely, manly voice.” Britain’s New Musical Express announced Jones’s signing on September 24, and they went into the studio on October 11 to lay down the track.
“I was thrilled to bits when they asked me to do Thunderball,” Jones remembered many years later. “There was a connection, because Les Reed, who wrote a lot of my big songs, was John Barry’s pianist. The most memorable thing about the session was hitting that note at the end. John told me to hold on to this very high note for as long as possible.” Jones’s now-legendary final note lasts nine full seconds, and in the isolated vocal recording he can be heard running out of breath, although that last part is buried in the final mix with the orchestra. “I closed my eyes, hit the note and held on,” Jones said on another occasion. “When I opened my eyes the room was spinning. I had to grab hold of the booth I was in to steady myself. If I hadn’t, I would not have passed out, but maybe fallen down. But it paid off, because it is a long note and it’s high.”
Diamonds Are Forever
Eighteen years earlier, Marilyn Monroe had sung “Diamonds Are a Girl’s Best Friend” to iconic status in Gentlemen Prefer Blondes. Black’s words would make a Bond song equally famous. “Diamonds Are Forever” is more about fleeting relationships and less about the permanence of those shiny jewels that are often the remnant of a love affair—although one phrase in particular would result in the song becoming slightly infamous, and possibly costing it an Academy Award nomination.
It’s in the second verse: “hold one up and then caress it / touch it, stroke it and undress it.” “Seediness was what we wanted,” Black would later explain. “Sleaziness, theatrical vulgarity. It had to be over the top.” Or, as Barry himself would reveal in numerous interviews 20 years later, that particular verse was more about male genitalia than about precious stones: “Write it as though she’s thinking about a penis,” had been Barry’s advice to Black.
Williams met with Sinatra and his longtime aide “Sarge” Weiss at Sinatra’s office on the old General Services lot in Hollywood. “The amazing thing is, there was nothing there to play the demo on,” Williams recalled. “Sarge finally came up with a rusty old portable radio with a cassette player, mono, salty from the beach. And that’s what Frank heard the song on. And he loved it. ‘Marvelous, Mr. Paulie, marvelous.’ This from Music Royalty to me, and I was thrilled,” Williams said.
Sinatra opened a briefcase, which contained his datebook (and a .38, Williams noted), and they discussed possible dates for recording. “I left his office walking on air. We were all delighted. Then Frank was out. I don’t know what happened but, I was told at the time, Cubby and Frank had a big fight and he was history.”
No one remembers for certain why Sinatra ultimately declined to sing “Moonraker.” It may be that he had second thoughts, or that his ambitious Trilogy album was already in preparation and he preferred to concentrate on that. The story of a falling-out between Sinatra and Broccoli may be apocryphal, because Frank and Barbara Sinatra were all smiles at the New York premiere of Moonraker on June 28.
The final honors to come their way were the Grammy Awards, nearly a year later because of the later eligibility period of the National Academy of Recording Arts and Sciences. Both song and score were nominated and, on January 26, 2014, both won. Newman was present to accept his award. Skyfall had been a worldwide sensation: it became the highest-grossing film ever in Great Britain, taking in over £94 million in just six weeks. It eventually earned more than $304 million in the U.S. to rank as the fourth highest-grossing film of 2012. Its final worldwide box-office tally of $1.1 billion propelled it to the no. 8 spot among all-time box-office leaders.
Its title song had become the first Bond music ever to win an Academy Award, its score only the second ever nominated. By the end of 2013, the Adele single had gone platinum, selling over 2 million units, while Newman’s score album had sold over 30,000. Sam Mendes was signed to direct the next Bond film, set for release in October 2015. Bond, and Bond music, was bigger than ever.
As part of the Oral History Association conference, we asked Abbie Reese to write about her film-in-progress, which evolved in parallel to her book, Dedicated to God: An Oral History of Cloistered Nuns. This summer, Abbie was awarded a grant by Harvard University’s Schlesinger Library to conduct follow-up interviews with a half dozen women she began interviewing more than five years ago — women contemplating religious life. Abbie is preparing for post-production of a collaborative film made with and focused on a young woman in the process of becoming a cloistered contemplative nun.
Recently, a journalist asked me how I convinced the Poor Clare Colettine nuns, back in 2005, to let me write a book about their lives, and how I convinced them to help me in that endeavor. I explained that was not my approach. I asked the Mother Abbess if I could undertake a long-term project about their lives; I said that although I did not know the outcome, I would keep the community apprised.
At that time, I wanted to understand: What compels a young woman to make this radical departure to a cloistered monastery? I believed that there was value in the stories, perspectives, and memories of women who remove themselves from the world to pray for humanity — to become mothers of souls and saints on earth.
About the same time that I began to engage with the Poor Clare Colettine nuns in oral history interviews, I began interviewing young women around the United States in the process of “discernment.” Each was contemplating whether she had been called to a religious vocation.
I arranged to meet “Heather” in 2005. We met at her dorm at Elmhurst College in the suburbs of Chicago, and then we met up again a few hours later at the Corpus Christi Monastery in Rockford where she would stay overnight for the first time. (She stayed in an area outside the enclosure and visited with the Mother Abbess and the Novice Mistress, separated by the metal grille.)
Heather and I met over the years; I interviewed her as she maintained hope that she would join a cloistered order. Her parents required her to finish college first, and then she dealt with school debt as she struggled to find a job.
In 2011, I met Heather and her family at the monastery when she was delivered there. I continued to conduct oral history interviews, and I was allowed to enter the enclosure to record video footage. At that time, I was enrolled in an MFA program in visual arts at the University of Chicago. I had sensed even before she joined the Poor Clares that Heather was hesitant in our interviews. I wasn’t sure of the reason: her uncertainty, not knowing whether she truly had been called to cloistered contemplative life; the familial opposition that led her to talk less about the prospect of a religious vocation; or the possibility that she was not as articulate verbally as she was sophisticated visually. (She was a painter and studied graphic design.) Her blog posts, by contrast, had an open tone.
An expatriate, Heather has made the exodus from mainstream society. A year after entering the monastery, Heather became “Sister Amata” in the Clothing Ceremony. (She chose both aliases to reflect and preserve the Poor Clare value of anonymity.) As she slowly integrates, Sister Amata is governed by a schedule that determines when she prays, sleeps, eats, and works, while she learns the expectations and the culture. Sister Amata continues the six-year formation process as she transitions into a new social role and new identity as a member of a community following an 800-year-old rule.
The enclosure is an intermediary space. The Poor Clare Colettine nuns intercede between humanity and an unseen realm; they believe their prayers and penances can change the course of history. Like the Poor Clares, Sister Amata inhabits a threshold — a space between worlds.
A contemporary practice that depends upon social contracts and long-term relationships is a complicated endeavor; representing others and representing otherness are problematic territories, following an imperialistic tradition of exploiting native resources. As in Bronislaw Malinowski’s model, boundaries between insider and outsider collapse, and the notion of “the outsider” slips. This hybrid of genres has probably sustained my focus and dedication because I find it challenging and nuanced.
To enact co-authorship and shared authority, to remove myself as the mediator holding the camera and the microphone, I obtained permission to lend Sister Amata a video camera. In essence, I chose Sister Amata as the cinematographer. I asked her to use the camera as if it were eyes encountering her world. I made three requests: document the daily rhythms of prayer, meals, and manual labor within the monastery’s rich material culture; record impressionistic moving images that place primacy on the visual over the discursive; and turn the camera upon herself to make video diaries of her impressions and motivations and experiences as she assimilates into the community.
Even though I was not physically present, my relationship with Sister Amata is embedded in the visual dialogue that transpired; the history of our engagement since 2005 fed the new film endeavor. Sister Amata’s video diaries are raw, sincere, and vulnerable. The nature of this as an exchange is evident when she addresses me directly.
The nuns gave me all of their documentation and I agreed to give them copies of it, as well. I met with Sister Amata and her novice mistress, “Sister Nicolette,” to download the digital files, to look at footage and to discuss it with them. I made additional requests.
Because of other nuns’ interest in contributing documentation, I lent a second camera. (One older nun constructed enactments of monastic life, instructing fellow nuns on what to do, and when.) I also recorded video footage inside the enclosure and my interviews with the nuns.
I am now working on post-production of a feature-length film that will be released theatrically. This project in-progress embeds the negotiations of a para-ethnographic, collaborative documentary:
How do we pursue our inquiry when our subjects are themselves engaged in intellectual labors that resemble approximately or are entirely indistinguishable from our own methodological practices?
Para-ethnography answers this question by proposing an analytical relationship in which we and our subjects — keenly reflexive subjects — can experiment collaboratively with the conventions of ethnographic enquiry. This methodological stance demands that we treat our subjects as epistemic partners who are not merely informing our research but who participate in shaping its theoretical agendas and its methodological exigencies. (Holmes, Douglas R. and George E. Marcus. “Para-Ethnography.” Ed. Lisa M. Given. The SAGE Encyclopedia of Qualitative Research Methods. Thousand Oaks, Calif.: SAGE Publications, Inc., 2008. Page 595.)
Film-making addresses some of the questions and interests that drive my practice. In giving Sister Amata and the other nuns the video cameras, I ceded to them the selection and composition of what was recorded, essentially the same dynamic as in my other interactions with them. Enunciating our “visual dialogue,” the video cameras are seen crossing the threshold into the “Jesus cage,” passing between slats in the metal grille separating the monastery from our world. Through this exchange, the viewer will be granted Sister Amata’s vantage point: her painterly eye and the risks she has taken.
Once, a documentary film professor at the University of Chicago described her own work with a tribe in Alaska; she said that just as she chose to work with the tribe, they chose her. This professor said the same was true of my work — just as I chose to work with the nuns, they chose me. The title, Chosen, also reflects the nuns’ belief that God has chosen them for this ancient rule and demanding life.
Featured image: Poor Clare Colettine nuns return to the monastery after a funeral service on the premises, in 2010, for a cloistered nun who served in WWII. Courtesy of Abbie Reese.
This summer saw the release of Hercules (Radical Studios, dir. Brett Ratner). Dwayne “The Rock” Johnson took his place in the long line of strongmen to portray Greece’s most enduring icon. It was a lot of fun, and you should go see it. But, as one might expect from a Hollywood piece, the film takes a revisionist approach to the world of Greek myth, especially to its titular hero. A man of enormous sexual appetite, sacker of cities, and murderer of his own family, Hercules is sanitized here into a seeker of justice, characterized by his humanity and humility. And it is once again Hercules, not Heracles: the Romanized version loses the irony of the Greek, “Glory of Hera.”
This is neither the Hercules of ancient myth, nor is it the Hercules of Steve Moore’s graphic novel, Hercules: The Thracian Wars (Radical Comics, 2008), on which the film is loosely based. It is perhaps not surprising, then, that Moore fought to have his name removed from the project, at least according to his long-time friend Alan Moore. Steve Moore died earlier this year, and buried deep in the film’s closing credits is a dedication in his memory.
When he wrote his comic, Moore strove to fit his story into the world of Greek myth in a “realistic” way. Though the story (and that of its sequel, The Knives of Kush) is original, the characters and setting are consistent with the pseudo-historic Bronze Age of Greek legend. The film jettisons much of this careful integration for little narrative gain. I am never opposed to revisions to the myth (myth, after all, can be defined by its malleability), but why, for instance, set the opening of the film in Macedonia in 358 BCE instead of around 1200? It adds nothing to the story, but it confuses anyone with even a passing knowledge of Greek history — our heroes should be rubbing elbows with Philip II of Macedon, Alexander the Great’s father. The answer to this question, I suspect, is a sort of Wikipedial historicity: Hercules and his companions are hired by a fictional King Cotys, a name chosen by Moore as suitably Thracian — and there was a historical Cotys in 358.
The Thracian Wars is set well after Hercules has completed his twelve labors: in the loose chronology of Greek myth, we are somewhere between the Calydonian Boar Hunt and the battle of the Seven Against Thebes. Hercules arrives in Thrace as a mercenary, along with his companions Iolaus, Tydeus, Autolycus, Amphiarus, Atalanta, Meleager, and Meneus, the only character made up by Moore. (The Hollywood film production jettisons those characters who might have LGBT overtones: Meneus is Hercules’s male lover, and Meleager is constantly frustrated by and therefore exposes Atalanta’s lesbianism.) Though no story of Greek myth involves all these characters, they all belong to roughly the same generation — the generation before the Trojan War. These characters could have interacted in untold stories.
But they don’t interact well. As Moore notes in the afterword to the trade paperback, “Hercules was a murderer, a rapist, a womanizer, subject to catastrophic rages and plainly bisexual…I wouldn’t have wanted to spend much time in his company.” The rest of the band is not much better. Where the film presents a band of brothers, faithful to each other to the death, in the comic these characters loathe each other and are clearly bound not by love of each other but the need to earn a living. They are mercenaries, with little interest in the morality of their actions.
Legendary Greece, then, is without a moral center. Violence and bloodshed are never far away. Sexual activity is fueled only by deceit or lust. The Greek characters speak of their Thracian surroundings as barbaric, but we are never shown any better. The art of the comic articulates this grim reality. Eyes are frequently lost in shadow, for instance, dehumanizing the characters further. Throughout, artist Admira Wijaya deploys a somber color palette of greys, browns, and muted reds to convey a bleak world.
This, then, is the great disconnect of Greek myth with the modern world. In our times, our heroes of popular culture must be morally pure; only black and white values can be understood. So-called “anti-heroes” are occasionally tolerated in marginal media, but even here their transgressions are typically mitigated somehow (think of the recent television series Dexter, in which the serial killer is validated by his targeting of other serial killers — the real bad guys). The heroes of Greek legend — the word “hero” itself only denoted those who performed memorable or noteworthy deeds, without a moral element — often existed solely because they were transgressors. Tantalus, Oedipus, Orestes: their stories are of broken taboos, stories of cannibalism, incest, kin-slaying. Later authors may have complicated their stories, but violation is at the core of their being.
Sure, the common people of ancient Greece benefited from Hercules’s actions as a slayer of monsters, but none of his actions were motivated by altruism. Rather, it was shame at best that moved him: in most tellings, his famous twelve labors were penance for the death of his family at his own hands. Many of his other deeds were motivated by hunger, lust, or just boredom. In the film, Johnson’s Hercules finds a sort of absolution for his past crimes. In the comic, redemption is not an objective; in fact, Hercules doesn’t even seem to recognize the concept.
Hercules is a figure of strength and power, a conqueror of the unknown, a slayer of dragons (and giant boars and lions). The Hercules of Hollywood shows us strength. The Hercules of myth — and of Moore’s comic — shows us the consequences of that strength when it’s not carefully contained. There is a primal energy there, a reflection of that part of our souls that is fascinated with, even desires, transgression. As healthy, moral humans, most of us conquer that fascination. But myth is our reminder that it always, always bears watching. Hollywood isn’t going to help you do that.
Featured image: An engraving from The Labours of Hercules by Hans Sebald Beham, c. 1545. Public domain via Wikimedia Commons.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short-lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union, and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection and of categorical uncertainty is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love?” By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
What is jihad? What do fundamentalists want? How will moderate Islamists react? These are questions that should be discussed. We may not have easy answers, but if we don’t start a dialogue, we may miss an opportunity to curtail horror.
The film Timbuktu from African director Abderrahmane Sissako about his native country serves as a needed point of departure for discussion — in government, in schools, in boardrooms, and in families.
Jihadism and terrorism are the 21st century’s “-isms,” following the horrors of fascism and communism. In hindsight, we wonder if we could have prevented the horrors of the 20th century. The devastating results have taught us that people do not want war; they want to live and work in peace. Should we not learn from history’s mistakes and prevent future genocides?
In the name of jihad, innocent victims are beheaded, kidnapped, raped, tortured, terrorized, left without families, and left without homes. Extremist Muslims wage war against Christians and Jews, and against other Muslims (Sunnis vs. Shiites). Havoc is occurring in Syria, Iraq, Lebanon, Gaza, the West Bank, Mali, Sudan, and elsewhere. It may soon take hold in our cities, where jihadists threaten to set up terrorist cells.
Powerful and courageous, Timbuktu mesmerizes us with its blend of colors and music amidst a gentle background of sand dunes. Yet, juxtaposed to the serene beauty of Mali’s nature is the ferocious narrative of men turned into animals, forcing their machine guns on the quiet people of Timbuktu. We bear witness to the atrocious acts of barbarism.
Basing his film on a true story, the jihadist takeover of northern Mali in 2012, Sissako gives us a mosaic of characters who represent multi-cultural Africa. The camera takes us directly into their tragedies using a cause-and-effect structure:
We see a fisherwoman who refuses to wear a veil and gloves, for how would she be able to see or pick up the fish she must sell? Her rebellion, despite her mother’s pleas and the jihadist threats, is frightening.
Several friends play the guitar and sing together in the quiet of their home. The result? They are arrested and stoned to death.
A boy has a soccer ball, and accidentally the ball rolls down steps and through sand dunes to fall in front of several jihadists. The punishment? 40 lashes.
A caring man defends his young shepherd when their cow is killed. The outcome? A fight and the destruction of a family.
The leader of the community, the imam, tells several jihadists to leave the mosque with their guns and boots. People are praying. He warns them that Allah does not want destruction or terror. We fear the imam’s end.
These characters are not abstract; they are real victims. We follow their story, care for them, empathize with their pride, and suffer with their courage.
The contrast between good and evil, beauty and terror, is presented in alternating scenes and plays havoc with our emotions. Sometimes we want to close our eyes as the evil becomes unbearable; we fear what horror will follow.
Sissako is a master storyteller and painter of landscape. His color palette holds our eyes as our hearts cringe at the story. Beautiful moments linger amidst savage reality. We see ballet in the scene in which a dozen young men play soccer without a soccer ball. How graceful their athletic movements, and how deep their pleasure. We are mesmerized, and at the same time, we are panicked to think what the next scene will bring. The film’s power comes from its majestic beauty – a beauty that we fear cannot coexist with the evil we are watching.
Sissako parallels the opening scene with the final scene. The film begins by showing an elegant deer running through the soft dunes. It ends with the same scene, but the animal is replaced by the twelve-year-old heroine, who runs desperately through the same dunes as she tries to escape her tragic reality. Sissako’s circle is a vicious cycle with no end to crimes against humanity.
Timbuktu is a difficult film to watch because it depicts a possible future that no one wants to see: genocide. All the more reason to see this film now.
From eighteenth century Gothic novels to contemporary popular culture, the tropes and sacred culture of Catholicism endure as themes in entertainment. OUP author Diana Walsh Pasulka sat down with The Conjuring (2013) screenwriters Chad Hayes and Carey Hayes to discuss their cinematic focus on “the Catholic Supernatural” and the enduring appeal of Catholic culture to moviegoers.
Diana Walsh Pasulka: Your recent movie The Conjuring was financially very successful and is the third-highest-grossing horror film about the supernatural, behind only The Exorcist (1973) and The Sixth Sense (1999). Each of these films engages Catholic themes, and more specifically, the supernatural. The Conjuring, of course, is based on the lives of Catholics Ed and Lorraine Warren. What is it about Catholic culture that you think resonates with audiences?
Carey Hayes: Catholic culture is global. It also has a long history that almost everyone in the West identifies with on some level. Medieval cathedrals, priests in black robes and white collars, nuns in habits: in many ways these visuals are like shorthand or code, and audiences understand them. For example, take the movie The Exorcist. When it is apparent in the movie that the little girl is possessed by evil, they call in the priest. The priest, with his identifiable clothing, his crucifix, and his holy water, is the visual representation of the antidote to evil. Of course it doesn’t hurt that authors and filmmakers have used these themes over and over again, and this adds to the recognizable effects. The more we see elements of Catholic culture used in visual culture this way, the more we understand what they mean.
Diana Walsh Pasulka: That’s interesting. The meaning of these tropes, then, can take on a second life, of sorts, in popular culture. Non-Catholic audiences might equate what they see about Catholicism in the movies with lived Catholic practice.
Chad Hayes: That could be the case, of course, but in our experience we’ve had only positive reinforcement from Catholics. When we promoted The Conjuring in San Francisco a Catholic priest approached me and said “Thank you for getting it right.” That one comment was one of the best compliments I’ve received about the movie. We were also interviewed for U.S. Catholic, and they were very positive.
Diana Walsh Pasulka: A few years ago, Carey, you coined the term “The Religious Supernatural” to differentiate what you were doing from other screenwriters who wrote movies about the supernatural. Why designate it “religious?”
Carey Hayes: I coined the term to identify a certain framework, and, I suppose, to suggest a history. Today there is a lot of focus in popular culture on the supernatural or the paranormal. It is almost all secular. In the past, the supernatural and paranormal occurred within a worldview that allowed for the supernatural but within a religious framework. People had tools like prayers to deal with the supernatural, which, you have to admit, is scary. We wanted, in our movies, to return to that. We thought that, in many ways, religion deals with the big questions, and the supernatural is usually a scary thing that interrupts daily life and causes people to think about the big questions. So, we wanted to pair the two, religion and the supernatural, and remind audiences that this is, ultimately, what scary movies are about: ultimate questions about life.
Diana Walsh Pasulka: Are you ever frightened by what you write about?
Chad Hayes: We’re not afraid when we write and produce movies about the supernatural. But our research frightens us!
Carey Hayes: Right! It is frightening because some of this is supposed to be true, or based on events that are true.
Diana Walsh Pasulka: I wondered about that. Part of the appeal of your movies, and of other movies like them such as The Exorcist, is that they play on the ambiguity of fiction and non-fiction, or the realism of your subject. The Blair Witch Project (1999) is a great example of the play on realism. The movie was presented as recovered footage of an actual university student project. I was in Berkeley, California for the pre-release of that movie, and I couldn’t get tickets for three days because the lines outside of the theaters were so long. When I finally got to see the movie, members of the audience were wondering: is this real? Of course, we knew that it wasn’t, but we were also intrigued that it was presented as real. That definitely contributed to its popularity. The marketing campaign for that movie was unique at the time, too, in that it emphasized the question of the potential realism of the movie.
Chad Hayes: We purposely look for stories that are based on true events. We do that for this very reason: because people can relate. They can Google the story and see that maybe it’s folklore, or maybe it’s real, but it is out there and is an experience other people have had. So that contributes, no doubt, to the scare factor.
Diana Walsh Pasulka: Do you think this also has something to do with the appeal of the Catholic aesthetic, like the use of real Catholic sacred objects — the sacramentals, the crucifix, and the robes of the priests?
Chad Hayes: Absolutely. Ed and Lorraine Warren are practicing Catholics. Ed has passed away, but Lorraine still attends a Catholic Mass almost every day. That part of The Conjuring is based on her real Catholic practice. We were in contact with Lorraine throughout the writing of the movie, and we included the objects that she and Ed actually used, like the sacramentals, the blessed objects, and holy water. My Catholic friends tell me that most Catholics don’t use these objects in their daily lives, but then they aren’t exorcizing demons, are they?