Blog Posts Tagged with: this day in history
Today marks the forty-sixth anniversary of Prince Charles’s formal investiture as Prince of Wales. At the time of this investiture, Charles himself was just shy of his twenty-first birthday, and in a video clip from that year, the young prince looks lean and fresh-faced in his suit, his elbows resting on his knees, his hands clasping and unclasping as he speaks to the importance of the investiture.
This year marks the fiftieth anniversary of the congressional passage of the Hart-Celler Immigration and Nationality Act, signed into law by President Lyndon B. Johnson. It was the culmination of a trend toward reforming immigrant admissions and naturalization policies that had gathered momentum in the early years of the Cold War era.
On this day in 1863, General Thomas J. “Stonewall” Jackson, one of the wiliest military commanders this country ever produced, died eight days after being shot by his own men. He had lost a massive amount of blood before having his left arm amputated by Dr. Hunter Holmes McGuire, arguably the most celebrated Civil War surgeon of either side.
Of the many controversies surrounding the life and legacy of Christopher Columbus, who died on this day 510 years ago, one of the most intriguing but least discussed questions is his true country of origin. For reasons lost in time, Columbus has been identified with unquestioned consistency as an Italian of humble beginnings from the Republic of Genoa. Yet in the more than 536 surviving pages of his letters and documents, not once does the famous explorer claim to have come from Genoa.
On this day in 1953, the New Zealand mountaineer Edmund Hillary and Nepali-Indian Sherpa mountaineer Tenzing Norgay became the first people to reach the summit of Mount Everest. In the following excerpt from his book, Exploration: A Very Short Introduction (OUP, 2015), Stewart A. Weaver discusses why we, as humans, want to explore and discover. For all the different forms it takes in different historical periods, for all the worthy and unworthy motives that lie behind it, exploration, travel for the sake of discovery and adventure, seems to be a human compulsion.
On June 21, Mt. Zion United Methodist Church in Philadelphia, Mississippi will hold its fifty-first memorial service for three young civil rights workers murdered by the Ku Klux Klan at the start of the Freedom Summer. Andrew Goodman, James Chaney, and Michael Schwerner were activists who planned to create a voting rights school at the church, located in rural Neshoba County.
Two hundred years ago American and British delegates signed a treaty in the Flemish town of Ghent to end a two-and-a-half-year conflict between the former colonies and mother country. Overshadowed by the American Revolution and Napoleonic Wars in the two nations’ historical memories, the War of 1812 has been somewhat rehabilitated during its bicentennial. Yet arguing for the importance of a status quo antebellum treaty that concluded a war in which neither belligerent achieved its war aims, no territory was exchanged, and no victor formally declared can be a tough sell. Compared to the final defeat of Napoleon at the Battle of Waterloo, fought just a few months later and forty-odd miles down the road from Ghent, the end of the War of 1812 admittedly lacked cinema-worthy drama.
But the Treaty of Ghent mattered enormously (and not just to historians interested in the War of 1812). The war it ended saw relatively light casualties, measured in the thousands compared to the millions who died in the French Revolutionary and Napoleonic Wars that raged across the rest of the globe. Nevertheless, for the indigenous and colonizing peoples that inhabited the borderlands surrounding the United States, the conflict had proved devastating. Because the American and British economies were intertwined, the war had also wreaked havoc on American agriculture and British manufacturing, and the two sides had wrecked each other’s merchant fleets. Moreover, public support for the war in the British Empire and the United States had been lukewarm, with plenty of outspoken opponents who had worked tirelessly first to prevent the war and then to end it quickly.
Not surprisingly, peace resulted in widespread celebration across the Atlantic. The Leeds Mercury, many of whose readers were connected to the manufacturing industries that had relied on American markets, even compared the news with that of the Biblical account of the angelic chorus’s announcement of the birth of Jesus: “This Country, thanks to the good Providence of God, is now at Peace with Europe, with America, and with the World. . . . There is at length ‘Peace on Earth,’ and we trust the revival of ‘Good-will among men’ will quickly follow the close of national hostilities.” When the treaty reached Washington for ratification, President James Madison and Congress fell over themselves in a rush to sign it.
Far more interesting than what the relatively brief Treaty of Ghent includes is what was left out. When the British delegation arrived at Ghent in August 1814, they had every possible advantage. Britain had won the naval war, the United States was on the brink of bankruptcy, and the end of Britain’s war with France meant that hardened veterans were being deployed for an imminent invasion of the United States. Later that month British troops would humiliatingly burn Washington. Even Ghent itself offered a home-field advantage: the town was occupied by British troops and lay within a couple of days’ communication of ministers in London.
In consequence, Britain’s initial demands were severe. If the United States wanted peace, it had to cede 250,000 square miles of its northwestern lands (amounting to more than 15% of US territory, including all or parts of the modern states of Michigan, Illinois, Indiana, Ohio, Missouri, Iowa, Wisconsin, and Minnesota). These lands would be used to create an independent American Indian state—promises of which the British had used to recruit wary Indian allies. Britain also demanded a new border for Canada, which included the southern shores of the Great Lakes and a chunk of British-occupied Maine—changes that would have given Canada considerable natural defenses. The Americans, claimed the British, were “aggrandizers”, and these measures would ensure that such ambitions would be forever thwarted.
The significance of the terms is difficult to overestimate. Western expansion would have ground to a halt in the face of a powerful British-led alliance with the Spanish Empire and new American Indian state. The humiliation would likely have resulted in the collapse of the United States. The long-marginalized New England Federalists had been outspoken in their opposition to the war and President James Madison’s Southern-dominated Republican Party, with some of their leaders openly threatening secession. The Island of Nantucket had already signed a separate peace with Britain, and many inhabitants of British-occupied Maine had signed oaths of allegiance to Britain. The Governor of Massachusetts had even sent an agent to Canada to discuss terms of British support for his state’s secession, which included a request for British troops. The counterfactuals of a New England secession are too great to explore here, but the implications are epic—not least because, unlike in 1861, the US government in 1814 was in no position to stop one. In the end, a combination of the American delegates’ obstinacy and a rapidly fading British desire to keep the nation on an expensive war footing solely to fight the Americans led the British to abandon their harsh terms.
In consequence, the Treaty of Ghent cemented the United States rather than destroyed it. Historians have long debated who truly won the war. However, what mattered most was that neither side managed a decisive victory. The Americans lacked the organization and national unity to win; the British lacked the will to wage an expensive, offensive war in North America. American inadequacy ensured that all of Canada would prosper as part of the British Empire, even though Upper Canada (now Ontario) had arguably closer links to the United States and was populated largely by economic migrants from the United States. British desire to avoid further confrontation enabled the Americans to focus their attention on eliminating the other, and considerably weaker, obstacles to continental supremacy: the American Indians and the remnants of the Spanish Empire, who proved to be the real losers of the War of 1812 and the Treaty of Ghent.
Featured image: The Signing of the Treaty of Ghent, Christmas Eve, 1814, Amédée Forestier (1914). Public domain via Wikimedia Commons.
Today, 8 January, is the 80th birthday of Elvis Presley. Born to Vernon Elvis Presley and Gladys Love Presley (née Smith) in 1935, the ‘King of Rock and Roll’ left an indelible mark on American popular culture. In celebration, we present a brief extract from Elvis Presley: A Southern Life by Joel Williamson.
One photograph of the small Presley family captures the essence of their lives then and thereafter. Elvis, about three years old, is posed with Gladys and Vernon. Elvis is standing, and his parents are sitting on either side of him.
The exact date of the picture is unknown. Decades later it showed up in the photograph collection of the Official Elvis Presley Fan Club in Leicester, England. Elvis’s age in it has been estimated through interviews with pediatricians, pediatric nurses, mothers, fathers, grandmothers, and grandfathers.
The blank, clean, slightly gray background is probably the concrete wall of the brand-new Lee County jail in Tupelo. Vernon is a prisoner, having been arrested on November 16, 1937, for forging a check. The county jail had recently been built by the Works Progress Administration (WPA), a New Deal project to employ the unemployed. Previously, county prisoners had been lodged in the run-down town jail. Only the white prisoners were moved to the new jail.
In the photograph, mother, child, and father are close, body to body as if huddled against a coming moment of separation. Gladys’s left arm reaches behind and across Elvis’s back to Vernon. Her open hand rests lightly on Vernon’s left shoulder, as if to hold him in gently, to affirm her presence with him. It is a hand that seeks to comfort, but its loose openness signals her powerlessness.
Vernon had been charged with forging a check on Orville Bean, the dairy farmer who was his landlord and employer. He had been arrested and arraigned during the fall term of criminal court. He pled not guilty, but he would not get a speedy trial. His plea came too late for him to be tried in the fall term of court. His case would have to wait for the spring term, which began six months later on Monday, May 23, 1938. Before the court convened that spring, the local papers were full of suggestions that the docket was overfull and that justice in Lee County must be meted out more rapidly than before.
Only days before Vernon’s case would have been tried, he changed his plea to guilty. Justice swiftly followed. On Wednesday, May 25, Judge Thomas H. Johnston sentenced Vernon to three years in the state penitentiary. He got no credit for the six months he had spent in the county jail. After sentencing came the anxious wait before the prison guards trucked him off to Parchman Farm.
On Saturday, May 28, Circuit Court Clerk Joe J. Kilgo wrote out the papers committing Vernon and eleven other convicts to Parchman. The twelve men waited in the county jail for the dreaded arrival of “Long Chain Charley,” a sergeant on the guard force at Parchman who circulated through the state collecting convicts for transport to prison. He always brought a long chain to which he shackled his prisoners to prevent their escape.
Six months in the county jail waiting for a trial had been bad enough, but there was always at least some hope for relief. Orville Bean might decide not to press charges against Vernon. Relatives and friends might somehow intervene. If it came to a trial, a good lawyer might rise to defend him and the jury might find him innocent. Having changed his plea to guilty, Vernon faced the certainty of serving at hard labor in a notoriously tough prison for three long years, years in which he could not come home every night to his wife and child in their little two-room wooden house in East Tupelo nor earn money to support them.
Sensing the pathos in the photograph does not require knowledge of its history. The bodies of the man and woman are tense with anxiety and dread. The child is anxious and confused. Vernon has put his hat on his head as if making ready to leave. He faces the camera, but his eyes cut to his left as if watching fearfully for someone or something to appear that he already hears. Gladys also stares to the left, her body stiff.
The little boy’s gaze is less focused, as if he were told to look at the camera but senses something he needs to see off to the left too. He wears bib overalls over a dark, long-sleeved shirt, charmingly trimmed with white cuffs and a white collar. Gladys is a talented seamstress. She wears a flower-print dress. Her dress, like Elvis’s shirt, is attractively set off by a collar of a different color. Elvis, like his father, wears a hat. His hat seems almost man-sized, cocked at a rakish angle on his round little head. His full cherubic lips are twisted down to the right as if he realizes that he should say something and set his jaw in some certain way to assert an attitude, but he doesn’t know what to say or how.
This is the earliest photograph of Elvis. The photographer was most likely a friend or a relative who had driven Gladys and Elvis a couple of miles over from their home in East Tupelo. It was a defining moment in the lives of Elvis, Gladys, and Vernon Presley, individually and collectively. The very fact of the visit, the camera, and the one photograph that has been preserved indicates that they understood that they were at a critical juncture in their lives. The petty and foolish crime that Vernon committed in the fall of 1937, when he was twenty-one, Gladys twenty-five, and Elvis less than three, deeply marked their lives.
Today, 8 January, would have been Elvis Presley’s 80th birthday. In remembrance of his fascinating life we’re sharing a slideshow of the beautiful images in Elvis Presley: A Southern Life by Joel Williamson. How did this Southern boy make it from Nashville and Vegas to Grafenwoehr and the White House?
Elvis with his parents, 1950. Joseph A. Tunzi/ JAT Publishing.
Elvis Presley with Scotty and Bill poster, Cape Girardeau, Mo., July 1955. Taken at the Country Music Hall of Fame in Nashville, Tennessee. Thomas Hawk, photographer. Available via Flickr.
Elvis on his way to fame at the Louisiana Hayride, 1956. LSU-Shreveport Archives and Special Collections.
An impromptu session with Jerry Lee Lewis, Carl Perkins, Elvis Presley, and Johnny Cash at the Sun Record Studios in Memphis, Tennessee, on December 4, 1956. Originally published in the Memphis Press-Scimitar. Courtesy of the Memphis and Shelby County Room, Memphis Public Library & Information Center.
Headline and 1956 photo from article on Elvis and Mae Axton, who wrote “Heartbreak Hotel,” just after the record sold 1 million copies, 1956. Published in the Memphis Commercial Appeal. Courtesy of the Memphis and Shelby County Room, Memphis Public Library & Information Center.
Elvis Presley in Grafenwoehr, 1958. Courtesy of U.S. Army Garrison Grafenwoehr.
The façade of Graceland in the late 1950s or early 1960s.
Elvis during his ’68 Comeback Special on NBC. Available via Joseph A. Tunzi/ JAT Publishing.
Elvis and Priscilla’s wedding at the Aladdin Hotel, Las Vegas, May 1, 1967. Available via Getty.
Priscilla and Elvis at a dinner. Memphis and Shelby County Room, Memphis Public Library & Information Center.
Elvis after a performance in Las Vegas, January or February 1970. Available via Joseph A. Tunzi/ JAT Publishing.
Elvis rehearsing in Las Vegas for his 1970 documentary, “Elvis: That’s the Way It Is.” Available via Joseph A. Tunzi/ JAT Publishing.
Elvis Presley meets President Richard Nixon on December 21, 1970. White House Chief Photographer Oliver F. Atkins. General Services Administration. National Archives and Records Service. Office of Presidential Libraries. Office of Presidential Papers. Collection RN-WHPO: White House Photo Office Collection (Nixon Administration), 01/20/1969–08/09/1974.
Marquee of the International Hotel, Las Vegas, 1971. Available via Joseph A. Tunzi/ JAT Publishing.
From just behind the gates at Graceland, a look at the mourners gathered on the day Elvis died, as the police try to hold back the crowds, August 16, 1977. Photographed by Saul Brown. Memphis and Shelby County Room, Memphis Public Library & Information Center.
Featured image credit: Headline and 1956 photo from article on Elvis and Mae Axton, who wrote “Heartbreak Hotel,” just after the record sold 1 million copies, 1956. Published in the Memphis Commercial Appeal. Courtesy of the Memphis and Shelby County Room, Memphis Public Library & Information Center.
As anyone knows who has looked at the newspapers over the festive season, 2015 is a bumper year for anniversaries: among them Magna Carta (800 years), Agincourt (600 years), and Waterloo (200 years). But it is January which sees the first of 2015’s major commemorations, for it is fifty years since Sir Winston Churchill died (on the 24th) and received a magnificent state funeral (on the 30th). As Churchill himself had earlier predicted, he died on just the same day as his father, Lord Randolph Churchill, had done, in 1895, exactly seventy years before.
The arrangements for Churchill’s funeral, codenamed ‘Operation Hope Not’, had long been in the planning, which meant that Churchill would receive the grandest obsequies afforded to any commoner since the funerals of Nelson and Wellington. And unlike Magna Carta or Agincourt or Waterloo, there are many of us still alive who can vividly remember those sad yet stirring events of half a century ago. My generation (I was born in 1950) grew up in what were, among other things, the sunset years of Churchillian apotheosis. They may, as Lord Moran’s diary makes searingly plain, have been sad and enfeebled years for Churchill himself, but they were also years of unprecedented acclaim and veneration. During the last decade of his life, he was the most famous man alive. On his ninetieth birthday, thousands of greeting cards were sent, addressed to ‘The Greatest Man in the World, London’, and they were all delivered to Churchill’s home. During his last days, when he lay dying, there were many who found it impossible to contemplate the world without him, just as Queen Victoria had wondered, at the time of the Duke of Wellington’s death in 1852, how Britain would manage without him.
Like all such great ceremonial occasions, the funeral itself had many meanings, and for those of us who watched it on television, by turns enthralled and tearful, it has also left many memories. In one guise, it was the final act of homage to the man who had been described as ‘the saviour of his country’, and who had lived a life so full of years and achievement and honour and controversy that it was impossible to believe anyone in Britain would see his like again. But it was also, in a rather different emotional and historical register, not only the last rites of the great man himself but a requiem for Britain as a great power. While Churchill might have saved his country during the Second World War, he could not preserve its global greatness thereafter. It was this sorrowful realization that had darkened his final years, just as his funeral, attended by so many world leaders and heads of state, was the last time that a British figure could command such global attention and recognition. (The turnout for Margaret Thatcher’s funeral, in 2013, was nothing like as illustrious.) These multiple meanings made the ceremonial the more moving, just as there were many episodes which made it unforgettable: the bearer party struggling and straining to carry the huge, lead-lined coffin up the steps of St Paul’s; Clement Attlee—Churchill’s former political adversary—old and frail, but determined to be there as one of the pallbearers, sitting on a chair outside the west door brought especially for him; the cranes of the London docks dipping in salute, as Churchill’s coffin was borne up the Thames from Tower Pier to Waterloo Station; and the funeral train, hauled by a steam engine of the Battle of Britain class, named Winston Churchill, steaming out of the station.
For many of us, the funeral was made the more memorable by Richard Dimbleby’s commentary. Already stricken with cancer, he must have known that this would be the last he would deliver for a great state occasion (he would, indeed, be dead before the year was out), and this awareness of his own impending mortality gave to his commentary a tone of tender resignation that he had never quite achieved before. As his son, Jonathan, would later observe in his biography of his father, ‘Richard Dimbleby’s public was Churchill’s public, and he had spoken their emotions.’
Fifty years on, the intensity of those emotions cannot be recovered, but many events have been planned to commemorate Churchill’s passing, and to ponder the nature of his legacy. Two years ago, a committee was put together, consisting of representatives of the many institutions and individuals that constitute the greater Churchill world, both in Britain and around the world, which it has been my privilege to chair. Significant events are planned for 30 January: in Parliament, where a wreath will be laid; on the River Thames, where Havengore, the ship that bore Churchill’s coffin, will retrace its journey; and at Westminster Abbey, where there will be a special evensong. It will be a moving and resonant day, and the prelude to many other events around the country and around the world. Will any other British prime minister be so vividly and gratefully remembered fifty years after his—or her—death?
Headline image credit: Franklin D. Roosevelt and Winston Churchill, New Bond Street, London. Sculpted by Lawrence Holofcener. Public domain via Wikimedia Commons.
Since Beethoven’s death on this day 188 years ago, debate has raged as to the cause of his deafness, generating scores of diagnoses ranging from measles to Paget’s disease. If deafness had been his only problem, diagnosing the disorder might have been easier, although his ear problem was of a strange character no longer seen. It began ever so surreptitiously and took over two decades to complete its destruction of Beethoven’s hearing.
When Charles Darwin died at age 73 on this day 133 years ago, his physicians decided that he had succumbed to “degeneration of the heart and greater vessels,” a disorder we now call “generalized arteriosclerosis.” Few would argue with this diagnosis, given Darwin’s failing memory, and his recurrent episodes of “swimming of the head,” “pain in the heart”, and “irregular pulse” during the decade or so before he died.
‘Anzac’ (soon transmuting from acronym to word) came to sum up the Australian desire to reflect on what the war had meant. What was the first Anzac Day? At least four explanations exist of the origins of the idea of Anzac, the most enduring legacy of Australia’s Great War.
May the Fourth be with you! Punning on one of the movies’ most famous lines, May the 4th is the unofficial holiday on which Star Wars fans across the globe celebrate the beloved blockbuster series. The original Star Wars movie, now known as Star Wars: Episode IV - A New Hope, was released on 25 May 1977, but to those of us who waited in line after line to see it again and again in theaters, it will always be just Star Wars.
Harriet Ross Tubman’s heroic rescue effort on behalf of slaves before and during the Civil War was a lifetime fight against social injustice and oppression.
Most people are aware of her role as what historian John Hope Franklin considered the greatest conductor for the Underground Railroad. However, her rescue effort also included her work as a cook, nurse, scout, spy, and soldier for the Union Army. As a nurse, she cared for black soldiers by working with Clara Barton, founder of the American Red Cross, who was in charge of front-line hospitals. Over 700 slaves were rescued in the Tubman-led raid against the Confederates at the Combahee River in South Carolina. She became the only woman in U.S. history to plan and lead both white and black soldiers in such a military operation.
It is the latter activity which caused black feminists in Roxbury, Massachusetts to organize themselves during the seventies as the Combahee River Collective. When Tubman died, she was given a military burial with honors. It is also Tubman’s work as an abolitionist, advocate for women’s suffrage, and care for the elderly that informs black feminist thought. It is only fitting that we remember the life of this prominent nineteenth-century militant social reformer on the 165th anniversary of her escape from slavery on 17 September 1849.
Tubman was born into slavery around 1820 to Benjamin and Harriet Ross and given the name Araminta. She later took her mother’s name, Harriet. As a slave child, she worked in the household first and then was assigned to work in the fields. Her early years as a slave on the Eastern Shore of Maryland were traumatic and she was sickly. An overseer threw an object that accidentally hit Tubman in the head. The head injury she sustained caused her to have seizures and blackouts all of her life. She even had visions, and this, combined with her religiosity, caused her to believe that she was called by God to lead slaves to freedom. It is believed that her work in the fields gave her the physical stamina to make her rescues. She was married in 1844 to John Tubman, a free black man, but her anxiety about being sold caused her to run away to Philadelphia and leave John behind. Runaways were rare among slave women, but prevalent among slave men.
Between 1846 and 1860, Tubman successfully rescued close to 300 family members and other slaves. She became part of a network of prominent abolitionists who created escape havens for passage from the South to Northern cities and then on to Canada. The recent award-winning film 12 Years a Slave reminds us that even free blacks were subject to being turned in as runaways after passage of The Fugitive Slave Law of 1850. Tubman was bothered by this new law and was eager to go directly to Canada, where she herself resided for a time. She made anywhere from 11 to 19 rescue trips. The exact count is unclear because such records were not kept in this clandestine social movement. Maryland plantation owners put a $40,000 bounty on Tubman’s head. She was never caught and she never lost a passenger. Like Patrick Henry, she took as her motto “give me liberty or give me death.” She carried a pistol with her and threatened to shoot any slave who tried to turn back. The exodus from slavery was so successful that the slaves she led to freedom called her Moses. She was such a master of disguise and subterfuge that these skills were used after she joined the Union Army. It has also been reported that the skills she developed were so useful to the military that her scouting and spy strategies were taught at West Point. She purchased a home in Auburn, New York, where she resided after the Civil War. Her husband, John Tubman, died after the war, and she married Nelson Davis, another Civil War veteran. From her home in Auburn, she continued to help former slaves.
The Social Reformer
Historian Gerda Lerner once described Tubman as a revolutionist who continued her organizing activities in later life. Tubman supported women’s suffrage, gave speeches at organizing events for both black and white women, and was involved in the organizing efforts of the National Federation of Afro-American Women. After a three decade delay, Tubman was given $20 a month by the government for her military service. Tubman lived in poverty, but her mutual aid activities continued. She used her pension and money from fundraising activities to provide continued aid to freed slaves and military families. She died in 1913 in the home she established for the elderly and poor, the Harriet Tubman Home for Aged and Indigent Colored People, now a National Historic Monument.
Harriet Ross Tubman escaped from slavery, but remembered those she left behind. She was truly an historic champion for civil rights and social justice.
Heading image: Underground Railway Map. Compiled from “The Underground Railroad from Slavery to Freedom” by Wilbur H. Siebert, The Macmillan Company, 1898. Public Domain via Wikimedia Commons.
2014 marks not just the centenary of the start of World War I and the 75th anniversary of the outbreak of World War II, but on 29 September it is 60 years since the establishment of CERN, the European Organization for Nuclear Research—today Europe’s laboratory for particle physics. Less than a decade after European nations had been fighting one another in a terrible war, 12 of those nations had united in science. Today, CERN is a world laboratory, famed for having been the home of the World Wide Web, brainchild of then CERN scientist Tim Berners-Lee; for several Nobel Prizes in physics, although not (yet) for Peace; and most recently, for the discovery of the Higgs boson. The origin of CERN, and its political significance, are perhaps no less remarkable than its justly celebrated status as the greatest laboratory of scientific endeavour in history.
Its life has spanned a remarkable period in scientific culture. The paradigm shifts in our understanding of the fundamental particles and the forces that control the cosmos, which have occurred since 1950, are in no small measure thanks to CERN.
In 1954, the hoped-for simplicity in matter, where the electron and neutrino partner a neutron and proton, had been lost. Novel relatives of the proton were proliferating. Then, exactly 50 years ago, the theoretical concept of the quark was born, which explains the multitude as bound states of groups of quarks. By 1970 the existence of this new layer of reality had been confirmed, by experiments at Stanford, California, and at CERN.
During the 1970s our understanding of quarks and the strong force developed. On the one hand this was thanks to theory, but also due to experiments at CERN’s Intersecting Storage Rings: the ISR. Head on collisions between counter-rotating beams of protons produced sprays of particles, which instead of flying in all directions, tended to emerge in sharp jets. The properties of these jets confirmed the predictions of quantum chromodynamics – QCD – the theory that the strong force arises from the interactions among the fundamental quarks and gluons.
CERN had begun in 1954 with a proton synchrotron, a circular accelerator with a circumference of about 600 metres, which was vast at the time, although trifling by modern standards. This was superseded by a super-proton synchrotron, or SPS, some 7 kilometres in circumference. This fired beams of protons and other particles at static targets, its precision measurements building confidence in the QCD theory and also in the theory of the weak force – QFD, quantum flavourdynamics.
QFD brought the electromagnetic and weak forces into a single framework. This first step towards a possible unification of all forces implied the existence of W and Z bosons, analogues of the photon. Unlike the massless photon, however, the W and Z were predicted to be very massive, some 80 to 90 times more than a proton or neutron, and hence beyond reach of experiments at that time. This changed when the SPS was converted into a collider of protons and anti-protons. By 1984 experiments at the novel accelerator had discovered the W and Z bosons, in line with what QFD predicted. This led to Nobel Prizes for Carlo Rubbia and Simon van der Meer, in 1984.
The confirmation of QCD and QFD led to a marked change in particle physics. Where hitherto it had sought the basic templates of matter, from the 1980s it turned increasingly to understanding how matter emerged from the Big Bang. For CERN’s very high-energy experiments replicate conditions that were prevalent in the hot early universe, and theory implies that the behaviour of the forces and particles in such circumstances is less complex than at the relatively cool conditions of daily experience. Thus began a period of high-energy particle physics as experimental cosmology.
This raced ahead during the 1990s with LEP – the Large Electron Positron collider, a 27 kilometre ring of magnets underground, which looped from CERN towards Lake Geneva, beneath the airport and back to CERN via the foothills of the Jura Mountains. Initially designed to produce tens of millions of Z bosons in order to test QFD and QCD to high precision, by 2000 its performance had improved to the point where it could produce pairs of W bosons. The precision was such that small deviations were found between these measurements and what theory implied for the properties of these particles.
The explanation involved two particles, whose subsequent discoveries have closed a chapter in physics. These are the top quark, and the Higgs Boson.
As gaps in Mendeleev’s periodic table of the elements in the 19th century had identified new elements, so at the end of the 20th century a gap in the emerging pattern of particles was discerned. To complete the menu required a top quark.
The precision measurements at LEP could be explained if the top quark exists, too massive for LEP to produce directly, but nonetheless able to disturb the measurements of other quantities at LEP courtesy of quantum theory. Theory and data would agree if the top quark mass were nearly two hundred times that of a proton. The top quark was discovered at Fermilab in the USA in 1995, its mass as required by the LEP data from CERN.
As the 21st century dawned, all the pieces of the “Standard Model” of particles and forces were in place, but one. The theories worked well, but we had no explanation of why the various particles have their menu of masses, or even why they have mass at all. Adding mass into the equations by hand is like a band-aid: it allows computations that agree with data to remarkable precision. However, we can imagine circumstances, where particles collide at energies far beyond those accessible today, in which the theories would predict nonsense — infinity as the answer for quantities that are finite, for example. A mathematical solution to this impasse had been discovered fifty years ago, and implied that there is a further massive particle, known as the Higgs Boson after Peter Higgs, who, alone of the independent discoverers of the concept, drew attention to some crucial experimental implications of the boson.
Discovery of the Higgs Boson at CERN in 2012, following the conversion of LEP into the LHC – the Large Hadron Collider – was the climax of CERN’s first 60 years. It led to the Nobel Prize for Higgs and François Englert, the theorists whose ideas initiated the quest. Many wondered whether the Nobel Foundation would break new ground and award the physics prize to a laboratory, CERN, for enabling the experimental discovery, but this did not happen.
CERN has been associated with other Nobel Prizes in Physics, such as that awarded to Georges Charpak for his innovative work developing methods of detecting radiation and particles, which are used not just at CERN but in industry and hospitals. CERN’s reach has been remarkable. From a vision that helped unite Europe through science, it went on to bridge the Cold War divide, with collaborations from the 1960s onwards with JINR, the Warsaw Pact’s scientific analogue, and today CERN has become truly a physics laboratory for the world.
Edgar Allan Poe died 165 years ago today, in the early morning of 7 October 1849. Only a few details of the illness that extinguished his “bright but unsteady light” are known, because his physician, Dr. John Joseph Moran, used the illness to promote his own celebrity and in the process denied posterity an accurate clinical description. One of his later accounts, summarized by Charles Scarlett, Jr. in the Maryland Historical Magazine (1978; 73: 360-75), came to my attention shortly after I returned to Baltimore in 1988, following 14 years at the Dallas VA Hospital. I was so taken by Moran’s fascinating and detailed description of Poe’s final days that I decided to use it as the subject of the clinical conference that has long been my favorite: the clinicopathological conference (CPC). This would prove to be the first of an ongoing series of historical CPCs devoted to the likes of Alexander, Columbus, Mozart, and Lenin, stretching over two decades and spawning too-numerous-to-count articles in the international press, scores of manuscripts published in medical journals, and two books.
The clinicopathological conference is a standard medical conference designed to teach physicians and physicians-in-training basic medical concepts and clinical problem-solving techniques. It is a case-based exercise, in which the featured speaker and the audience struggle together to diagnose a particularly challenging illness of some patient using only the information included in a clinical summary prepared especially for the conference. That clinical summary, distributed well in advance of the conference, typically contains all of the medical information pertaining to the case in question, except for the definitive, diagnostic test result. That result, known only to the conference organizers, is revealed at the very end of the conference as a validation or repudiation of the presenter’s conclusions. To my knowledge, our “Poe Historical CPC” was the first to use an historical, rather than a current, patient as the subject of the conference.
In 1995, during this first Historical CPC at the University of Maryland, Dr. R. Michael Benitez concluded that Poe died of rabies resulting from an unrecorded and most likely unrecognized animal exposure prior to his hospitalization in Baltimore. His diagnosis became a media sensation, covered in venues as diverse as Science magazine and the final Jeopardy! question on the TV show of that name. Benitez based his diagnosis on evidence of autonomic instability (dilating and contracting pupils and an irregular pulse which alternated between rapid and slow), fluctuating delirium, and hydrophobia (suggested by Poe’s adamant refusal of alcohol and difficulty swallowing water) included in Moran’s later descriptions of the terminal illness.
Rabies, in fact, has much in common with Moran’s later description of Poe’s final illness. It is a viral encephalitis (i.e., an infection of the brain) marked by acute onset of confusion, hallucinations, combativeness, muscle spasms and seizures, all of which tend to wax and wane during the course of the illness. Autonomic instability marked by alternating tachycardia (racing pulse) and bradycardia (slow pulse), profuse sweating, lacrimation, and salivation are also characteristic. The infection is virtually always fatal, with a median survival time after the onset of symptoms of four days. Poe died four days after being admitted to the hospital.
Moran gave no such indication of autonomic instability or hydrophobia in the letter he wrote to Mrs. Clemm a month after her son-in-law’s death. Only decades later, most likely relying on memory alone, does he mention a “very low pulse” and that his famous patient’s “pulse which had been as low as fifty was rising rapidly, though still feeble and variable.”
Many diagnoses have since been offered to explain Poe’s death. The earliest and most persistent has been that of alcohol-induced delirium tremens. Moran’s later case summary, one almost certainly written to satisfy his public’s appetite for ever more moving and ironic details of his patient’s final hours, has generated several more. These include homicide, carbon monoxide poisoning, suicide, syphilis, and mercury intoxication, reflecting more an unwillingness on the part of the proposers to accept an ordinary disease as the cause of Poe’s death than any convincing clinical evidence of such disorders.
Given numerous well-documented instances of Poe’s refractory alcohol abuse and its adverse effects on his physical and mental health prior to his departure from Richmond in late September of 1849, and the nature of the illness described by Moran in his letter of 15 November 1849 to Poe’s mother-in-law, one need look no further than delirium tremens as an explanation for his death. Whether his last bout with alcohol was the result of “cooping,” his own inability to control the craving that had for so many years driven him to drink, or a second (successful) attempt at suicide will never be known. However, if one ignores Moran’s later expanded description of Poe’s final illness, which deviates so spectacularly from his initial description in his letter to Maria Clemm a month after his patient’s death, neither rabies, homicide, mercury intoxication, nor, for that matter, any of the myriad other explanations proposed in the century and a half since Poe’s death, offers a better fit than delirium tremens.
Headline image credit: A photograph (taken by C.T. Tatman in 1904) of a daguerreotype (taken by Edwin H. Manchester in 1848) of Edgar Allan Poe. Public domain via Wikimedia Commons.
Time passes quickly. As we track the progression of events of a hundred years ago on the Western Front, the dramas flash by. In the time it takes to answer an e-mail, the anniversary of another battle has come and gone.
We have celebrated the fumbling British skirmishes at Mons and Le Cateau in late August, but largely forgotten the French triumph at the Battle of the Marne which first stemmed and threw back the German wheeling attack through Belgium into Northern France under the Schlieffen Plan. We have already bypassed the spirited Franco-British attempts at the Battle of the Aisne in September to take the Chemin des Dames. The Race to the Sea was under way: the British and German Armies desperately trying to turn their enemy’s northern flank.
Throughout, the performance of the British Expeditionary Force has often been exaggerated. Imaginative accounts of Germans advancing in massed columns and being blown away by rapid rifle fire are common. A rather more realistic assessment is that the British infantry were steadfast enough in defence, but unable to function properly in coordination with their artillery or machine guns. The Germans seemed to have a far better grip of the manifold disciplines of modern warfare.
Yet everything changed in October. The Germans were scraping the barrel for manpower and decided to throw new reserve formations into the battle. Young men with the minimum of training, incapable of sophisticated battle tactics. They were marched forward in a last gambler’s throw of the dice to try and break through to the Channel Ports. To do that they needed first to capture the small Belgian city of Ypres.
One might have thought that Ypres was some fabled city, fought over to secure untold wealth or a commanding tactical position. Nothing could be further from the truth. Ypres was just an ordinary town, lying in the centre of the fertile Western Flanders plain. Yet the low ridges to the east represented one of the last feasible lines of defence. The British also saw the town, not as an end in itself, but as a stepping stone to more strategically important locations pushing eastwards, such as the rail centre at Roulers or the ports of Ostend and Zeebrugge. For both sides Ypres was on the road to somewhere.
The battle began in mid-October and soon boiled up. Time and time again the Germans hurled themselves forward, the grey-green hordes pressing forwards and being shot down in their hundreds. The British had learnt many lessons, and this was where they finally proved themselves worthy adversaries for the German Army. On the evening of 23 October young Captain Harry Dillon was fighting for his life:
A great grey mass of humanity was charging, running for all God would let them, straight on to us not 50 yards off. Everybody’s nerves were pretty well on edge as I had warned them what to expect, and as I fired my rifle the rest all went off almost simultaneously. One saw the great mass of Germans quiver. In reality some fell, some fell over them, and others came on. I have never shot so much in such a short time, could not have been more than a few seconds and they were down. Suddenly one man – I expect an officer – jumped up and came on. I fired and missed, seized the next rifle and dropped him a few yards off. Then the whole lot came on again and it was the most critical moment of my life. Twenty yards more and they would have been over us in thousands, but our fire must have been fearful, and at the very last moment they did the most foolish thing they possibly could have done. Some of the leading people turned to the left for some reason, and they all followed like a great flock of sheep. We did not lose much time, I can give you my oath. My right hand is one huge bruise from banging the bolt up and down. I don’t think one could have missed at the distance and just for one short minute or two we poured the ammunition into them in boxfuls. My rifles were red hot at the finish. The firing died down and out of the darkness a great moan came. People with their arms and legs off trying to crawl away; others who could not move gasping out their last moments with the cold night wind biting into their broken bodies and the lurid red glare of a farm house showing up clumps of grey devils killed by the men on my left further down. A weird awful scene; some of them would raise themselves on one arm or crawl a little distance, silhouetted as black as ink against the red glow of the fire. [p. 287-288, Fire & Movement, by Peter Hart]
Some of the Germans had got within 25 yards of Dillon’s line. It had been a close-run thing, and after they had been relieved by the French later that night, the relieving troops reported that some 740 German corpses littered the ground in front of his trenches. This was the real war: not a skirmish like the earlier battles, but the real thing.
The German attacks continued, followed, as night follows day, by French and British counter-attacks to restore the situation. The Germans nibbled at the Allied line but were unable to achieve anything of importance. Yet for all the sound and fury, over the next few days the front line stayed relatively static. The German troops were flagging in their efforts. After one last effort on 11 November the Germans threw in the towel. They would not break through the Allied lines in 1914. The British and French lines had held: battered, bruised, but unbroken. The First Battle of Ypres had confirmed the strategic victory gained by the French at the Marne. The German advance in the west had been blocked; if the Germans sought victory in 1915 they would have to look to the east and attack Russia.
The 1914 campaign would prove decisive to the war. The utter failure of the Schlieffen Plan, designed to secure the rapid defeat of France, meant that Germany would be condemned to ruinous hostilities on two fronts. This was the great turning-point of the whole war. The pre-war predictions from the German strategists that they could not prevail in a long-drawn out war against the combined forces of France and Russia proved accurate, especially when the British Empire and United States joined the fight. The German Army fought with a sustained skill and endurance, but after 1914, the odds really were stacked against them.
As we head towards another General Election in 2015, once again politicians from the Right and Left will battle it out, hoping to persuade the electorate that either a big state or small state will best address the challenges facing our society. For 40 years, Germans living behind the Iron Curtain in the German Democratic Republic (GDR) had first-hand experience of a big state, with near-full employment and heavily subsidized rent and basic necessities. Then, when the Berlin Wall fell, and East Germany was effectively taken over by West Germany in the reunification process, they were plunged into a new capitalist reality. The whole fabric of daily life changed, from the way people voted, to the brand of butter they bought, to the newspapers that they read. Circumstances forced East Germans to swap Communism for Capitalism, and their feelings about this change remain quite diverse.
Initially, East Germans flooded across the border, bursting with excitement and curiosity to see what the West was like – a ‘West’ that most had only known through watching Western television. For some, sampling a McDonald’s hamburger – the ultimate symbol of Western capitalism – was high on the to-do list, for others it was access to Levi’s jeans or exotic fruit that was particularly novel.
At the same time as delighting in consumer improvements however, many East Germans felt ambivalent about the wider changes. Decades of state propaganda that painted Western societies as unjust places where homelessness, drugs and unemployment were rife, had left its mark, and many East Germans felt unsure and slightly fearful of what was to come.
From a position of full employment in 1989, 3 years after reunification 15% of East Germans were out of work. For those who struggled to put bread on the table after reunification, the advantages of having a wider range of goods to buy remained a largely theoretical gain. For others however, reunification led to greater freedom to pursue individual career choices that were not dictated by the state’s needs.
For those who were made to feel vulnerable and afraid by a regime that watched, trailed and threatened to imprison them, such as political opponents, Christians, environmental activists or other non-conformists, the fall of the Wall and the end of the GDR most often brought relief: the Western set-up allowed for greater freedom of expression and greater freedom of movement.
Most East Germans, though, did not feel this way: since many say they had no idea of the extent to which the Stasi was intertwined with daily life, the end of the GDR did not bring with it a sense of relief. In fact, many East Germans felt that there were many things the West could learn from the GDR, and were resentful at the lack of openness to incorporating any East German policies or practices in the reunification process.
“Everywhere is becoming like a foreign land. I have long wished to travel to foreign parts, but I have always wanted to be able to come home … The landscapes remain the same, the towns and villages have the same names, but everything here is becoming increasingly unfamiliar.”
This view was echoed by many East Germans, who were conscious that they, for example, dressed differently from their Western compatriots, didn’t know how to pronounce items on the McDonald’s menu when ordering, and didn’t know how to work the coin-operated supermarket trolleys in the West. With the fall of the Wall, a whole way of life evaporated. The certainties on which day-to-day routines had been built ceased to exist.
Swapping Communism for Capitalism has prompted diverse reactions from East Germans. Few would wish to return to the GDR, even if it were possible. However while many delight in having greater individual choice about what they eat, where they go, what they do and what they say, they often also have a wistful nostalgia for life before reunification, where the disparity between rich and poor was smaller and the solidarity between citizens seemed to be greater.
The collapse of the Berlin Wall twenty-five years ago this month prompted a diverse range of musical responses. While Mstislav Rostropovich celebrated the momentous event by giving a very personal, impromptu performance of Bach’s Cello Suites in front of the Wall two days after it had been breached, David Hasselhoff regaled Berliners from atop what remained of the Wall on New Year’s Eve of 1989 with a glitter-studded rendition of his chart hit “Looking for Freedom.” The values ascribed in the West to the demise of communism were captured especially clearly in the role assigned to Beethoven during this heady period. From the free concert of the First Piano Concerto and Seventh Symphony that was offered to the newly-liberated East Germans by West Berlin’s flagship Philharmonic Orchestra three days after the fall of the Wall to Leonard Bernstein’s performances of the Ninth Symphony in the East and West of the city a month later, the message was consistent: Beethoven’s music was deemed to embody both the triumph of liberal democracy and the unification of a once-divided people. This symbolism was particularly apparent in Bernstein’s two concerts. While the emancipation of East Germany was reflected in his substitution of “Freiheit” (freedom) for “Freude” (joy) in the choral finale, the performances themselves staged a musical act of universal brotherhood: the orchestra comprised musicians from both Germanies and all four of the Allied powers, and the events were broadcast live across the globe.
Such unironic celebrations of the universal Beethoven obscured the complex relationship that East Germans had with the composer. Cultural policy in the German Democratic Republic was driven by an Enlightenment conviction in the transformative, unifying, and humanizing powers of art, and from the outset the government had harnessed Beethoven as a pivotal figure in their plans to implement state socialism. His revolutionary zeal was hailed as a template for East German citizens and his heroic style as the ultimate expression of socialist ideals. This narrative persisted in official portrayals of the composer for the duration of the state’s forty-year existence. Beyond the sphere of official rhetoric, however, Beethoven’s reception was far more conflicted. As the promised socialist utopia failed to materialize and the chasm widened between the rhetoric of revolution propagated by the party and the realities of life in the socialist state, Beethoven was approached increasingly not as an iconic statesman but as a vehicle for exploring the problematic position of art and the artist in East German society.
An early example of this phenomenon can be observed in Reiner Kunze’s 1962 poem “Die bringer Beethovens” (the bringers of Beethoven), in which Beethoven serves as an allegory for totalitarianism rather than Enlightenment humanism. Published in the 1969 collection sensible wege, the poem relates the battle of “the man M.” against the faceless “bringers of Beethoven,” who inculcate the masses by subjecting them to a recording of the Fifth Symphony. The man attempts to resist this indoctrination by retreating inside his house only to have the bringers first fix loudspeakers over his windows, and then force their way in armed with the record. He responds by beating them with an iron ladle. A trial follows at which he is judged to be redeemable. Redemption, however, inevitably lies in the Fifth Symphony to which he is sentenced to listen. His punishment kills him, but death provides no release: at his funeral his children request that none other than the Fifth Symphony be played.
In the years that followed, alternative readings of Beethoven became increasingly prominent. Characteristic, for instance, is Reiner Bredemeyer’s short piece for orchestra and piano Bagatellen für B. of 1970, which sets the opening chords of the “Eroica” Symphony against the Bagatelles op. 119, no. 4 and op. 126 no. 2. Bredemeyer’s focus on the Bagatelles was a pointed challenge to the socialist Beethoven. Grounded in the femininity of the nineteenth-century drawing room rather than the revolutionary public sphere, these piano miniatures represented a significant affront to the one-dimensional heroic portrayals of the composer that dominated in the German Democratic Republic. A similar confrontation can be observed in Horst Seemann’s 1976 biopic Beethoven – Tage aus einem Leben. Here Beethoven is depicted not as a socialist deity, but as a very human and conflicted individual, plagued by failed love affairs, domestic ineptitude, and an inability to reconcile himself with society around him.
Driving these works was a pointed questioning of the social and political efficacy of art. Striking in this context are the opening scenes of Seemann’s film, which cut between a live performance of Beethoven’s “Battle Symphony” and a gory re-enactment of the Battle of Vittoria itself. Far from heralding the symphony as a revolutionary force, however, this juxtaposition of concert hall and battlefield exposes the fallacy of the Beethoven myth. Graphic images of wounded and dying soldiers sit uncomfortably with frames of the concert audience, who clap delightedly in response to the music and eat chocolates as they listen. Art serves here not as a harbinger of political change but simply as a mode of entertainment.
The demise of the German Democratic Republic did little to quell this disillusioned perspective, and the triumphant narratives of Beethoven that resurfaced in 1989 stood decidedly at odds with the prevailing mood among East German artists. Crucially, the euphoria that accompanied the fall of the Wall was tempered for the latter group by a sense of a loss, a loss not for their repressive country but for the ideals and hopes that this country had once promised. Key among these was the notion that art could serve as a force for good and play a profound role in effecting social reform. From this perspective, the failure of the German Democratic Republic was not just a failure of socialism. It was also a failure of art. As Bredemeyer observed in a 1992 interview with reference to the East German composer Hanns Eisler: “If music is an instrument of intervention in the sense of Eisler … then I have to say, very well then. Eisler lost, I too; it doesn’t work anymore.”
Headline image credit: Partly destructed Berlin Wall with border police, view from west, Brandenburg Gate in the background in November 1989. Photo by Stefan Richter. CC BY-SA 3.0 via Wikimedia Commons.
“This is a historic day. East Germany has announced that, starting immediately, its borders are open to everyone. The GDR is opening its borders … the gates in the Berlin Wall stand open.”
—Hans Joachim Friedrichs, reporting for the Tagesthemen, 9 November 1989
On 9 November 1989, at midnight, the East German government opened its borders to West Germany for the first time in almost thirty years: a city divided, families and friends separated for a generation, reunited again. For much of its existence, attempting to cross the wall meant almost certain death, and around 80 East Germans were killed in the attempt, shot down by the border guards as they tried to make their escape. With this announcement, however, the gates were thrown open.
The mood was euphoric. East Germans surged through the opened gates, shouting and cheering, to be met by the West Germans on the other side. That same night, together they began dismantling the barrier which had kept them apart, chipping away at the bricks to keep as mementos. The fall of the Wall — an ugly scar across Berlin, adorned in barbed wire and patrolled by guards with machine guns — was a pivotal event in German history. A nation crippled by the most devastating conflict in living memory, and then carved up and separated from itself by the victors, could finally shrug off the long shadow cast by a dark history and look toward a brighter, unified future.
The seismic consequences of the demolition were also felt well beyond the borders of Germany and, along with the slow rusting and decay of the Iron Curtain, helped to spell the end for the USSR. Just two years after the wall’s demolition, the Soviet Union would cease to exist, ending the era we now call the Cold War. A period of around fifty years, marked by suspicion, space rockets, assassinations, espionage, show trials, paranoia, and propaganda, one which brought the world to the brink of destruction with the Cuban Missile Crisis, was finally at an end.
To mark the 25th anniversary of this momentous event, we’ve compiled a selection of free chapters and articles from across our online resources, which shed further light on the history behind the wall, what it meant to live in a city divided by it, and how the USSR declined and eventually fell.
On Sunday 13 August 1961, the wall was erected. This chapter, drawing on first-hand accounts, examines the initial reactions to the wall. As quoted in the chapter, one source describes the atmosphere of the day the wall went up as if “East Berlin was dead. It was as if a bell‐jar had been placed over it and all the air sucked out. The same oppressiveness which hung over us, hung over all Berlin. There was no trace of big city life, of hustle and bustle. Like when a storm moves across the city. Or when the sky lowers and people ask if hail is on the way.”
Whilst taking very little direct action against the wall, the West did offer covert assistance to groups of East Berlin activists trying to provide escape routes for those who wanted them. One of these groups was led by Rudolf Müller and his associates, who had dug a tunnel underneath the wall and were busy ferrying through escapees when a group of East German soldiers surprised them. Though the activists escaped unscathed, the confrontation left a twenty-one-year-old soldier – Egon Schultz – dead. This chapter examines how Schultz and his death became idealised and politicised by the East German state, transforming him into a hero-victim of the ‘socialist frontier.’
In the early 1980s the USSR was struggling with a war in Afghanistan, economic problems, and changes of leadership. From the middle of the 1980s, Soviet policy changes under Gorbachev ended the arms race and eventually relinquished control of Eastern Europe, bringing about an end to the Cold War and the USSR. This chapter looks at these final years of the Cold War, and explores the impact of Reagan and Gorbachev.
One of the key factors in the demise of the USSR was the USSR itself – or, rather, the reforms of Gorbachev. With twin policies of ‘perestroika’ (literally ‘restructuring’) and ‘glasnost’ (a policy calling for increased transparency in the Soviet Union), Gorbachev began the slow process toward democratization, dismantling the totalitarian psychology that had marked previous Soviet regimes and paving the way for progressive reforms.
Gorbachev’s policies, coupled with Hungary opening its borders to tens of thousands of East Germans, left the state with a crisis on its hands. When it decided to close its Hungarian borders, many citizens took to the streets to protest in what quickly became a large movement. Troops were sent to forcibly dispel the protesters, but their use of non-violent tactics made it difficult to justify the use of force, leading many of the troops to defy orders and defect. As the momentum for the movement grew, the strength of the state declined, leading to the fall of the wall and the eventual dismantling of East Germany.
Guyatt examines different historical perspectives on what caused the end of the Cold War, as well as the psychological, strategic, and political effects of its aftermath. Was it the press statement made by Gorbachev’s spokesman after the fall of the Berlin Wall – that the tensions, which stretched “from Yalta to Malta,” were over – that marked the War’s official end? Perhaps the end came with Gorbachev’s 1988 statement to the United Nations renouncing the use of Soviet military force to subdue the satellite states of the Warsaw Pact. The article explores these catalysts, among others, to present a comprehensive look at the War’s end and the resulting feelings of anxiety, fear, and “triumphalism” that abounded in Western Europe.
As the wall came down, Germans were faced with a new challenge: how to forge a new, modern Germany. Linking the ‘macro’ world of institutional change to the ‘micro’ worlds of the lives and individual histories of its citizens, this chapter paints a fascinating portrait of a once socialist and totalitarian state transitioning into the democratic Federal Republic of Germany.
The dismantling of the wall, which was both a symbolic and literal division between East and West, could have served as a potent symbol for a unified Germany and played an integral part in its foundation myth, yet this was not the case. Why was this? Charting the reasons behind this – including the pre-existing German fields of memory left by its dark past – the chapter explains why the fall of the wall is likely to remain a “muted, tempered memory” in German politics.
What books would you add to our Berlin Wall reading list?
On this day in 1984, musicians from the worlds of pop and rock came together to record the iconic ‘Do They Know It’s Christmas?’ single for Band Aid. The single has gone down in history as an example of the power of music to help right the wrongs in the world. The song leapt to the number one spot over the Christmas of 1984, selling over a million copies in under a week and totalling sales of three million by the end of that year. The Band Aid super-group featured the cream of eighties pop, including David Bowie, Phil Collins, George Michael, Sting, Cliff Richard and Paul McCartney.
The sales target for the single was £70,000, all of which was to be donated to the African famine relief fund. With support from Radio 1 DJs and a Top of the Pops Christmas Special, sales sky-rocketed, and Geldof, feeling the strength of public opinion behind him, went toe-to-toe with the Conservative government in an attempt to have the tax on the single waived. Margaret Thatcher initially refused the plea, but as public outcry grew, she caved in to public demands and the tax collected on sales worth nearly £9 million was donated back to charity.
Bob Geldof and a host of artists old and new have re-recorded the single to help raise funds to stem the Ebola crisis. Our infographic marks the 30th anniversary of the original recording and illustrates the movers and shakers that made this monumental milestone in pop history possible.
The disease that carried Mozart off 224 years ago today was as sudden as it was mysterious. It struck during a year in which he was uncommonly healthy and also spectacularly productive. Only its essential elements are known, the most striking of which was progressive swelling (i.e., edema) of the entire body, ultimately so profound that a few days before Mozart died he was unable to make the smallest movement and had to be fitted with a gown that opened at the back to facilitate changing. By then, according to his son, Carl Thomas, he also had a stench so awful (likely due to retained urinary waste products), that “an autopsy was rendered impossible.” Although Mozart was the disorder’s most notable victim, he was by no means its only one. According to Dr. Eduard Vincent Guldener von Lobes, one of several consulting physicians: “A great number of inhabitants of Vienna were at this time laboring under the same complaint, and the number of cases which terminated fatally, like that of Mozart’s, was considerable. I saw the body after death, and it exhibited no appearances beyond those usual in such cases.” Von Lobes’ statement was recently confirmed by Zeger, Weigl and Steptoe, who found a marked increase in “deaths from edema among young men” recorded in Vienna’s official daily register in the weeks surrounding Mozart’s death compared with previous and following years.
Although over 100 different diagnoses have been proposed as the cause of Mozart’s fatal illness, none fits its character, course, and epidemiological characteristics better than acute glomerulonephritis – acute inflammation of the microscopic filters of the kidneys (the glomeruli) induced by a preceding streptococcal infection. Mozart, in fact, was no stranger to streptococcal infections and their complications. He had a series of severe illnesses as a child, which were almost certainly recurrent episodes of strep throat and streptococcus-induced acute rheumatic fever. Therefore, if his final, fatal illness was acute, post-streptococcal glomerulonephritis, it would have been just one of many times his life could have been cut short by an encounter with streptococci. However, unlike acute rheumatic fever, post-streptococcal glomerulonephritis is typically a benign disorder of young children, which virtually always resolves fully in a matter of weeks. How then, could acute, post-streptococcal glomerulonephritis explain not just Mozart’s death, but also those of the many other young Viennese men who died of “edema” during the winter of 1791/2?
The answer lies with the particular species of streptococcus responsible for the cases of acute glomerulonephritis. Streptococcus pyogenes is the species responsible for the vast majority of cases of acute post-streptococcal glomerulonephritis (as well as ‘strep throat’ and rheumatic fever) – those benign cases, involving children who recover completely after relatively brief illnesses. There is, however, another, rarer form of post-streptococcal glomerulonephritis, a much more severe form, which attacks and sometimes kills adults. It is caused by a different species of streptococcus, Streptococcus equi, the agent responsible for “strangles,” a highly contagious infection of horses. The bacterium also attacks cows, and in the rare instances in which humans are infected, consumption of milk or milk products from S. equi-infected cows is responsible. The infection produces an illness typical of acute glomerulonephritis, in which over 90% of the victims are adults. One in 50 dies, even with the best care available today. One in 20 requires renal dialysis to recover, which, of course, was not available in Mozart’s day.
In the final analysis, of the myriad diagnoses proposed to date, only an epidemic of acute post-streptococcal glomerulonephritis caused by milk or cheese contaminated with S. equi, explains both the clinical and the epidemiological features of Mozart’s fatal illness.
Headline image credit: Mozart family portrait, circa 1780. Public domain via Wikimedia Commons.
Many students, when asked by a teacher or professor to volunteer in front of the class, shy away, avoid eye contact, and try to seem as plain and unremarkable as possible. The same is true in dental school – unless it comes to laughing gas.
As a fourth year dental student, I’ve had times when I’ve tried to avoid professors’ questions about anatomical variants of nerves, or the correct way to drill a cavity, or what type of tooth infection has symptoms of hot and cold sensitivity. There are other times when you cannot escape having to volunteer. These include being the first “patient” to receive an injection from one of your classmates’ unsteady and tentative hands. Or having an impression taken with too much alginate so that all of your teeth (along with your uvula and tonsils) are poured up in a stone model.
But volunteering in the nitrous oxide lab … that’s a different story. The lab day is about putting ourselves in our patients’ shoes, to be able to empathize with them when they need to be sedated. For me, the nitrous oxide lab might have been the most enjoyable 5 minutes of my entire dental education.
In today’s dental practice, nitrous oxide is a readily available, well-researched, incredibly safe method of reducing patient anxiety with little to no undesired side effects. But this was not always the case.
The Oxford Textbook of Anaesthesia for Oral and Maxillofacial Surgery argues that “with increasingly refined diets [in the mid-nineteenth century] and the use of copious amounts of sugar, tooth decay, and so dentistry, were on the increase.” Prior to the modern day local anesthesia armamentarium, extractions and dental procedures were completed with no anesthesia. Patients self-medicated with alcohol or other drugs, but there was no predictable or controllable way to prevent patients from experiencing excruciating pain.
That is until Horace Wells, a dentist from Hartford, Connecticut started taking an interest in nitrous oxide as a method of numbing patients to pain.
Wells became convinced of the analgesic properties of nitrous oxide on December 11, 1844, after observing a public display in Hartford of a man inhaling the gas and subsequently hitting his shin on a bench. After the gas wore off, the man miraculously felt no pain. With inspiration from this demonstration and a strong belief in the analgesic (and possibly the amnestic) qualities of nitrous oxide, on December 12, Wells proceeded to inhale a bag of nitrous oxide and have his associate John Riggs extract one of Wells’s own teeth. It was risky—and a huge success. With this realization that dental work could be pain free, Wells proceeded to test his new anesthesia method on over a dozen patients in the following weeks. He was proud of his achievement, but he chose not to patent his method because he felt pain relief should be “as free as the air.”
This discovery brought Wells to the Ether Dome at the Massachusetts General Hospital in Boston. Before an audience of Harvard Medical School faculty and students, Wells convinced a volunteer from the audience to have their tooth extracted after inhaling nitrous oxide. Wells’ success came to an abrupt halt when this volunteer screamed out in pain during the extraction. Looking back on this event, it is very likely that the volunteer did not inhale enough of the gas to achieve the appropriate anesthetic effect. But the reason didn’t matter—Wells was horrified by his volunteer’s reaction, his own apparent failure, and was laughed out of the Ether Dome as a fraud.
The following year, William Morton successfully demonstrated the use of ether as an anesthetic for dental and medical surgery. He patented the discovery of ether as a dental anesthetic and sold the rights to it. To this day, most credit the success of dental anesthesia to Morton, not Wells.
After giving up dentistry, Horace Wells worked unsuccessfully as a salesman and traveled to Paris to see a presentation on updated anesthesia techniques. But his ego had been broken. After returning to the United States, he developed a dangerous addiction to chloroform (perhaps another risky experiment in patient sedation, gone awry) that left him mentally unstable. In 1848, while under its influence, he assaulted a streetwalker. He was sent to prison and, in the end, took his own life.
This is the sad story of a man whose discovery revolutionized dentists’ ability to effectively care for patients while keeping them calm and out of pain. As a student at the University of Connecticut School of Dental Medicine, it is a point of pride knowing that Dr. Wells made his discovery just a few miles from where I have learned about the remarkable effects of nitrous oxide. My education has taught me to use it effectively for patients who are nervous about a procedure and to improve the safety of care for patients with high blood pressure. Today is a day we can remember a brave man who risked his own livelihood in the name of patient care.
Featured image credit: Laughing gas, by Rumford Davy. Public domain via Wikimedia Commons.
Seventy years ago today, in Korematsu v. United States, the Supreme Court upheld the constitutionality of the Japanese-American internment program authorized by President Franklin Roosevelt’s Executive Order 9066. The Korematsu decision and the internment program that forcibly removed over 100,000 people of Japanese ancestry from their homes during World War II are often cited as ugly reminders of the dangers associated with wartime hysteria, racism, fear-mongering, xenophobia, an imperial president, and judicial dereliction of duty. But the events surrounding Korematsu are also a harrowing reminder of what happens to liberty when the “Madisonian machine” breaks down — that is, when the structural checks and balances built into our system of government fail and give way to the worst forms of tyranny.
Our 18th century system of separated and fragmented government — what Gordon Silverstein calls the “Madisonian machine” — was engineered to prevent tyranny, or rather tyrannies. Madison’s Federalist 51 outlines a prescription for avoiding “Big T Tyranny” — the concentration of power in any one branch of government. This would be accomplished by dividing and separating powers among the three branches of government and between the federal government and the states. “Ambition must be made to counteract ambition,” Madison wrote. Each branch would jealously protect its own powers while guarding against encroachments by the others.
But this wasn’t the only form of tyranny the framers worried about. In a democracy, minorities are always at risk of being oppressed by majorities — what I call “little t tyranny.” Madison’s solution to this kind of tyranny is articulated in Federalist 10. The cure to this disease was firstly to elect representatives who could filter the passions of the masses and make more enlightened decisions. Secondly, Madison observed that as long as the citizenry is sufficiently divided and carved up into numerous smaller “factions,” it would be unlikely that a unified majority would emerge to oppress a minority faction.
In the events leading up to and including the Supreme Court’s decision in Korematsu, these safeguards built into the Madisonian machine broke down, giving way to both forms of T/tyranny. Congress not only acquiesced to President Roosevelt’s executive order, it responded with alacrity to support it. After just one hour of floor debate and virtually no dissent, Congress passed Public Law 503, which promulgated the order and assigned criminal penalties for violating it. And the branch furthest removed from the whims and passions of the majority, the Supreme Court, declined to second-guess the wisdom of the elected branches. As Justice Hugo Black wrote for the majority in Korematsu, “we cannot reject as unfounded the judgment of the military authorities and of Congress…” If Congress had been more skeptical, perhaps the Supreme Court might have been, too. But the Supreme Court has a long track record of deference to the executive when Congress gives express consent for its actions – especially in times of war. Unfortunately, under the Madisonian design, this is exactly when the Supreme Court ought to be most skeptical of executive power.
To be sure, these checks and balances built into the Madisonian system were only meant to function as “auxiliary precautions.” The most important safeguard against T/tyranny would be the people themselves. Through a campaign of misinformation and fear-mongering, however, this protection was also rendered ineffective. Public opinion data was used selectively to convey the impression to both legislators and west coast citizens that the majority of Americans supported the internment program. The passions of the public were further manipulated by the media and west coast newspaper headlines such as “Japanese Here Sent Vital Data to Tokyo,” “Lincoln Would Intern Japs,” and “Danger in Delaying Jap Removal Cited.” Any dissent or would-be countervailing “factions,” to use Madison’s phrase, were effectively silenced.
In Korematsu, ambition did not counteract ambition as Madison had intended, and the machine broke down. That’s because in order to function properly, the Madisonian machine requires access to information and time for genuine deliberation. It also requires friction. It requires people to disagree – for our elected representatives to disagree with one another, for the Supreme Court to police the elected branches, for citizens to pause, faction off, and check one another. So we can complain of gridlock in government, but let’s not forget that the alternative, as demonstrated by the unforgivable and tragic events of Korematsu, exposes the most vulnerable among us to the worst forms of tyranny.
Featured image credit: A young evacuee of Japanese ancestry waits with the family baggage before leaving by bus for an assembly center. US National Archives and Records Administration. Public domain via Wikimedia Commons.