At first, I wasn’t sure quite why. I get what they meant. It seems like Ebola’s everywhere! It’s constantly on the news, all over the internet, and everyone’s talking about it. It makes sense to be sick of hearing about it. We’re bound to get sick of hearing about anything that much!
But still, I couldn’t shake the discomfort that rang in my head over that status. Ebola seems far away; after all, it has been diagnosed only four times in the US. It’s easy to tuck it away in your mind as something distant that doesn’t affect you and to forget why it’s a big deal.
It’s even become a hot topic for jokes on social media:
Because so many see this very real disease as a far away concept, we find safety in our distance and it’s easy to make light of it.
4,877 deaths. 9,935 sufferers. That’s not funny. That’s not something to ask to “omg shut up.”
The idea of disease never really hit home for me until my little sister was diagnosed with cancer. Yes, Ebola and cancer are two very different things. But I know what it’s like to watch someone I love very dearly suffer. I know what it’s like to hold my sister’s hand while she cries because she can’t escape the pain or the fear that comes with her disease. I know what it’s like to cry myself to sleep begging God to take her illness away. And I can’t help but imagine a sister somewhere in Africa in a situation very similar to my own, watching her loved one suffer, hearing her cries, and begging for it to all be over- but without the blessings of medicine and technology that my sister has access to.
We are quick to throw on our pink gear for breast cancer awareness and dump ice on our heads for ALS because that kind of awareness is fun and easy. I’m not trying to diminish those causes- they are great causes that deserve promotion. But I mean to make note of the fact that when another very real disease with very real consequences is brought to light and gains awareness, people groan that it’s in the news again and make jokes about it on the internet. Because Ebola doesn’t have the fun and cute promotional package, we complain and make light of it and its need for awareness and a solution.
People are suffering and dying from Ebola. Just because that suffering seems far away, doesn’t make it any less significant.
This is a guest post from my oldest daughter, Meredith. I begged her to let me post it.
One of the fun things about being friends with illustrators is getting sneak peeks at art spreads before the book is published. I fell in love with this story last Christmas when Hazel was busy working on the front cover, … Continue reading →
I have never agreed with the dismissal of Wikipedia as a source of information, even for students. This is because, while yes, there are pages that are full of misinformation, others are excellent. The latter are carefully maintained by experts and people highly knowledgeable about the topic in question. I’d long ago read about scientists who were seeing to it that Wikipedia pages on their subjects of expertise were properly maintained. I think that rather than teaching students NOT to use Wikipedia, we’d be better off teaching them to use it and other sources carefully and critically.
And so now, with all the ever-growing hysteria about Ebola in this country (sadly reminding me, like this person, of the early days of AIDS), I wasn’t at all surprised to read the New York Times article “Wikipedia is Emerging as a Trusted Internet Source for Information on Ebola.” And I think those of us who have been negative about Wikipedia need to rethink our position. Here’s a good source that balances out the misinformation going on all over the place. Rather than casting it out, embrace it, and help people develop skills to use it in the best way rather than not at all.
Like many this past week, our attention has been fixated on the media coverage of the Ebola outbreak: images of experts showing off the proper way to put on and take off protective gloves to avoid exposure to the virus; political pundits quarrelling over the appropriateness of travel restrictions; reassuring press conferences by the director of the Centers for Disease Control. It is an event that has received immediate and intense attention and generated compelling journalism, for sure, but does it really give us an emotional understanding of the impact of the event?
What is it like for a mother or a father to watch their child die and not to be able to touch them? What happens within a community that has experienced a major outbreak? Are people brought closer through a shared suffering or are the bonds that held the community together forever broken? There are infinite questions that we could ask of the human heart in the midst or the aftermath of such an event. Oral history with its emphasis on empathy is an effective method of asking these questions.
Hopefully the epidemic will be contained, but by the time it is, it is likely that the public’s appetite for more analysis on the outbreak will have been satiated. Journalists will be compelled to move onto the new topic of the day. Oral historians, however, can — and should — linger on this event.
For oral historians, who have increasingly worked in the aftermath of crisis over the past decade, the motivation to document is fueled by both a humanitarian impulse to respond to crisis and a scholar’s desire to inquire and understand. Times of widespread crisis have an elusive complexity which defies any attempt at meta-narrative. Aspiring to a comprehensive picture of the countless ways in which the epidemic is impacting so many seems unfeasible. For many researchers, the most profound way to begin to appreciate how this crisis manifests itself for an individual, a family, or a community is oral history.
Doing oral history in West Africa in the aftermath of the epidemic will present unique challenges for interviewers. Navigating the emotional and political resonance of the Ebola outbreak will require caution, compassion, and courage, as well as flexibility in the application of oral history best practices. The outcome of this work, however, can offer insight into how the individual human heart and mind respond to the terror of an epidemic, and how an individual’s responses to fear and grief impact their communities.
The personal perspective oral history provides has so often been left out of our analysis of crisis. We are left with dry academic reports often composed by responding agencies trained to exclude emotion from their analysis. But without this emotion, without this individual perspective, we don’t understand crisis and the impact it has on those who are left to pick up the pieces of shattered lives and communities. Oral history provides a means for the people most affected by crisis or disaster to be recorded, archived, and shared, to put them, not the devastation, at the center of the story. It is an effort that often runs counter to our collective response to emergency and, for that reason alone, it offers meaningful and enduring outcomes.
Featured image: Hospital in Kenema, Sierra Leone, where the Ebola virus samples are tested. June 2014. By Leasmhar. CC-BY-SA-3.0 via Wikimedia Commons.
As an Africanist historian who has long been committed to reaching broader publics, I was thrilled when the research team for the BBC’s popular genealogy program Who Do You Think You Are? contacted me late last February about an episode they were working on that involved mixed race relationships in colonial Ghana. I was even more pleased when I realized that their questions about the practice and perception of intimate relationships between African women and European men in the Gold Coast, as Ghana was then known, were ones I had just explored in a newly published American Historical Review article, which I readily shared with them. This led to a month-long series of lengthy email exchanges, phone conversations, Skype chats, and eventually to an invitation to come to Ghana to shoot the Who Do You Think You Are? episode.
After landing in Ghana in early April, I quickly set off for the coastal town of Sekondi where I met the production team, and the episode’s subject, Reggie Yates, a remarkable young British DJ, actor, and television presenter. Reggie had come to Ghana to find out more about his West African roots, but discovered instead that his great grandfather was a British mining accountant who worked in the Gold Coast for several years. His great grandmother, Dorothy Lloyd, was a mixed-race Fante woman whose father—Reggie’s great-great grandfather—was rumored to be a British district commissioner at the turn of the century in the Gold Coast.
The episode explores the nature of the relationship between Dorothy and George, who were married by customary law around 1915 in the mining town of Broomassi, where George worked as the paymaster at the local mine. George and Dorothy set up house in Broomassi and raised their infant son, Harry, there for two years before George left the Gold Coast in 1917 for good. Although their marriage was relatively short lived, it appears that Dorothy’s family and the wider community that she lived in regarded it as a respectable union and no social stigma was attached to her or Harry after George’s departure from the coast.
George and Dorothy lived openly as man and wife in Broomassi during a time period in which publicly recognized intermarriages were almost unheard of. As a privately employed European, George was not bound by the colonial government’s directives against cohabitation between British officers and local women, but he certainly would have been aware of the informal codes of conduct that regulated colonial life. While it was an open secret that white men “kept” local women, these relationships were not to be publicly legitimated.
Precisely because George and Dorothy’s union challenged the racial prescripts of colonial life, it did not resemble the increasingly strident characterizations of interracial relationships as immoral and insalubrious in the African-owned Gold Coast press. Although not a perfect union, as George was already married to an English woman who lived in London with their children, the trajectory of their relationship suggests that George and Dorothy had a meaningful relationship while they were together, that they provided their son Harry with a loving home, and that they were recognized as a respectable married couple. No doubt this had much to do with why the wider African community seemingly embraced the couple, and why Dorothy was able to “marry well” after George left. Her marriage to Frank Vardon, a prominent Gold Coaster, would have been unlikely had she been regarded as nothing more than a discarded “whiteman’s toy,” as one Gold Coast writer mockingly called local women who casually liaised with European men. In her own right, Dorothy became an important figure in the Sekondi community where she ultimately settled and raised her son Harry, alongside the children she had with Frank Vardon.
The “white peril” commentaries that I explored in my AHR article proved to be a rhetorically powerful strategy for challenging the moral legitimacy of British colonial rule because they pointed to the gap between the civilizing mission’s moral rhetoric and the sexual immorality of white men in the colony. But rhetoric often sacrifices nuance for argumentative force and Gold Coasters’ “white peril” commentaries were no exception. Left out of view were men like George Yates, who challenged the conventions of their times, even if imperfectly, and women like Dorothy Lloyd who were not cast out of “respectable” society, but rather took their place in it.
This sense of conflict and connection and of categorical uncertainty is what I hope to have contributed to the research process, storyline development, and filming of the Reggie Yates episode of Who Do You Think You Are? The central question the show raises is how do we think about and define relationships that were so heavily circumscribed by racialized power without denying the “possibility of love?” By “endeavor[ing] to trace its imperfections, its perversions,” was Martinican philosopher and anticolonial revolutionary Frantz Fanon’s answer. While I have yet to see the episode, Fanon’s insight will surely reverberate throughout it.
Scholars have written a lot about the difficulties in the study of religion generally. Those difficulties become even messier when we use the words black or African American to describe religion. The adjectives bear the burden of a difficult history that colors the way religion is practiced and understood in the United States. They register the horror of slavery and the terror of Jim Crow as well as the richly textured experiences of a captured people, for whom sorrow stands alongside joy. It is in this context, one characterized by the ever-present need to account for one’s presence in the world in the face of the dehumanizing practice of white supremacy, that African American religion takes on such significance.
To be clear, African American religious life is not reducible to those wounds. That life contains within it avenues for solace and comfort in God, answers to questions about who we take ourselves to be and about our relation to the mysteries of the universe; moreover, meaning is found, for some, in submission to God, in obedience to creed and dogma, and in ritual practice. Here evil is accounted for. And hope, at least for some, assured. In short, African American religious life is as rich and as complicated as the religious life of other groups in the United States, but African American religion emerges in the encounter between faith, in all of its complexity, and white supremacy.
I take it that if the phrase African American religion is to have any descriptive usefulness at all, it must signify something more than African Americans who are religious. African Americans practice a number of different religions. There are black people who are Buddhist, Jehovah’s Witness, Mormon, and Baha’i. But the fact that African Americans practice these traditions does not lead us to describe them as black Buddhism or black Mormonism. African American religion singles out something more substantive than that.
The adjective refers instead to a racial context within which religious meanings have been produced and reproduced. The history of slavery and racial discrimination in the United States birthed particular religious formations among African Americans. African Americans converted to Christianity, for example, in the context of slavery. Many left predominantly white denominations to form their own in pursuit of a sense of self- determination. Some embraced a distinctive interpretation of Islam to make sense of their condition in the United States. Given that history, we can reasonably describe certain variants of Christianity and Islam as African American and mean something beyond the rather uninteresting claim that black individuals belong to these different religious traditions.
The adjective black or African American works as a marker of difference: as a way of signifying a tradition of struggle against white supremacist practices and a cultural repertoire that reflects that unique journey. The phrase calls up a particular history and culture in our efforts to understand the religious practices of a particular people. When I use the phrase, African American religion, then, I am not referring to something that can be defined substantively apart from varied practices; rather, my aim is to orient you in a particular way to the material under consideration, to call attention to a sociopolitical history, and to single out the workings of the human imagination and spirit under particular conditions.
When Howard Thurman, the great 20th century black theologian, declared that the slave dared to redeem the religion profaned in his midst, he offered a particular understanding of black Christianity: that this expression of Christianity was not the idolatrous embrace of Christian doctrine which justified the superiority of white people and the subordination of black people. Instead, black Christianity embraced the liberating power of Jesus’s example: his sense that all, no matter their station in life, were children of God. Thurman sought to orient the reader to a specific inflection of Christianity in the hands of those who lived as slaves. That difference made a difference. We need only listen to the spirituals, give attention to the way African Americans interpreted the Gospel, and to how they invoked Jesus in their lives.
We cannot deny that African American religious life has developed, for much of its history, under captured conditions. Slaves had to forge lives amid the brutal reality of their condition and imagine possibilities beyond their status as slaves. Religion offered a powerful resource in their efforts. They imagined possibilities beyond anything their circumstances suggested. As religious bricoleurs, they created, as did their children and children’s children, on the level of religious consciousness, and that creativity gave African American religion its distinctive hue and timbre.
African Americans drew on the cultural knowledge, however fleeting, of their African past. They selected what they found compelling and rejected what they found unacceptable in the traditions of white slaveholders. In some cases, they reached for traditions outside of the United States altogether. They took the bits and pieces of their complicated lives and created distinctive expressions of the general order of existence that anchored their efforts to live amid the pressing nastiness of life. They created what we call African American religion.
Headline image credit: Candles, by Markus Grossalber, CC-BY-2.0 via Flickr.
Grove Art Online recently updated with new content on African art and architecture. We sat down with Dr. Steven Nelson, who supervised this update, to learn more about his background and the field of African art history.
Can you tell us a little about your background?
As an undergraduate at Yale, after flirting with theater, music, and sociology, I majored in studio art and focused on bookmaking, graphic design, printmaking, and photography. Majors were required to take three art history classes. By the end of my college career, I had taken eight and had seriously thought about changing my major. Within art history, I was most attracted to modern and Japanese art, and studying the two fields had profound effects on my art making. After a six-year-long stint in newspaper design, I went to Harvard to pursue a Ph.D. in modern art. After coursework in myriad fields, serving on a search committee for a new faculty member in African art (the search resulted in the hiring of Suzanne Preston Blier), and a trip to Kenya to study medieval Swahili architecture, I changed my field to African art. My dissertation is a study of Mousgoum architecture (one of the fields covered in Grove Art Online’s African update) in Cameroon. The thesis became my first book, entitled From Cameroon to Paris: Mousgoum Architecture In and Out of Africa (University of Chicago Press, 2007). Having been an artist has had a profound effect on how I encounter art objects and the built environment.
You recently served as editor for the Grove Art Online African art update. What was your favorite part about this experience?
My favorite part of serving as editor for the Grove Art Online African art update was the opportunity to have a widely used and respected resource as a platform to reassess and to reshape the canon of African art. More to the point, Grove provided a unique opportunity to rearticulate the field’s various methods, to acknowledge shifts in scholarly focus, and to expand the subjects we consider when hearing the very term “African art.” As someone who has served at various points as an editor, I also enjoyed working with authors to produce essays that are both rich in content and accessible to a broad audience. I’m also very pleased that the authors included in the update range from very eminent art historians to graduate students with whom I closely worked.
What is your favorite work of art of all time, and why?
My favorite work of art of all time changes day-by-day. Right now Malick Sidibé’s party photographs of the 1960s, which explore a burgeoning, international youth culture in Bamako on the heels of independence, hold this title.
Which recently added African art article(s) stand out to you, and why?
While I am really happy with all of the new content, the material on African film and the essay on African modern art are of particular importance to me. In African art history, broadly speaking, film has received very little attention (in full disclosure, I write on it myself). However, it is critical to understanding the complexity of modern and contemporary African art. The essay on modern African art marks an important expansion of the field, one in which scholars are insisting on understanding modernity, and how African artists engage with it, with more nuance and precision.
How has your field changed in the past 20 years?
The past 20 years have witnessed a groundswell in studies of modern and contemporary African art. Alongside this development, the past 20 years have also seen a lot of energy (for better or worse) spent on understanding the relationship between modern and contemporary and “traditional” or “classical” African art. On the one hand, some feel that the two should be considered as separate fields, with the former being a kind of offshoot of global contemporary art. On the other, some feel that the two can inform each other in exciting ways. Having done research on topics ranging from medieval Swahili architecture to contemporary art in Africa and its diasporas, I personally subscribe to the latter view. Methodologically, much has changed as well. Africanist art historians have become much more willing to incorporate poststructuralist and post-colonial scholarship into their studies, and the results have enriched how we understand the subjects of our endeavors. There has also been much welcome attention paid to the important intersections of African art and Islam as well as African art and Christianity.
How do you envision art history research being done in 20 years?
Digital humanities will no doubt have an enormous impact on art history research. Digital tools allow for quick aggregation, and this can add rich dimensions to our research. One of the challenges, however, will be to see how — or if — the digital realm provides the means for new questions and new art historical knowledge. I helped facilitate a digital art history workshop at UCLA this past summer, and that question, one that really strikes at the place of the digital as we move forward, is one that I engage with both optimism and a healthy skepticism.
Political economy is back on the centre stage of development studies. The ultimate test of its respectability is that the World Bank has realised that it is not possible to separate social and political issues such as corruption and democracy from other factors that influence the effectiveness of its investments, and started using the concept.
It predates the creation of “economics” as a discipline. Adam Smith, David Ricardo, Thomas Malthus, James Mill, and, a generation later, Karl Marx and Friedrich Engels explored how groups or classes in society exploited each other or were exploited, and used their conclusions to create theories of change or growth.
Marx’s ideas were taken up in the 1950s by economists and sociologists of the left, such as Paul Baran (The Political Economy of Growth, 1957) and later Samir Amin (The Political Economy of the Twentieth Century, 2000) who linked it to theories of imperialism and neo-colonialism to interpret what was happening in newly independent African countries where nationalist political parties had taken power.
Marx and Engels in their early writings, and Marxist orthodoxy subsequently, espoused determinist theories in which development went through pre-determined stages – primitive forms of social organisation, feudalism, capitalism, and then socialism. But in their later writings Marx and Engels were much more open, and recognised that some pre-capitalist formations could survive, and that there was no single road to socialism. Class analysis, and exploration of the economic interests of powerful classes, and their uses of the technologies available to them, could inform a study of history, but not substitute for it.
That was how I interpreted what happened in Tanzania in the 1970s. That account was built around the economic interests of those involved and the mistakes made, both inside and outside Tanzania. It focussed on the choices made by those who controlled the Tanzanian state or negotiated “foreign aid” deals with Western governments—Issa Shivji’s bureaucratic bourgeoisie. These themes are still current today.
I am not alone. Michael Lofchie’s A Political Economy of Tanzania (2014) focuses on the difficult years of structural adjustment in the 1980s and 1990s. He argues that the salaried elite could personally benefit from an overvalued exchange rate. From 1979 on, under the influence of the charismatic President Julius Nyerere, Tanzania resisted the IMF and World Bank, which urged it to devalue. But eventually, around the mid-1980s, they realised that they had the possibility of making even bigger financial gains if the country devalued and there were open markets, which would allow them to make money from trade or production. They were becoming a productive bourgeoisie.
Lofchie’s analysis can be contested. The benefits of the chaos that resulted from the extremely over-valued exchange rates of the 1980s were reaped by only a few. It is true that rapid growth followed from around 1990 to the present, but that is also due to the high price of gold on international markets and the rapid expansion of gold mining and tourism. There is still plenty of evidence of individuals making money illegitimately – corruption is ever present in the political discourse, and will continue to be so up till the Presidential elections due in October 2015.
A challenge for the ruling class in Tanzania, as it left the 1970s, was whether it would be able to convert its economic strategies into meaningful growth and benefits for the population. By 2011 the challenge was even more acute, because very large reserves of gas had been discovered off the coast of Southern Tanzania, so money for investment would no longer be a binding constraint. But would those resources be used to create real assets which would create the prerequisites for rapid expansions in manufacturing, services and especially agriculture? Or would they be frittered away through imports of non-productive machinery and infrastructure (such as the non-existent electricity generators purchased through the Richmond Project in 2006, in which several leading members of the ruling political party were implicated)? Or end up in Swiss bank accounts? The jury is very much still out. To achieve the current ambition of a rapid transition to a middle income country will require much greater understanding of engineering and agricultural science, much better contracts than have recently been achieved, and more proactive responses to the challenges of corruption. Tanzania will need to take its own political economy seriously.
Headline image credit: Tanzania – Mikumi by Marc Veraart. CC-BY-2.0 via Flickr.
It wasn’t surprising that Western journalists would react with doom-and-gloom when the Ebola outbreak began in West Africa. Or that the crisis would not be treated as a problem confronting all humanity — a force majeure — but as one of “those diseases” that afflict “those people” over there in Africa. Most Western media immediately fell into fear-mongering. Rarely did they tell the stories of Africans who survived Ebola, or meaningfully explore what it means to see your child or parent or other family member or friend be stricken with the disease. Where are the stories of the wrenching decisions of families forced to abandon loved ones or the bravery required to simply live as a human in conditions where everyone walks on the edge of suspicion?
And then he writes some hard truths.
Given our interconnected world, it’s no longer possible to excuse such treatment as a lack of access to the facts. So what is the explanation? To borrow the words of Nigerian novelist Chinua Achebe, “Quite simply it is the desire — one might indeed say the need — in Western psychology to set Africa up as a foil to Europe, as a place of negations at once remote and vaguely familiar, in comparison with which Europe’s own state of spiritual grace will be manifest.”
This thinking is so deeply entrenched in the minds of people in the West that it has become a reflex. Still, the ways in which Africans are portrayed as less human have not lost the power to shock. [bold is mine] Each new crisis, it seems, offers a platform for some to exercise their prejudices.
The hysteria is also fueling racism beyond the continent. In Germany, an African woman who recently traveled to Kenya — far from the affected countries — fell ill with a stomach virus at work; the entire building was locked down. In Brussels, an African man had a simple nosebleed at a shopping mall, and the store where it happened was sterilized. In Seoul, a bar put up a sign saying, “We apologize but due to the Ebola Virus we are not accepting Africans at the moment.” Here in the United States, each time I have been to a doctor’s office since the outbreak, I have noticed an anxious look on the faces of the assistants that dissipates only when I say that I haven’t been to my country recently.
For Western media, this is just another one of those stories about the “killer virus” and the “poor Africans” who must once again be saved and spoken for by Westerners. And, always, there is the most important question: Will the virus come to the United States or Europe?
If you are reading this and believe you do not think about us the ways I have described, ask yourself the following questions: When was the last time you saw, and took the time to read, a positive front-page article about an African country? Have you ever met someone from Africa and decided to tell her what you know about her country and her continent, even if you have never been there? Have you ever noticed yourself speaking slowly and using exaggerated gestures while talking to someone from Africa, assuming that he doesn’t understand English well?
Hadrian’s Wall has been in the news again recently for all the wrong reasons. Occasional wits have pondered its significance in the Scottish Referendum, neglecting the fact that it has never marked the Anglo-Scottish border, and was certainly not constructed to keep the Scots out. Others have mistakenly insinuated that it is closed for business, following the widely reported demise of the Hadrian’s Wall Trust. And then of course there is the Game of Thrones angle: best-selling writer George R R Martin has spoken of the Wall as an inspiration for the great wall of ice that features in his books.
Media coverage of both Hadrian’s Wall Trust’s demise and Game of Thrones’ rise has sometimes played upon and propagated the notion that Hadrian’s Wall was manned by shivering Italian legionaries guarding the fringes of civilisation – irrespective of the fact that the empire actually entrusted the security of the frontier to its non-citizen soldiers, the auxilia, rather than to its legionaries. The tendency to overemphasise the Italian aspect reflects confusion about what the Roman Empire and its British frontier were about. But Martin, who made no claims to be speaking as a historian when he spoke of how he took the idea of legionaries from Italy, North Africa, and Greece guarding the Wall as a source of inspiration, did at least get one thing right about the Romano-British frontier.
There were indeed Africans on the Wall during the Roman period. In fact, at times there were probably more North Africans than Italians and Greeks. While all these groups were outnumbered by north-west Europeans, who tend to get discussed more often, the North African community was substantial, and its stories warrant telling.
Perhaps the most remarkable tale to survive is an episode in the Historia Augusta (Life of Severus 22) concerning the inspection of the Wall by the emperor Septimius Severus. The emperor, who was himself born in Libya, was confronted by a black soldier, part of the Wall garrison and a noted practical joker. According to the account, the notoriously superstitious emperor saw in the soldier’s black skin and his brandishing of a wreath of cypress branches an omen of death. And his mood was not further improved when the soldier shouted the macabre double entendre iam deus esto victor (now victor/conqueror, become a god). For of course, properly speaking, a Roman emperor should first die before being divinized. The late Nigerian classicist Lloyd Thompson made a powerful point about this intriguing passage in his seminal work Romans and Blacks: ‘the whole anecdote attributes to this man a disposition to make fun of the superstitious beliefs about black strangers’. In fact we might go further, and note just how much cultural knowledge and confidence this frontier soldier needed to play the joke – he needed to be aware of Roman funerary practices, superstitions, and indeed the practice of emperor worship itself.
Why is this illuminating episode not better known? Perhaps it is because there is something deeply uncomfortable about what could be termed Britain’s first ‘racist joke’, or perhaps the problem lies with the source itself, the notoriously unreliable Historia Augusta. And yet as a properly forensic reading of this part of the text by Professor Tony Birley has shown, the detail included around the encounter is utterly credible, and we can identify places alluded to in it at the western end of the Wall. So it is quite reasonable to believe that this encounter took place.
Not only this, but according to the restoration of the text preferred by Birley and myself, there is a reference to a third African in this passage. The restoration post Maurum apud vallum missum in Britannia indicates that this episode took place after Severus had granted discharge to a soldier of the Mauri (the term from which ‘Moors’ derives). And as Birley has noted, we know that there was a unit of Moors stationed at Burgh-by-Sands on the Solway at this time.
Sadly, Burgh is one of the least explored forts on Hadrian’s Wall, but some sense of what may one day await an extensive campaign of excavation there comes from Transylvania in Romania, where investigations at the home of another Moorish regiment of the Roman army have revealed a temple dedicated to the gods of their homelands. Perhaps, too, evidence of different North African legacies would emerge. The late Vivian Swann, a leading expert in the pottery of the Wall, presented an attractive case that the appearance of new forms of ceramics indicates the introduction of North African cuisine in northern Britain in the second and third centuries AD.
What is clear is that the Mauri of Burgh-by-Sands were not the only North Africans on the Wall. We have an African legionary’s tombstone from Birdoswald, and from the East Coast the glorious funerary stela set up to commemorate Victor, a freedman (former slave), by his former master, a trooper in a Spanish cavalry regiment. Victor’s monument now stands on display in Arbeia Museum at South Shields next to the fine, and rather better known, memorial to the Catuvellaunian Regina, freedwoman and wife of Barates from Palmyra in Syria. Together these individuals, and the many other ethnic groups commemorated on the Wall, remind us of just how cosmopolitan the people of Roman frontier society were, and of how a society that stretched from the Solway and the Tyne to the Euphrates was held together.
Refugee identity is often shrouded in suspicion, speculation and rumour. Of course everyone wants to protect “real” refugees, but it often seems – upon reading the papers – that the real challenge is to find them among the interlopers: the “bogus asylum seekers”, the “queue jumpers”, the “illegals”.
Yet these distinctions and definitions shatter the moment we subject them to critical scrutiny. In Syria, no one would deny a terrible refugee crisis is unfolding. Western journalists report from camps in Jordan and Turkey documenting human misery and occasionally commenting on political manoeuvring, but never doubting the refugees’ veracity.
But once these same Syrians leave the overcrowded camps to cross the Mediterranean, a spell transforms these objects of pity into objects of fear. They are no longer “refugees”, but “illegal migrants” and “terrorists”. However, data on migrants rescued in the Mediterranean show that up to 80% of those intercepted by the Italian Navy are in fact deserving of asylum, not detention.
Other myths perpetuate suspicion and xenophobia. Every year in the UK, refugee charity and advocacy groups spend precious resources trying to counter tabloid images of a Britain “swamped” by itinerant swan-eaters and Islamic extremists. The truth – that Britain is home to just 1% of refugees while 86% are hosted in developing countries, including some of the poorest on earth, and that one-third of refugees in the UK hold university degrees – is simply less convenient for politicians pushing an anti-migration agenda.
We are increasingly skilled in crafting complacent fictions intended not so much to demonise refugees as to exculpate our own consciences. In Australia, for instance, ever-more restrictive asylum policies – which have seen all those arriving by boat transferred off-shore and, even when granted refugee status, refused the right to settle in Australia – have been presented by supporters as merely intended to prevent the nefarious practice of “queue-jumping”. In this universe, the border patrols become the guardians ensuring “fair” asylum hearings, while asylum-seekers are condemned for cheating the system.
That the system itself now contravenes international law is forgotten. Meanwhile, the Sri Lankan asylum-seeking mothers recently placed on suicide watch – threatening to kill themselves in the hope that their orphaned, Australian-born children might then be saved from detention – are judged guilty of “moral blackmail”.
Such stories foster complacency by encouraging an extraordinary degree of confidence in our ability to sort the deserving from the undeserving. The public remain convinced that “real” refugees wait in camps far beyond Europe’s borders, and that they do not take their fate into their own hands but wait to be rescued. But this “truth” too is hypocritical. It conveniently obscures the fact that the West will not resettle one-tenth of the refugees who have been identified by the United Nations High Commissioner for Refugees as in need of resettlement.
In fact, only one refugee in a hundred will ever be resettled from a camp to a third country in the West. In January 2014 the UK Government announced it would offer 500 additional refugee resettlement places for the “most vulnerable” refugees as a humanitarian gesture: but it’s better understood as political rationing.
Research shows us that undue self-congratulation when it comes to “helping” refugees is no new habit. Politicians are fond of remarking that Britain has a “long and proud” tradition of welcoming refugees, and NGOs and charities reiterate the same claim in the hope of grounding asylum in British cultural values.
But while the Huguenots found sanctuary in the seventeenth century, and Russia’s dissidents sought exile in the nineteenth, closer examination exposes the extent to which asylees’ ‘warm welcome’ has long rested upon the convictions of the few prepared to defy the popular prejudices of the many.
Poor migrants fleeing oppression have always been more feared than applauded in the UK. In 1905, the British Brothers’ League agitated for legislation to restrict (primarily Jewish) immigration from Eastern Europe because of populist fears that Britain was becoming ‘the dumping ground for the scum of Europe’. Similarly, the bravery of individual campaigners who fought to secure German Jews’ visas in the 1930s must be measured against the groundswell of public anti-semitism that resisted mass refugee admissions.
British MPs in 1938 were insistent that ‘it is impossible for us to absorb any large number of refugees here’, and as late as August 1938 the Daily Mail warned against large numbers of German Jews ‘flooding’ the country. In the US, polls showed that while 94% of Americans disapproved of Kristallnacht, 77% thought immigration quotas should not be raised to allow additional Jewish migration from Germany.
All this suggests that Western commitment after 1951 to uphold a new Refugee Convention should not be read as a marker of some innate Western generosity of spirit. Even in 1947, Britain was forcibly returning Soviet POWs to Stalin’s Russia. Many committed suicide en route rather than face the Gulags or execution. When in 1972 Idi Amin expelled Uganda’s Asians – many of whom were British citizens – the UK government tried desperately to persuade other Commonwealth countries to admit the refugees, before begrudgingly agreeing to act as a refuge of “last resort”. If, forty years on, the 40,000 Ugandan Asians who settled in the UK are often pointed to as a model refugee success story, this is not because of, but in spite of, the welcome they received.
Many refugee advocates and NGOs are nevertheless wary of picking apart the public belief that a “generous welcome” exists for “real” refugees. The public, after all, are much more likely to be flattered than chastised into donating much needed funds to care for those left destitute – sometimes by the deliberate workings of the asylum system itself. But it is important to recognise the more complex and less complacent truths that researchers’ work reveals.
For if we scratch the surface of our asylum policies, beneath the shiny humanitarian veneer lies the most cynical kind of politics. Myth-making sustains false dichotomies between deserving “refugees” there and undeserving “illegal migrants” here – and conveniently lets us forget that both are fleeing the same wars in the same leaking boats.
While Ebola seems to be off the New York Times front page, the articles are still there. “If They Survive in the Ebola Ward, They Work On” features some heroic people in and around Kenema, an area I knew when I lived in Sierra Leone. (For a different sort of context, this is central Mende country, where the Amistad captives of Africa is My Home were from.)
Today marks the 127th birthday of Marcus Mosiah Garvey, the first National Hero of Jamaica, and one of my spiritual ancestors.
Marcus Garvey through his life and work helped me to understand a question that has haunted me and many other Africans at home and abroad: What does it mean to be a man?
After travelling through the Americas and into the center of colonial power in the West Indies, Garvey realized that Africans at home and abroad, in order to survive the brutalities of slavery, had been reduced to a childish state in which they had relinquished personal and collective power. Cowed into submission, Africans at home and abroad lived in fear of outside forces over which they had no control, and even after gaining “freedom,” their existence was defined by their level of servility to their former masters.
As Garvey saw it, Africans at home and abroad could either live in a reactionary state in which they only responded to crises (and once the crisis was over resume a passive, dormant existence) or take control of their lives by assuming personal and collective responsibility.
“A race without authority and power, is a race without respect,” said Garvey, and to remedy the situation, he created the Universal Negro Improvement Association and African Communities League, which is celebrating its 100th anniversary this year.
Men and nations assume responsibility for their lives. Personal and collective responsibility guided Garvey’s philosophy of manhood and nationhood, which were organized around these principles:
Redemption of Africans at home and abroad
Education
Self-Respect
Purpose
Economics
Community
Garvey set a challenge before Africans at home and abroad when he wrote in the Philosophy and Opinions of Marcus Garvey: "The greatest weapon used against the Negro is disorganization.”
In the midst of Ferguson and other daily insults to Africans at home and abroad, either we can continue living in a childish, reactionary state where we do not assume responsibility for our lives or we can organize and plan accordingly.
The choice, as it was then and now, is ours.
The Coalition for the Exoneration of Marcus Garvey is petitioning President Barack Obama to exonerate Marcus Garvey:
This blog is a platform I normally reserve for the important issue of fashion in Sierra Leone, but this week, I’m struggling to find a fashion angle. Unless you’ve been living on Mars, you will know that West Africa is suffering the worst ever outbreak of the world’s most deadly disease – Ebola. I traveled to Kenema district last week for an assignment to write about the outbreak. I live in Freetown and before leaving, the epidemic hadn’t really kicked off here. ‘EBOLA!’ (said with a loud voice and chuckle) was something that was happening in villages, places that didn’t affect the urban folk of Sierra Leone’s capital. I knew Kenema was a district suffering huge case numbers, but nothing prepared me for what I saw and heard in one of Sierra Leone’s most brutally affected areas.
Yet again Africa is in the news as the other, as a place of horror and misery. So just a few reminders:
Ebola is not throughout Africa. You don’t need to worry when coming into contact with someone from the continent or someone who has been there recently. Africa is a big continent and Ebola is not everywhere. In fact…
Ebola is currently in three West African countries: Guinea, Liberia, and Sierra Leone. But…
Ebola is not an air-borne illness. You will not contract it by being in the same plane or auditorium or building as someone who has it or has come from one of the countries where it is prevalent. In fact…
you would need to be directly exposed to fluids from someone with the illness to contract it. And that means that it is in the affected areas, in direct contact with those who have the illness, that you would be most at risk. And that is just not true for those of us living in the United States. So stop worrying about getting it here. Instead worry…
that those in the affected areas do not have the basic health care we in the United States take for granted. And so while there is indeed not a cure for Ebola,…
with the sort of hospital care we in the US take for granted, treating the disease in its early stages, many who are dying would be saved.
Between now and 2018, Book Aid is planning to create a total of 60 child-friendly spaces – Children’s Corners – in libraries in Tanzania, Uganda, Malawi, Cameroon, Zambia and Zimbabwe, in conjunction with local partners. They will train librarians to work effectively with children, supply new books from the UK and provide each library with a grant for refurbishments and the local purchase of books.
Children’s Digital learning pilot project in Kenya. Photo: bookaid.org
Why am I telling you this?
Open Doors will revolutionise access to books for thousands of children in sub-Saharan Africa, where many children live below the poverty line and literacy levels are among the lowest in the world.
With few books in their schools and no books at home, children struggle to read and to learn. For most children, a local library – where one exists – is the only place where they can read the books they need to prepare them for adulthood. However, few libraries have suitable spaces for children and most librarians are not trained to work with children.
It happens that I was born in Zambia and had my first books read to me there.
Me in Zambia
We didn’t have access to many books, but my favourite was Tiger Flower by Robert Vavra, illustrated by Fleur Cowles, which my Mum found in a bookshop in Ndola.
My first introduction to the power and beauty of books, and the way they open doors into worlds of opportunities happened in Zambia, one of the countries where Book Aid works. So this campaign not only appeals on a professional level, it matters to me personally.
If you’re reading this as a publisher or book distributor please do take a look at this information sheet about how you can get involved. Each library is looking for new stock and you could be the one to make a huge difference. The number of books being sought really isn’t enormous.
Each of the 60 participating libraries will receive 2,500 children’s books. This will be broadly made up as follows:
I’m always on the look-out for new information and new takes on the Amistad story. One recent example is Marcus Rediker’s The Amistad Rebellion: An Atlantic Odyssey of Slavery and Freedom, in which the focus is on the captives’ viewpoint. And now there is a film based on the book coming from filmmaker Tony Buba. The following description and preview has me very intrigued.
This film, made by Tony Buba, is based on Marcus Rediker’s book about the famous slave revolt of 1839, The Amistad Rebellion: An Atlantic Odyssey of Slavery and Freedom (Penguin, 2012) and is about a trip made by historians and a film crew to Sierra Leone in May 2013. All of the Amistad rebels were from southern and eastern Sierra Leone, so the filmmakers went to their villages of origin to interview elders about surviving local memory of the case. They also searched for the long lost ruins of Lomboko, the slave trading factory where the Amistad Africans were loaded onto a slave ship bound for the New World. This hour-long documentary chronicles a quest for a lost history from below.
The looming centennial of the Great War has inspired a predictable abundance of conferences, books, articles, and blog posts. Most are built on a familiar meme: the war as a symbol of futility. Soldiers and societies alike are presented as victims of flawed intentions and defective methods, which in turn reflected an inability or unwillingness to adapt to the spectrum of innovations (material, intellectual, and emotional) that made the Great War the first modern conflict. That perspective is reinforced by the war’s rechristening, backlit by a later and greater struggle, as World War I—which confers a preliminary, test-bed status.
Homeward bound troops pose on the ship’s deck and in a lifeboat, 1919. The original image was printed on postal card (“AZO”) stock. Public Domain via Wikimedia Commons.
In point of fact, the defining aspect of World War I is its semi-modern character. The “classic” Great War, the war of myth, memory, and image, could be waged only in a limited area: a narrow belt in Western Europe, extending vertically five hundred miles from the North Sea to Switzerland, and horizontally about a hundred miles in either direction. War waged outside of the northwest European quadrilateral tended quite rapidly to follow a pattern of de-modernization. Peacetime armies and their cadres melted away in combat, were submerged by repeated infusions of unprepared conscripts, and saw their support systems, equine and material, melt irretrievably away.
Russia and the Balkans, the Middle East, and East Africa offer a plethora of case studies, ranging from combatants left without rifles in Russia, to the breakdown of British medical services in Mesopotamia, to the dismounting of entire regiments in East Africa by the tsetse fly. Nor was de-modernization confined to combat zones. Russia, Austria-Hungary, the Ottoman Empire, and arguably Italy, strained themselves to the breaking point and beyond in coping with the demands of an enduring total war. Infrastructures from railways to hospitals to bureaucracies that had functioned reasonably, if not optimally, saw their levels of performance and their levels of competence tested to destruction. Stress combined with famine and plague to nurture catastrophic levels of disorder, from the Armenian genocide to the Bolshevik Revolution.
Semi-modernity posed a corresponding and fundamental challenge to the wartime relationship of armed forces to governments. In 1914, for practical purposes, the warring states turned over control to the generals and admirals. This in part reflected the general belief in a short, decisive war—one that would end before the combatants’ social and political matrices had been permanently reconfigured. It also reflected civil authorities’ lack of faith in their ability to manage war-making’s arcana—and a corresponding willingness to accept the military as “competent by definition.”
Western Battle Front 1916. From J. Reynolds, Allen L. Churchill, Francis Trevelyan Miller (eds.): The Story of the Great War, Volume V. New York. Specified year 1916, actual year more likely 1917 or 1918. Public Domain via Wikimedia Commons.
The extended stalemate that actually developed had two consequences. A major, unacknowledged subtext of thinking about and planning for war prior to 1914 was that future conflict would be so horrible that the home fronts would collapse under the stress. Instead, by 1915 the generals and the politicians were able to count on unprecedented – and unexpected – commitment from their populations. The precise mix of patriotism, conformity, and passivity underpinning that phenomenon remains debatable. But it provided a massive hammer. The second question was how that hammer could best be wielded. In Russia, Austria-Hungary, and the Ottoman Empire, neither soldiers nor politicians were up to the task. In Germany the military’s control metastasized after 1916 into a de facto dictatorship. But that dictatorship was contingent on a victory the armed forces could not deliver. In France and Britain, civil and military authorities beginning in 1915 came to more or less sustainable modi vivendi that endured to the armistice. Their durability over a longer run was considered best untested.
Even in the war’s final stages, on the Western Front that was its defining theater, innovations in methods and technology could not significantly reduce casualties. They could only improve the ratio of gains to losses. The Germans and the Allies each lost over three-quarters of a million men during the war’s final months. French general Charles Mangin put it bluntly and accurately: “whatever you do, you lose a lot of men.” In contemplating future wars—a process well antedating 11 November 1918—soldiers and politicians faced a disconcerting fact. The war’s true turning point for any state came when its people hated their government more than they feared their enemies. From there it was a matter of time: whose clock would run out first. Changing that paradigm became—and arguably remains—a fundamental challenge confronting a state contemplating war.
Dennis Showalter is professor of history at Colorado College, where he has been on the faculty since 1969. He is Editor in Chief of Oxford Bibliographies in Military History, wrote “World War I Origins,” and blogged about “The Wehrmacht Invades Norway.” He is Past President of the Society for Military History, joint editor of War in History, and a widely-published scholar of military affairs. His recent books include Armor and Blood: The Battle of Kursk (2013), Frederick the Great: A Military History (2012), Hitler’s Panzers (2009), and Patton and Rommel: Men of War in the Twentieth Century (2005).
What a surprise to visit an African Water Hole with Irene Latham!
The fifteen poems in this picture book introduce us to the importance of the water hole to the African grassland ecosystem. Each poem is accompanied by a short bit of nonfiction text that tells more about the water hole or the animal featured in the poem.
Working alone or in small groups, I can imagine students using this book (and others like it that combine poetry and nonfiction) as a mentor text for their own writing about an ecosystem, their neighborhood, or the cultures they are studying in social studies.
Jone has the Poetry Friday roundup this week at Check it Out.
Roundup host/hostesses are still needed in July, August, November and December. Sign up here.
The Boko Haram (BH) terrorist group, responsible for the abduction of over 200 schoolgirls in north-eastern Nigeria, has been Nigeria’s prime security threat since 2009. Although the group has carried out innumerable acts of terror in Nigeria since 2009, its abduction of more than 200 girls at Government Girls Secondary School, Chibok, on 14 April 2014 outraged the world and gave it renewed international currency. The global and Nigerian Muslim community has since distanced itself from Boko Haram’s violent ideology. In the face of the current cosmopolitan campaign to rescue the Chibok girls, christened #BringBackOurGirls (#BBOG, #BBG), the question that dominates public discourse in the aftermath of the Chibok abduction is whether #BringBackOurGirls as an isolated phenomenon, or the increasing de-legitimisation of Boko Haram’s extremism by Muslims generally, would serve as a rallying point against violent extremism in Nigeria, or rather reinforce the historic sharia question that has threatened peaceful co-existence in the country since independence in 1960. For those unfamiliar with Nigeria’s religious politics and relations, the following cursory background should suffice as clarification.
Hundreds of people gathered at Union Square in New York City on May 3 to demand the release of some 230 schoolgirls abducted by Boko Haram insurgents in Nigeria. Photo by Michael Fleshman. CC BY-NC 2.0 via fleshmanpix Flickr.
Boko Haram in the Context of Nigeria’s Religious Politics
In most parts of northern Nigeria, Islam and sharia predated the post-independence Western-secular system that was bequeathed to a unified Nigerian state at independence. Uthman dan Fodio’s jihad of 1810, which captured the Hausa states of northern Nigeria, brought about the establishment of an Islamic central authority under the Sokoto caliphate. Since dan Fodio’s jihad was aimed at establishing a theocratic state, Islam inevitably became a state religion in these captured Hausa states. Although the British colonial authorities protected the theocratic political order they met in these emirates for reasons of imperial convenience, they nonetheless introduced a legal system that modulated the sharia order. Notwithstanding this interference, Islam and sharia survived colonial invasion in these states. Although the sharia legal order was relatively modulated to protect the British and other European merchants, its application to the natives remained significantly strong. This arrangement remained so until it became obvious to the British that an Islamic political/legal order would not serve the commercial interest of Western merchants, particularly after independence. With this concern in mind, the British orchestrated a reversal of the sharia order, and cajoled the Muslim north into accepting a relatively secular system at independence, an arrangement that was christened “the Settlement of 1960”.
The Settlement of 1960 was a pact, with the British colonialists as arbiter, between the northern and southern Animist-Christians on the one hand and the Muslim north on the other. It was aimed at establishing a secular legal order side by side with a modulated Islamic legal regime. It is intriguing to note that whereas the Christian community initially opposed this settlement for fear of a covert Islamization agenda, the northern Muslim community was at first supportive of it. But the respective positions of the Christian and Muslim communities were to be reversed shortly after independence. The Christian community turned around to favour the Settlement of 1960, while the northern Muslim community became avidly antagonistic to this arrangement.
Although many factors account for northern Muslims’ opposition to the settlement, the most significant factor is the sharia debate that ensued during the constitution-making process of 1976-78. At the constitutional conferences, there was considerable mobilisation by northern political and religious leaders for the entrenchment of sharia in Nigeria’s legal system. Unfortunately, the Muslim north suffered a humiliating defeat at the hands of Christians in their quest for the establishment of sharia. This bitter defeat meant that northern Muslims had lost most of the incentives that made the Settlement of 1960 attractive to them in the first place. Among other consequences, the sharia debate marked the beginning of vigorous and sustained activism by northern Muslims for an Islamic state, or much less, an Islamic legal, economic, and social order within the Nigerian state. This activism has taken both liberal and radical approaches. Whereas the intellectual and political classes continue to pressure the state for Islamic determinism, the Islamists and rustic northern Muslim folk often express this quest in violent ways.
The Islamic revivalism that followed the sharia debate of 1976-8 inspired the emergence and proliferation of radical Islamic sects and spurred the influx of radical Islamic clerics from neighbouring states and Senegal into northern Nigeria. Within this period, acts of religious violence were often encouraged or ignored by state authorities in northern Nigeria. Consequently, religious violence became a common feature in this part of the country, as Christians became objects of religiously-motivated attacks at the least provocation, either directly or vicariously. For instance, the US invasion of Iraq in the 1990s led to pervasive attacks on Christians and their worship centres in northern Nigeria. In 2005, a Danish newspaper cartoon which allegedly disparaged Prophet Mohammed led to the mass killing of northern Christians and the destruction of their churches and property. In the aftermath of the 9/11 attacks in 2001, Muslims in northern Nigeria celebrated and vandalized churches in the process. More recently, Christians in northern Nigeria were subjected to attacks from Muslims when US planes attacked Libya during the Arab Spring. The Boko Haram sect emerged in the context of this continuum of Islamic activism, which endorsed violence as one of its operational tools. Its ideology was therefore woven around the establishment of an imaginary puritanical state governed by sharia. Fortunately or unfortunately, Boko Haram’s interpretation of kafir (heathen) transcends the simplistic description of “non-Muslims” and encompasses those Muslims who don’t subscribe to its fundamentalist brand of Islam.
Would #BringBackOurGirls Reverse this Tendency?
Paradoxically, Boko Haram, which emerged as an ‘Islamic sect’, has taken its defence of Islam overboard, killing in the process moderate Islamic teachers, preachers, and other Muslims who deprecate its fanatical brand of Islam. Its indiscriminate attacks on the civilian population also do not distinguish Christians from Muslims. Specifically, Boko Haram’s policy of targeting moderate Muslims has become a significant paradox, given that the group is itself a product of the overarching sharia struggle in northern Nigeria. With the unfolding of its extreme and caustic brand of Islam, the group has not only denounced the legitimacy of the Islamic leadership in Nigeria, it has declared them and other moderate Muslims kafir and enemies of Allah. As #BringBackOurGirls draws global attention to Boko Haram specifically, and to violent extremism in Nigeria generally, the global and Nigerian Islamic communities have continued to condemn its activities, describing them as criminal and un-Islamic. Both the Secretary General of the Organisation of Islamic Conference (OIC) and the President of Nigeria’s Supreme Council for Islamic Affairs have said so. However, many questions have been asked about this recent de-legitimisation of Boko Haram by the Muslim community: Is the condemnation of Boko Haram by Muslims inspired by a genuine concern over violent extremism, or is it borne out of the group’s indiscriminate attacks against Muslims? Would Muslims in northern Nigeria continue to condemn the activities of individuals or groups who express extreme and violent tendencies in the name of Islam? Would attacks on Christians and their property be condoned or ignored in the future?
In the aftermath of the #BringBackOurGirls campaign, two schools of thought have emerged.
There are those who opine that the Boko Haram insurgency is a prelude to greater religious upheavals in northern Nigeria if northern Muslims are allowed neither the liberty of an Islamic state nor of practising sharia in its orthodox fashion. Those who hold this viewpoint argue that the Muslim community would not have genuinely distanced itself from Boko Haram if its targets were solely Christians. They also contend that the general discord between liberal and fundamentalist Islam in the Middle East has not deterred support for an age-old global Islamisation agenda funded from that region. Relating this to the Nigerian situation, the logic is that Islamism or violent extremism would not deter the historic sharia activism in northern Nigeria; hence the need to revisit the sharia debate.
Persuasive as these arguments may sound, I hold a contrary view. In my estimation, the Boko Haram and Maitatsine Islamic sects have clearly demonstrated that Islamism (rigid and extreme adherence to Islamic tradition and its violent expression) is totalitarian and provides no room for liberal adherence to Islam. Secondly, because of its anti-modernisation character, no state desirous of progress tolerates violent extremism; Saudi Arabia, the cradle of Islam, has zero tolerance for it. Moreover, the northern elite, who supported Islamic activism in the past, have become its biggest victims. As the northern economy crumbles under Boko Haram’s campaign of violence, the elite, who hold the highest stakes in that economy, are equally the biggest losers. They have also realised that there is no ideological discipline for men under arms, who are bound to resort to violent crime for economic reasons. It is in realisation of these facts that the northern Governors admitted at their meeting in February 2014 that Boko Haram has destroyed the north economically, socially, and politically.
For these and many other reasons, I hold the optimistic view that #BringBackOurGirls will not only lead to the rescue of the abducted girls but also mark the beginning of the end of the Boko Haram insurgency and, most importantly, the end of religious intolerance and violent extremism in northern Nigeria. #BringBackOurGirls presents an opportunity for Christians and Muslims in northern Nigeria to rally against violent extremism by treating indiscriminate killing and the destruction of property as criminal acts rather than acts of religious devotion. I believe these two religious communities will embrace this opportunity, as was recently demonstrated in the city of Kaduna, where they united to ward off Boko Haram attackers.
The Oxford Journal of Law and Religion publishes a range of articles drawn from various sectors of the law and religion field, including: social, legal and political issues involving the relationship between law and religion in society; comparative law perspectives on the relationship between religion and state institutions; developments regarding human and constitutional rights to freedom of religion or belief; considerations of the relationship between religious and secular legal systems; empirical work on the place of religion in society; and other salient areas where law and religion interact.
Subscribe to the OUPblog via email or RSS.
Subscribe to only law articles on the OUPblog via email or RSS.
So, obviously I’m pretty into Miss Cayley’s Adventures. So into it that I was kind of terrified of reading anything else by Grant Allen, which is why Hilda Wade has been languishing on my Kindle (and then my other Kindle) for several years. I shouldn’t have worried, though. Hilda Wade is good and bad in almost exactly the same ways as Miss Cayley’s Adventures is good and bad.
It’s narrated by Dr. Hubert Cumberledge, who is to doctor-narrators what many of Carolyn Wells’ protagonists are to lawyer-narrators, except that unlike most Carolyn Wells protagonists, he is capable of seeing women as people. Most of Grant Allen’s characters are capable of seeing women as people. Grant Allen’s female characters command respect.
Anyway, Hilda Wade is a nurse, and she and Dr. Cumberledge work at a hospital with Professor Sebastian, who is a Great Man. That doesn’t necessarily mean he’s a good man, though, and Hilda Wade knows he’s not. It’s pretty clear to the reader early on that Hilda a) does not like Sebastian, b) had some special purpose in coming to work for him, and c) probably wants revenge for something he did to her father. Eventually these things also become clear to Sebastian, and even, eventually, to Dr. Cumberledge.
Dr. Cumberledge is only moderately bright, compared to Professor Sebastian’s genius and Hilda’s superhuman intuition, but he’s pretty likable, mostly because his awe of Hilda turns out to be greater than his awe of Professor Sebastian. Early on, he’s skeptical of her concerns about Sebastian, but she slowly convinces him, and it works because he respects her and listens to her and is willing to see her point of view. And for all that the novel goes way downhill once he is convinced, that’s a really nice thing.
After that, the book gets adventurous and racist and sentimental, but winds to a close entertainingly enough that I never wanted to put it down. Apparently the last chapter was written by Arthur Conan Doyle from Grant Allen’s notes during Allen’s final illness or after his death. I have to say, I wasn’t a huge fan of the last chapter, but I don’t know that his surviving would have helped–he has a tendency to fall apart toward the end of a book. That’s the thing about Grant Allen, though: he starts off so strong, and builds up enough good will, that he’s free to make a mess of things later on–it doesn’t really matter that much. I guess Grant Allen’s heroines are better than his books, which doesn’t bother me at all, because the opposite is so much more common.
Lois Cayley is still better than Hilda Wade, though. She’s funnier.
Nadine Gordimer has died at the age of 90, a significant age to reach, and yet, as always with the loss of a major figure (particularly one who stayed active and known) it feels like a robbery. We are greedy, we living people.
Writers satiate some of our greed against death by leaving us with their words. Gordimer's oeuvre is large (she began publishing fiction in South Africa in the late 1940s), and her fiction in particular will live long past this moment of her body's death.
Because Gordimer was so active in the anti-apartheid struggle, and her writing so often addresses the situation in South Africa at the time of its writing, it is easy to fall into the trap of reducing her to a political writer and to ignore or downplay the artistry of her work. She sometimes encouraged this view in her essays and interviews, but she also understood that she was not a propagandist, telling Jannika Hurwitt in 1979, "I am not by nature a political creature, and even now there is so much I don’t like in politics, and in political people—though I admire tremendously people who are politically active—there’s so much lying to oneself, self-deception, there has to be—you don’t make a good political fighter unless you can pretend the warts aren’t there."
Gordimer is often contrasted (sometimes by Gordimer herself) with the other white South African Nobel laureate, J.M. Coetzee. In that frame, Gordimer is the engaged realist, Coetzee the disengaged postmodernist. Like any caricature, this one contains some elements of truth, but it hides as much as it reveals. Though Gordimer had a bit more faith in the ability of words to represent immediate reality than Coetzee does, and was more comfortable participating in political arenas and writing about the recognizable here-and-now, both writers are strongly influenced by European high culture, particularly European Modernism — Kafka is a key influence for both, though Coetzee tends to wear that influence more obviously.
One of the qualities I value in Gordimer's work is her ability to show how people of different backgrounds and ideologies grapple with political ideas in their lives. She's often portrayed as an explicitly political writer because she writes about people embroiled in politics. In her best writing, she understood quite powerfully the difference between showing people engaged in politics and making her work into propaganda for a particular political line.
That's a wonder for me of a novel like Burger's Daughter, which I wrote about here in 2009. It shows politics in life, politics as life. It is at times merciless toward characters who could be considered the ones a nice, liberal reader is supposed to feel sympathy and affection for. It never forgets Renoir's great line from The Rules of the Game: "The awful thing about life is this: everyone has their reasons."
Gordimer's range is best demonstrated by her short stories, such as the parable-like "Loot", which I wrote about in 2010. Especially in the later decades of her career, her stories frequently experimented with form, perspective, and subjectivity — which is not to discount the powerful effect of her many rich, detailed, fiercely realistic stories (her Selected Stories from the mid-'70s remains a high point to me of her work).
The view of Gordimer as a writer of her times, for her times, limited to her times might try to prevail. That would be a shame. Though she certainly chronicled ways of living in South Africa throughout the last 60+ years, that specificity does not in any way make her work less important for us now. It is, rather, differently important — and as necessary as ever.
In most studies of economic growth, data downloaded from international databases is treated as primary evidence, although in fact it is not. The data available from the international series has been obtained from governments and statistical bureaus, and has then been modified to fit the purposes of the data retailer and its customers. These alterations create problems: the conclusions of any study that compares economic performance across several countries depend on which source of growth evidence is used.
The international databases provide no proper sources for their data, and no data that would enable analysts to understand why the different sources disagree about growth. Consider, for example, the disagreement among the economic growth series for Tanzania, 1961-2001, as reported by the national statistical office, the Penn World Tables, the World Bank, and the Maddison dataset.
The average annual disagreement between 1961 and 2001 is 6%. It is not evenly distributed; there is serious dissonance regarding growth in Tanzania in the 1980s and 1990s, and how economic crisis and structural adjustment affected the economy depends on which source you consult.
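One plausible way to quantify this kind of disagreement is to take, for each year, the spread between the highest and lowest growth rates the various sources report, and then average those spreads. The sketch below uses invented growth figures, not the actual Tanzanian series, and the source names are merely labels:

```python
# Hypothetical growth rates (% per year) for one country, as reported by
# four different sources over the same four years. Figures are invented.
growth = {
    "national_accounts": [5.8, -1.2, 3.4, 0.9],
    "penn_world_tables": [2.1, 4.3, -2.0, 3.5],
    "world_bank":        [4.0, 1.1, 1.2, 2.2],
    "maddison":          [6.5, 0.4, -0.8, 1.0],
}

def annual_disagreement(series):
    """For each year, return the spread (max minus min, in percentage
    points) of the growth rates reported across all sources."""
    years = zip(*series.values())  # regroup the data year by year
    return [max(year) - min(year) for year in years]

spreads = annual_disagreement(growth)
average_disagreement = sum(spreads) / len(spreads)
```

With these invented numbers the yearly spreads are 4.4, 5.5, 5.4, and 2.6 percentage points, averaging about 4.5; the 6% figure reported in the text would correspond to an even wider average spread over 1961-2001.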
The problem is that growth evidence in the databases covers years for which no official data was available and the series are compiled from national data that use different base years. The only way to deal satisfactorily with inconsistencies in the data and the effects of revisions is to consult the primary source. The official national accounts are the primary sources.
The advantage of using the national accounts as published by the statistical offices is that they come with guidelines and commentaries. When the underlying methods or basic data used to assemble the accounts are changed, these changes are reported. The downside of the national accounts evidence is that the data is not readily downloadable: the publications may have to be collected manually, and then the process of data entry and interpretation follows. When such studies of growth are done carefully, they can overturn what used to be the accepted wisdom of economic growth narratives.
I propose a reconsideration of economic growth in Africa in three respects. First, the focus has been on average economic growth, and there has been no general failure of economic growth; in particular, the gains made in the 1960s and 1970s have been neglected.
Secondly, for many countries the decline in economic growth in the 1980s was overstated, as was the improvement in economic growth in the 1990s. The coverage of economic activities in GDP measures is incomplete. In the 1980s many economic activities were increasingly missed in the official records; thus the decline in the 1980s was overestimated (a result of declining coverage), and the increase in the 1990s was likewise overestimated (a result of recovering coverage).
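The coverage effect can be made concrete with a back-of-the-envelope calculation. All figures below are invented purely for illustration: if true output falls by 10% over a decade while the share of activity captured by official records also falls, the measured decline is much steeper than the true one:

```python
# Hypothetical illustration (not actual data) of how declining statistical
# coverage exaggerates a measured decline in GDP.
true_gdp_1980 = 100.0   # true output at the start of the decade
true_gdp_1989 = 90.0    # true output falls by 10%
coverage_1980 = 0.90    # share of activity captured by official records
coverage_1989 = 0.70    # coverage erodes as activity moves off the books

measured_1980 = true_gdp_1980 * coverage_1980   # what the records show in 1980
measured_1989 = true_gdp_1989 * coverage_1989   # what the records show in 1989

measured_change = measured_1989 / measured_1980 - 1   # roughly -30%
true_change = true_gdp_1989 / true_gdp_1980 - 1       # exactly -10%
```

The same arithmetic run in reverse explains the 1990s: as informal activity is drawn back into the official records, measured GDP rebounds faster than true output does.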
The third important reconsideration is that there is no clear association between economic growth and orthodox economic policies. This runs counter to the mainstream interpretation, and it suggests that the importance of sound economic policies has been overstated, and the importance of external economic conditions understated, in the prevailing explanation of African economic performance.
We know less than we would like to think about growth and development in Africa based on the official numbers, and the problem starts with the basic input: information. The fact of the matter is that the great majority of economic transactions, whether in the rural agricultural sector or in small and medium-scale urban businesses, go unrecorded.
This is not just a matter of technical accuracy; the arbitrariness of the quantification process produces observations with very large errors and levels of uncertainty. This ‘numbers game’ has taken on a dangerously misleading air of accuracy, and international development actors use the resulting figures to make critical decisions that allocate scarce resources. Governments are unable to make informed decisions because existing data is too weak or the data they need does not exist; scholars are making judgments based on erroneous statistics.
Since the 1990s, the distance between the observer and the observed in the study of economics has been increasing. When international datasets on macroeconomic variables, such as the Penn World Tables, became available, and cross-country growth regressions became the workhorse of the study of economic growth, the trend turned away from carefully considered country case studies towards large cross-country studies interested in average effects.
The danger of such studies, however, is that they do not ask the right kind of questions of the evidence. As an economic historian, I approach the GDP evidence with the normal questions of source criticism: How good is this observation? Who made this observation? And under what circumstances was this observation made?
Image credit: Tanzanian farmers, by Fanny Schertzer. CC BY-SA 2.5 via Wikimedia Commons.