I got a request this past year from my friends at Boston Green Academy (BGA) to help them consider their Humanities 4 curriculum, which focuses on philosophies, especially around happiness. This was a tough request for me, and certainly not one I had considered before. There aren’t any titles I can think of that say “Philosophy: Happiness” on their covers to pull me directly down this path.
But as I thought about it, I got more and more excited about how this topic is tackled in the YA world. The first set of books I considered were titles that dealt with “the meaning of life” in a variety of ways. Titles like Nothing by Janne Teller, Jeremy Fink and the Meaning of Life by Wendy Mass, and one of my personal favorites, The Spectacular Now by Tim Tharp give lots of food for thought about where we expend our energy and the wisdom of how we prioritize our attention in life.
This, of course, led to stories about facing challenges and finding happiness despite (or because of) the circumstances in our lives. So we pulled texts like The Fault in Our Stars by John Green, It’s Kind of a Funny Story by Ned Vizzini, and Marcelo in the Real World by Francisco X. Stork, which all deal with characters finding ways to cope with and even prosper alongside difficult circumstances.
Then we happened upon a set of titles that raise questions about whether you can be “happy” if you are or are not being yourself. We pulled segments of titles like Openly Straight by Bill Konigsberg, Aristotle and Dante Discover the Secrets of the Universe by Benjamin Alire Sáenz, Tina’s Mouth by Keshni Kashyap, American Born Chinese by Gene Luen Yang, and Rapture Practice, which I’ve talked about here before.
And then there was a world of nonfiction possibilities, those written for young people and those not — picture books by Demi about various figures, Mihaly Csikszentmihalyi’s ideas about work and play, and any number of great series texts about philosophers and religions and such.
So I guess the (happy) moral of this story is that it was much easier than I thought to revisit old texts with these new eyes of philosophies of happiness. I left the work feeling as though every text is about this very important topic in one way or another, and I can’t wait to see how the BGA curriculum around it continues to evolve!
Until the current epidemic, Ebola was largely regarded as not a Western problem. Although fearsome, Ebola seemed contained to remote corners of Africa, far from major international airports. We are now learning the hard way that Ebola is not—and indeed was never—just someone else’s problem. Yes, this outbreak is different: it originated in West Africa, at the border of three countries, where the transportation infrastructure was better developed, and was well under way before it was recognized. But we should have understood that we are “all in this together” for Ebola, as for any infectious disease.
Understanding that we were profoundly wrong about Ebola can help us to see ethical considerations that should shape how we go forward. Here, I have space just to outline two: reciprocity and fairness.
In the aftermath of the global SARS epidemic that spread to Canada, the Joint Centre for Bioethics at the University of Toronto produced a touchstone document for pandemic planning, Stand on Guard for Thee, which highlights reciprocity as a value. When health care workers take risks to protect us all, we owe them special concern if they are harmed. Dr. Bruce Ribner, speaking on ABC, described Emory University Hospital as willing to take two US health care workers who became infected abroad because they believed these workers deserved the best available treatment for the risks they took for humanitarian ends. Calls to ban the return of US workers—or treatment in the United States of other infected front-line workers—forget that contagious diseases do not occur in a vacuum. Even Ann Coulter recognized, in her own unwitting way, that we owe support to first responders for the burdens they undertake for us all when she excoriated Dr. Kent Brantly for humanitarian work abroad rather than in the United States.
We too often fail to recognize that all the health care and public health workers at risk in the Ebola epidemic—and many have died—are owed duties of special concern. Yet unlike health care workers at Emory, health care workers on the front lines in Africa must make do with limited equipment under circumstances in which it is very difficult for them to be safe, according to a recent Wall Street Journal article. As we go forward we must remember the importance of providing adequately for these workers and for workers in the next predictable epidemics—not just for Americans who are able to return to the US for care. Supporting these workers means providing immediate care for those who fall ill, as well as ongoing care for them and their families if they die or are no longer able to work. But this is not all; health care workers on the front lines can be supported by efforts to minimize disease spread—for example, conducting burials to minimize risks of infection from the dead—as well as unceasing attention to the development of public health infrastructures so that risks can be swiftly identified and contained and care can be delivered as safely as possible.
Fairness requires treating others as we would like to be treated ourselves. A way of thinking about what is fair is to ask what we would want done if we did not know our position under the circumstances at hand. In a classic of political philosophy, A Theory of Justice, John Rawls suggested the thought experiment of asking what principles of justice we would be willing to accept for a society in which we were to live, if we didn’t know anything about ourselves except that we would be somewhere in that society. Infectious disease confronts us all with an actual version of the Rawlsian thought experiment. We are all enmeshed in a web of infectious organisms, potential vectors to one another and hence potential victims, too. We never know at any given point in time whether we will be victim, vector, or both. It’s as though we were all on a giant airplane, not knowing who might cough, or spit, or bleed, on whom, and when. So we need to ask what would be fair under these brute facts of human interconnectedness.
At a minimum, we need to ask what would be fair about the allocation of Ebola treatments, both before and if they become validated and more widely available. Ethical issues such as informed consent and exploitation of vulnerable populations in testing of experimental medicines certainly matter but should not obscure that fairness does, too, whether we view the medications as experimental or last-ditch treatment. Should limited supplies be administered to the worst off? Are these the sickest, most impoverished, or those subjected to the greatest risks, especially risks of injustice? Or, should limited supplies be directed where they might do the most good—where health care workers are deeply fearful and abandoning patients, or where we need to encourage people who have been exposed to be monitored and isolated if needed?
These questions of fairness occur in the broader context of medicine development and distribution. ZMapp (the experimental monoclonal antibody administered on a compassionate use basis to the two Americans) was jointly developed by the US government, the Public Health Agency of Canada, and a few very small companies. Ebola has not drawn a great deal of drug development attention; indeed, infectious diseases more generally have not drawn their fair share of attention from Big Pharma, at least as measured by the global burden of disease.
WHO has declared the Ebola epidemic an international emergency and is convening ethics experts to consider such questions as whether and how the experimental treatment administered to the two Americans should be made available to others. I expect that the values of reciprocity and fairness will surface in these discussions. Let us hope they do, and that their import is remembered beyond the immediate emergency.
Headline Image credit: Ebola virus virion. Created by CDC microbiologist Cynthia Goldsmith, this colorized transmission electron micrograph (TEM) revealed some of the ultrastructural morphology displayed by an Ebola virus virion. Centers for Disease Control and Prevention’s Public Health Image Library, #10816 . Public domain via Wikimedia Commons.
Plato famously said that there is an ancient quarrel between philosophy and poetry. But with respect to one aspect of poetry, namely metaphor, many contemporary philosophers have made peace with the poets. In their view, we need metaphor. Without it, many truths would be inexpressible and unknowable. For example, we cannot describe feelings and sensations adequately without it. Take Gerard Manley Hopkins’s exceptionally powerful metaphor of despair:
selfwrung, selfstrung, sheathe- and shelterless,
thoughts against thoughts in groans grind.
How else could precisely this kind of mood be expressed? Describing how things appear to our senses is also thought to require metaphor, as when we speak of the silken sound of a harp, the warm colours of a Titian, and the bold or jolly flavour of a wine. Science advances by the use of metaphors – of the mind as a computer, of electricity as a current, or of the atom as a solar system. And metaphysical and religious truths are often thought to be inexpressible in literal language. Plato condemned poets for claiming to provide knowledge they did not have. But if these philosophers are right, there is at least one poetic use of language that is needed for the communication of many truths.
In my view, however, this is the wrong way to defend the value of metaphor. Comparisons may well be indispensable for communication in many situations. We convey the unfamiliar by likening it to the familiar. But many hold that it is specifically metaphor – and no other kind of comparison – that is indispensable. Metaphor tells us things the words ‘like’ or ‘as’ never could. If true, this would be fascinating. It would reveal the limits of what is expressible in literal language. But no one has come close to giving a good argument for it. And in any case, metaphor does not have to be an indispensable means to knowledge in order to be as valuable as we take it to be.
Metaphor may not tell us anything that couldn’t be expressed by other means. But good metaphors have many other effects on readers than making them grasp some bit of information, and these are often precisely the effects the metaphor-user wants to have. There is far more to the effective use of language than transmitting information. My particular interest is in how art critics use metaphor to help us appreciate paintings, architecture, music, and other artworks. There are many reasons why metaphor matters, but art criticism reveals two reasons of particular importance.
Take this passage from John Ruskin’s The Stones of Venice. Ruskin describes arriving in Venice by boat and seeing ‘the long ranges of columned palaces,—each with its black boat moored at the portal,—each with its image cast down, beneath its feet, upon that green pavement which every breeze broke into new fantasies of rich tessellation’, and observing how ‘the front of the Ducal palace, flushed with its sanguine veins, looks to the snowy dome of Our Lady of Salvation’.
One thing Ruskin’s metaphors do is describe the waters of Venice and the Ducal palace at an extraordinary level of specificity. There are many ways water looks when breezes blow across its surface. There are fewer ways it looks when breezes blow across its surface and make it look like something broken into many pieces. And there are still fewer ways it looks when breezes blow across its surface and make it look like something broken into pieces forming a rich mosaic with the colours of Venetian palaces and a greenish tint. Ruskin’s metaphor communicates that the waters of Venice look like that. The metaphor of the Ducal palace as ‘flushed with its sanguine veins’ likewise narrows the possible appearances considerably. Characterizing appearances very specifically is of particular use to art critics, as they often want to articulate the specific appearance an artwork presents.
A second thing metaphors like Ruskin’s do is cause readers to imagine seeing what he describes. We naturally tend to picture the palace or the water on hearing Ruskin’s metaphor. This function of metaphor has often been noted: George Orwell, for instance, writes that ‘a newly invented metaphor assists thought by evoking a visual image’.
Why do novel metaphors evoke images? Precisely because they are novel uses of words. To understand them, we cannot rely on our knowledge of the literal meanings of the words alone. We often have to employ imagination. To understand Ruskin’s metaphor, we try to imagine seeing water that looks like a broken mosaic. If we manage this, we know the kind of look that he is attributing to the water.
Imagining a thing is often needed to appreciate that thing. Knowing facts about it is often not enough by itself. Accurately imagining Hopkins’s despondency, or the experience of arriving in Venice by boat, gives us some appreciation of these experiences. By enabling us to imagine accurately and specifically, metaphor is exceptionally well suited to enhancing our appreciation of what it describes.
The philosopher Descartes set out to escape doubt and to find certainties. From the premise that he was thinking, even if falsely, he argued to what he took to be the certain conclusion that he existed. Cogito ergo sum. He is as well known for concluding that consciousness is not physical. Your being conscious right now is not an objective physical fact. It has a nature quite unlike that of the chair you are sitting on. Your consciousness is different in kind from objectively physical neural states and events in your head.
This mind-body dualism persists. It is not only a belief or attitude in religion or spirituality. It is concealed in standard cognitive science or computerism. The fundamental attraction of dualism is that we are convinced, since we have a hold on it, that consciousness is different. There really is a difference in kind between you and the chair you are sitting on, not a factitious difference.
But there is an awful difficulty. Consciousness has physical effects. Arms move because of desires, bullets come out of guns because of intentions. How could such indubitably physical events have causes that are not physical at all, for a start not in space?
Some philosophers used to accommodate the fact that movements have physical causes by saying that conscious desires and intentions aren’t themselves causal but merely go along with brain events. Epiphenomenalism is true. Conscious beliefs themselves do not explain your stepping out of the way of joggers. But epiphenomenalism is now believed only in remote parts of Australia, where the sun is very hot. I know only one epiphenomenalist in London, sometimes seen among the good atheists in Conway Hall.
A decent theory or analysis of consciousness will also have the recommendation of answering a clear question. It will proceed from an adequate initial clarification of a subject. The present great divergence in theories of consciousness is mainly owed to people talking about different things. Some include what others call the unconscious mind.
But there are also the criteria for a good theory. We have two already — a good theory will make consciousness different and it will make consciousness itself effective. In fact consciousness is to us not just different, but mysterious, more than elusive. It is such that the philosopher Colin McGinn has said that we humans have no more chance of understanding it than a chimp has of doing quantum mechanics.
There’s a lot to the new theory of Actualism, starting with a clarification of ordinary consciousness in the primary or core sense as something called actual consciousness. Think along with me just of one good piece of the theory. Think of one part or side or group of elements of ordinary consciousness. Think of consciousness in ordinary perception — say seeing — as against consciousness in just thinking and wanting. Call it perceptual consciousness. What is it for you to be perceptually conscious now, as we say, of the room you’re in? Being aware of it, not thinking about it or something in it? Well, the fact is not some internal thing about you. It’s for a room to exist.
It’s for a piece of a subjective physical world to exist out there in space — yours. That is something dependent both on the objective physical world out there and also on you neurally. A subjective physical world’s being dependent on something in you, of course, doesn’t take it out of space out there or deprive it of other counterparts of the characteristics you can assemble of the objective physical world. What is actual with perceptual consciousness is not a representation of a world — stuff called sense data or qualia or mental paint — whatever is the case with cognitive and affective consciousness.
That’s just a good start on Actualism. It makes consciousness different. It doesn’t reduce consciousness to something that has no effects. It also involves a large fact of subjectivity, indeed of what you can call individuality or personal identity, even living a life. One more criterion of a good theory is naturalism — being true to science. It is also philosophy, which is greater concentration on the logic of ordinary intelligence, thinking about facts rather than getting them. Actualism also helps a little with human standing, that motive of believers in free will as against determinism.
We are near, it seems, “peak skepticism.” We all know that the sweetest character in the movie we’re watching will turn out to be the serial killer. We all know that the stranger in the good suit and the great hair is up to something sinister. We all know that the honey-voiced therapist or the soothing guru or the brave leader of the heroic little NGO will turn out to be a fraud, embezzling here or seducing there.
“I read it on the Internet” became a rueful joke as quickly as there was an Internet. Politicians are all liars, priests are all pedophiles, professors are all blowhards: you can’t trust anyone or anything.
Notre Dame philosopher Alvin Plantinga shrugs off the contemporary storm of frightening doubt, however, with the robust common sense of his Frisian forebears:
Such Christian thinkers as Pascal, Kierkegaard, and Kuyper…recognize that there aren’t any certain foundations of the sort Descartes sought—or, if there are, they are exceedingly slim, and there is no way to transfer their certainty to our important non-foundational beliefs about material objects, the past, other persons, and the like. This is a stance that requires a certain epistemic hardihood: there is, indeed, such a thing as truth; the stakes are, indeed, very high (it matters greatly whether you believe the truth); but there is no way to be sure that you have the truth; there is no sure and certain method of attaining truth by starting from beliefs about which you can’t be mistaken and moving infallibly to the rest of your beliefs. Furthermore, many others reject what seems to you to be most important. This is life under uncertainty, life under epistemic risk and fallibility. I believe a thousand things, and many of them are things others—others of great acuity and seriousness—do not believe. Indeed, many of the beliefs that mean the most to me are of that sort. I realize I can be seriously, dreadfully, fatally wrong, and wrong about what it is enormously important to be right. That is simply the human condition: my response must be finally, “Here I stand; this is the way the world looks to me.”
In this attitude Plantinga follows in the cheerful train of Thomas Reid, the great Scottish Enlightenment philosopher. In his several epistemological books, Reid devotes a great deal of energy to demolishing what he sees to be a misguided approach to knowledge, which he terms the “Way of Ideas.” Unfortunately for standard-brand modern philosophy, and even for most of the rest of us non-philosophers, the Way of Ideas is not merely some odd little branch but the main trunk of epistemology from Descartes and Locke forward to Kant.
The Way of Ideas, roughly speaking, is the basic scheme of perception by which the things “out there” somehow cause us to have ideas of them in our minds, and thus we form appropriate beliefs about them. Reid contends, startlingly, that this scheme fails to illuminate what is actually happening. In fact, Reid pulverizes this scheme as simply incoherent—an understanding so basic that most of us take it for granted, even if we could not actually explain it. The “problem of the external world” remains intractable. We just don’t know how we reliably get “in here” (in our minds) what is “out there” (in the world).
Having set aside the Way of Ideas, Reid then stuns the reader again with this declaration: “I do not attempt to substitute any other theory in [its] place.” Reid asserts instead that it is a “mystery” how we form beliefs about the world that actually do seem to correspond to the world as it is. (Our beliefs do seem to have the virtue of helping us negotiate that world pretty well.)
The philosopher who has followed Reid to this point now might well be aghast. “What?” she might sputter. “You have destroyed the main scheme of modern Western epistemology only to say that you don’t have anything better to offer in its place? What kind of philosopher are you?”
“A Christian one,” Reid might reply. For Reid takes great comfort in trusting God for creating the world such that human beings seem eminently well equipped to apprehend and live in it. Reid encourages readers therefore to thank God for this provision, this “bounty of heaven,” and to obey God in confidence that God continues to provide the means (including the epistemic means) to do so. Furthermore, Reid affirms, any other position than grateful acceptance of the fact that we believe the way we do just because that is the way we are is not just intellectually untenable, but (almost biblically) foolish.
Thus Thomas Reid dispenses with modern hubris on the one side and postmodern despair on the other. To those who would say, “I am certain I now sit upon this chair,” Reid would reply, “Good luck proving that.” To those who would say, “You just think you’re sitting in a chair now, but in fact you could be anyone, anywhere, just imagining you are you sitting in a chair,” he would simply snort and perhaps chastise them for their ingratitude for the knowledge they have gained so effortlessly by the grace of God.
Having acknowledged the foolishness of claiming certainty, Reid places the burden of proof, then, where it belongs: on the radical skeptic who has to show why we should doubt what seems so immediately evident, rather than on the believer who has to show why one ought to believe what seems effortless to believe. Darkness, Reid writes, is heavy upon all epistemological investigations. We know through our own action that we are efficient causes of things; we know God is, too. More than this, however, we cannot say, since we cannot peer into the essences of things. Reid commends to us all sorts of inquiries, including scientific ones, but we will always be stymied at some level by the four-year-old’s incessant question: “Yes, but why?” Such explanations always come back to questions of efficient causation, and human reason simply cannot lay bare the way things are in themselves so as to see how things do cause each other to be this or that way.
Reid’s contemporary and countryman David Hume therefore was right on this score, Reid allows. But unlike Hume—very much unlike Hume—Reid is cheerful about us carrying on anyway with the practically reliable beliefs we generally do form, as God wants us to do. Far from being paralyzed by epistemological doubt, therefore, Reid offers all of us a thankful epistemology of trust and obedience.
But do Christians need to resort to such a breathtakingly bold response to the deep skepticism of our times? My last post offers an answer.
When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.
Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.
Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.
The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.
The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.
Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.
The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.
Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.
“Some people who do not possess theoretical knowledge are more effective in action (especially if they are experienced) than others who do possess it.”
Aristotle was referring, in his Nicomachean Ethics, to an attribute called practical wisdom — a quality that many modern engineers have but that our western intellectual tradition has completely lost sight of. I will describe briefly what Aristotle wrote about practical wisdom, argue for its recognition and celebration, and state that we need consciously to utilise it as we face up to the uncertainties inherent in the engineering challenges of climate change.
Necessarily what follows is a simplified account of complex and profound ideas. Aristotle saw five ways of arriving at the truth — he called them art (ars, techne), science (episteme), intuition (nous), wisdom (sophia), and practical wisdom — sometimes translated as prudence (phronesis). Ars or techne (from which we get the words art and technical, technique and technology) was concerned with production but not action. Art had a productive state, truly reasoned, with an end (i.e. a product) other than itself (e.g. a building). It was not just a set of activities and skills of a craftsman but included the arts of the mind and what we would now call the fine arts. The Greeks did not distinguish the fine arts as the work of an inspired individual — that came only after the Renaissance. So techne as the modern idea of mere technique or rule-following was only one part of what Aristotle was referring to.
Episteme (from which we get the word epistemology or knowledge) was necessary and eternal; it is knowledge that cannot come into being or cease to be; it is demonstrable and teachable and depends on first principles. Later, when combined with Christianity, episteme as eternal, universal, context-free knowledge profoundly influenced western thought and is at the heart of debates between science and religion. Intuition or nous was a state of mind that apprehends these first principles, and we could think of it as our modern notion of intelligence or intellect. Wisdom or sophia was the most finished form of knowledge – a combination of nous and episteme.
Aristotle thought there were two kinds of virtues, the intellectual and the moral. Practical wisdom or phronesis was an intellectual virtue of perceiving and understanding in effective ways and acting benevolently and beneficently. It was not an art and necessarily involved ethics, not static but always changing, individual but also social and cultural. As an illustration of the quotation at the head of this article, Aristotle even referred to people who thought Anaxagoras and Thales were examples of men with exceptional, marvelous, profound but useless knowledge because their search was not for human goods.
Aristotle thought of human activity in three categories: praxis, poiesis (from which we get the word poetry), and theoria (contemplation – from which we get the word theory). The intellectual faculties required were phronesis for praxis, techne for poiesis, and sophia and nous for theoria.
Sculpture of Aristotle at the Louvre Museum. Photo by Eric Gaba, CC-BY-SA-2.5 via Wikimedia Commons
It is important to understand that theoria had total priority because sophia and nous were considered to be universal, necessary and eternal but the others are variable, finite, contingent and hence uncertain and thus inferior.
What did Aristotle actually mean when he referred to phronesis? As I see it phronesis is a means towards an end arrived at through moral virtue. It is concerned with “the capacity for determining what is good for both the individual and the community”. It is a virtue and a competence, an ability to deliberate rightly about what is good in general, about discerning and judging what is true and right but it excludes specific competences (like deliberating about how to build a bridge or how to make a person healthy). It is purposeful, contextual but not rule-following. It is not routine or even well-trained behaviour but rather intentional conduct based on tacit knowledge and experience, using longer time horizons than usual, and considering more aspects, more ways of knowing, more viewpoints, coupled with an ability to generalise beyond narrow subject areas. Phronesis was not considered a science by Aristotle because it is variable and context dependent. It was not an art because it is about action and generically different from production. Art is production that aims at an end other than itself. Action is a continuous process of doing well and an end in itself in so far as being well done it contributes to the good life.
Christopher Long argues that an ontology (the philosophy of being or nature of existence) directed by phronesis rather than sophia (as it currently is) would be ethical; would question normative values; would not seek refuge in the eternal but be embedded in the world and be capable of critically considering the historico-ethical-political conditions under which it is deployed. Its goal would not be eternal context-free truth but finite context-dependent truth. Phronesis is an excellence (arête) and capable of determining the ends. The difference between phronesis and techne echoes that between sophia and episteme. Just as sophia must not just understand things that follow from first principles but also things that must be true, so phronesis must not just determine itself towards the ends but as arête must determine the ends as good. Whereas sophia knows the truth through nous, phronesis must rely on moral virtues from lived experience.
In the 20th century quantum mechanics required sophia to change and to recognise that we cannot escape uncertainty. Derek Sellman writes that a phronimos will recognise not knowing our competencies, i.e. not knowing what we know, and not knowing our uncompetencies, i.e. not knowing what we do not know. He states that a longing for phronesis “is really a longing for a world in which people honestly and capably strive to act rightly and to avoid harm,” and he thinks it is a longing for praxis.
In summary I think that one way (and perhaps the only way) of dealing with the ‘wicked’ uncertainties we face in the future, such as the effects of climate change, is through collaborative ‘learning together’ informed by the recognition, appreciation, and exercise of practical wisdom.
What is the self, and how is it formed? In the case of Calvin, we might be given a glimpse at an answer if we consider the context from which he came. Calvin was part of a society that was still profoundly memorial in character; he lived with the vestiges of that medieval culture that’s discussed so brilliantly by Frances Yates and Mary Carruthers — a society which committed classical and Christian corpora to remembrance and whose self-identity was, in a large part, shaped and informed by memory. Understanding his society may help us to understand not only Calvin but, more specifically, something of his prophetic self-consciousness.
To explore this further, I might call to memory that wonderful story told by Carruthers of Heloise’s responding to her friends when they were trying to dissuade her from entering the convent. Heloise responded to them by citing the words of Cornelia from Lucan’s poem, “Pharsalia”. Carruthers explains that Heloise had not only memorized Cornelia’s lament but had so imbibed it that it, as set down in words by Lucan, helped her explain her own feelings and in fact constituted part of her constructed self. Lucan’s words, filling her mind and being memorized and absorbed through the medieval method of reading, helped Heloise give expression to her own emotional state and, being called upon at a moment of such personal anguish, represented something of who she was; they helped form and give expression to her self-identity. The account, and Carruthers’s interpretation of it, is so fascinating because it raises such interesting questions about how self-identity is shaped. Was a medieval man or woman in some sense the accumulation of the thoughts and experiences about which he or she had been reading? Is that how Heloise’s behaviour should be interpreted?
Does this teach us anything about Calvin’s self-conception? One can imagine that if Calvin memorized and deeply imbibed the Christian corpus, particularly the prophetic books, then perhaps this affected his self-identity; that it was his perceptive matrix when he looked both at the world and at himself. To dig deeper, we might examine briefly one of Calvin’s experiences. One thinks, for instance, of his account of being stopped in his tracks by Guillaume Farel in Geneva in 1536. He recounts that Farel, when he learned that Calvin was in Geneva, came and urged him to stay and help with the reforming of the church. Farel employed such earnestness, Calvin explains, that he felt stricken with a divine terror which compelled him to stop his travels and stay in Geneva. The account reads not unlike the calling of an Old Testament prophet, such as Isaiah’s recorded in Isaiah 6 (it reads, incidentally, like the calling of John Knox as well). So what is one to make of this? This account was written in the early 1550s. It was written by one whose memory was, by this point in his life, saturated with the language of the prophetic authors. Indeed, it might be noted that Calvin claims in numerous places in his writings that his life is like the prophet David’s; that his times are a “mirror” of the prophets’ age. So is all of this the depiction of his constructed self spilling out of his memory, just as it was with Heloise?
The question is actually an incredibly fascinating one: how is the self formed? Does one construct one’s ‘self’ in a deliberate, self-conscious manner? What is so interesting, in relation to Calvin and the story just recounted, is not merely that he seems to have interpreted this episode in his life as a divine calling — so important was it, in fact, that he rehearsed it in his preface to his commentary on the Psalms, the one document in which he gives anything like a personal account of his calling to the ministry in fairly unambiguous language — but that his account should be crafted after the manner of the Old Testament prophets’ descriptions of their callings. That is what is so intriguing and important here. It is true, as I have just said, that he wrote this many years after the event, and it seems most probable that it was something over which he exercised some care. All of that is true. But none of this takes anything away from the fact that Calvin, when he wanted to tell the story of his calling, used imagery from the prophetic books to do so. He could easily have mentioned many things or adopted various methods for explaining the way in which God called him into divine service, but he didn’t choose other methods; he turned to the prophets.
Why did he do this? Surely the answer to that question is complicated. But equally certain, it seems to me, is the fact that his ingesting of the prophetic writings represents a likely element in such an answer. For if, as Carruthers argues, memory is the matrix of perception, then Calvin’s matrix was profoundly biblical and, especially, prophetic. Naturally, much could be said by way of explaining why he interpreted this episode in his life in the way that he did. But the fact that his mind turned towards this prophetic trope says an immense amount about Calvin and the resource by which he interpreted himself and his life.
Jon Balserak is currently Associate Professor of Religious Studies at the University of Bristol. He is an historian of Renaissance and Early Modern Europe, particularly France and the Swiss Confederation. He also works on textual scholarship, electronic editing and digital editions. His latest book is John Calvin as Sixteenth-Century Prophet (OUP, 2014).
To learn more about John Calvin’s idea of the self, read “The ‘I’ of Calvin,” the first chapter of John Calvin as Sixteenth-Century Prophet, available via Oxford Scholarship Online.
Imagine for a moment that through a special act of divine providence God assembled the greatest theologians throughout time to sit around a theological round table to solve the problem of evil. You would have many of the usual suspects: Athanasius, Augustine, Thomas Aquinas, Martin Luther, John Calvin, and Karl Barth. You would have the mystics: Gregory of Nyssa, Julian of Norwich, Catherine of Sienna, Teresa of Ávila, and Thomas Merton. You would have the scholastics: Anselm, Peter Lombard, Bonaventure, and John Duns Scotus. You would have the newcomers: Jürgen Moltmann, Sarah Coakley, and Miroslav Volf. You might even have some unknown names and faces. Feel free to place your favorite theologian around the table. With these diverse and dynamic minds, you could expect to have a spirited conversation.
If you were to moderate the discussion around our massive oak table you would have the daunting task of keeping pace with these agile intellects and perhaps of negotiating a few inflated egos. It might be difficult to get a word in edgewise. Augustine would be affable and loquacious. Aquinas would be precise and ponderous. Luther would be humorous and polemical. But where would Origen of Alexandria (c. 185-254) fit in, the greatest theologian of Eastern Christianity? What would he say about the problem of evil? All agree he deserves an honored seat at the table, but often others around the table suck all the oxygen out of the room, leaving little air for his profound insights, particularly on the problem of evil, which anticipate later developments while also reflecting his distinctive intellectual milieu. Let’s imagine how the conversation might go.
Disputa di Santo Stefano fra i Dottori nel Sinedrio by Vittore Carpaccio [Public domain], via Wikimedia Commons.
Thomas Aquinas: “Welcome all. I’ve been asked to begin our discussion. Let me say first that the problem of evil represents the most formidable conceptual challenge to theism.”
Augustine: “I agree, but the problem’s resolved once we realize that evil doesn’t exist per se, like a malevolent substance, it’s simply the privation of the good. At any rate, God doesn’t create evil, we do, and God eventually brings good out of evil, so evil doesn’t have the final say.”
Sarah Coakley: “It can’t be settled that easily. I’m suspicious of grand theological narratives that simplify conceptual complexities. Let’s retrieve some neglected voices on the problem.”
Gregory of Nazianzus: “I’ve written a theological poem about it that I’d like to share.”
Basil of Caesarea: “Please don’t. I can’t sit through another one of your theological poems.”
Gregory of Nazianzus: “Fine. I’m out of here. I didn’t want to come in the first place.”
Jürgen Moltmann: “That was a little rude, Basil, you know Greg’s sensitive, especially about his theological poetry, but let’s get back to the topic at hand. We can’t answer the theodicy question in this life, but we can’t discard it either. All we can do is turn to the God who suffers with, from, and for the world for solidarity with us in our suffering. Only the suffering God can help.”
Dietrich Bonhoeffer: “I couldn’t have said it better myself.”
Karl Rahner: “The problem of evil is a fundamental question of human existence.”
John of the Cross: “I have endured many dark nights agonizing over it.”
Julian of Norwich: “Fear not, brother John, all will be well.”
John Calvin: “Not for those predestined to the fires of hell, but that’s part of the mystery of divine providence, which is inviolable, so in a refined theological sense, all will be well.”
Julian of Norwich: “I think we have different visions of what wellness means.”
Martin Luther: “You’re all crazy casuists. We’re probing into the deeps of divine mystery. We’re way out of our depth. We’re just small, sinful worms: we can’t possibly solve these riddles.”
F. D. E. Schleiermacher: “Settle down, Martin, we’re just talking. What do you think, Karl?”
Karl Barth and Karl Rahner (simultaneously): “Which Karl?”
Miroslav Volf: “Let’s give the Karls a pass. We heard enough from them last time, and we want to make room for others. Barth would probably just talk about ‘nothingness’ anyway.”
Hans Urs von Balthasar: “Origen, you’ve been quiet, and you haven’t touched your food, what are your thoughts on the problem of evil? Won’t you give us the benefit of your deep erudition?”
Origen: “I’ve often pondered the question of the justice of divine providence, especially when I observe the unfair conditions people inherit at birth. Some suffer more than others for no apparent reason, and some are born with major disadvantages, such as blindness or poverty.”
Dorothee Sölle: “I appreciate your attentiveness to the lived experience of suffering, Origen, and not just the theoretical problem of how to reconcile divine goodness and omnipotence with evil.”
Gregory of Nyssa: “Me too, but how do you account for the disparity of fortunes in the world? How do you preserve cosmic coherence in the face of so much injustice and misfortune?”
Origen: “I’ll tell you a plausible story that brings many of these theological threads together. Before the dawn of space and time, God created disembodied rational minds, including us. We existed in perfect harmony and happiness until through either neglect or temptation or both we drifted away from God. Since all reality participated in God’s goodness, we were in danger of drifting out of existence altogether the further we strayed from our original goodness, so God, in his benevolence, created the cosmos to catch us and to enable our ascent back to God. Our lot in life, therefore, reflects the degree of our precosmic fall, which preserves divine justice. The world, you see, exists as a schoolroom and hospital for fallen souls to return to God. Eventually, all may return to God, since the end is like the beginning, but not until undergoing spiritual transformation. We must all traverse the stages of purification, illumination, and union, both here and in the afterlife, until our journey back to God is complete and God will be all in all.”
John Hick: “That makes perfect sense to me.”
Irenaeus: “Should you really be here, John? That’s a little far out there for me, Origen.”
Athanasius: “Origen clearly has a complex, subtle mind that doesn’t lend itself to simplification. It’s a trait of Alexandrian thinkers, who are among the best theologians in church history.”
John Chrysostom: “Spare me.”
Augustine: “I think I see what Origen means, especially about the origin and ontological status of evil and God’s goodness. It’s not too far from my thoughts, except for his speculative flights.”
Thomas Aquinas: “Our time is up. We haven’t solved the problem of evil, but we seem confident that God ultimately brings good out of evil, however dire things seem, and that’s a start.”
Francis of Assisi: “Let’s end in prayer.”
Thank goodness Hans Urs von Balthasar asked for Origen’s opinion, since I doubt he would have offered it otherwise. What our imaginary theological roundtable and fictitious dialogue reveal, hopefully, is that there are a variety of voices in theology that speak to the problem of evil. Some, such as Augustine and Aquinas, are well known. Others, such as Origen, have been neglected: in Origen’s case, partly because of his complicated reception and partly because of the subtlety and originality of his thought.
Sam Falconer’s fantastic illustrations reflect science and the human experience through digital, collage, and hand-painted textures. His clever scenes provoke philosophical thought while quickly getting to the heart of a story. His editorial illustrations regularly feature in top publications such as The Guardian, The Washington Post, and New Scientist magazine.
Both a unique witness of transformative events in the late 20th century, and a prescient analysis of our present economic crises from a major French philosopher, Michel Henry's From Communism to Capitalism adds an important economic dimension to his earlier social critique. It begins by tracing the collapse of communist regimes back to their failure to implement Marx's original insights into the irreplaceable value of the living individual. Henry goes on to apply this same criticism to the surviving capitalist economic systems, portending their eventual and inevitable collapse.
The influence of Michel Henry's radical revision of phenomenological thought is only now beginning to be felt in full force, and this edition is the first English translation of his major engagement with socio-economic questions. From Communism to Capitalism reinterprets politics and economics in light of the failure of socialism and the pervasiveness of global capitalism, and Henry subjects both to critique on the basis of his own philosophy of life. His notion of the individual is one that, as subjective affect, subtends both Marxist collectivism and liberalism simultaneously. In addition to providing a crucial economic elaboration of Henry's influential social critiques, this work provides a context for understanding the 2008 financial shock and offers important insights into the political motivations behind the 'Arab spring'.
June is Torture Awareness Month, so this seems like a good time to consider some difficult aspects of torture people in the United States might need to be aware of. Sadly, this country has a long history of involvement with torture, both in its military adventures abroad and within its borders. A complete understanding of that history requires recognizing that US torture practices have been forged in the furnace of white supremacy. Indeed the connection between torture and race on this continent began long before the formation of the nation itself.
Every torture regime identifies a group or groups of people whom it is legally and/or morally permissible to torture. To the ancient Romans and Greeks, only slaves were legitimate targets. As Hannah Arendt has observed, the Greeks in particular considered the compulsion to speak under torture a terrible affront to the liberty of a free person.
The activity of identifying a group as an acceptable torture target simultaneously signals and confirms the non-human status of its members. In Pinochet’s Chile, torture targets were called “humanoids” to distinguish them from actual human beings. In other places they are called “cockroaches,” or “worms.” In Brazil’s military dictatorship, people living on city streets suffered fates worse than those of the pickled frogs dissected in high school labs. They were swept up and used to demonstrate torture techniques in classes for police cadets. They were practice dummies.
In the photographs taken at Abu Ghraib, we see naked men cowering like prey before snarling dogs. In one of the most famous, we see a man who has been assigned a dog’s status, on all fours, collared and led on a leash by the US Army Reservist Lynndie England. As theologian William Cavanaugh has observed, it becomes easier to believe that torture victims are not people when we treat them like dogs. Furthermore, the very vileness of torture reinforces the vileness of the prisoner in the minds of the public. Surely a “good” government such as our own could only be driven to such extremes by a terrible, inhuman enemy.
So what’s race got to do with it? In this country, the groups whom it is permissible to torture have historically been identified primarily by their race. The history of US torture begins with European settlers’ designation of the native peoples of this continent and of enslaved Africans as subhuman savages. Slaves—almost exclusively persons of African descent—are treated as literally less than human in Article 1 of the US Constitution; for purposes of apportioning representation in the House of Representatives to the various states, a slave was to count as three-fifths of a person. “Indians not taxed” didn’t count as persons at all. Members of both groups fell into categories of persons who might be tortured with impunity.
Institutionalized abuses that were ordinary practice among slaveholders—whipping, shackling, branding and other mutilations—were both common and legal. Nor were such practices incidental to the institution of chattel slavery. Rather, they were central to slavery’s fundamental rationale: the belief that enslaved African beings were not entirely human. As would happen centuries later in the US “war on terror,” the practice of torture actually ratified the prevailing belief in Africans’ inferiority. For surely no true human being would accept such degradation. Equally surely, good Christians would only be moved to such beastly behavior because they were confronted by beasts.
Nor did state-sanctioned torture of African Americans end with emancipation. The institution of lynching continued from the end of the Civil War well into the 20th century, with a resurgence during the Civil Rights movement of the 1960s. Lynching, in addition to its culminating murder by hanging or burning, often involved whippings, and castration of male victims, prior to death. Lynching served the usual purpose of institutionalized state torture—that is, the establishment and maintenance of the power of white authorities over Black populations. In many places in this country, lynchings were treated as popular entertainment. They were not only permitted but encouraged by local officials, who often participated themselves. The practice even developed a collateral form of popular art: photographs of lynchings decorated many postcards printed in the early part of the 20th century.
US torture in the “war on terror” has displayed its own racial dynamic, although this may not be obvious at first glance. Those tortured in the conduct of this “war” are identified in the public imagination as a particular kind of terrorist. They are Muslims. Some efforts have been made in political rhetoric to distinguish “Islamists” and “Islamofascists” from ordinary “good Muslims,” but a relationship to Islam remains the key identifier. But isn’t “Muslim” a religious, rather than racial, category? Not for most Americans, for whom Islam is a mysterious and foreign force, associated with dark people from dark places. Like “Hindoo,” at one time a racial category for US census purposes, the term “Muslim” in the American mind often conflates religion with race.
There is another important locus of institutionalized state torture in this country, and it, too, is a deeply racialized practice. Abuse and torture—including rape, sexual humiliation, beatings, prolonged exposure to extremes of heat and cold—are routine in US prisons. Many people are beginning to recognize that solitary confinement—presently suffered by at least 80,000 people in US prisons and immigrant detention centers—is also a profound, psychosis-inducing form of torture. Of the more than two million prisoners in the United States today, roughly 60 percent are people of color, while almost three-quarters of prison guards are white.
Rebecca Gordon received her B.A. from Reed College and her M.Div. and Ph.D. in Ethics and Social Theory from Graduate Theological Union. She teaches in the Department of Philosophy and for the Leo T. McCarthy Center for Public Service and the Common Good at the University of San Francisco. She is the author of Letters From Nicaragua, Cruel and Usual: How Welfare “Reform” Punishes Poor People, and Mainstreaming Torture: Ethical Approaches in the Post-9/11 United States.
In 1994 Jacques Derrida participated in a seminar in Capri under the title “Religion”. Derrida himself thought “religion” might be a good word, perhaps the best word for thinking about our time, our “today”. It belongs, Derrida suggested, to the “absolute anachrony” of our time. Religion? Isn’t it that old thing that we moderns had thought had gone away, the thing that really does not belong in our time? And yet, so it seems, it is still alive and well.
Alive and well in a modern world increasingly marked by the death of God. How could this be?
A revival of religion is particularly surprising, perhaps even shocking, for those who thought it was all over for religion, for those who “believed naively that an alternative opposed religion”. This alternative would be the very heart of Europe’s modernity: “reason, Enlightenment, science, criticism (Marxist criticisms, Nietzschean genealogy, Freudian psychoanalysis)”. What is modernity if it is not an alternative opposed to religion, a movement in history destined to put an end to religion?
Derrida’s contribution to the seminar attempted to re-think this old “secularisation thesis”. He attempted to outline “an entirely different schema”, one which would be up to thinking the meaning and significance of a return of religion in our time, and capable of making sense of the new “fundamentalisms” that are, he suggested, “at work in all religions” today. And here, in 1994, Derrida drew special attention to what he called “Islamism”, carefully disassociating it from Islam: Islamism is not to be confused with Islam – but is always liable to be confused with it since it “operates in [its] name”.
Before making further steps Derrida noted that the group of philosophers he was in discussion with at the Capri seminar might themselves share a commitment thought to be opposed to religion: “an unreserved taste, if not an unconditional preference, for what in politics, is called republican democracy as a universalizable model.”
This taste or preference in politics is itself inseparable from “a commitment…to the enlightened virtue of public space. [A uniquely European achievement which consists in] emancipating [public space] from all external power (non-lay, non-secular), for example from religious dogmatism, orthodoxy or authority.” And hence, this commitment – the commitment to making decisions without recourse to religious revelation or religious authority – might itself seem to be part of the “modernity” that the revival of religion would seem to challenge.
But Derrida refused to present this commitment as one belonging to “an enemy of religion”. It does not have to be understood as a commitment opposed to religion. In fact, and surely to the surprise of many believers and non-believers alike, he argued for seeing how the preference for republican political secularity is essentially connected to a thesis in Kant on the relation between morality – what it means to make decisions and conduct oneself morally as a human being – and, precisely, religion. A link that will make this European public space both secular and (specifically) Christian.
It is a thesis in Kant that Derrida attempted to use as an astonishing interpretive key to the question of religion and the religious revival today, a key also to the character of radicalised fundamentalisms which, in 1994, he already saw developing in the geo-political relations between this European Christianity and the other great monotheisms, Judaism and Islam.
The Kantian thesis could not be more simple, but Derrida asks us to “measure without flinching” the implications of it. If we follow Kant we will have to accept that Christian revelation teaches us something essential about the very idea of morality: “in order to conduct oneself in a moral manner, one must act as though God did not exist or no longer concerned himself with our salvation.” The crucial point here is that decisions on right conduct should not be made on the basis of any assumption that, by acting in a certain way, we are doing God’s will. The Christian is thus the one who “no longer turns towards God at the moment of acting in good faith”. In short, the good Christian, the Christian acting in good faith, is precisely the one who must decide in a fundamentally secular way. And so Derrida asked, regarding Kant’s thesis, “is it not also, at the core of its content, Nietzsche’s thesis”: that God is dead?
Derrida does not understate it: this thesis – the thesis that Christians are those who are called to endure the death of God in the world – tells us “something about the history of the world – nothing less.”
“Is this not another way of saying that Christianity can only answer to its moral calling and morality, to its Christian calling, if it endures in this world, in phenomenal history, the death of God, well beyond the figures of the Passion?… Judaism and Islam would thus be perhaps the last two monotheisms to revolt against everything that, in the Christianising of our world, signifies the death of God, two non-pagan monotheisms that do not accept death any more than multiplicity in God (the Passion, the Trinity etc), two monotheisms still alien enough at the heart of Greco-Christian, Pagano-Christian Europe that signifies the death of God, by recalling at all costs that “monotheism” signifies no less faith in the One, and in the living One, than belief in a single God.”
And what is the effect of this conflict among the monotheisms? With the Christianising of our world – globalization as “globalatinization” as Derrida put it – we are beginning to see nothing less than “an infinite spiral of outbidding, a maddening instability” in the dimension of revolt and mutual strangeness between these religions of the book. This scene is, Derrida suggests, the focal point of “the madness of our time”.
When Benigno Aquino III was elected Philippine President in 2010, combating entrenched corruption was uppermost on his projected reform agenda. Until recently, however, the full extent and nature of his administration’s reform ambitions remained unclear. The issue has now been forced by ramifications from whistleblowers’ exposure of an alleged US$224 million scam involving discretionary funds allocated to Congress representatives. Fallout has already put some prominent Senators in the hot seat, but will deeper and more systemic reforms follow?
A crucial but often overlooked factor shaping prospects for reform in the Philippines, and elsewhere, is contestation over the meaning and purposes of accountability. Accountability means different things to different people. Even authoritarian rulers increasingly lay claim to it. Therefore, whether it is liberal, moral or democratic ideology that exerts greatest reform influence matters greatly.
Liberal accountability champions legal, constitutional, and contractual institutions to restrain the ability of state agencies to violate the political authority of the individual. Moral accountability ideologues emphasize how official practices must be guided by a moral code, invoking religious, monarchical, ethnic, nationalist, and other externally constituted political authority. Democratic accountability ideologies are premised on the notion that official action at all levels should be subject to sanction, either directly or indirectly, in a manner promoting popular sovereignty.
Anti-corruption movements usually involve coalitions incorporating all three ideologies. However, governments tend to be least responsive to democratic ideologies because their reforms are directed at fundamental power relations. The evolving controversy in the Philippines is likely to again bear this out.
What whistleblowers exposed in July 2013 was an alleged scam masterminded by business figure Janet Lim Napoles. Money was siphoned from the Priority Development Assistance Fund (PDAF), or ‘pork barrel’ as it is popularly known, providing members of Congress with substantial discretionary project funding.
This funding has been integral to political patronage and corruption in the Philippines, precisely why ruling elites have hitherto resolutely defended PDAF despite many scandals and controversies linked to it.
However, public reaction to this scam was on a massive scale. Social and mass media probing and campaigning combined with the ‘Million People March’ in Manila’s Rizal Park involving a series of protests starting in August 2013. After initially defending PDAF despite his anti-corruption platform, Aquino announced PDAF’s abolition. Subsequently, the Supreme Court reversed three earlier rulings to unanimously declare the PDAF unconstitutional for violating the separation of powers principle.
Then, on 1 April 2014, the Office of the Ombudsman (OMB) announced it found probable cause to indict three opposition senators – including the powerful Juan Ponce Enrile, who served as Justice Secretary and Defense Minister under Marcos and Senate President from 2008 until June 2013 – for plunder and multiple counts of graft for kickbacks or commissions channeled through bogus non-governmental organizations (NGOs).
These are the Philippines’ first senatorial indictments for plunder, conviction for which can lead to life imprisonment. Napoles and various state officials and employees of NGOs face similar charges. Aquino’s rhetoric about instituting clean and accountable governance is translating into action. But which ideologies are exerting greatest influence and what are the implications?
Moral ideology influences were evident under Aquino even before the abolition of PDAF through new appointments to enhance the integrity of key institutions. Conchita Morales, selected by the President in mid-2011 as the new Ombudsman, was strongly endorsed by Catholic Church leaders. Aquino also appointed Heidi Mendoza as a commissioner to the Commission on Audit. Mendoza played a vital whistleblower role leading to the resignation of the previous Ombudsman Merceditas Gutierrez and was depicted by the Church as a moral role model for Christians.
However, there have been many episodes in the past where authorities have selectively pruned ‘bad apples,’ but with a focus on those from competing political or economic orchards. Will Aquino this time go beyond appeals to moral ideology and intra-elite combat to progress liberal institutional reform?
The accused senators ask why they have been singled out from the 40 named criminally liable following the whistleblowers’ claims, implying political persecution. Yet if continuing investigations lead to charges against people closer to the administration, that would suggest otherwise. In a clear alignment with liberal ideology, Communications Secretary Herminio Coloma recently raised expectations of such a change: ‘We are a government of laws, not of men. Let rule of law take its course.’
The jury is still out, too, on just how substantive the institutional change to the PDAF will prove. The President’s own pork barrel lump sum appropriations in the national budget are unaltered, despite public calls for them to go as well. Indeed, some argue the President is now an even more powerful pork dispenser through de facto PDAF concentration in his hands.
PDAF’s abolition is also in a transitional phase with the 2014 budget taking account of existing PDAF commitments. The P25-billion PDAF was directed to the major public funding implementing agencies incorporating these commitments on a line item basis. There is a risk, though, that a precedent has been set for legislators’ pet projects to be negotiated with departmental heads in private rather than scrutinized in the legislature.
Certainly the coalition for change is building. Alongside popular forces, internationally competitive globalized elements of the Philippines bourgeoisie are a growing support base for liberal accountability ideology. Yet longstanding inaction on corruption reflects entrenched power structures inside and outside Congress antithetical to the routine and institutionalized promotion of liberal and, especially, democratic accountability.
Thus, while the instigation of official action on the pork barrel scam following the whistleblowers’ actions is testimony to the power of public mobilizations and campaigns, there are serious obstacles to more effective accountability institutionalization promoting popular sovereignty.
Acute concentrations of wealth and social power in the Philippines not only affect relationships between public officials and some elites, they also fundamentally constrain political competition. Oligarchs enjoy massive electoral resource advantages including the capacity for vote buying and other questionable campaign strategies. Outright intimidation, including extrajudicial killings of some of the most concerted opponents of elite rule and vested interests, remains widespread.
Therefore, parallel with popular anti-pork demands is yet another push for Congress to pass enabling law to finally give effect to the provision in the 1987 Constitution to ban political dynasties. The proliferation of political dynasties and corruption has been mutually reinforcing. Congressional dominance by wealthy elites and political clans shapes the laws overseen by officials, the appointment of those officials and, in turn, the culture and practices of public institutions.
When Congress resumes sessions in May, it will have before it the first Anti-Dynasty Bill to have passed the committee level. Public mood has made it more difficult for the rich and powerful in Congress to be as dismissive as previously of such reform attempts. The prospects of the current Bill passing are nevertheless dim but the struggle for democratic accountability will continue.
We are constantly making decisions about what we ought to do. We have to make up our own minds, but does that mean that whatever we choose is right? Often we make decisions from a limited or biased perspective.
The nineteenth-century utilitarian philosopher Henry Sidgwick thought that it is possible for us to reach, by means of reasoning, an objective standpoint that is detached from our own perspective. He called it “the point of view of the universe”. We used this phrase as the title of our book, which is a defense of Sidgwick’s general approach to ethics and to utilitarianism. On one important problem we suggest a correction that we believe makes it possible to overcome a difficulty that greatly troubled him.
We argue that reason is capable of presenting us with objective, impartial, non-natural reasons for action. We agree with Sidgwick that only the presupposition that reasons are objective enables us to make sense of the disagreements we have with other people about what we ought to do, and the way in which we respond to them by presenting them with reasons for our views. Those who deny that we can have objective reasons for action claim that all reasons for action start from desires or preferences.
If we were to accept this view, we would also have to accept that if we have no preferences about the welfare of distant strangers, or of animals, or of future generations, then we have no reason to do anything to help them, or to avoid harming them (as, for example, we are harming future generations by continuing to emit greenhouse gases). We hold that people do have reasons to help distant strangers, animals, and future generations, irrespective of our preferences regarding their welfare.
If objective moral reasoning is possible, how does it get started? Sidgwick’s answer is, in brief, that it starts with a self-evident intuition. He does not mean by this, however, the intuitions of what he calls “common sense morality.” To see what he does mean, we must draw a distinction between intuitions that are self-evident truths of reason, and a very different kind of intuition. This distinction will become clearer if we look at an objection to the idea of moral intuition as a source of moral truth.
Sidgwick was a contemporary of Charles Darwin, so it is not surprising that already in his time the objection was raised that an evolutionary view of the origins of our moral judgments would completely discredit them. Sidgwick denied that any theory of the origins of our capacity for making moral judgments could discredit the very idea of morality, because he thought that no matter what the origin of our moral judgments, we will still have to decide what we ought to do, and answering that question is a worthwhile enterprise.
On the other hand, he agreed that some accounts of the origins of particular moral judgments might suggest that they are unlikely to be true, and therefore discredit them. We defend this important insight, and press it further. Many of our common and widely shared moral intuitions are the outcome of evolutionary selection, but the fact that they helped our ancestors to survive and reproduce does not show them to be true.
This might be taken as a ground for skepticism about morality as a whole, but our capacity for reasoning saves morality from this skeptical critique. The ability to reason has, of course, evolved, and clearly confers evolutionary advantages on those who possess it, but it does so by making it possible for us to discover the truth about our world, and this includes the discovery of some non-natural moral truths.
Sidgwick thought that his greatest work was a failure because it concluded by accepting that both egoism and universal benevolence were rational. Yet they pointed to different conclusions about what we ought to do. We argue that the evolutionary critique of some moral intuitions can be applied to egoism, but not to universal benevolence. The principle of universal benevolence can be seen as self-evident, once we understand that our own good is, from “the point of view of the universe”, of no more importance than the similar good of anyone else. This is a rational insight, not an evolved moral intuition.
In this way, we resolve the so-called “dualism of practical reason.” This leaves us with a utilitarian reason for action that can be presented in the form of a utilitarian principle: we ought to maximize the good generally.
What is this good thing that we should maximize? Is my having a positive attitude towards something enough to make bringing it about good for me? Preference utilitarians have argued that it is, and one of us has, for many years, been well-known as a representative of that view.
Sidgwick, however, rejected such theories, arguing that the good must be, not what I actually desire but what I would desire if I were thinking rationally. He then develops the view that the only things that it is rational to desire for themselves are desirable mental states, or pleasure, and the absence of pain.
For those who hold that practical reasoning must start from desires, it is hard to understand the idea of what it would be rational to desire – or at least, that idea can be understood only in relation to other desires that the agent may have, so as to produce a greater harmony of desire. This leads to a desire-based theory of the good, such as preference utilitarianism.
But if reason can take us to a more universal perspective, then we can understand the claim that it would be rational for us to desire some goods, even if we have no present desire for them. On that basis, it becomes more plausible to argue for the view that the good consists in having certain mental states, rather than in the satisfaction of desires or preferences.
Katarzyna de Lazari-Radek is a Polish utilitarian philosopher, working as an assistant professor at the Institute of Philosophy at the University of Lodz. Peter Singer is Ira W. DeCamp Professor of Bioethics at Princeton University, and a Laureate Professor at the Centre for Applied Philosophy and Public Ethics at the University of Melbourne; in 2005 Time magazine named him one of the 100 most influential people in the world. Katarzyna de Lazari-Radek and Peter Singer are authors of The Point of View of the Universe.
Today’s (post) political thought has been turned into an ethics and a legal philosophy. The business of politics is supposed to promote moral values and ethical policies which are reached either through a discursive will formation (human rights, humanitarianism, freedom etc.) or through the language of rights (original positions, striking a balance between individual rights and community goods, rights as trumps etc.).
Religion can help to revive the political, to re-politicize politics: it can help the construction of new political subjects who break out of the ethico-legal entanglement and ground a new collective space. In early Christianity, the communities of believers created the ecclesia, a new form of collectivity. A similar role was played in early Islam by the umma. Paraphrasing Kierkegaard, one can say that today we need the theologico-political suspension of the legal-ethical.
Science and morality are often seen as poles apart. Doesn’t science deal with facts, and morality with, well, opinions? Isn’t science about empirical evidence, and morality about philosophy? In my view this is wrong. Science and morality are neighbours. Both are rational enterprises. Both require a combination of conceptual analysis, and empirical evidence. Many, perhaps most moral disagreements hinge on disagreements over evidence and facts, rather than disagreements over moral principle.
Consider the recent child euthanasia law in Belgium that allows a child to be killed – as a mercy killing – if: (a) the child has a serious and incurable condition with death expected to occur within a brief period; (b) the child is experiencing constant and unbearable suffering; (c) the child requests the euthanasia and has the capacity of discernment – the capacity to understand what he or she is requesting; and, (d) the parents agree to the child’s request for euthanasia. The law excludes children with psychiatric disorders. No one other than the child can make the request.
Is this law immoral? Thought experiments can be useful in testing moral principles. These are like the carefully controlled experiments that have been so useful in science. A lorry driver is trapped in the cab. The lorry is on fire. The driver is on the verge of being burned to death. His life cannot be saved. You are standing by. You have a gun and are an excellent shot and know where to shoot to kill instantaneously. The bullet will be able to penetrate the cab window. The driver begs you to shoot him to avoid a horribly painful death.
Would it be right to carry out the mercy killing? Setting aside legal considerations, I believe that it would be. It seems wrong to allow the driver to suffer horribly for the sake of preserving a moral ideal against killing.
Thought experiments are often criticised for being unrealistic. But this can be a strength. The point of the experiment is to test a principle, and the ways in which it is unrealistic can help identify the factual aspects that are morally relevant. If you and I agree that it would be right to kill the lorry driver then any disagreement over the Belgian law cannot be because of a fundamental disagreement over mercy killing. It is likely to be a disagreement over empirical facts or about how facts integrate with moral principles.
There is a lot of discussion of the Belgian law on the internet, most of it against it. What are the arguments?
Some allow rhetoric to ride roughshod over reason. Take this, for example: “I’m sure the Belgian parliament would agree that minors should not have access to alcohol, should not have access to pornography, should not have access to tobacco, but yet minors for some reason they feel should have access to three grams of phenobarbitone in their veins – it just doesn’t make sense.”
But alcohol, pornography and tobacco are all considered to be against the best interests of children. There is, however, a very significant reason for the ‘three grams of phenobarbitone’: it prevents unnecessary suffering for a dying child. There may be good arguments against euthanasia but using unexamined and poor analogies is just sloppy thinking.
I have more sympathy for personal experience. A mother of two terminally ill daughters wrote in the Catholic Herald: “Through all of their suffering and pain the girls continued to love life and to make the most of it…. I would have done anything out of love for them, but I would never have considered euthanasia.”
But this moving anecdote is no argument against the Belgian law. Indeed, under that law the mother’s refusal of euthanasia would be decisive. It is one thing for a parent to say that I do not believe that euthanasia is in my child’s best interests; it is quite another to say that any parent who thinks euthanasia is in their child’s best interests must be wrong.
To understand a moral position it is useful to state the moral principles and the empirical assumptions on which it is based. So I will state mine.
A mercy killing can be in a person’s best interests.
A person’s competent wishes should have very great weight in what is done to her.
Parents’ views as to what is right for their children should normally be given significant moral weight.
Mercy killing, in the situation where a person is suffering and faces a short life anyway, and where the person is requesting it, can be the right thing to do.
There are some situations in which children with a terminal illness suffer so much that it is in their interests to be dead.
There are some situations in which the child’s suffering cannot be sufficiently alleviated short of keeping the child permanently unconscious.
A law can be formulated with sufficient safeguards to prevent euthanasia from being carried out in situations when it is not justified.
This last empirical claim is the most difficult to assess. Opponents of child euthanasia may believe such safeguards are not possible: that it is better not to risk sliding down the slippery slope. But the ‘slippery slope argument’ is morally problematic: it is an argument against doing the right thing on some occasions (carrying out a mercy killing when that is right) because of the danger of doing the wrong thing on other occasions (carrying out a killing when that is wrong). I prefer to focus on safeguards against slipping. But empirical evidence could lead me to change my views on child euthanasia. My guess is that for many people who are against the new Belgian law, it is the fear of the slippery slope that is ultimately crucial. Much moral disagreement, when carefully considered, comes down to disagreement over facts. Scientific evidence is a key component of moral argument.
In the trailer for Transcendence, an authoritative professor played by Johnny Depp says that “the path to building superintelligence requires us to unlock the most fundamental secrets of the universe.” It’s difficult to wrap our minds around the possibility of artificial intelligence and how it will affect society. Nick Bostrom, a scientist and philosopher and the author of the forthcoming Superintelligence: Paths, Dangers, Strategies, discusses the science and reality behind the future of machine intelligence in the following video series.
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, and Human Enhancement. His next book, Superintelligence: Paths, Dangers, Strategies, will be published this summer in the UK and this fall in the US. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
We tend to think of ‘science’ and ‘literature’ in radically different ways. The distinction isn’t just about genre – since ancient times writing has had a variety of aims and styles, expressed in different generic forms: epics, textbooks, lyrics, recipes, epigraphs, and so forth. It’s the sharp binary divide that’s striking and relatively new. An article in Nature and a great novel are taken to belong to different worlds of prose. In science, the writing is assumed to be clear and concise, with the author speaking directly to the reader about discoveries in nature. In literature, the discoveries might be said to inhere in the use of language itself. Narrative sophistication and rhetorical subtlety are prized.
This contrast between scientific and literary prose has its roots in the nineteenth century. In 1822 the essayist Thomas De Quincey broached a distinction between the ‘the literature of knowledge’ and ‘the literature of power.’ As De Quincey later explained, ‘the function of the first is to teach; the function of the second is to move.’ The literature of knowledge, he wrote, is left behind by advances in understanding, so that even Isaac Newton’s Principia has no more lasting literary qualities than a cookbook. The literature of power, on the other hand, lasts forever and draws out the deepest feelings that make us human.
The effect of this division (which does justice neither to cookbooks nor the Principia) is pervasive. Although the literary canon has been widely challenged, the university and school curriculum remains overwhelmingly dominated by a handful of key authors and texts. Only the most naive student assumes that the author of a novel speaks directly through the narrator; but that is routinely taken for granted when scientific works are being discussed. The one nineteenth-century science book that is regularly accorded a close reading is Charles Darwin’s On the Origin of Species (1859). A number of distinguished critics have followed Gillian Beer’s Darwin’s Plots in attending to the narrative structures and rhetorical strategies of other non-fiction works – but surprisingly few.
It is easy to forget that De Quincey was arguing a case, not stating the obvious. A contrast between ‘the literature of knowledge’ and ‘the literature of power’ was not commonly accepted when he wrote; in the era of revolution and reform, knowledge was power. The early nineteenth century witnessed remarkable experiments in literary form in all fields. Among the most distinguished (and rhetorically sophisticated) was a series of reflective works on the sciences, from the chemist Humphry Davy’s visionary Consolations in Travel (1830) to Charles Lyell’s Principles of Geology (1830-33). They were satirised to great effect in Thomas Carlyle’s bizarre scientific philosophy of clothes, Sartor Resartus (1833-34).
These works imagined new worlds of knowledge, helping readers to come to terms with unprecedented economic, social, and cultural change. They are anything but straightforward expositions or outdated ‘popularisations’, and deserve to be widely read in our own era of transformation. Like the best science books today, they are works in the literature of power.
The problem of consciousness is real, deep and confronts us any time we care to look. Ask yourself this question ‘Am I conscious now?’ and you will reply ‘Yes’. Then, I suggest, you are lured into delusion – the delusion that you are conscious all the time, even when you are not asking about it.
Now ask another question, ‘What was I conscious of a moment ago?’ This may seem like a very odd question indeed but lots of my students have grappled with it and I have spent years playing with it, both in daily life and in meditation. My conclusion? Most of the time I do not know what I was conscious of just before I asked.
Try it. Were you aware of that faint humming in the background? Were you conscious of the birdsong? Had you even noticed the loud drill in the distance that something in your brain was trying to block out? And that’s just sounds. What about the feel of your bottom on the chair? My experience is that whenever I look I find lots of what I call parallel backwards threads – sounds, touch, sights, that in some way I seem to have been listening to for some time – yet when I asked the question I had the odd sensation that I’ve only just become conscious of it.
Back in 1890 William James (one of my great heroes of consciousness studies) remarked on the sounds of a chiming clock. You notice the chiming after several strikes. At that moment you can look back and count one, two, three, four and know that now it has reached five. But it was only at four that you suddenly became conscious of the sound.
What’s going on?
This, I suggest, is just one of the many curious features of our minds that lead us astray. Whenever we ask ‘Am I conscious now?’ we always are, so we leap to the conclusion that there must always be something ‘in my consciousness’, as though consciousness were a container. I reject this idea. Instead, I think that most of the time our brains are getting on with their amazing job of processing countless streams of information in multiple parallel threads, and none of those threads is actually ‘conscious’. Consciousness is an attribution we make after the fact. We look back and say ‘This is what I was conscious of’ and there is nothing more to consciousness than that.
Are we really so deluded? If so, there are two important consequences: one spiritual and one scientific.
Many contemplative and mystical traditions claim we are living in illusion; that we need to throw off the dark glasses of the false self who seems to be in control, who seems to have consciousness and free will; that if we train our minds through meditation and mindfulness we can see through the illusion and live in clear awareness right here and now. I am most familiar with Zen and I love such sayings as, ‘Actions exist and also their consequences, but the person that acts does not’. Wow! Letting go of the person who sees, thinks, and decides is not a trivial matter, and many people find it outrageous that one would even want to try. Yet it is quite possible to live without assuming that you are consciously making the decisions – that you are a persisting entity that has consciousness and free will.
From the scientific point of view, throwing off these illusions would totally transform the ‘hard problem of consciousness’. This is, as David Chalmers, the Australian philosopher, describes it, the question of ‘how physical processes in the brain give rise to subjective experience’. It is a modern version of the mind-body problem. Almost everyone who works on consciousness agrees that dualism does not work. There cannot be a separate spirit or soul or persisting inner self that is something other than ordinary matter. The world cannot be divided, as Descartes famously thought, into mind and matter – subjective and objective, physical material and mental thoughts. Somehow the two must ultimately be one – but how? This ‘nonduality’ is what mystical traditions have long described, but it is also the possibility that science is grappling with.
And something strange is happening in the science of consciousness. The last few decades have seen fantastic progress in neuroscience. Yet paradoxically this makes the problem of consciousness worse, not better. We now know that decisions are initiated in part of the frontal lobe; that actions are controlled by areas as far apart as the motor cortex, premotor cortex, and cerebellum; and that visual information is processed in multiple parallel pathways at different speeds without ever constructing a picture-like representation that could correspond to ‘the picture I see in front of my eyes’. The brain manages all these amazing tasks in multiple parallel processes. So what need is there for ‘me’, or for subjective experience? What is it, and why do we have it?
Perhaps inventing an inner conscious self is a convenient way to live; perhaps it simplifies the brain’s complex task of keeping us alive; perhaps it has some evolutionary purpose. Whatever the answer, I am convinced that all our usual ideas about mind and consciousness are false. We can throw them off in the way we live our lives, and we must throw them off if our science of consciousness is ever to make progress.
If you go into a mathematics class of any university, it’s unlikely that you will find students reading Euclid. If you go into any physics class, it’s unlikely you’ll find students reading Newton. If you go into any economics class, you probably won’t find students reading Keynes. But if you go into a philosophy class, it is not unusual to find students reading Plato, Kant, or Wittgenstein. Why? Cynics might say that all this shows is that there is no progress in philosophy: we are still thrashing around in the same morass that we have been thrashing around in for over 2,000 years. No one who understands the situation would be of this view, however.
So why are we still reading the great dead philosophers? Part of the answer is that the history of philosophy is interesting in its own right. It is fascinating, for example, to see how the early Christian philosophers molded the ideas of Plato and Aristotle to the service of their new religion. But that is equally true of the history of mathematics, physics, and economics. There has to be more to it than that—and of course there is.
Plato, Museo Pio-Clementino, Vatican
Great philosophical writings have such depth and profundity that each generation can go back and read them with new eyes, see new things in them, apply them in different ways. So we study the history of philosophy that we may do philosophy.
One of my friends said that he regards the history of philosophy as rather like a textbook of chess openings. Just as it is part of being a good chess player to know the openings, it is part of being a good philosopher to know the standard views and arguments, so that one can pick them up and run with them.
There is a lot of truth in this analogy, but it also sells the history of philosophy short. Chess is pursued within a fixed and determinate set of rules, which cannot be changed. But part of good philosophy (like good art) involves breaking the rules. Past philosophers may have played by various sets of rules; but sometimes we can see that their projects and ideas can fruitfully (perhaps more fruitfully) be articulated in different frameworks, perhaps frameworks of which they could have had no idea, and which can plumb their ideas to depths of which they themselves were not aware.
Such is my view anyway. It is certainly one that I try to put into practice in my own teaching and writing. I find that using the tools of modern formal logic is a particularly fruitful way of doing this. Let me give a couple of examples.
One debate in contemporary metaphysics concerns how the parts of a proposition cooperate to produce the unity which they constitute. The problem was put very much on the agenda by the great 19th-century German philosopher and logician Gottlob Frege. Consider the thought that Pheidippides runs. This has two parts, Pheidippides and runs. But the thought is not simply a list, <Pheidippides, runs>. Somehow, the two parts join together. But how? Frege’s answer (we do not need to go into the details) ran into apparently insuperable problems.
Aristotle went part of the way to solving the problem over two millennia ago. He suggested that there must be something which joins the parts together: the form (morphe), F, of the proposition. But that can only be a start, as a number of medieval European philosophers noted. For <Pheidippides, F, runs> seems just as much a list as our original one, so there has to be something which joins all these things together, and we are off on a vicious infinite regress.
The regress is broken if F is actually identical with Pheidippides and runs. For then nothing is required to join F and Pheidippides: they are the same. Similarly for F and runs. But Pheidippides and runs are obviously not identical. So identity is not, as logicians say, transitive. You can have a=b and b=c without a=c. It is not clear that this is even a coherent possibility. Yet it is, as modern techniques in a branch of logic called paraconsistent logic can be used to show. I spare you the details.
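The flavor of a non-transitive ‘identity’ can be conveyed with a toy relation. This is a sketch of ours, not the paraconsistent construction itself: a relation that is reflexive and symmetric, holds between F and each of Pheidippides and runs, yet fails to hold between Pheidippides and runs.

```python
# Toy illustration (ours, not the paraconsistent construction):
# a reflexive, symmetric relation that behaves like identity on the
# listed pairs yet is not transitive.
pairs = {("F", "Pheidippides"), ("F", "runs")}

def related(a, b):
    """True iff a and b count as 'identical' in the toy model."""
    return a == b or (a, b) in pairs or (b, a) in pairs

assert related("F", "Pheidippides")          # F = Pheidippides
assert related("F", "runs")                  # F = runs
assert not related("Pheidippides", "runs")   # yet Pheidippides != runs
```

The point of the toy is only that "a = b and b = c without a = c" is a perfectly definable pattern; the serious work of showing it coherent for genuine identity is what the paraconsistent machinery does.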
A quite different problem concerns the topic in modern metaphysics called grounding. Some things depend for their existence on others. Thus, a chair depends for its existence on the molecules which are its parts; these, in turn, depend for their existence on the atoms which are their parts; and so on.
In contemporary debates, it is standardly assumed that this process must ground out in some fundamental bedrock of reality. That idea was attacked by the great Buddhist philosopher Nāgārjuna (c. 2nd century CE), with a swathe of arguments: ontological dependence never terminates; everything depends on other things. Again, it is not clear, Nāgārjuna’s arguments notwithstanding, that the idea is coherent. If everything depends on other things, we have an obvious regress; and, it might well be thought, the regress is vicious. In fact, it is not. It can be shown to be coherent by a mathematical model employing structures called trees, all of whose branches may be infinitely long. Again, I spare you the details.
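To see why a non-terminating dependence structure need not be incoherent, here is a minimal sketch (our illustration, not the model from the literature): every node of an infinite binary tree depends on two further nodes, so no branch ever grounds out, yet the structure is perfectly well defined.

```python
# Minimal sketch of non-well-founded grounding: node n depends on
# nodes 2n and 2n+1, giving an infinite binary tree in which every
# branch is infinitely long. The regress never terminates, but
# nothing incoherent happens: each node's dependencies are computed
# lazily, on demand.

def dependencies(node):
    return (2 * node, 2 * node + 1)

# Follow the leftmost branch for a few steps.
chain = [1]
for _ in range(5):
    chain.append(dependencies(chain[-1])[0])
print(chain)  # -> [1, 2, 4, 8, 16, 32]
```

Nothing stops us from descending further; the model simply has no bottom layer, which is all the coherence claim requires.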
Of course, in explaining my two examples, I have slid over many important complexities and subtleties. However, they at least illustrate how the history of philosophy provides a mine of ideas. The ideas are by no means dead. They have potentials which only more recent developments—in the case of my examples, in contemporary logic and mathematics—can actualize. Those who know only the present of philosophy, and not the past, will never, of course, see this. That is why philosophers study the history of philosophy.
The decision by the administrators of the Copenhagen Zoo to kill a two-year-old giraffe named Marius by shooting him in the head in February 2014, then autopsy his body in public and feed Marius’ body parts to the lions held captive at the zoo created quite an uproar. When the same zoo then killed the lions (an adult pair and their two cubs) a month later to make room for a more genetically-worthy captive, the uproar became more ferocious.
Animal lovers across the globe were shocked and sickened by these killings and couldn’t understand why this bloodshed was being carried out at a zoo.
The zoo’s justification for killing Marius was that he had genes that were already “well represented” in the captive giraffe population in Europe. The justification for killing the lions was that the zoo was planning to introduce a younger male who was not genetically related to any of the females in the group.
Sacrificing the well-being and even the lives of individual animals in the name of conserving a diverse gene pool is commonplace in zoos. Euthanasia, usually by means less grotesque than a shotgun to the head, is quite common in European zoos. In US zoos, contraception is often used to prevent “over-representation” of certain gene lines. European zoos forgo birth control because they believe that allowing animals to reproduce gives them the opportunity to engage in the fuller range of species-typical behaviors; but that also means killing the undesirable offspring. In both European and US zoos, families are broken up and individuals are shipped to other facilities to diversify and manage the captive gene pool.
If this all has a ring of eugenic reasoning, consider what the executive director of the World Association of Zoos and Aquariums, Gerald Dick, had to say: “In Europe, there is a strict attempt to maintain genetically pure animals and not waste space in a zoo for genetically useless specimens.”
A stuffed giraffe, representing Marius, at a protest against zoos and the confinement of animals in Lisbon, 2014
The high-profile slaughter of Marius and the lions that ate his body focused attention on an important debate about the purpose of zoos and, more generally, the ethics of captivity. Originally, zoos were designed to amuse, amaze, and entertain visitors. As public awareness of the plight of endangered species and their diminishing habitats grew, zoos increasingly saw their roles as conservation and education. But just what is being conserved, and what are the educational lessons that zoo-goers take away from their experiences at the zoo?
A recent study suggests that zoo-goers learn about biodiversity by visiting zoos. Critics have suggested that the study is not particularly convincing in linking the small increase in understanding of biodiversity with the complex demands of conservation. Some zoos are committed to direct conservation efforts; the Wildlife Conservation Society (aka the Bronx Zoo) and the Lincoln Park Zoo are just two examples of zoos that have extensive and successful conservation programs. Despite these laudable programs, these WAZA-accredited zoos, like the European zoos, are also in the business of gene management and a central tenet of the current management ethos is to value genetic diversity over individual well-being.
Awe-inspiring animals such as giraffes and gorillas and cheetahs and chimpanzees are not seen as individuals, with distinct perspectives, when viewed, as Dick says, as either useful or useless “specimens.” They are valued, if at all, as representative carriers of their species’ genes.
This distorts our understanding of other animals and our relationships to them. Part of the problem is that zoos are not places in which animals can be seen as dignified. Zoos are designed to satisfy human interests and desires, even though they largely fail at this. A trip to the zoo creates a relationship in which the observer, often a child, has a feeling of dominant distance over the animals being looked at. It is hard to respect and admire a being that is captive in every respect and viewed as a disposable specimen, one who can be killed to satisfy a mission that is hard for the zoo-going public to fully understand, let alone endorse.
Causing death is what zoos do. It is not all that they do, but it is a big part of what happens at zoos, even if this is usually hidden from the public. Zoos are institutions that not only purposely kill animals, they are also places that in holding certain animals captive, shorten their lives. Some animals, such as elephants and orca whales, cannot thrive in captivity and holding them in zoos and aquaria causes them to die prematurely.
Death is a natural part of life, and perhaps we would do well to have a less fearful, more accepting attitude about death. But those who purposefully bring about premature death run the risk of perpetuating the notion that some lives are disposable. It is that very idea that we can use and dispose of other animals as we please that has led to the problems that have zoos and others thinking about conservation in the first place. When institutions of captivity promote the idea that some animals are disposable by killing “genetically useless specimens” like young Marius and the lions, they may very well be undermining the tenuous conservation claims that are meant to justify their existence.
Lori Gruen is Professor of Philosophy, Feminist, Gender, and Sexuality Studies, and Environmental Studies at Wesleyan University where she also coordinates Wesleyan Animal Studies and directs the Ethics in Society Project. She is the author of The Ethics of Captivity.
Subscribe to the OUPblog via email or RSS.
Subscribe to only philosophy articles on the OUPblog via email or RSS.
Image credit: Sit-in protest in Lisbon. Photo by Mattia Luigi Nappi, 2014. CC-BY-SA-3.0 via Wikimedia Commons
The US military involvement in Iraq has more or less ended, and the war in Afghanistan is limping to a conclusion. Don’t the problems of torture really belong to the bad old days of an earlier administration? Why bring it up again? Why keep harping on something that is over and done with? Because it’s not over, and it’s not done with.
Torture is still happening. Shortly after his first inauguration in 2009, President Obama issued an executive order forbidding the CIA’s “enhanced interrogation techniques” and closing the CIA’s so-called “black sites.” But the order didn’t end “extraordinary rendition”—the practice of sending prisoners to other countries to be tortured. (This is actually forbidden under the UN Convention against Torture, which the United States ratified in 1994.) The president’s order didn’t close the prison at Guantánamo, where to this day, prisoners are held in solitary confinement. Periodic hunger strikes are met with brutal force feeding. Samir Naji al Hasan Moqbel described the experience in a New York Times op-ed in April 2013:
I will never forget the first time they passed the feeding tube up my nose. I can’t describe how painful it is to be force-fed this way. As it was thrust in, it made me feel like throwing up. I wanted to vomit, but I couldn’t. There was agony in my chest, throat and stomach. I had never experienced such pain before.
Nor did Obama’s order address the abusive interrogation practices of the Joint Special Operations Command (JSOC) which operates with considerably less oversight than the CIA. Jeremy Scahill has ably documented JSOC’s reign of terror in Iraq in Dirty Wars: The World Is a Battlefield. At JSOC’s Battlefield Interrogation Facility at Camp NAMA (which reportedly stood for “Nasty-Ass Military Area”) the motto—prominently displayed on posters around the camp—was “No blood, no foul.”
Torture also continues daily, hidden in plain sight, in US prisons. It is no accident that the Army reservists responsible for the outrages at Abu Ghraib worked as prison guards in civilian life. As Spec. Charles A. Graner wrote in an email about his work at Abu Ghraib, “The Christian in me says it’s wrong, but the corrections officer in me says, ‘I love to make a grown man piss himself.’” Solitary confinement and the ever-present threat of rape are just two forms of institutionalized torture suffered by the people who make up the world’s largest prison population. In fact, the latter is so common that on TV police procedurals like Law & Order, it is the staple threat interrogators use to prevent a “perp” from “lawyering up.”
We still don’t have a full, official accounting. As yet we have no official government accounting of how the United States has used torture in the “war on terror.” This is partly because so many different agencies, clandestine and otherwise, have been involved in one way or another. The Senate Intelligence Committee has written a 6,000-page report just on the CIA’s involvement, which has never been made public, although recent days have seen moves in this direction. Nor has the Committee been able to shake loose the CIA’s own report on its interrogation program. Most of what we do know is the result of leaks, and the dogged work of dedicated journalists and human rights lawyers. But we have nothing official, on the level, say, of the 1975 Church Committee report on the CIA’s activities in the Vietnam War.
Frustrated because both Congress and the Obama administration seemed unwilling to demand a full accounting, the Constitution Project convened a blue-ribbon bipartisan committee, which produced its own damning report. Members included former DEA head Asa Hutchinson, former FBI chief William Sessions, and former US Ambassador to the United Nations Thomas Pickering. The report reached two important conclusions: (1) “[I]t is indisputable that the United States engaged in the practice of torture,” and (2) “[T]he nation’s highest officials bear some responsibility for allowing and contributing to the spread of torture.”
No high-level officials have been held accountable for US torture. Only enlisted soldiers like Charles Graner and Lynndie England have done jail time for prisoner abuse in the “war on terror.” None of the “highest officials” mentioned in the Detainee Task Force report (people like Donald Rumsfeld, Dick Cheney, and George W. Bush) have faced any consequences for their part in a program of institutionalized state torture. Early in his first administration, President Obama argued that “nothing will be gained by spending our time and energy laying blame for the past,” but this is not true. Laying blame for the past (and the present) is a precondition for preventing torture in the future, because it would represent a public repudiation of the practice. What “will be gained” is the possibility of developing a public consensus that the United States should not practice torture any longer. Such a consensus about torture does not exist today.
Tolerating torture corrupts the moral character of the nation. We tend to think of torture as a set of isolated actions—things desperate people do under desperate circumstances. But institutionalized state torture is not an action. It is an ongoing, socially-embedded practice. It requires an infrastructure and training. It has its own history, traditions, and rituals of initiation. And—importantly—it creates particular ethical habits in those who practice it, and in any democratic nation that allows it.
Since the brutal attacks of 9/11/2001, people in this country have been encouraged to be afraid. Knowing that our government has been forced to torture people in order to keep us safe confirms the belief that each of us must be in terrible danger—a danger from which only that same government can protect us. We have been encouraged to accept any cruelty done to others as the price of our personal survival. There is a word for the moral attitude that sets personal safety as its highest value: cowardice. If as a nation we do not act to end torture, if we do not demand a full accounting from and full accountability for those responsible, we ourselves are responsible. And we risk becoming a nation of cowards.
Rebecca Gordon received her B.A. from Reed College and her M.Div. and Ph.D. in Ethics and Social Theory from Graduate Theological Union. She teaches in the Department of Philosophy and for the Leo T. McCarthy Center for Public Service and the Common Good at the University of San Francisco. She is the author of Letters From Nicaragua, Cruel and Usual: How Welfare “Reform” Punishes Poor People, and Mainstreaming Torture: Ethical Approaches in the Post-9/11 United States.
Religion has provided the world with some of the most influential and important written works ever known. Here is a reading list made up of just a small selection of the texts we carry in the series, covering religions across the globe.
Bede’s most famous work was finished in 731, and deals with the history of Christianity in England, most notably, the tension between Roman and Celtic forms of Christianity. It is one of the most important texts in English history. As well as providing the authoritative Colgrave translation of the Ecclesiastical History, the Oxford World’s Classics edition includes a translation of the Greater Chronicle, in which Bede discusses the Roman Empire. Meanwhile, Bede’s Letter to Egbert gives further reflections on the English Church just before his death.
This work is William (brother of Henry) James’s classic survey of religious belief in its most personal aspects. Covering such topics as how we define evil to ourselves, the difference between a healthy and a divided mind, the value of saintly behaviour, and what animates and characterizes the mental landscape of sudden conversion, The Varieties of Religious Experience is a key text examining the relationship between belief and culture. At the time James wrote, faith in organized religion and dogmatic theology was fading, and the search for an authentic religion rooted in personality and subjectivity seemed an urgent necessity. With psychological insight, philosophical rigour, and a determination not to assume that tracing religion’s mental causes diminishes its truth or value, James produced in the Varieties a truly foundational text for modern belief.
This is one of Saint Augustine’s most important works on the classical tradition. Written to enable students to have the skills to interpret the Bible, it provides an outline of Christian theology. It also contains a detailed discussion of moral problems. Further to that, Augustine attempts to determine what elements of classical education are desirable for a Christian, and suggests ways in which Ciceronian rhetorical principles may help in communicating faith.
Along with the King James Bible, the words of the Book of Common Prayer have permeated deep into the English language all over the world. For countless people, it has provided the framework for a wedding ceremony or a funeral. Yet this familiarity also hides a violent and controversial history. When it was first written, the Book of Common Prayer provoked riots, and it was banned before eventually being translated into a host of global languages. This edition presents the work in three different states: the first edition of 1549, which brought the Reformation into people’s homes; the Elizabethan prayer book of 1559, familiar to Shakespeare and Milton; and the edition of 1662, which embodies the religious temper of the nation down to modern times.
The Qur’an, the Muslim Holy Book, was revealed to the Prophet Muhammad over 1400 year ago. It is the supreme authority in Islam and the source of all Islamic teaching; it is both a sacred text and a book of guidance, that sets out the creed, rituals, ethics, and laws of Islam. The greatest literary masterpiece in Arabic, the message of the Qur’an was directly addressed to all people regardless of class, gender, or age, and this translation aims to be equally accessible to everyone.
Natural Theology is arguably as central to those who believe in Intelligent Design as Darwin’s Origin of Species is to those who come down on the side of evolutionary theory. In it, William Paley set out to prove the existence of God from the evidence of the order and beauty of the natural world. It famously starts by comparing our world to a watch, whose design is self-evident, before going on to provide examples from biology, anatomy, and astronomy in order to demonstrate the intricacy and ingenuity of design that could only come from a wise and benevolent deity. Paley’s work was both hugely successful, and extremely controversial, and Charles Darwin was greatly influenced by the book’s accessible style and structure.
‘I have heard the supreme mystery, yoga, from Krishna, from the lord of yoga himself.’
So ends the Bhagavad Gita, the best known and most widely read Hindu religious text in the Western world. It is the most famous episode from the great Sanskrit epic, the Mahabharata. Across eighteen chapters Krishna’s teaching leads the warrior Arjuna from confusion to understanding, raising and developing many key themes from the history of Indian religions in the process.
It considers religious and social duty, the nature of action and of sacrifice, the means to liberation, and the relationship between God and human. It culminates in an awe-inspiring vision of Krishna as an omnipotent God, disposer and destroyer of the universe.
Kirsty Doole is Publicity Manager for Oxford World’s Classics.
For over 100 years Oxford World’s Classics has made available the broadest spectrum of literature from around the globe. Each affordable volume reflects Oxford’s commitment to scholarship, providing the most accurate text plus a wealth of other valuable features, including expert introductions by leading authorities, voluminous notes to clarify the text, up-to-date bibliographies for further study, and much more. You can follow Oxford World’s Classics on Twitter, Facebook, or here on the OUPblog. Subscribe to only Oxford World’s Classics articles on the OUPblog via email or RSS.
Image credit: Saint Augustine of Hippo. Public domain via Wikimedia Commons
By Richard Dawid, Stephan Hartmann, and Jan Sprenger
“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” Thus Arthur Conan Doyle has Sherlock Holmes describe a crucial part of his method of solving detective cases. Sherlock Holmes often takes pride in adhering to principles of scientific reasoning. Whether or not this particular element of his analysis can be called scientific is not straightforward to decide, however. Do scientists use ‘no alternatives arguments’ of the kind described above? Is it justified to infer a theory’s truth from the observation that no other acceptable theory is known? Can this be done even when empirical confirmation of the theory in question is sketchy or entirely absent?
The canonical understanding of scientific reasoning insists that theory confirmation be based exclusively on empirical data predicted by the theory in question. From that point of view, Holmes’ method may at best play the role of a side show; the real work of theory evaluation is done by comparing the theory’s predictions with empirical data.
Actual science often tells a different story. Scientific disciplines like palaeontology or archaeology aim at describing historic events that have left only scarce traces in today’s world. Empirical testing of those theories always remains fragmentary. Under such conditions, assessing a theory’s scientific status crucially relies on the question of whether or not convincing alternative theories have been found.
Just recently, this kind of reasoning scored a striking success in theoretical physics when the Higgs particle was discovered at CERN. Besides confirming the Higgs model itself, the Higgs discovery also vindicated the judgemental prowess of theoretical physicists, who had been fairly sure of the existence of the Higgs particle since the mid-1980s. Their assessment was based on a clear-cut no alternatives argument: there seemed to be no alternative to the Higgs model that could render particle physics consistent.
Similarly, string theory is one of the most influential theories in contemporary physics, even in the absence of favorable empirical evidence and the ability to generate specific predictions. Critics argue that for these reasons, trust in string theory is unjustified, but defenders deploy the no alternatives argument: since the physics community devoted considerable efforts to developing alternatives to string theory, the failure of these attempts and the absence of similarly unified and worked-out competitors provide a strong argument in favor of string theory.
These examples show that the no alternatives argument is in fact used in science. But does it constitute a legitimate way of reasoning? In our work, we aim at identifying the structural basis for the no alternatives argument. We do so by constructing a formal model of the argument with the help of so-called Bayesian nets. That is, the argument is analyzed as a case of reasoning under uncertainty about whether a scientific theory H (e.g. string theory) is right or wrong.
A Bayes net that captures the inferential relations between the relevant propositions in the no alternatives argument. D = complexity of the problem, F = failure to find an alternative, Y = number of alternatives, T = H is the right theory.
We argue that the failure to find a viable alternative to theory H, in spite of many attempts by clever scientists, lowers our expectations about the number of existing serious alternatives to H. This, in turn, provides an argument that H is indeed the right theory. In total, the probability that H is right is increased by the failure to find an alternative, demonstrating that the inference behind the no alternatives argument is valid in principle.
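The structure of the inference can be sketched numerically. The priors and likelihoods below are our illustrative assumptions, not the authors’ model: let Y be the number of serious alternatives with a uniform prior, suppose H is the right theory with probability 1/Y, and suppose each rival of H independently escapes discovery with probability q. Because both P(T | Y) and P(F | Y) fall as Y grows, observing F raises the probability of T.

```python
# Toy Bayesian model of the no alternatives argument (NAA).
# All numbers are illustrative assumptions: uniform prior over the
# number of alternatives Y in 1..n_max; P(T | Y=k) = 1/k (H is one of
# k equally good candidates); P(F | Y=k) = q**(k-1) (each of the k-1
# rivals of H independently escapes discovery with probability q).

def naa_probabilities(n_max=10, q=0.5):
    ys = range(1, n_max + 1)
    prior_y = 1.0 / n_max                                     # P(Y=k)
    p_t = sum((1.0 / k) * prior_y for k in ys)                # prior P(T)
    p_f = sum((q ** (k - 1)) * prior_y for k in ys)           # P(F)
    p_tf = sum((1.0 / k) * (q ** (k - 1)) * prior_y for k in ys)
    return p_t, p_tf / p_f                                    # P(T), P(T|F)

prior, posterior = naa_probabilities()
print(f"P(T) = {prior:.3f}, P(T|F) = {posterior:.3f}")  # posterior > prior
```

How far the posterior exceeds the prior depends heavily on q and on the prior over Y, which is exactly the point about the argument’s strength being context-sensitive.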
There is an important caveat, however. Based on the no alternatives argument alone, we cannot say how much the probability of the theory in question is raised. It may be substantial, but it may also be tiny, in which case the confirmatory force of the no alternatives argument is negligible.
The no alternatives argument thus is a fascinating mode of reasoning that contains a valid core. However, determining the strength of the argument requires going beyond the mere observation that no alternatives have been found. This matter is highly context-sensitive and may lead to different answers for string theory, paleontology and detective stories.
Richard Dawid, Stephan Hartmann, and Jan Sprenger are the authors of “The No Alternatives Argument” (available to read for free for a limited time) in the British Journal for the Philosophy of Science. Richard Dawid is lecturer (Dozent) and researcher at the University of Vienna. Stephan Hartmann is Alexander von Humboldt Professor at the LMU Munich. Jan Sprenger is Assistant Professor at Tilburg University. Their work focuses on the application of probabilistic methods within the philosophy of science.
For over fifty years The British Journal for the Philosophy of Science has published the best international work in the philosophy of science under a distinguished list of editors including A. C. Crombie, Mary Hesse, Imre Lakatos, D. H. Mellor, David Papineau, James Ladyman, and Alexander Bird. One of the leading international journals in the field, it publishes outstanding new work on a variety of traditional and cutting edge issues, such as the metaphysics of science and the applicability of mathematics to physics, as well as foundational issues in the life sciences, the physical sciences, and the social sciences.