Viewing: Blog Posts Tagged with: philosophy, Most Recent at Top
Results 1 - 25 of 328
1. Vampires and life decisions

Imagine that you have a one-time-only chance to become a vampire. With one swift, painless bite, you’ll be permanently transformed into an elegant and fabulous creature of the night. As a member of the Undead, your life will be completely different. You’ll experience a range of intense new sense experiences, you’ll gain immortal strength, speed and power, and you’ll look fantastic in everything you wear. You’ll also need to drink the blood of humanely farmed animals (but not human blood), avoid sunlight, and sleep in a coffin.

Now, suppose that all of your friends, people whose interests, views and lives were similar to yours, have already decided to become vampires. And all of them tell you that they love it. They encourage you to become a vampire too, saying things like: “I’d never go back, even if I could. Life has meaning and a sense of purpose now that it never had when I was human. It’s amazing! But I can’t really explain it to you, a mere human. You’ll have to become a vampire to know what it’s like.”

In this situation, how could you possibly make an informed choice about what to do? For, after all, you cannot know what it is like to become a vampire until you become one. The experience of becoming a vampire is transformative. What I mean by this is that it is an experience that is radically epistemically new, such that you have to have it in order to know what it will be like for you, and that, moreover, will change your core personal preferences.

“You’ll have to become a vampire to know what it’s like”

So you can’t rationally choose to become a vampire, but nor can you rationally choose to not become one, if you want to choose based on what you think it would be like to live your life as a vampire. This is because you can’t possibly know what it would be like before you try it. And you can’t possibly know what you’d be missing if you didn’t.

We don’t normally have to consider the choice to become Undead, but the structure of this example generalizes, and this makes trouble for a widely assumed story about how we should make momentous, life-changing choices for ourselves. The story is based on the assumption that, in modern western society, the ideal rational agent is supposed to take charge of her own destiny, mapping out the subjective future she hopes to realize by rationally evaluating her options from her authentic, personal point of view. In other words, when we approach major life decisions, we are supposed to introspect on our past experiences and our current desires about what we want our futures to be like, and use these to guide us in determining our future selves. But if a big life choice is transformative, you can’t know what your future will be like, at least, not in the deeply relevant way that you want to know about it, until you’ve actually undergone the life experience.

Transformative experience cases are special kinds of cases where important ordinary approaches that people try to use to make better decisions, such as making better generalizations based on past experiences, or educating themselves to better evaluate and recognize their true desires or preferences, simply don’t apply. So transformative experience cases are not just cases involving our uncertainty about certain sorts of future experiences. They are special kinds of cases that focus on a distinctive kind of ‘unknowability’—certain important and distinctive values of the lived experiences in our possible futures are fundamentally first-personally unknowable. The problems with knowing what it will be like to undergo life experiences that will transform you can challenge the very coherence of the ordinary way to approach major decisions.

‘Vampire Children,’ by Shawn Allen. CC-BY-2.0 via Flickr.

Moreover, the problem with these kinds of choices isn’t just with the unknowability of your future. Transformative experience cases also raise a distinctive kind of decision-theoretic problem for these decisions made for our future selves. Recall the vampire case I started with. The problem here is that, before you change, you are supposed to perform a simulation of how you’d respond to the experience in order to decide whether to change. But the trouble is, who you are changes as you become a vampire.

Think about it: before you become a vampire, you should assess the decision as a human. But you can’t imaginatively put yourself in the shoes of the vampire you will become and imaginatively assess what that future lived experience will be. And, after you have become a vampire, you’ve changed, such that your assessment of your decision now is different from the assessment you made as a human. So the question is, which assessment is the better one? Which view should determine who you become? The view you have when you are human? Or the one you have when you are a vampire?

The questions I’ve been raising here focus on the fictional case of the choice to become a vampire. But many real-life experiences and the decisions they involve have the very same structure, such as the choice to have one’s first child. In fact, in many ways, the choice to become a parent is just like the choice to become a vampire! (You won’t have to drink any blood, but you will undergo a major transition, and life will never be the same again.)

In many ways, large and small, as we live our lives, we find ourselves confronted with a brute fact about how little we can know about our futures, just when it is most important to us that we do know. If that’s right, then for many big life choices, we only learn what we need to know after we’ve done it, and we change ourselves in the process of doing it. In the end, it may be that the most rational response to this situation is to change the way we frame these big decisions: instead of choosing based on what we think our futures will be like, we should choose based on whether we want to discover who we’ll become.

The post Vampires and life decisions appeared first on OUPblog.

2. World Philosophy Day reading list

World Philosophy Day was created by UNESCO in 2005 in order to “win recognition for and give strong impetus to philosophy and, in particular, to the teaching of philosophy in the world”. To celebrate World Philosophy Day, we have compiled a list of what we consider to be the most essential philosophy titles. We are also providing free access to several key journal articles and online products in philosophy so that you can explore this discipline in more depth. Happy reading!


Free: Why Science Hasn’t Disproved Free Will by Alfred R. Mele
Does free will exist? The question has fueled heated debates spanning from philosophy to psychology and religion. The answer has major implications, and the stakes are high. To put it in the simple terms that have come to dominate these debates, if we are free to make our own decisions, we are accountable for what we do, and if we aren’t free, we’re off the hook.

Philosophy Bites Again by David Edmonds and Nigel Warburton
This is really a conversation, and conversations are the best way to see philosophy in action. It offers engaging and thought-provoking conversations with leading philosophers on a selection of major philosophical issues that affect our lives. Their subjects include pleasure, pain, and humor; consciousness and the self; free will, responsibility, and punishment; the meaning of life and the afterlife.

Think: A Compelling Introduction to Philosophy by Simon Blackburn
Here at last is a coherent, unintimidating introduction to the challenging and fascinating landscape of Western philosophy. Written expressly for “anyone who believes there are big questions out there, but does not know how to approach them.”

What Does It All Mean? A Very Short Introduction to Philosophy by Thomas Nagel
In this cogent and accessible introduction to philosophy, the distinguished author of Mortal Questions and The View From Nowhere brings the central problems of philosophical inquiry to life, demonstrating why they have continued to fascinate and baffle thinkers across the centuries.

Riddles of Existence: A Guided Tour of Metaphysics by Earl Conee and Theodore Sider
Two leading philosophers explore the most fundamental questions there are, about what is, what is not, what must be, and what might be. It has an informal style that brings metaphysical questions to life and shows how stimulating it can be to think about them.

Killing in War by Jeff McMahan
This is a highly controversial challenge to the consensus about responsibility in war. Jeff McMahan argues compellingly that if the leaders are in the wrong, then the soldiers are in the wrong.

Reason in a Dark Time by Dale Jamieson
In this book, philosopher Dale Jamieson explains what climate change is, why we have failed to stop it, and why it still matters what we do. Centered in philosophy, the volume also treats the scientific, historical, economic, and political dimensions of climate change.

Poverty, Agency, and Human Rights edited by Diana Tietjens Meyers
Collects thirteen new essays that analyze how human agency relates to poverty and human rights respectively as well as how agency mediates issues concerning poverty and social and economic human rights. No other collection of philosophical papers focuses on the diverse ways poverty impacts the agency of the poor.
Aha! The Moments of Insight That Shape Our World by William B. Irvine
This book incorporates psychology, neurology, and evolutionary psychology to take apart what we can learn from a variety of significant “aha” moments that have had lasting effects. Unlike other books on intellectual breakthroughs that focus on specific areas such as the arts, Irvine’s addresses aha moments in a variety of areas including science and religion.

On What Matters: Volume One by Derek Parfit
Considered one of the most important works in the field since the 19th century, it is written in the uniquely lucid and compelling style for which Parfit is famous. This is an ambitious treatment of the main theories of ethics.

The Emergent Multiverse: Quantum Theory according to the Everett Interpretation by David Wallace
Quantum physics is the most successful scientific theory we have. But no one knows how to make sense of it. We need to bite the bullet – it’s common sense that must give way. The universe is much stranger than we can think.

The Best Things in Life: A Guide to What Really Matters by Thomas Hurka
An engaging, accessible survey of the different things that can make life worth living: pleasure, knowledge, achievement, virtue, love, and more. A book that considers what really matters in one’s life and how to make decisions around those values.

‘What should I do?: Plato’s Crito’ in Philosophy: A Very Short Introduction by Edward Craig
Plato, born around 427 BC, is not the first important philosopher, with the Vedas of India, the Buddha, and Confucius all pre-dating him. However, he is the first philosopher to have left us with a substantial body of complete works that are available to us today, which all take the form of dialogues. This chapter focuses on the dialogue called Crito in which Socrates asks ‘What should I do?’

A biography of John Locke in the Oxford Dictionary of National Biography
A philosopher regarded as one of the most influential of Enlightenment thinkers, John Locke was born on 29th August 1632 in Somerset, England. In the late 1650s he became interested in medicine, which led easily to natural philosophy after he was introduced to the new ideas of mechanical philosophy by Robert Boyle. Discover what happened next in Locke’s life with this biography.

‘Computing Machinery and Intelligence’ from Mind, published in 1950
In this seminal paper, celebrated mathematician and pioneer Alan Turing attempts to answer the question, ‘Can machines think?’, and thus introduces his theory of ‘the imitation game’ (now known as the Turing test) to the world. Turing skilfully debunks theological and ethical arguments against computational intelligence: he acknowledges the limitations of a machine’s intellect, while boldly exposing those of man, ultimately laying the groundwork for the study of artificial intelligence – and the philosophy behind it.

‘Phenomenology as a Resource for Patients’ from The Journal of Medicine and Philosophy, published in 2012
Patient support tools have drawn on a variety of disciplines, including psychotherapy, social psychology, and social care. One discipline that has not so far been used to support patients is philosophy. This paper proposes that a particular philosophical approach, phenomenology, could prove useful for patients, giving them tools to reflect on and expand their understanding of their illness.

Do you have any philosophy books that you think should be added to this reading list? Let us know in the comments below.

Headline image credit: Rays at Burning Man by foxgrrl. CC-BY-NC-SA-2.0 via Flickr.

The post World Philosophy Day reading list appeared first on OUPblog.

3. Luciano Floridi responds to NYROB review of The Fourth Revolution

In the October 9th edition of the New York Review of Books, philosopher John Searle criticized Luciano Floridi’s The Fourth Revolution, noting that Floridi “sees himself as the successor to Copernicus, Darwin, and Freud, each of whom announced a revolution that transformed our self-conception into something more modest.” In the response below, Floridi disputes this claim and many others made by Searle in his review of The Fourth Revolution.

John Searle’s review of The Fourth Revolution – How the Infosphere is Reshaping Human Reality (OUP, 2014) is astonishingly shallow and misguided. The silver lining is that, if its factual errors and conceptual confusions are removed, the opportunity for an informed and insightful reading can still be enjoyed.

The review erroneously ascribes to me a fourth revolution in our self-understanding, which I explicitly attribute to Alan Turing. We are not at the center of the universe (Copernicus), of the biological kingdom (Darwin), or of the realm of rationality (Freud). After Turing, we are no longer at the center of the world of information either. We share the infosphere with smart technologies. These are not some unrealistic AI, as the review would have me suggest, but ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us re-evaluate our unique intelligence. Their successes largely depend on the fact that the world has become an IT-friendly environment, where technologies can replace us without having any understanding or semantic skills. We increasingly live onlife (think of apps tracking your location). The pressing problem is not whether our digital systems can think or know, for they cannot, but what our environments are gradually enabling them to achieve. Like Kant, I do not know whether the world in itself is informational, a view that the review erroneously claims I support. What I do know is that our conceptualization of the world is. The distinction is trivial and yet crucial: from DNA as code to force fields as the foundation of matter, from the mind-brain dualism as a software-hardware distinction to computational neuroscience, from network-based societies to digital economies and cyber conflicts, today we understand and deal with the world informationally. To be is to be interactable: this is our new “ontology”.

The review denounces dualisms yet uncritically endorses a dichotomy between relative (or subjective) vs. absolute (or objective) phenomena. This is no longer adequate because today we know that many phenomena are relational. For example, whether some stuff qualifies as food depends on the nature both of the substance and of the organism that is going to absorb it. Yet relativism is mistaken, because not just any stuff can count as food; sand never does. Likewise, semantic information (e.g. a train timetable) is a relational phenomenon: it depends on the right kind of message and receiver. Insisting on mapping information as either relative or absolute is as naïve as pretending that a border between two nations must be located in one of them.

The world is getting more complex. We have never been so much in need of good philosophy to understand it and take care of it. But we need to upgrade philosophy into a philosophy of information of our age for our age if we wish it to be relevant. This is what the book is really about.

Feature image credit: Macro computer circuit board, by Randy Pertiet. CC-BY-2.0 via Flickr.

The post Luciano Floridi responds to NYROB review of The Fourth Revolution appeared first on OUPblog.

4. SR... some thoughts on recent reading

As I mentioned last Monday, I'm enjoying Steven Shaviro's new Whitehead-meets-Speculative Realism (SR) book The Universe of Things, but before I (hopefully) review it, I should perhaps make a brief comment on why I'm reading it. And that particular story makes better sense if I mention that I'm also reading Peter Wolfendale's Object-Oriented Philosophy: The Noumenon's New Clothes (from the always excellent Urbanomic) and briefly mention why I'm reading that...

I read more philosophy books than books on any other topic – and, to be honest, it's probably more than time that RSB reflected that a little more clearly. It's a little difficult suddenly to begin a "justification" for my interests, but I want to start shaping up to providing one, not least because it will help me (I hope) articulate what I find lacking in a number of the works that have been fascinating me of late.

Like many, my head has been (somewhat) turned by the vibrant SR/OOO blogging community. And if you spend any time in this particular pond you soon come across the work of Graham Harman – one of the big fishes.

I find Harman's work... problematic. And I'll come back to that later. But I also find it profoundly engaging, subtle and intellectually exciting. For now, I'll just mention Harman's notion of withdrawal as an example of a technical term that, I think, is particularly fecund.

Levi Bryant defines withdrawal like this: "Withdrawal is a protest against all ambitions of domination, mastery, and exploitation. What withdrawal says is that all entities harbor – as Graham likes to put it – scarcely imagined volcanic cores bubbling beneath the surface that we are never completely able to master or control. It is this from whence his profound respect for things – human and nonhuman – indeed his indignation against those that would try to reduce things to signifiers, concepts, sensations, lived experiences, intuitions, etc., arises. Harman seldom talks about politics or ethics, but who can fail to hear an ethical refrain throughout all his work..."

Harman proposes that no object is ever exhausted by its relations; that an object's real properties are hidden and can never fully be grasped. I find that a fascinating and productive thought. And I find I read so much philosophy because of a love of – and a quest to find – words and phrases, constructions and contortions, that help me form new thought-words and new thought-worlds. Philosophy, for me, is simply a search for better ways to think about the world, and if that means working through some pretty dreadful prose every now and again, so be it. So, when I ask myself why I'm pushing through pages and pages of dry, technical, definitional analysis and forbidding, unforgivable academicese, it's because of the diamonds in the dirt. A term like withdrawal opens something new up for me.

Shaviro's book is useful because it is telling me that process thought is able to shed light on Harman's philosophy, that a dialogue between those two thinkers is helpful to understanding both. It is also bringing my attention back to how very Deleuzian such thought is... all is becoming, nothing is static being, and so you can, I think, map onto that a constant 'flow' between the 'real' and the 'fictional' which doesn't bespeak a 'reality hunger' but more a constant lack in reality which is already always 'over-filled' by the fictive, the constructed. Reality is gappy, and thought is real. There are similarities here to Miguel de Beistegui's Proust as Philosopher. (And the car crash of scare quotes in this paragraph is evidence, of course, that further thought is needed!)

What I'm finding missing in Wolfendale's admirable volume, however, is such food for thought. Wolfendale's Kantian/Sellarsian takedown of Harman was waiting to be written. (You can read a 77-page "taster" in Speculations.) And despite Nick Land's recent comments that Wolfendale's book "deserves to be absorbed in very different terms to those it superficially invites," I'm afraid I find myself amongst the superficial. Wolfendale scores some knockout blows, but Harman bounces back up like a weeble. Wolfendale himself writes: "Whatever else can be said about Harman’s presentation of OOP, it is certainly compelling. On the one hand, it attempts to reveal the inherent oddness of the world we live in, by painting us a landscape of a reality in which everything is radically individual, cut off from everything else in almost every respect, connected only by fleeting glimmers of phenomenal appearance. On the other, it attempts to humble humanity by seeing humans as just one more disparate association of objects within the universal diaspora." Like good fiction, philosophy, for me, doesn't have to prove facts – it doesn't need to limit itself to a theory of knowledge – it needs to open up our minds and make us epistemologically astute. And that starts with fascination, with an aesthetics perhaps. In such a struggle, Wolfendale can't help but come off sounding like something of a humourless pedant. His book does have virtues, however, and, as I said, I do find Harman problematic... but I've written enough for one day.

5. After the elections: Thanksgiving, consumerism, and the American soul

The elections, thankfully, are finally over, but America’s search for security and prosperity continues to center on ordinary politics and raw commerce. This ongoing focus is perilous and misconceived. Recalling the ineffably core origins of American philosophy, what we should really be asking these days is the broadly antecedent question: “How can we make the souls of our citizens better?”

To be sure, this is not a scientific question. There is no convincing way in which we could possibly include the concept of “soul” in any meaningfully testable hypotheses or theories. Nonetheless, thinkers from Plato to Freud have understood that science can have substantial intellectual limits, and that sometimes we truly need to look at our problems from the inside.

Pierre Teilhard de Chardin, the Jesuit philosopher, inquired, in The Phenomenon of Man: “Has science ever troubled to look at the world other than from without?” This is not a silly or superficial question. Earlier, Ralph Waldo Emerson, the American Transcendentalist, had written wisely in The Over-Soul: “Even the most exact calculator has no prescience that something incalculable may not balk the next moment.” Moreover, he continued later on in the same classic essay: “Before the revelations of the soul, Time, Space, and Nature shrink away.”

That’s quite a claim. What, precisely, do these “phenomenological” insights suggest about elections and consumerism in the present American Commonwealth? To begin, no matter how much we may claim to teach our children diligently about “democracy” and “freedom,” this nation, whatever its recurrent electoral judgments on individual responsibility, remains mired in imitation. More to the point, whenever we begin our annual excursions to Thanksgiving, all Americans are aggressively reminded of this country’s most emphatically soulless mantra.

“You are what you buy.”

This almost sacred American axiom is reassuringly simple. It’s not complicated. Above all, it signals that every sham can have a patina, that gloss should be taken as truth, and that any discernible seriousness of thought, at least when it is detached from tangible considerations of material profit, is of no conceivably estimable value.

Ultimately, we Americans will need to learn an altogether different mantra. As a composite, we should finally come to understand, every society is basically the sum total of individual souls seeking redemption. For this nation, moreover, the favored path to any such redemption has remained narrowly fashioned by cliché, and announced only in chorus.

Where there dominates a palpable fear of standing apart from prevailing social judgments (social networking?), there can remain no consoling tolerance for intellectual courage, or, as corollary, for any reflective soulfulness. In such circumstances, as in our own present-day American society, this fear quickly transforms citizens into consumers.

Black Friday at the Apple Store on Fifth Avenue, New York City, 2011, by JoeInQueens. CC-BY-2.0 via Wikimedia Commons.

While still citizens, our “education” starts early. From the primary grades onward, each and every American is made to understand that conformance and “fitting in” are the reciprocally core components of individual success. Now, the grievously distressing results of such learning are very easy to see, not just in politics, but also in companies, communities, and families.

Above all, these results exhibit a debilitating fusion of democratic politics with an incessant materialism. Or, as once clarified by Emerson himself: “The reliance on Property, including the reliance on governments which protect it, is the want of self-reliance.”

Nonetheless, “We the people” cannot be fooled all of the time. We already know that nation, society, and economy are endangered not only by war, terrorism, and inequality, but also by a steadily deepening ocean of scientifically incalculable loneliness. For us, let us be candid, elections make little core difference. For us, as Americans, happiness remains painfully elusive.

In essence, no matter how hard we may try to discover or rediscover some tiny hints of joy in the world, and some connecting evidence of progress in politics, we still can’t manage to shake loose a gathering sense of paralyzing futility.

Tangibly, of course, some things are getting better. Stock prices have been rising. The economy — “macro,” at least — is improving.

Still, the immutably primal edifice of American prosperity, driven at its deepest levels by our most overwhelming personal insecurities, remains based upon a viscerally mindless dedication to consumption. Ground down daily by the glibly rehearsed babble of politicians and their media interpreters, we the people are no longer motivated by any credible search for dignity or social harmony, but by the dutifully revered buying expectations of patently crude economics.

Can anything be done to escape this hovering pendulum of our own mad clockwork? To answer, we must consider the pertinent facts. These unflattering facts, moreover, are pretty much irrefutable.

For the most part, we Americans now live shamelessly at the lowest common intellectual denominator. Cocooned in this generally ignored societal arithmetic, our proliferating universities are becoming expensive training schools, promising jobs, but less and less of a real education. Openly “branding” themselves in the unappetizing manner of fast food companies and underarm deodorants, these vaunted institutions of higher education correspondingly instruct each student that learning is just a commodity. Commodities, in turn, learns each student, exist solely for profit, for gainful exchange in the ever-widening marketplace.

Optimally, our students exist at the university in order, ultimately, to be bought and sold. Memorize, regurgitate, and “fit in” the ritualized mold, instructs the college. Then, all be praised, all will make money, and all will be well.

But all is not well. In these times, faced with potentially existential threats from Iran, North Korea, and many other conspicuously volatile places, we prefer to distract ourselves from inconvenient truths with the immense clamor of imitative mass society. Obligingly, America now imposes upon its already-breathless people the grotesque cadence of a vast and over-burdened machine. Predictably, the most likely outcome of this rhythmically calculated delirium will be a thoroughly exhausted country, one that is neither democratic, nor free.

Ironically, we Americans inhabit the one society that could have been different. Once, it seems, we still had a unique opportunity to nudge each single individual to become more than a crowd. Once, Ralph Waldo Emerson, the quintessential American philosopher, had described us as a unique people, one motivated by industry and “self-reliance,” and not by anxiety, fear, and a hideously relentless trembling.

America, Emerson had urged, needed to favor “plain living” and “high thinking.” What he likely feared most was a society wherein individual citizens would “measure their esteem of each other by what each has, and not by what each is.”

No distinctly American philosophy could possibly have been more systematically disregarded. Soon, even if we can somehow avoid the unprecedented paroxysms of nuclear war and nuclear terrorism, the swaying of the American ship will become unsustainable. Then, finally, we will be able to make out and understand the phantoms of other once-great ships of state.

Laden with silver and gold, these other vanished “vessels” are already long forgotten. Then, too, we will learn that those starkly overwhelming perils that once sent the works of Homer, Goethe, Milton, and Shakespeare to join the works of more easily forgotten poets are no longer unimaginable. They are already here, in the newspapers.

In spite of our proudly heroic claim to be a nation of “rugged individuals,” it is actually the delirious mass or crowd that shapes us, as a people, as Americans. Look about. Our unbalanced society absolutely bristles with demeaning hucksterism, humiliating allusions, choreographed violence, and utterly endless political equivocations. Surely, we ought finally to assert, there must be something more to this country than its fundamentally meaningless elections, its stupefying music, its growing tastelessness, and its all-too willing surrender to near-epidemic patterns of mob-directed consumption.

In an 1897 essay titled “On Being Human,” Woodrow Wilson asked plaintively about the authenticity of America. “Is it even open to us,” inquired Wilson, “to choose to be genuine?” This earlier American president had answered “yes,” but only if we would first refuse to stoop so cowardly before corruption, venality, and political double-talk. Otherwise, Wilson had already understood, our entire society would be left bloodless, a skeleton, dead with that rusty death of machinery, more unsightly even than the death of an individual person.

“The crowd,” observed the 19th century Danish philosopher, Søren Kierkegaard, “is untruth.” Today, following recent elections, and approaching another Thanksgiving, America’s democracy continues to flounder upon a cravenly obsequious and still soulless crowd. Before this can change, we Americans will first need to acknowledge that our institutionalized political, social, and economic world has been constructed precariously upon ashes, and that more substantially secure human foundations now require us to regain a dignified identity, as “self-reliant” individual persons, and as thinking public citizens.

Heading image: Boxing Day at the Toronto Eaton Centre by 松林 L. CC-BY-2.0 via Wikimedia Commons.

The post After the elections: Thanksgiving, consumerism, and the American soul appeared first on OUPblog.

6. How to naturalize God

A former colleague of mine once said that the problem with theology is that it has no subject-matter. I was reminded of Nietzsche’s (unwittingly self-damning) claim that those who have theologians’ blood in their veins see all things in a distorted and dishonest perspective, but it was counterbalanced a few years later by a comment of another philosopher – on hearing of my appointment to Heythrop College – that it was good that I’d be working amongst theologians because they are more open-minded than philosophers.

Can one be too open-minded? And isn’t the limit traversed when we start talking about God, or, even worse, believe in Him? Presumably yes, if atheism is true, but it is not demonstrably true, and it is unclear in any case what it means to be either an atheist or a theist. (Some think that theists make God in their own image, and that the atheist is in a better position to relate to God.)

The atheist with whom we are most familiar likewise takes issue with the theist, and A.C. Grayling goes so far as to claim that we should drop the term ‘atheist’ altogether because it invites debate on the ground of the theist. Rather, we should adopt the term ‘naturalist’, the naturalist being someone who accepts that the universe is a natural realm, governed by nature’s laws, and that it contains nothing supernatural: ‘there is nothing supernatural in the universe – no fairies or goblins, angels, demons, gods or goddesses’.

I agree that the universe is a natural realm, governed by nature’s laws, and I do not believe in fairies or goblins, angels, demons, gods or goddesses. However, I cannot accept that there is nothing supernatural in the universe until it is made absolutely clear what this denial really means.

The trouble is that the term ‘naturalism’ is so unclear. To many it involves a commitment to the idea that the scientist has the monopoly on nature and explanation, in which case the realm of the supernatural incorporates whatever is not natural in this scientific sense.

Others object to this brand of naturalism on the ground that there are no good philosophical or scientific reasons for assigning the limits of nature to science. As John McDowell says: ‘scientism is a superstition, not a stance required by a proper respect for the achievements of the natural sciences’.

Lonely place, by Amaldus Clarin Nielsen. Public domain via The Athenaeum.

McDowell endorses a form of naturalism which accommodates value, holding that it cannot be adequately explained in purely scientific terms. Why stick with naturalism? In short, the position – in its original inception – is motivated by sound philosophical presuppositions.

It involves acknowledging that we are natural beings in a natural world, and gives expression to the demand that we avoid metaphysical flights of fancy, ensuring that our claims remain empirically grounded. To use the common term of abuse, we must avoid anything spooky.

The scientific naturalist is spooked by anything that takes us beyond the limits of science; the more liberal or expansive naturalist is not. However, the typical expansive naturalist stops short of God. Understandably so, given his wish to avoid metaphysical flights of fancy, and given the assumption that such a move can be criticised on this score.

Yet what if his reservations in this context can be challenged in the way that he challenges the scientific naturalist’s reluctance to accept his own position? (The scientific naturalist thinks that McDowell’s values are just plain spooky, and McDowell challenges this complaint on anti-scientistic grounds.)

McDowell could object that the two cases are completely different – God is spooky in the way that value is not. Yet this response simply begs the question against the alternative framework at issue – a framework which challenges the assumption that God must be viewed in these pejorative terms.

The idea that there is a naturalism to accommodate God does not mean that God is simply part of nature – I am not a pantheist – but it does mean that the concept of the divine can already be understood as implicated in our understanding of nature, rather than being thought of as entirely outside it.

So I am rejecting deism to recuperate a form of theistic naturalism which will be entirely familiar to the Christian theist and entirely strange (and spooky) to the typical atheist who is a typical naturalist. McDowell is neither of these things – that’s why his position is so interesting.

The post How to naturalize God appeared first on OUPblog.

7. Bioethics and the hidden curriculum

The inherent significance of bioethics and social science in medicine is now widely accepted… at least on the surface. Despite an assortment of practical problems—limited curricular time compounded by increased concern for “whitespace”—few today deny outright that ethical practice and humanistic patient engagement are important and need to be taught. But public acknowledgements all too often are undercut by a different reality, a form of hidden curriculum that overpowers institutional rhetoric and the best-laid syllabi. Most medical schools now make an effort to acknowledge that ethics and humanities training is part of their mission and we have seen growing inclusion of bioethics and medical humanities in medical curricula. However, more curricular time, in and of itself, is not enough.

Even with increases in contact hours, the value of medical ethics and humanities can be undercut by problems of frequency and duration. Many schools have dedicated significant time to bioethics when measured in contact hours, but in the form of intensive seminars that are effectively quarantined from the rest of the curriculum. While this is a challenge for modular curricula in general, it can be harder for students to integrate ethics and humanities content into biomedical contexts. Irrespective of the number of contact hours, placing bioethics in a curricular ghetto risks sending a message that it is simply a hoop to jump through, something to eventually be set aside as one returns to the real curriculum.

While partitioning ethics and humanities content presents problems, the integration of ethics into systems-based curricula poses different challenges. While case-based formats make integration easier, they limit the extent to which one can teach core concepts themselves. For organ systems curricula, where ethics lectures often are “sprinkled in,” the linkages with the biomedical components of the course are underspecified or inherently weak. Medical ethics and humanities are diffused in actual practice such that attempts at thematic alignment with organ systems curricula often are noticeably artificial. In turn, there is an unintentional but palpable message that ethics is an interruption to medical learning. Anyone who has delivered an ethics lecture, sandwiched between two pathology lectures in a GI course, knows this feeling only too well.

Finally, there is a misalignment of goals and assessment in bioethics that remains a significant challenge. Certainly, one goal of ethics and humanities education in medical curricula is to provide concrete information about legal directives and consensus opinions. Most of us, however, want to go beyond a purely instrumental approach to ethics and promote the ability to empathize with patients and think critically about ethical and humanistic features of patient care. These issues are much more important than an instrumental approach. While there are a variety of ways to assess these higher-order capacities within a course, board exams loom large in the medical student consciousness (and rightfully so). On a multiple choice exam, being reflexive about one’s ethical framework and exploring the large supply of contingencies surrounding a particular case is a recipe for disaster. In turn, I often find myself encouraging students to pursue interesting and creative lines of thought or to challenge consensus statements from professional bodies, only to end the discussion by warning that they should abandon all such efforts on board exams. Most would agree that ethics is a dialogical activity, yet the examinations with the highest stakes send hidden messages that it is formulaic and instrumental. When “assessment drives learning,” it is difficult for students to set aside concerns about gateway exams and engage the genuine complexity of ethics.

Doctor writing. © webphotographeer via iStock.

While these challenges are curricular, pedagogical, and even cultural, I think there are practical ways that medical schools, and even individual instructors, can destabilize the messages of this hidden curriculum. First, with regard to assessment, we can teach both complex and instrumental ethical methodologies. While this may appear a rather dismal prospect, it can be made respectable by explicating the conditions under which each way of thinking is useful (e.g. the former in real life, the latter on exams). Students then learn not only to turn on and off particular test-taking strategies, but this also bolsters their ability to be critical and reflexive—in this case about instrumental processes of ethical decision-making that are problematic, but nonetheless widespread, even in practice.

Second, we need to move beyond simply including more bioethics education and toward addressing its rhythms within our curricula. I have been fortunate enough to recently join a new medical school unencumbered by a historical (read: petrified) curriculum. In addition to an institutional culture genuinely amenable to ethics and humanities, our curriculum utilizes longitudinal courses that run in parallel to the biomedical systems courses. Instructors therefore have the ability to build the sort of conceptual complexity that truly attends ethics and students have the spaced practice that is key to their development. This structure therefore avoids the problems both of quarantining and random inclusion.

Finally, bioethics curricula need to place less emphasis on information and make greater use of “threshold concepts”. No medical curriculum affords enough time to exhaust the terrain of bioethics and medical humanities. Certainly we need to accept the reality that we typically are not training ethics and humanities scholars, but, at a minimum, physicians with those competencies and, even more ideally, physicians who embody those values. However, where the idea of delivering ethics at an appropriate level for physicians often serves as a call for simplicity, I believe it supplies a warrant for focus on our most complex concepts, which also are the most generative and useful. When training practitioners, epistemological concepts—for example, integrative and differentiating ways of thinking—often are eschewed in favor of simpler kinds of information that promote instrumental applications to situations, and a limited ability to engage the messy nuances of real-world situations. Richer, more complex threshold concepts—like the sociological imagination (the ability to see the interweaving of macro and micro level phenomena)—are broadly relevant and transposable to any number of complex situations.

In the contemporary landscape, few deny outright the significance of ethics and humanities in medicine. But the explicit messaging about their importance remains outmatched by implicit messages hidden in curricula. Having just returned from the annual meeting of the American Society for Bioethics and Humanities, I cannot help but feel that we are spending too much time fighting old battles by repetitiously announcing the relevance of bioethics and too little time confronting the more insidious, hidden messages nestled deeper in the trenches of curriculum and pedagogy. This is a critical challenge.

The post Bioethics and the hidden curriculum appeared first on OUPblog.

8. Why be rational (or payday in Wonderland)?

Please find below a pastiche of Alice’s Adventures in Wonderland that illustrates what it means to choose rationally:

‘Sit down, dear’, said the White Queen.

Alice perched delicately on the edge of a chair fashioned from oyster-shells.

‘Coffee, or tea, or chocolate?’, enquired the Queen.

‘I’ll have chocolate, please.’

The Queen turned to the Unicorn, standing, as ever, behind the throne: ‘Trot along to the kitchen and bring us a pot of chocolate if you would. There’s a good Uni.’

Off he trots. And before you can say ‘jabberwocky’ he is back: ‘I’m sorry, Your Majesty, and Miss Alice, but we’ve run out of coffee.’

‘But I said chocolate, not coffee’, said a puzzled Alice.

The Unicorn was unmoved: ‘I am well aware of that, Miss. As well as a horn I have two good ears, and I’m not deaf’.

Alice thought again: ‘In that case’, she said, ‘I’ll have tea, if I may?’

‘Of course you may,’ replied the Queen. ‘But if you do, you’ll be violating a funny little thing that in the so-called Real World is known as the contraction axiom; in Wonderland we never bother about such annoyances. In the Real World they claim that they do, but they don’t.’

‘Don’t they?’ asked Alice.

‘No. I’ve heard it said, though I can scarce believe it, that their politicians ordain that a poor girl like you, when faced with the choice between starving or taking out a payday loan, is better off if she has only the one option, that of starving. No pedantic worries about contraction there (though I suppose your waist would contract, now I come to think of it). But this doesn’t bother me: like their politicians, I am rich, a Queen in fact, as my name suggests’.

Alice_in_Wonderland
Alice in Wonderland, by Jessie Wilcox Smith. Public Domain via Wikimedia Commons

‘On reflection, I will revert to chocolate, please. And do they have any other axes there?’

‘Axioms, child, not axes. And yes, they do. They’re rather keen on what they call their expansion axiom – the opposite, in a sense, of their contraction axiom. What if Uni had returned from the kitchen saying that they also had frumenty – a disgusting concoction, I know – and you had again insisted on tea? Then, as well as making your teeth go brown, you’d have violated that axiom.’

‘I know I’m only a little girl, Your Majesty, but who cares?’

‘Not I, not one whit. But people in the Real World seem to. If they satisfy both of these axiom things they consider their choice to be rational, which is something they seem to value. It means, for example, that if they prefer coffee to tea, and tea to chocolate, then they prefer coffee to chocolate.’

‘Well, I prefer coffee to tea, tea to chocolate, and chocolate to coffee. And why shouldn’t I?’

‘Because, poor child, you’ll be even poorer than you are now. You’ll happily pay a groat to that greedy little oyster over there to change from tea to coffee, pay him another groat to change from coffee to chocolate, and pay him yet another groat to change from chocolate to tea. And then where will you be? Back where you started from, but three groats the poorer. That’s why if you’re not going to be rational you should remain in Wonderland, or be a politician.’

This little fable illustrates three points. The first is that rationality is a property of patterns of choice rather than of individual choices. As Hume famously noted in 1738, ‘it is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger; it is not contrary to reason for me to chuse [sic] my total ruin to prevent the least uneasiness of an Indian’. However, it seems irrational to choose chocolate when the menu comprises coffee, tea, and chocolate; and to choose tea when it comprises just tea and chocolate. It also seems irrational to choose chocolate from a menu that includes tea; and to choose tea from a larger menu. The second point is that making consistent choices (satisfying the two axioms) and having transitive preferences (not cycling, as does Alice) are, essentially, the same thing: each is a characterisation of rationality. And the third point is that people are, on the whole, rational, for natural selection weeds out the irrational: Alice would not lose her three groats just once, but endlessly.
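
To pin the two axioms down a little more precisely, here is one standard way of stating them, a minimal sketch in notation of my own choosing rather than the author's (the conditions are commonly associated with Amartya Sen's α and β). Write C(S) for the set of options an agent would choose from a menu S:

```latex
% Contraction (often called Sen's alpha): anything chosen from a larger menu
% should still be chosen from any smaller menu that contains it.
x \in T \subseteq S \ \text{and}\ x \in C(S) \;\Longrightarrow\; x \in C(T)

% Expansion (often called Sen's beta): options chosen together from a smaller
% menu stand or fall together when the menu is enlarged.
x, y \in C(T),\ T \subseteq S,\ y \in C(S) \;\Longrightarrow\; x \in C(S)

% Transitivity of strict preference (no cycles of the kind Alice has):
x \succ y \ \text{and}\ y \succ z \;\Longrightarrow\; x \succ z
```

On this reading, Alice's choices in the story violate contraction: chocolate is chosen from the menu {coffee, tea, chocolate}, yet tea, not chocolate, is chosen from the smaller menu {tea, chocolate}.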

These three points are equally relevant to the trivia of our daily lives (coffee, tea, or chocolate) and to major questions of government policy (for example, the regulation of the loan market).

Featured image credit: ‘Drink me Alice’, by John Tenniel. Public domain via Wikimedia Commons

The post Why be rational (or payday in Wonderland)? appeared first on OUPblog.

9. Review: The Universe of Things

Austin Roberts reviews Steven Shaviro's The Universe of Things: On Speculative Realism:

One of the most interesting trends in recent philosophy is what is sometimes called Speculative Realism. The name comes from a conference in 2007 at the University of London that brought together four very different philosophers who nevertheless were united in their efforts to resurrect realist metaphysics: Quentin Meillassoux, Ray Brassier, Graham Harman, and Iain Hamilton Grant. Each of them holds quite different metaphysical positions, but all four critique what they name "philosophies of correlation." As a theologian and not a philosopher, I can't help but make a connection to my field here. Just as the Radical Orthodox movement identifies a key moment in the history of philosophy (for RO, this is Duns Scotus' univocity) that leads to its destructive decline, the Speculative Realists point back to Kant's apparently disastrous argument that the thing-in-itself is unknowable. MORE...

I've just started this myself. Lots of Whitehead, and lots of good sense so far...

10. Collapse Vol. VIII ready for pre-order

Collapse Vol. VIII is finally ready for pre-order. Do it.

With the public trial of 'Casino Capitalism' underway, Collapse VIII examines a pervasive image of thought drawn from games of chance. Surveying those practices in which intellectual resources are most acutely concentrated on the production of capitalizable risk, the volume uncovers the conceptual underpinnings of methods developed to extract value from contingency - in the casino, in the markets, in life.

11. Ancient voices for today [infographic]

The ancient writers of Greece and Rome are familiar to many, but what do their voices really tell us about who they were and what they believed? In Twelve Voices from Greece and Rome, Christopher Pelling and Maria Wyke provide a vibrant and distinctive introduction to twelve of the greatest authors from ancient Greece and Rome, writers whose voices still resonate across the centuries. Below is an infographic that shows how each of the great classical authors would describe their voice today, if they could.


Download the infographic in pdf or jpeg.

Featured image credit: “Exterior of the Colosseum” by Diana Ringo. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

The post Ancient voices for today [infographic] appeared first on OUPblog.

12. Review: Kierkegaard's Relation to Hegel Reconsidered

According to standard interpretations of 19th-century European philosophy, a stark ‘either / or’ divided Hegel and Kierkegaard, and this divide profoundly shaped the subsequent development of Continental philosophy well into the 20th century. While left Hegelians carried on the legacy of Hegel’s rationalism and universalism, existentialists and postmodernists found inspiration, at least in part, in Kierkegaard’s critique of systematic philosophy, rationality, and socially integrated subjectivity. In Kierkegaard’s Relation to Hegel Reconsidered, Jon Stewart provides a detailed historical argument which challenges the standard assumption that Kierkegaard’s position was developed in opposition to Hegel’s philosophy, and as such is antithetical to it. (It is worth noting that, in Hegel: Myths and Legends, Stewart criticized the ‘either / or’ from the other direction, arguing that Hegel is not the arch-rationalist he is often taken to be). Without denying the existence of a certain “metalevel” dispute between Hegel and Kierkegaard, Stewart argues that (a) many of Kierkegaard’s central ideas, such as the theory of stages, are creatively, i.e., not uncritically, adopted from Hegel, and, (b) the true target of Kierkegaard’s critique is not Hegel per se, but prominent Danish Hegelians of his time. According to Stewart, ignorance of Kierkegaard’s intellectual milieu, coupled with a distorted and inadequate understanding of Hegel, has led many English-speaking critics to adopt the overly simple ‘either / or’. Stewart seeks to correct this problem by showing how Kierkegaard’s writing rose out of, and responded primarily to, debates in Denmark in the 1830s and 40s surrounding Hegel’s philosophy and its implications for theology. MORE...

Review of Jon Stewart's Kierkegaard's Relation to Hegel Reconsidered (from way back in 2004) in NDPR.

13. The ethics of a mercenary

In July 2014, the Ukrainian President, Petro Poroshenko, claimed that Ukraine wasn’t fighting a civil war in the east of the country but rather was “defending its territory from foreign mercenaries.” Conversely, rumours abounded earlier in the year that Academi, the firm formerly known as Blackwater, were operating in support of the Ukrainian government (which Academi strongly denied). What is interesting is not simply whether these claims are true, but also their rhetorical force. Being a mercenary and using mercenaries is seen as one of the worst moral failings in a conflict.

Regardless of the accuracy of the claims and counterclaims about their use in Ukraine, the increased use of mercenaries or ‘private military and security companies’ is one of the most significant transformations of military force in recent times. In short, states now rely heavily on private military and security companies to wage wars. In the First Gulf War, there was a ratio of roughly one contractor to every 100 soldiers; by 2008 in the Second Gulf War, that ratio had risen to roughly one to one. In Afghanistan, the ratio was even higher, peaking at 1.6 US-employed contractors per soldier. The total number of Department of Defense contractors (including logistical contractors) reached approximately 163,000 in Iraq in September 2008 and approximately 117,000 in Afghanistan in March 2012. A lot of the media attention surrounding the use of private military and security companies has been on the use of armed foreign contractors in conflict zones, such as Blackwater in Iraq. But the vast majority of the industry provides much more mundane logistical services, such as cleaning and providing food for regular soldiers.

Does this help to remove the pejorative mercenary tag? The private military and security industry has certainly made a concerted effort to rid itself of the tag, given its rhetorical force. Industry proponents claim private military and security companies are different to mercenaries because of their increased range of services, their alleged professionalism, their close links to employing states, and their corporate image. None of these alleged differences, however, provides a clear—i.e. an analytically necessary—distinction between mercenaries on the one hand and private military and security companies on the other. After all, mercenaries could offer more services, could be professional, could have close links to states, and could have a flashy corporate image. Despite the proclamations of industry proponents, private military and security companies might still then be mercenaries.

Security Watch by The U.S. Army. CC-BY-2.0 via Flickr.

But what, if anything, is morally wrong with being a mercenary or a private contractor? Could one be an ethical mercenary? In short, yes. To see this, suppose that you go to fight for a state that is trying to defend itself against attack from a genocidal rebel group, which is intent on killing thousands of innocent civilians. You get paid handsomely for this, but this is not the reason why you agree to fight—you just want to save lives. If fighting as a private contractor will, in fact, save lives, and any use of force will only be against those who are liable, is it morally permissible to be a contractor? I think so, given the import of saving lives. As such, mercenaries/private contractors might behave ethically sometimes.

Does this mean that we are incorrect to view mercenaries/private contractors as morally tainted? This would be too quick. We need to keep in mind that, although the occasional mercenary/private contractor might be fully ethical, it seems unlikely that they will be in general. There are at least two reasons to be sceptical of this. First, although there may be exceptions, it seems that financial considerations will often play a greater role in the decision for mercenaries/private contractors to take up arms than for regular soldiers. And, if we think that individuals should be motivated by concern for others rather than self-interest (manifest through the concern for financial gain), we should worry about the increased propensity for mercenary motives. Second, although it may be morally acceptable to be a mercenary/private contractor when considered in isolation, there is a broader worry about upholding and contributing to the general practice of mercenarism and the private military and security industry. One should be wary about contributing to a general practice that is morally problematic, such as mercenarism.

To elaborate, the central ethical problems surrounding private military force do not concern the employees, but rather the employers of these firms. The worries include the following:

  1. that governments can employ private military and security companies to circumvent many of the constitutional and parliamentary—and ultimately democratic—constraints on the decision to send troops into action;
  2. that it is questionable whether these firms are likely to be effective in the theatre, because, for instance, contractors and the firms can more easily choose not to undertake certain operations; and
  3. that there is an abrogation of a state’s responsibility of care for those fighting on its behalf (private contractors generally don’t receive the same level of support after conflict as regular soldiers since political leaders are often less concerned about the deaths of private contractors).

There are also some more general worries about the effects of a market for private force on the international system. Such a market makes it harder to maintain the current formal constraints (e.g. international laws) on the frequency and awfulness of warfare that are designed for the statist use of force. And a market for force can be expected to increase international instability by enabling more wars and unilateralism, as well as by increasing the ability of state and nonstate actors to use military force.

These are the major problems of mercenarism and the increased use of private military force. To that extent, I think that behind the rhetorical force of the claims about mercenaries in Ukraine, there are good reasons to be worried about their use, if not in Ukraine (where the facts are still to be ascertained), then more generally elsewhere. Despite the increased use of private military and security companies and the claims that they differ from mercenaries, we should be wary of their use as well.

The post The ethics of a mercenary appeared first on OUPblog.

0 Comments on The ethics of a mercenary as of 11/2/2014 4:35:00 AM
14. Efficient causation: Our debt to Aristotle and Hume

Causation is now commonly supposed to involve a succession that instantiates some lawlike regularity. This understanding of causality has a history that includes various interrelated conceptions of efficient causation that date from ancient Greek philosophy and that extend to discussions of causation in contemporary metaphysics and philosophy of science. Yet the fact that we now often speak only of causation, as opposed to efficient causation, serves to highlight the distance of our thought on this issue from its ancient origins. In particular, Aristotle (384-322 BCE) introduced four different kinds of “cause” (aitia): material, formal, efficient, and final. We can illustrate this distinction in terms of the generation of living organisms, which for Aristotle was a particularly important case of natural causation. In terms of Aristotle’s (outdated) account of the generation of higher animals, for instance, the matter of the menstrual flow of the mother serves as the material cause, the specially disposed matter from which the organism is formed, whereas the father (working through his semen) is the efficient cause that actually produces the effect. In contrast, the formal cause is the internal principle that drives the growth of the fetus, and the final cause is the healthy adult animal, the end point toward which the natural process of growth is directed.

Aristotle, by Raphael Sanzio. Public domain via Wikimedia Commons.

From a contemporary perspective, it would seem that in this case only the contribution of the father (or perhaps his act of procreation) is a “true” cause. Somewhere along the road that leads from Aristotle to our own time, material, formal and final aitiai were lost, leaving behind only something like efficient aitiai to serve as the central element in our causal explanations. One reason for this transformation is that the historical journey from Aristotle to us passes by way of David Hume (1711-1776). For it is Hume who wrote: “[A]ll causes are of the same kind, and that in particular there is no foundation for that distinction, which we sometimes make betwixt efficient causes, and formal, and material … and final causes” (Treatise of Human Nature, I.iii.14). The one type of cause that remains in Hume serves to explain the producing of the effect, and thus is most similar to Aristotle’s efficient cause. And so, for the most part, it is today.

However, there is a further feature of Hume’s account of causation that has profoundly shaped our current conversation regarding causation. I have in mind his claim that the interrelated notions of cause, force and power are reducible to more basic non-causal notions. In Hume’s case, the causal notions (or our beliefs concerning such notions) are to be understood in terms of the constant conjunction of objects or events, on the one hand, and the mental expectation that an effect will follow from its cause, on the other. This specific account differs from more recent attempts to reduce causality to, for instance, regularity or counterfactual/probabilistic dependence. Hume himself arguably focused more on our beliefs concerning causation (thus the parenthetical above) than, as is more common today, directly on the metaphysical nature of causal relations. Nonetheless, these attempts remain “Humean” insofar as they are guided by the assumption that an analysis of causation must reduce it to non-causal terms. This is reflected, for instance, in the version of “Humean supervenience” in the work of the late David Lewis. According to Lewis’s own guarded statement of this view: “The world has its laws of nature, its chances and causal relationships; and yet — perhaps! — all there is to the world is its point-by-point distribution of local qualitative character” (On the Plurality of Worlds, 14).

Portrait of David Hume, by Allan Ramsay (1766). Public domain via Wikimedia Commons.

Admittedly, Lewis’s particular version of Humean supervenience has some distinctively non-Humean elements. Specifically — and notoriously — Lewis has offered a counterfactual analysis of causation that invokes “modal realism,” that is, the thesis that the actual world is just one of a plurality of concrete possible worlds that are spatio-temporally isolated from one another. One can imagine that Hume would have said of this thesis what he said of Malebranche’s occasionalist conclusion that God is the only true cause, namely: “We are got into fairy land, long ere we have reached the last steps of our theory; and there we have no reason to trust our common methods of argument, or to think that our usual analogies and probabilities have any authority” (Enquiry concerning Human Understanding, §VII.1). Yet the basic Humean thesis in Lewis remains, namely, that causal relations must be understood in terms of something more basic.

And it is at this point that Aristotle re-enters the contemporary conversation. For there has been a broadly Aristotelian move recently to re-introduce powers, along with capacities, dispositions, tendencies and propensities, at the ground level, as metaphysically basic features of the world. The new slogan is: “Out with Hume, in with Aristotle.” (I borrow the slogan from Troy Cross’s online review of Powers and Capacities in Philosophy: The New Aristotelianism.) Whereas for contemporary Humeans causal powers are to be understood in terms of regularities or non-causal dependencies, proponents of the new Aristotelian metaphysics of powers insist that regularities and dependencies must be understood rather in terms of causal powers.

Should we be Humean or Aristotelian with respect to the question of whether causal powers are basic or reducible features of the world? Obviously I cannot offer any decisive answer to this question here. But the very fact that the question remains relevant indicates the extent of our historical and philosophical debt to Aristotle and Hume.

Headline image: Face to face. Photo by Eugenio. CC-BY-SA-2.0 via Flickr

The post Efficient causation: Our debt to Aristotle and Hume appeared first on OUPblog.

0 Comments on Efficient causation: Our debt to Aristotle and Hume as of 10/19/2014 6:03:00 AM
15. Chicken by Chicken: Accepting Who We Are.

Hi, folks, this week is another response  blog. I heard a song called Constellations by Brendan James and it resonated with me. This is a long ramble, a thought journey, inspired by that song, and I hope that you find something to take with you.

I feel like I don't really understand the world, and it makes me cry. I feel so out of step with the seasons and times. I can't stand reading the news, or even checking out my Facebook half the time. There are too many wars. Nation against nation. Neighbor against neighbor. Here inside me, I hunger to see people come together, to take a deep breath and just figure out where to go from here. I hope bridges are built, coalitions are made, and every voice is heard. I dream that we would all listen and find better ways. I don't want to join the madding crowd that wants to heckle the stupid, drop bombs, and dehumanize others, all in the name of a better world.

I see the Universe at night and how it is able to spin out wondrous things and at the same time wreak great destruction. I feel the transience of life and yet eternity hums in my heart. Everyone I know is trying to get through the day without dwelling on the darkness. Some take the "be positive about everything" route. Some take the "find a cause" route. I swing between the route of despair and the route of hope, that I might be the voice that breaks through the noise and says something helpful.

I have had unshakable confidence throughout my life that if I got a chance on a stage I would move the hearts of those shivering on the edges. I have believed that I would grow like a wild weed, but now see so clearly that my life is just a breath and is gone. A Monarch butterfly was caught in between the window and the screen in my house. Some hapless caterpillar crawled between the window and screen and formed a chrysalis. The butterfly emerged and now would die if I did not figure out a gentle way to remove the screen and let it go on its way to the graveyards of Mexico for the Day of the Dead. When I figured out a way to set the butterfly free, it occurred to me that all of my life might be just for that. Perhaps those beautiful wings have more purpose than I will ever have.

This brings me to the heart of this thought journey. I have hungered for purpose. I have believed all my life that a day was coming that the gifts within me would become visible, like the span over us -- Orion, the Pleiades, the evening star, the moon, and the swath of the Milky Way. I have believed my gifts would come clear like those lights in the heavens. But here I am making less than minimum wage and imploding under the stress of another miss in terms of my intended goal.

In the end we are not in control of our story, and hence I must embrace the days given us. I find embracing the smallness of who I am is difficult. Megalomania is expected in rock stars, but not here in Suburbia. I have to laugh at myself a little and laugh at my little dramas. There is certainly a ridiculousness to me.

Ah, you are just an onion flower in the yard. Most folks will pass by the onion flower but, hey, go ahead and bloom. Touch ten hearts, fifty hearts. A copper star for you. Not the silver, not the gold. That's all, dear. Work it out.

Thank you for dropping by and remember every little thing shines.  See you next week.

This week is a page from my Halloween project: CHICKENS TAKE OVER HALLOWEEN. 


Here is a quote for your pocket.
The end of law is not to abolish or restrain, but to preserve and enlarge freedom. For in all the states of created beings capable of law, where there is no law, there is no freedom. (John Locke)

0 Comments on Chicken by Chicken: Accepting Who We Are. as of 10/18/2014 3:48:00 PM
16. Reading Marcus Aurelius’s Meditations with a modern perspective

Marcus Aurelius’s Meditations is a remarkable phenomenon, a philosophical diary written by a Roman emperor, probably in 168-80 AD, and intended simply for his own use. It offers exceptional insights into the private thoughts of someone who had a very weighty public role, and may well have been composed when he was leading a military campaign in Germany. What features might strike us today as being especially valuable, bearing in mind our contemporary concerns?

At a time when the question of public trust in politicians is constantly being raised, Marcus emerges, in this completely personal document, as a model of integrity. Not only does he define for himself his political ideal (“a monarchy that values above all things the freedom of the subject”) and spell out what this ideal means in his reflections on the character and lifestyle of his adoptive father and predecessor as emperor, Antoninus Pius, but he also reminds himself repeatedly of the triviality of celebrity, wealth and status, describing with contempt the lavish purple imperial robe he wore as stained with “blood from a shellfish”. Of course, Marcus was not a democratic politician and, with hindsight, we can find things to criticize in his acts as emperor — though he was certainly among the most reasonable and responsible of Roman emperors. But I think we would be glad if we knew that our own prime ministers or presidents approached their role, in their most private hours, with an equal degree of thoughtfulness and breadth of vision.

Another striking feature of the Meditations, and one that may well resonate with modern experience, is the way that Marcus aims to combine a local and universal perspective. In line with the Stoic philosophy that underpins his diary, Marcus often recalls that the men and women he encounters each day are fellow-members of the brotherhood of humanity and fellow-citizens in the universe. He uses this fact to remind himself that working for his brothers is an essential part of his role as an emperor and a human being. This reminder helps him to counteract the responses of irritation and resentment that, he admits, the behavior of other people might otherwise arouse in him. At a time when we too are trying to bridge and negotiate local and global perspectives, Marcus’s thoughts may be worth reflecting on. Certainly, this seems to me a more balanced response than ignoring the friend or partner at your side in the café while engrossed in phone conversations with others across the world.

Bust of Marcus Aurelius, Musée Saint-Raymond. By Pierre-Selim. CC-BY-SA-3.0 via Wikimedia Commons.

More broadly, Marcus, again in line with Stoic thinking, underlines that the ethics of human behavior need to take account of the wider fact that human beings form an integral part of the natural universe and are subject to its laws. Of course, we may not share his confidence that the universe is shaped by order, structure and providential care — though I think it is worth thinking seriously about just how much of that view we have to reject. But the looming environmental crisis, along with the world-wide rise in obesity and the alarming healthcare consequences, represents for us a powerful reminder that we need to rethink the ethics of our relationship to the natural world and re-examine our understanding of what is natural in human life. Marcus’s readiness to see himself, and humanity, as inseparable parts of a larger whole, and to subordinate himself to that whole, may serve as a striking example to us, even if the way we pursue that thought is likely to be different from that of Stoicism.

Another striking theme in the Meditations is the looming presence of death, our own and that of others we are close to. This might seem very alien to the modern secular Western world, where death is often either ignored or treated as something too terrible to mention. But the fact that Marcus’s attitude is so different from our own may be precisely what makes it worth considering. He not only underlines the inevitability of death and the fact that death is a wholly natural process, and for that reason something we should accept. He couples this with the claim that knowledge of the certainty of death does not undermine the value of doing all that we can while alive to lead a good human life and to develop in ourselves the virtues essential for this life. Although such ideas have often formed part of religious responses to death (which have lost their hold over many people today), Marcus puts them in a form that modern non-religious people can accept. This is another reason, I think, why Marcus’s philosophical diary can speak to us today in a language we can make sense of.

Featured image: Marcus Aurelius’s original statue in Rome, by Zanner. Public domain via Wikimedia Commons.

The post Reading Marcus Aurelius’s Meditations with a modern perspective appeared first on OUPblog.

0 Comments on Reading Marcus Aurelius’s Meditations with a modern perspective as of 1/1/1900
17. When tragedy strikes, should theists expect to know why?

My uncle used to believe in God. But that was before he served in Iraq. Now he’s an atheist. How could a God of perfect power and perfect love allow the innocent to suffer and the wicked to flourish?

Philosophers call this the problem of evil. It’s the problem of trying to reconcile two things that at first glance seem incompatible: God and evil. If the world were really governed by a being like God, shouldn’t we expect the world to be a whole lot better off than it is? But given the amount, kind, and distribution of evil things on earth, many philosophers conclude that there is no God. Tragedy, it seems, can make atheism reasonable.

Theists—people who believe in God—may share this sentiment in some ways, but in the end they think that the existence of God and the existence of evil are compatible. But how could this be? Well, many theists attempt to offer what philosophers call a theodicy – an explanation for why God would allow evils of the sort we find.

Perhaps good can’t exist without evil. But would that make God’s existence dependent on another? Perhaps evil is the necessary byproduct of human free will. But would that explain evils like ebola and tsunamis? Perhaps evil is a necessary ingredient to make humans stronger and more virtuous. But would that justify a loving human father in inflicting similar evil on his children? Other theists reject the attempt to explain the existence of evils in our world and yet deny that the existence of unexplained evil is a problem for rational belief in God.

The central idea is simple: just as a human child cannot decipher all of the seemingly pointless things that her parent does for her benefit, so, too, we cannot decipher all of the seemingly pointless evils in our world. Maybe they really are pointless, but maybe they aren’t — the catch is that things would look the same to us either way. And if they would look the same either way, then the existence of these evils cannot be evidence for atheism over theism.

The Good and Evil Angels by William Blake. Public domain via Wikimedia Commons.

Philosophers call such theists ‘skeptical’ theists since they believe that God exists but are skeptical of our abilities to decipher whether the evils in our world are justified just by considering them.

The debate over the viability of skeptical theism involves many issues in philosophy including skepticism and ethics. With regard to the former, how far does the skepticism go? Should theists also withhold judgment about whether a particular book counts as divine revelation or whether the apparent design in the world is actual design? With regard to the latter, if we should be skeptical of our abilities to determine whether an evil we encounter is justified, does that give us a moral reason to allow it to happen?

It seems that skeptical theism might induce a kind of moral paralysis as we move through the world unable to see which evils further God’s plans and which do not.

Skeptical theists have marshalled replies to these concerns. Whether the replies are successful is up for debate. In either case, the renewed interest in the problem of evil has resurrected one of the most prevalent responses to evil in the history of theism — the response of Job when he rejects the explanations of his calamity offered by his friends and yet maintains his belief in God despite his ignorance about the evils he faces.

Headline image credit: Job’s evil dreams. Watercolor illustration by William Blake. Public domain via Wikimedia Commons.

The post When tragedy strikes, should theists expect to know why? appeared first on OUPblog.

0 Comments on When tragedy strikes, should theists expect to know why? as of 1/1/1900
18. Plato and contemporary bioethics

Since its advent in the early 1970s, bioethics has exploded, with practitioners’ thinking expressed not only in still-expanding scholarly venues but also in the gamut of popular media. Not surprisingly, bioethicists’ disputes are often linked with technological advances of relatively recent vintage, including organ transplantation and artificial-reproductive measures like preimplantation genetic diagnosis and prenatal genetic testing. It’s therefore tempting to figure that the only pertinent reflective sources are recent as well, extending back — glancingly at most — to Immanuel Kant’s groundbreaking 18th-century reflections on autonomy. Surely Plato, who perforce could not have tackled such issues, has nothing at all to contribute to current debates.

This view is false — and dangerously so — because it deprives us of avenues and impetuses of reflection that are distinctive and could help us negotiate present quandaries. First, key topics in contemporary bioethics are richly addressed in Greek thought both within Plato’s corpus and through his critical engagement with Hippocratic medicine. This is so regarding the nature of the doctor-patient tie, medical professionalism, and medicine’s societal embedment, whose construction ineluctably concerns us all individually and as citizens irrespective of profession.

Second, the most pressing bioethical topics — whatever their identity — ultimately grip us not on technological grounds but instead for their bearing on human flourishing (in Greek, eudaimonia). Surprisingly, this foundational plane is often not singled out in bioethical discussions, which regularly tend toward circumscription. The fundamental grip obtains either way, but its neglect as a conscious focus harms our prospects for existing in a way that is most thoughtful, accountable, and holistic. Again a look at Plato can help, for his handling of all salient topics shows fruitfully expansive contextualization.

AMA Code of Medical Ethics (1847). Public domain via Wikimedia Commons.

Regarding the doctor-patient tie, attempts to circumvent Scylla and Charybdis — extremes of paternalism and autonomy, both oppositional modes — are garnering significant bioethical attention. Dismayingly given the stakes, prominent attempts to reconceive the tie fail because they veer into paternalism, allegedly supplanted by autonomy’s growing preeminence in recent decades. If tweaking and reconfiguration of existing templates are insufficient, what sources not yet plumbed might offer fresh reference points for bioethical conversation?

Prima facie, invoking Plato, staunch proponent of top-down autocracy in the Republic, looks misguided. In fact, however, the trajectory of his thought — Republic to Laws via the Statesman — provides a rare look at how this profound ancient philosopher came at once to recognize core human fallibility and to stare firmly at its implications without capitulating to pessimism about human aptitudes generally. Captivated no longer by the extravagant gifts of a few — philosophers of Kallipolis, the Republic’s ideal city — Plato comes to appreciate for the first time the intellectual and ethical aptitudes of ordinary citizens and nonphilosophical professionals.

Human motivation occupies Plato in the Laws, his final dialogue. His unprecedented handling of it there and philosophical trajectory on the topic warrant our consideration. While the Republic shows Plato’s unvarnished confidence in philosophers to rule — indeed, even one would suffice (502b, 540d) — the Laws insists that human nature as such entails that no one could govern without succumbing to arrogance and injustice (713c). Even one with “adequate” theoretical understanding could not properly restrain himself should he come to be in charge: far from reliably promoting communal welfare as his paramount concern, he would be distracted by and cater to his own yearnings (875b). “Adequate” understanding is what we have at best, but only “genuine” apprehension — that of philosophers in the Republic, seen in the Laws as purely wishful — would assure incorruptibility.

The Laws’ collaborative model of the optimal doctor-patient tie in Magnesia, that dialogue’s ideal city, is one striking outcome of Plato’s recognition that even the best among us are fallible in both insight and character. Shared human aptitudes enable reciprocal exchanges of logoi (rational accounts), with patients’ contributing as equal, even superior, partners concerning eudaimonia. This doctor-patient tie is firmly rooted in society at large, which means for Plato that there is close and unveiled continuity between medicine and human existence generally in values’ application. From a contemporary standpoint, the Laws suggests a fresh approach — one that Plato himself arrived at only by pressing past the Republic’s attachment to philosophers’ profound intellectual and values-edge, whose bioethical counterpart is a persistent investment in the same regarding physicians.

If values-spheres aren’t discrete, it’s unsurprising that medicine’s quest to demarcate medical from non-medical values, which extends back to the American Medical Association’s original Code of Medical Ethics (1847), has been combined with an inability to make it stick. In addition, a tension between the medical profession’s healing mission and associated virtues, on the one side, and other goods, particularly remuneration, on the other, is present already in that code. This conflict is now more overt, with rampancy foreseeable in financial incentives’ express provision to intensify or reduce care and to improve doctors’ behavior without concern for whether relevant qualities (e.g., self-restraint, courage) belong to practitioners themselves.

“As Plato rightly reminds us, professional and other endeavors transpire and gain their traction from their socio-political milieu”

Though medicine’s greater pecuniary occupation is far from an isolated event, the human import of it is great. Remuneration’s increasing use to shape doctors’ behavior is harmful not just because it sends the flawed message that health and remuneration are commensurable but for what it reveals more generally about our priorities. Plato’s nuanced account of goods (agatha), which does not orbit tangible items but covers whatever may be spoken of as good, may be helpful here, particularly its addressing of where and why goods are — or aren’t — cross-categorically translatable.

Furthermore, if Plato is right that certain appetites, including that for financial gain, are by nature insatiable — as weakly susceptible to real fulfillment as the odds of filling a sieve or leaky jar are dim (Gorgias 493a-494a) — then even as we hope to make doctors more virtuous via pecuniary incentives, we may actually be promoting vice. Engagement with Plato supports our retreat from calibrated remuneration and greater devotion to sources of inspiration that occupy the same plane of good as the features of doctors we want to promote. If the goods at issue aren’t commensurable, then the core reward for right conduct and attitudes by doctors shouldn’t be monetary but something more in keeping with the tier of good reflected thereby, such as appreciative expressions visible to the community (a Platonic example is seats of honor at athletic games, Laws 881b). Of course, this directional shift shouldn’t be sprung on doctors and medical students in a vacuum. Instead, human values-education (paideia) must be devotedly and thoughtfully instilled in educational curricula from primary school on up. From this vantage point, Plato’s vision of paideia as a lifelong endeavor is worth a fresh look.

As Plato rightly reminds us, professional and other endeavors transpire and gain their traction from their socio-political milieu: we belong first to human communities, with professions’ meaning and broader purposes rooted in that milieu. The guiding values and priorities of this human setting must be transparent and vigorously discussed by professionals and non-professionals alike, whose ability to weigh in is, as the Laws suggests, far more substantive than intra-professional standpoints usually acknowledge. This same line of thought, combined with Plato’s account of universal human fallibility, bears on the matter of medicine’s continued self-policing.

Linda Emanuel claims that “professional associations — whether national, state or county, specialty, licensing, or accrediting — are the natural parties to articulate tangible standards for professional accountability. Almost by definition, there are no other entities that have such ability and extensive responsibility to be the guardians of health care values — for the medical profession and for society” (53-54). Further, accountability “procedures” may include “a moral disposition, with only an internal conscience for monitoring accountability” (54). On grounds above all of our fallibility, which is strongly operative both with and absent malice, the Laws foregrounds reciprocal oversight of all, including high officials, not just from within but across professional and sociopolitical roles; crucially, no one venue is the arbiter in all cases. Whatever the number of intra-medical umbrellas that house the profession’s oversight, transparency operates within circumscribed bounds at most, and medicine remains the source of the very standards to which practitioners — and “good” patients — will be held. Moreover, endorsing moral self-oversight here without undergirding pedagogical and aspirational structures is less likely to be effective than to hold constant or even amplify countervailing motivations.

As can be only briefly suggested here, not only the themes but also their intertwining makes further bioethical consideration of Plato vastly promising. I’m not proposing our endorsement of Plato’s account as such. Rather, some positions themselves, alongside the rich expansiveness and trajectory of his explorations, are two of Plato’s greatest legacies to us — both of which, however, have been largely untapped to date. Not only does reflection on Plato stand to enrich current bioethical debates regarding the doctor-patient tie, medical professionalism, and medicine’s societal embedment, it offers a fresh orientation to pressing debates on other bioethical topics, prominent among them high-stakes discord over the technologically-spurred project of radical human “enhancement.”

Headline image credit: Doctor Office 1. Photo by Subconsci Productions. CC BY-SA 2.0 via Flickr

The post Plato and contemporary bioethics appeared first on OUPblog.

0 Comments on Plato and contemporary bioethics as of 10/12/2014 4:31:00 AM
19. The vision of Confucius

To understand China, it is essential to understand Confucianism. There are many teachings in the Confucian tradition, but before we can truly understand them, it is important to look at the vision Confucius himself had. In the excerpt below from Confucianism: A Very Short Introduction, Daniel K. Gardner discusses the future that Confucius, the teacher behind these ideas, imagined.

Confucius imagined a future where social harmony and sage rulership would once again prevail. It was a vision of the future that looked heavily to the past. Convinced that a golden age had been fully realized in China’s known history, Confucius thought it necessary to turn to that history, to the political institutions, the social relations, the ideals of personal cultivation that he believed prevailed in the early Zhou period, in order to piece together a vision to serve for all times. Here a comparison with Plato, who lived a few decades after the death of Confucius, is instructive. Like Confucius, Plato was eager to improve on contemporary political and social life. But unlike Confucius, he did not believe that the past offered up a normative model for the present. In constructing his ideal society in the Republic, Plato resorted much less to reconstruction of the past than to philosophical reflection and intellectual dialogue with others.

This is not to say, of course, that Confucius did not engage in philosophical reflection and dialogue with others. But it was the past, and learning from it, that especially consumed him. This learning took the form of studying received texts, especially the Book of Odes and the Book of History. He explains to his disciples:

“The Odes can be a source of inspiration and a basis for evaluation; they can help you to get on with others and to give proper expression to grievances. In the home, they teach you about how to serve your father, and in public life they teach you about how to serve your lord”.

The frequent references to verses from the Odes and to stories and legends from the History indicate Confucius’s deep admiration for these texts in particular and the values, the ritual practices, the legends, and the institutions recorded in them.

But books were not the sole source of Confucius’s knowledge about the past. The oral tradition was a source of instructive ancient lore for him as well. Myths and stories about the legendary sage kings Yao, Shun, and Yu; about Kings Wen and Wu and the Duke of Zhou, who founded the Zhou and inaugurated an age of extraordinary social and political harmony; and about famous or infamous rulers and officials like Bo Yi, Duke Huan of Qi, Guan Zhong, and Liuxia Hui—all mentioned by Confucius in the Analects—would have supplemented what he learned from texts and served to provide a fuller picture of the past.

“Ma Lin – Emperor Yao” by Ma Lin – National Palace Museum, Taipei. Licensed under Public domain via Wikimedia Commons.

Still another source of knowledge for Confucius, interestingly, was the behavior of his contemporaries. In observing them, he would select out for praise those manners and practices that struck him as consistent with the cultural norms of the early Zhou and for condemnation those that in his view were contributing to the Zhou decline. The Analects shows him railing against clever speech, glibness, ingratiating appearances, affectation of respect, servility to authority, courage unaccompanied by a sense of right, and single-minded pursuit of worldly success—behavior he found prevalent among contemporaries and that he identified with the moral deterioration of the Zhou. To reverse such deterioration, people had to learn again to be genuinely respectful in dealing with others, slow in speech and quick in action, trustworthy and true to their word, openly but gently critical of friends, families, and rulers who strayed from the proper path, free of resentment when poor, free of arrogance when rich, and faithful to the sacred three-year mourning period for parents, which to Confucius’s great chagrin, had fallen into disuse. In sum, they had to relearn the ritual behavior that had created the harmonious society of the early Zhou.

That Confucius’s characterization of the period as a golden age may have been an idealization is irrelevant. Continuity with a “golden age” lent his vision greater authority and legitimacy, and such continuity validated the rites and practices he advocated. This desire for historical authority and legitimacy—during a period of disruption and chaos—may help to explain Confucius’s eagerness to present himself as a mere transmitter, a lover of the ancients. Indeed, the Master’s insistence on mere transmission notwithstanding, there can be little doubt that from his study and reconstruction of the early Zhou period he forged an innovative—and enduring—sociopolitical vision. Still, in his presentation of himself as reliant on the past, nothing but a transmitter of what had been, Confucius established what would become something of a cultural template in China. Grand innovation that broke entirely with the past was not much prized in the pre-modern Chinese tradition. A Jackson Pollock who consciously and proudly rejected artistic precedent, for example, would not be acclaimed the creative genius in China that he was in the West. Great writers, great thinkers, and great artists were considered great precisely because they had mastered the tradition—the best ideas and techniques of the past. They learned to be great by linking themselves to past greats and by fully absorbing their styles and techniques. Of course, mere imitation was hardly sufficient; imitation could never be slavish. One had to add something creative, something entirely of one’s own, to mastery of the past.

Thus when you go into a museum gallery to view pre-modern Chinese landscapes, one hanging next to another, they appear at first blush to be quite similar. On closer inspection, however, you find that this artist developed a new sort of brush stroke, and that one a new use of ink-wash, and this one a new style of depicting trees and their vegetation. Now that your eye is becoming trained, more sensitive, it sees the subtle differences in the landscape paintings, with their range of masterful techniques and expression. But even as it sees the differences, it recognizes that the paintings evolved out of a common landscape tradition, in which artists built consciously on the achievements of past masters.

Featured image credit: “Altar of Confucius (7360546688)” by Francisco Anzola – Altar of Confucius. Licensed under CC BY 2.0 via Wikimedia Commons.

The post The vision of Confucius appeared first on OUPblog.

0 Comments on The vision of Confucius as of 9/12/2014 7:14:00 AM
20. The construction of the Cartesian System as a rival to the Scholastic Summa

René Descartes wrote his third book, Principles of Philosophy, as something of a rival to scholastic textbooks. He prided himself in ‘that those who have not yet learned the philosophy of the schools will learn it more easily from this book than from their teachers, because by the same means they will learn to scorn it, and even the most mediocre teachers will be capable of teaching my philosophy by means of this book alone’ (Descartes to Marin Mersenne, December 1640).

Still, what Descartes produced was inadequate for the task. The topics of scholastic textbooks ranged much more broadly than those of Descartes’ Principles; they usually had four-part arrangements mirroring the structure of the collegiate curriculum, divided as they typically were into logic, ethics, physics, and metaphysics.

But Descartes produced at best only what could be called a general metaphysics and a partial physics.

Knowing what a scholastic course in physics would look like, Descartes understood that he needed to write at least two further parts to his Principles of Philosophy: a fifth part on living things, i.e., animals and plants, and a sixth part on man. And he did not issue what would be called a particular metaphysics.

Portrait of René Descartes by Frans Hals. Public domain via Wikimedia Commons.

Descartes, of course, saw himself as presenting Cartesian metaphysics as well as physics, both the roots and trunk of his tree of philosophy.

But from the point of view of school texts, the metaphysical elements of physics (general metaphysics) that Descartes discussed—such as the principles of bodies: matter, form, and privation; causation; motion: generation and corruption, growth and diminution; place, void, infinity, and time—were usually taught at the beginning of the course on physics.

The scholastic course on metaphysics—particular metaphysics—dealt with other topics, not discussed directly in the Principles, such as: being, existence, and essence; unity, quantity, and individuation; truth and falsity; good and evil.

Such courses usually ended up with questions about knowledge of God, names or attributes of God, God’s will and power, and God’s goodness.

Thus the Principles of Philosophy by itself was not sufficient as a text for the standard course in metaphysics. And Descartes also did not produce texts in ethics or logic for his followers to use or to teach from.

These must have been perceived as glaring deficiencies in the Cartesian program and in the aspiration to replace Aristotelian philosophy in the schools.

So the Cartesians rushed in to fill the voids. One could mention their attempts to complete the physics—Louis de la Forge’s additions to the Treatise on Man, for example—or to produce more conventional-looking metaphysics—such as Johann Clauberg’s later editions of his Ontosophia or Baruch Spinoza’s Metaphysical Thoughts.

Cartesians in the 17th century began to supplement the Principles and to produce the kinds of texts not normally associated with their intellectual movement, that is, treatises on ethics and logic, the most prominent of the latter being the Port-Royal Logic (Paris, 1662).

By the end of the 17th century, the Cartesians, having lost many battles, ultimately won the war against the Scholastics.

The attempt to publish a Cartesian textbook that would mirror what was taught in the schools culminated in the famous multi-volume works of Pierre-Sylvain Régis and of Antoine Le Grand.

The Franciscan friar Le Grand initially published a popular version of Descartes’ philosophy in the form of a scholastic textbook, expanding it in the 1670s and 1680s; the work, Institution of Philosophy, was then translated into English together with other texts of Le Grand and published as An Entire Body of Philosophy according to the Principles of the famous Renate Descartes (London, 1694).

On the Continent, Régis issued his General System According to the Principles of Descartes at about the same time (Amsterdam, 1691), having had difficulties receiving permission to publish. Ultimately, Régis’ oddly unsystematic (and very often un-Cartesian) System set the standard for Cartesian textbooks.

By the end of the 17th century, the Cartesians, having lost many battles, ultimately won the war against the Scholastics. The changes in the contents of textbooks from the scholastic Summa at the beginning of the 17th century to the Cartesian System at the end can enable one to demonstrate the full range of the attempted Cartesian revolution, whose scope was not limited to physics (narrowly conceived) and its epistemology, but included logic, ethics, physics (more broadly conceived), and metaphysics.

Headline image credit: Dispute of Queen Cristina Vasa and René Descartes, by Nils Forsberg (1842-1934) after Pierre-Louis Dumesnil the Younger (1698-1781). Public domain via Wikimedia Commons.

The post The construction of the Cartesian System as a rival to the Scholastic Summa appeared first on OUPblog.

0 Comments on The construction of the Cartesian System as a rival to the Scholastic Summa as of 9/15/2014 9:34:00 AM
21. Coffee tasting with Aristotle

Imagine a possible world where you are having coffee with … Aristotle! You begin exchanging views on how you like the coffee; you examine its qualities – it is bitter, hot, aromatic, etc. It tastes to you this way or this other way. But how do you make these perceptual judgments? It might seem obvious to say that it is via the senses we are endowed with. Which senses though? How many senses are involved in coffee tasting? And how many senses do we have in all?

The question of how many senses we have is far from being of interest to philosophers only; perhaps surprisingly, it appears to be at the forefront of our thinking – so much so that it was even made the topic of an episode of the BBC comedy program QI. Yet, it is a question that is very difficult to answer. Neurologists, computer scientists and philosophers alike are divided on what the right answer might be. 5? 7? 22? Uncertainty prevails.

Even if the number of the senses is a question for future research to settle, it is in fact as old as rational thought. Aristotle raised it, argued about it, and even illuminated the problem, setting the stage for future generations to investigate it. Aristotle’s views are almost invariably the point of departure of current discussions, and get mentioned in what one might think unlikely places, such as the Harvard Medical School blog, the Johns Hopkins University Press blog, and QI. “Why did they teach me they are five?” says Alan Davies on the QI panel. “Because Aristotle said it,” replies Stephen Fry in an eye blink. The senses are probably more numerous than the five Aristotle identified, but his views remain very much a point of departure in our thinking about this topic.

Aristotle thought the senses are five because there are five types of perceptible properties in the world to be experienced. This criterion for individuating the senses has had a very longstanding influence in many domains, including, for example, the visual arts.

Yet, something as ‘mundane’ as coffee tasting generates one of the most challenging philosophical questions, and not only for Aristotle. As you are enjoying your cup of coffee, you appreciate its flavor with your senses of taste and smell: this is one experience and not two, even if two senses are involved. So how do the senses do this? For Aristotle, no sense can by itself enable the perceiver to receive input of more than one modality, precisely because uni-modal sensitivity is what, on his view, uniquely identifies each sense. On the other hand, it would be of no use to the perceiving subject to have two different types of perceptual input delivered by two different senses simultaneously, but as two distinct perceptual contents. If this were the case, the difficulty would remain unsolved. How would the subject make a perceptual judgment (e.g. about the flavor of the coffee), given that no one of the senses could operate outside its own special perceptual domain, while perceptual judgment presupposes discriminating, comparing, binding, etc. different types of perceptual input? One might think that perceptual judgments are made at the conceptual rather than perceptual level. Aristotle (and Plato), however, would reject this explanation because they seek an account of animal perception that generalizes to all species and is not only applicable to human beings. In sum, for Aristotle to deliver a unified multimodal perceptual content, the senses need somehow to cooperate and gain access in some way to each other’s special domain. But how do they do this?

Linard, Les cinq sens. Public domain via Wikimedia Commons.

A sixth sense? Is that the solution? Is this what Aristotle means when talking about the ‘common’ sense? There cannot be room for a sixth sense in Aristotle’s theory of perception, for as we have seen each sense is individuated by the special type of perceptible quality it is sensitive to, and of these types there are only five in the world. There is no sixth type of perceptible quality that the common sense would be sensitive to. (And even if there were a sixth sense so individuated, this would not solve the problem of delivering multimodal content to the perceiver, because the sixth sense would be sensitive only to its own special type of perceptibles). The way forward is then to investigate how modally different perceptual contents, each delivered by one sense, can be somehow unified, in such a way that my perceptual experience of coffee may be bitter and hot at once. But how can bitter and hot be unified?

Modeling (metaphysically) how the senses cooperate to deliver to the perceiving subject unified but complex perceptual content is another breakthrough Aristotle made in his theory of perception. But it is much less well known than his criterion for the senses’ individuation. In fact, Aristotle is often thought to have given an ad hoc and unsatisfactory solution to the problem of multimodal binding (of which tasting the coffee’s flavor is an instance), by postulating that there is a ‘common’ sense that somehow enables the subject to perform all the perceptual functions that the five senses singly cannot. It is timely to take a departure from this received view, which does not do justice to Aristotle’s insights. Investigating Aristotle’s thoughts on complex perceptual content (often scattered among his various works, which adds to the interpretative challenge) reveals a much richer theory of perception than he is by and large thought to have.

If the number of the senses is a difficult question to address, how the senses combine their contents is an even harder one. Aristotle’s answer to it deserves at least as much attention as his views on the number of the senses currently receive in scholarly as well as ‘popular’ culture.

Headline image credit: Coffee. CC0 Public Domain via Pixabay

The post Coffee tasting with Aristotle appeared first on OUPblog.

0 Comments on Coffee tasting with Aristotle as of 9/16/2014 5:46:00 AM
22. What commuters know about knowing

If your morning commute involves crowded public transportation, you definitely want to find yourself standing next to someone who is saying something like, “I know he’s stabbed people, but has he ever killed one?” It’s of course best to enjoy moments like this in the wild, but I am not above patrolling Overheard in London for its little gems (“Shall I give you a ring when my penguins are available?”), or, on an especially desperate day, going all the way back to the London-Lund Corpus of Spoken English, a treasury of oddly informative conversations (many secretly recorded) from the 1960s and 1970s. Speaker 1: “When I worked on the railways these many years ago, I was working the claims department, at Pretona Station Warmington as office boy for a short time, and one noticed that the tremendous number of claims against the railway companies were people whose fingers had been caught in doors as the porters had slammed them.” Speaker 2: “Really. Oh my goodness.” (Speaker 1 then reports that the railway found it cheaper to pay claims for lost fingers than to install safety trim on the doors.)

Photo by CGPGrey and Alex Tenenbaum. Image supplied with permission by Jennifer Nagel.

If you ever need a good cover story for your eavesdropping, you are welcome to use mine: as an epistemologist, I study the line that divides knowing from merely thinking that something is the case, a line we are constantly marking in everyday conversation. There it was, in the first quotation: “I know he’s stabbed people.” How, exactly, was this known, one wonders, and why was knowledge of this fact reported? There’s no shortage of data: knowledge, as it turns out, is reported heavily. In spoken English (as measured, most authoritatively, by the 450-million-word Corpus of Contemporary American English), ‘know’ and ‘think’ figure as the sixth and seventh most commonly used verbs, muscling out what might seem to be more obvious contenders like ‘get’ and ‘make’. Spoken English is deeply invested in knowing, easily outshining other genres on this score. In academic writing, for example, ‘know’ and ‘think’ are only the 17th and 22nd-most popular verbs, well behind the scholar’s pallid friends ‘should’ and ‘could’. To be fair, some of the conversational traffic in ‘know’ is coming from fixed phrases, like — you know — invitations to conversational partners to make some inference, or — I know — indications that you are accepting what conversational partners are saying. But even after we strip out those formulaic uses, the database’s randomly sampled conversations remain thickly larded with genuine references to knowing and thinking. Meanwhile, similar results are found in the 100-million-word British National Corpus; this is not just an American thing.

Kanye West performing at Lollapalooza on April 3, 2011 in Santiago, Chile. Photo by rodrigoferrari. CC-BY-SA-2.0 via Wikimedia Commons.

It’s perhaps a basic human thing: conversations naturally slide towards the social. When we are not using language to do something artificial (like academic writing), we relate topics to ourselves. Field research in English pubs, cafeterias, and trains convinced British psychologist Robin Dunbar that most of our casual conversation time is taken up with ‘social topics’: personal relationships, personal experiences, and social plans. Anthropologist John Haviland apparently found similar patterns among the Zinacantan people in the remote highlands of Mexico. We talk about what people think, like, and want, constantly linking conversational topics back to human perspectives and feelings.

There’s an extreme philosophical theory about this tendency, advanced in Ancient Greece by Protagoras, and in our day by the best-known living American philosopher, Kanye West. Protagoras’s ideas reach us only in fragments transmitted through the reports of others, so I’ll give you Kanye’s formulation, transmitted through Twitter: “Feelings are the only facts”. Against the notion that the realm of the subjective is unreal, this theory maintains that reality can never be anything other than subjective. Here (as elsewhere) Kanye goes too far. The mental state verbs we use to link conversational topics back to humanity fall into two families, with interestingly different levels of subjectivity, divided along a line which has to do with the status of claims as fact. The first family is labeled factive, and includes such expressions as realizes, notices, is aware that, and sees that; the mother of all factive verbs is knows (and according to Oxford philosopher Timothy Williamson, knowledge is what unites the whole factive family). Non-factives make up the second family, whose members include thinks, suspects, believes and is sure. Factive verbs, rather predictably, properly attach themselves only to facts: you can know that Jack has stabbed someone only if he really has. Non-factive verbs are less informative: Jane might think that Edwin is following her even if he isn’t. In saying that Jane suspects Edwin has been stabbing people, I leave it an open question whether her suspicions are right: I report her feelings while remaining neutral on the relevant facts. Even when they mark strong degrees of subjective conviction — “Edwin is sure that Jane likes him” — non-factive expressions do not, unfortunately for Edwin in this case, necessarily attach themselves to facts. Feelings and facts can come apart.

Factives like ‘know’, meanwhile, allow us to report facts and feelings together at a single stroke. If I say that Lucy knows that the train is delayed, I’m simultaneously sharing news about the train and about Lucy’s attitude. Sometimes we use factives to reveal our attitudes to facts already known to the audience (“I know what you did last summer”), but most conversational uses of factives are bringing fresh facts into the picture. That last finding is from the work of linguist Jennifer Spenader, whose analysis of the dialogue about railway claims pulled me into the London-Lund Corpus in the first place (my goodness, so many fresh facts with those factives). Spenader and I both struggle with some deep theoretical problems about the line between knowing and thinking, but it nevertheless remains a line whose basic significance can be felt instinctively and without special training, even in casual conversation. No, wait, we have more than a feeling for this. We know something about it.

The post What commuters know about knowing appeared first on OUPblog.

23. The problem with moral knowledge

Traveling through Scotland, one is struck by the number of memorials devoted to those who lost their lives in World War I. Nearly every town seems to have at least one memorial listing the names of local boys and men killed in the Great War (St. Andrews, where I am spending the year, has more than one).

Scotland endured a disproportionate number of casualties in comparison with most other Allied nations: Scotland’s military history and the Scots’ reputation as particularly effective fighters contributed both to a proportionally greater number of Scottish recruits and to a tendency among Allied commanders to give Scottish units the most dangerous combat assignments.

Many who served in World War I undoubtedly suffered from what some contemporary psychologists and psychiatrists have labeled ‘moral injury’, a psychological affliction that occurs when one acts in a way that runs contrary to one’s most deeply-held moral convictions. Journalist David Wood characterizes moral injury as ‘the pain that results from damage to a person’s moral foundation’ and declares that it is ‘the signature wound of [the current] generation of veterans.’

By definition, one cannot suffer from moral injury unless one has deeply-held moral convictions. At the same time that some psychologists have been studying moral injury and how best to treat those afflicted by it, other psychologists have been uncovering the cognitive mechanisms that are responsible for our moral convictions. Among the central findings of that research are that our emotions often influence our moral judgments in significant ways and that such judgments are often produced by quick, automatic, behind-the-scenes cognition to which we lack conscious access.

Thus, it is a familiar phenomenon of human moral life that we find ourselves simply feeling strongly that something is right or wrong without having consciously reasoned our way to a moral conclusion. The hidden nature of much of our moral cognition probably helps to explain the doubt on the part of some philosophers that there really is such a thing as moral knowledge at all.

Scottish National War Memorial, Edinburgh Castle. Photo by Nilfanion, CC BY-SA 3.0 via Wikimedia Commons.

In 1977, philosopher John Mackie famously pointed out that defenders of the reality of objective moral values were at a loss when it came to explaining how human beings might acquire knowledge of such values. He declared that believers in objective values would be forced in the end to appeal to ‘a special sort of intuition’—an appeal that he bluntly characterized as ‘lame’. It turns out that ‘intuition’ is indeed a good label for the way many of our moral judgments are formed. In this way, it might appear that contemporary psychology vindicates Mackie’s skepticism and casts doubt on the existence of human moral knowledge.

Not so fast. In addition to discovering that non-conscious cognition has an important role to play in generating our moral beliefs, psychologists have discovered that such cognition also has an important role to play in generating a great many of our beliefs outside of the moral realm.

According to psychologist Daniel Kahneman, quick, automatic, non-conscious processing (which he has labeled ‘System 1’ processing) is both ubiquitous and an important source of knowledge of all kinds:

‘We marvel at the story of the firefighter who has a sudden urge to escape a burning house just before it collapses, because the firefighter knows the danger intuitively, ‘without knowing how he knows.’ However, we also do not know how we immediately know that a person we see as we enter a room is our friend Peter. … [T]he mystery of knowing without knowing … is the norm of mental life.’

This should provide some consolation for friends of moral knowledge. If the processes that produce our moral convictions are of roughly the same sort that enable us to recognize a friend’s face, detect anger in the first word of a telephone call (another of Kahneman’s examples), or distinguish grammatical and ungrammatical sentences, then maybe we shouldn’t be so suspicious of our moral convictions after all.

The good news is that hope for the reality of moral knowledge remains.

In all of these cases, we are often at a loss to explain how we know, yet it is clear enough that we know. Perhaps the same is true of moral knowledge.

Still, there is more work to be done here, by both psychologists and philosophers. Ironically, some propose a worry that runs in the opposite direction of Mackie’s: that uncovering the details of how the human moral sense works might provide support for skepticism about at least some of our moral convictions.

Psychologist and philosopher Joshua Greene puts the worry this way:

‘I view science as offering a ‘behind the scenes’ look at human morality. Just as a well-researched biography can, depending on what it reveals, boost or deflate one’s esteem for its subject, the scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it. … Understanding where our moral instincts come from and how they work can … lead us to doubt that our moral convictions stem from perceptions of moral truth rather than projections of moral attitudes.’

The challenge advanced by Greene and others should motivate philosophers who believe in moral knowledge to pay attention to findings in empirical moral psychology. The good news is that hope for the reality of moral knowledge remains.

And if there is moral knowledge, there can be increased moral wisdom and progress, which in turn makes room for hope that someday we can solve the problem of war-related moral injury not by finding an effective way of treating it but rather by finding a way of avoiding the tragedy of war altogether. Reflection on ‘the war to end war’ may yet enable it to live up to its name.

The post The problem with moral knowledge appeared first on OUPblog.

24. Should Britain intervene militarily to stop Islamic State?

Britain and the United States have been suffering from intervention fatigue. The reason is obvious: our interventions in Iraq and Afghanistan have proven far more costly and their results far more mixed and uncertain than we had hoped.

This fatigue manifested itself almost exactly a year ago, when Britain’s Parliament refused to let the Government offer military support to the U.S. and France in threatening punitive strikes against Syria’s Assad regime for its use of chemical weapons. Since then, however, developments in Syria have shown that our choosing not to intervene doesn’t necessarily make the world a safer place. Nor does it mean that distant strife stays away from our shores.

There is reason to suppose that the West’s failure to intervene early in support of the 2011 rebellion against the repressive Assad regime left a vacuum for the jihadists to fill—jihadists whose ranks now include several hundred British citizens.

A Syrian woman sits in front her home as Free Syrian Army fighters stand guard during a break in fighting in a neighborhood of Damascus, Syria. April 1, 2012. Photo by Freedom House, CC BY 2.0 via Flickr.

There’s also some reason to suppose that the West’s failure to support Georgia militarily against Russia in 2008, and to punish the Assad regime for its use of chemical weapons, has encouraged President Putin to risk at least covert military aggression in Ukraine. I’m not saying that the West should have supported Georgia and punished Assad. I’m merely pointing out that inaction has consequences, too, sometimes bad ones.

Now, however, despite our best efforts to keep out of direct involvement in Syria, we are being drawn in again. The rapid expansion of ‘Islamic State’, involving numerous mass atrocities, has put back on our national desk the question of whether we should intervene militarily to help stop them.

What guidance does the tradition of just war thinking give us in deliberating about military intervention? The first thing to say is that there are different streams in the tradition of just war thinking. In the stream that flows from Michael Walzer, the paradigm of a just war is national self-defence. More coherently, I think, the Christian stream, in which I swim, holds that the paradigm of a just war is the rescue of the innocent from grave injustice. This rescue can take either defensive or aggressive forms. The stipulation that the injustice must be ‘grave’ implies that some kinds of injustice should be borne rather than ended by war. This is because war is a destructive and hazardous business, and so shouldn’t be ventured except for very strong reasons.

What qualifies as ‘grave’ injustice, then? In the 16th and 17th centuries, just war theorists like Vitoria and Grotius proposed as candidates such inhumane social practices as cannibalism or human sacrifice. International law currently stipulates ‘genocide’. The doctrine of the Responsibility to Protect (‘R2P’) would broaden the law to encompass mass atrocity. Let’s suppose that mass atrocity characteristic of a ruling body is just cause for military intervention. Some nevertheless argue, in the light of Iraq and Afghanistan, that intervention is not an appropriate response, because it just doesn’t work. Against that conclusion, I call two witnesses, both of whom have served as soldiers, diplomats, and politicians, and have had direct experience of responsibility for nation-building: Paddy Ashdown and Rory Stewart.

A Royal Air Force Merlin helicopter delivers supplies to an element of the Queens Royal Lancers during a patrol in Maysan Province, Iraq in 2007. Photo: Cpl Ian Forsyth RLC/MOD, via Wikimedia Commons

Ashdown, the international High Representative for Bosnia and Herzegovina from 2002-6, argues that “[h]igh profile failures like Iraq should not … blind us to the fact that, overall, the success stories outnumber the failures by a wide margin”.

Rory Stewart was the Coalition Provisional Authority’s deputy governor of two provinces of southern Iraq from 2003-4. He approached the task of building a more stable, prosperous Iraq with optimism, but experience brought him disillusion. Nevertheless, Stewart writes that “it is possible to walk the tightrope between the horrors of over-intervention and non-intervention; that there is still a possibility of avoiding the horrors not only of Iraq but also of Rwanda; and that there is a way of approaching intervention that can be good for us and good for the country concerned”.

Notwithstanding that, one lesson from our interventions in Iraq and Afghanistan—and indeed from British imperial history—is that successful interventions in foreign places, which go beyond the immediate fending off of indiscriminate slaughter on a massive scale to attempting some kind of political reconstruction, cannot be done quickly or on the cheap.

Here’s where national interest comes in. National interest isn’t necessarily immoral. A national government has a moral duty to look after the well-being of its own people and to advance its genuine interests. What’s more, some kind of national interest must be involved if military intervention is to attract popular support, without which intervention is hard, eventually impossible, to sustain. One such interest can be moral integrity. Nations usually care about more than just being safe and fat. Usually they want to believe that they are doing the right thing, and they will tolerate the costs of war—up to a point—in a just cause that looks set to succeed. I have yet to meet a Briton who is not proud of what British troops achieved in Sierra Leone in the year 2000, even though Britain had no material stake in the outcome of that country’s civil war.

It is not unreasonable for them to ask why their sons and daughters should be put in harm’s way.

However, the nation’s interest in its own moral integrity alone will probably not underwrite military intervention that incurs very heavy costs. So other interests—such as national security—are needed to stiffen popular support for a major intervention. It is not unreasonable for a national people to ask why they should bear the burdens of military intervention, especially in remote parts of the world.

It is not unreasonable for them to ask why their sons and daughters should be put in harm’s way. And the answer to those reasonable questions will have to present itself in terms of the nation’s own interests. This brings us back to Syria and Islamic State. Repressive though the Assad regime was and is, and nasty though the civil war is, it probably wasn’t sufficiently in Britain’s national interest to become deeply involved militarily in 2011. The expansion of Islamic State, however, engages our interest in national security more directly, partly because as part of the West we are its declared enemy and partly because some of our own citizens are fighting for it and might bring their jihad back onto our own streets.

We do have a stronger interest, therefore, in taking the risks and bearing the costs of military intervention to stop and to disable Islamic State, and of subsequent political intervention to help create sustainable polities in Syria and Iraq.

The post Should Britain intervene militarily to stop Islamic State? appeared first on OUPblog.

25. The Meaning of Human Existence

Through a brilliant melding of science and philosophy, "The father of sociobiology" boldly tackles humanity's biggest questions — namely, what is our role on earth, and how can we continue to evolve as a species? Wilson's writing style is accessible as always, and his passion and empathy continue to push us toward greater levels of [...]

